Tagging 0.90 RC2

git-svn-id: https://svn.apache.org/repos/asf/hbase/tags/0.90.0RC2@1053486 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/0.90/.gitignore b/0.90/.gitignore
new file mode 100644
index 0000000..a8a9adc
--- /dev/null
+++ b/0.90/.gitignore
@@ -0,0 +1,11 @@
+/.classpath
+/.externalToolBuilders
+/.project
+/.settings
+/build
+/.idea/
+/logs
+/target
+*.iml
+*.orig
+*~
diff --git a/0.90/CHANGES.txt b/0.90/CHANGES.txt
new file mode 100644
index 0000000..3e0a872
--- /dev/null
+++ b/0.90/CHANGES.txt
@@ -0,0 +1,3234 @@
+HBase Change Log
+Release 0.90.0 - Unreleased
+  INCOMPATIBLE CHANGES
+   HBASE-1822  Remove the deprecated APIs
+   HBASE-1848  Fixup shell for HBASE-1822
+   HBASE-1854  Remove the Region Historian
+   HBASE-1930  Put.setTimeStamp misleading (doesn't change timestamp on
+               existing KeyValues, not copied in copy constructor)
+               (Dave Latham via Stack)
+   HBASE-1360  move up to Thrift 0.2.0 (Kay Kay and Lars Francke via Stack)
+   HBASE-2212  Refactor out lucene dependencies from HBase
+               (Kay Kay via Stack)
+   HBASE-2219  stop using code mapping for method names in the RPC
+   HBASE-1728  Column family scoping and cluster identification
+   HBASE-2099  Move build to Maven (Paul Smith via Stack)
+   HBASE-2260  Remove all traces of Ant and Ivy (Lars Francke via Stack)
+   HBASE-2255  take trunk back to hadoop 0.20
+   HBASE-2378  Bulk insert with multiple reducers broken due to improper
+               ImmutableBytesWritable comparator (Todd Lipcon via Stack)
+   HBASE-2392  Upgrade to ZooKeeper 3.3.0
+   HBASE-2294  Enumerate ACID properties of HBase in a well defined spec
+               (Todd Lipcon via Stack)
+   HBASE-2541  Remove transactional contrib (Clint Morgan via Stack)
+   HBASE-2542  Fold stargate contrib into core
+   HBASE-2565  Remove contrib module from hbase
+   HBASE-2397  Bytes.toStringBinary escapes printable chars
+   HBASE-2771  Update our hadoop jar to be latest from 0.20-append branch
+   HBASE-2803  Remove remaining Get code from Store.java,etc
+   HBASE-2553  Revisit IncrementColumnValue implementation in 0.22
+   HBASE-2692  Master rewrite and cleanup for 0.90
+               (Karthik Ranganathan, Jon Gray & Stack)
+   HBASE-2961  Close zookeeper when done with it (HCM, Master, and RS)
+   HBASE-2641  Refactor HLog splitLog, hbase-2437 continued;
+               break out split code as new classes
+               (James Kennedy via Stack)
+               
+
+  BUG FIXES
+   HBASE-1791  Timeout in IndexRecordWriter (Bradford Stephens via Andrew
+               Purtell)
+   HBASE-1737  Regions unbalanced when adding new node (recommit)
+   HBASE-1792  [Regression] Cannot save timestamp in the future
+   HBASE-1793  [Regression] HTable.get/getRow with a ts is broken
+   HBASE-1698  Review documentation for o.a.h.h.mapreduce
+   HBASE-1798  [Regression] Unable to delete a row in the future
+   HBASE-1790  filters are not working correctly (HBASE-1710 HBASE-1807 too)
+   HBASE-1779  ThriftServer logged error if getVer() result is empty
+   HBASE-1778  Improve PerformanceEvaluation (Schubert Zhang via Stack)
+   HBASE-1751  Fix KeyValue javadoc on getValue for client-side
+   HBASE-1795  log recovery doesn't reset the max sequence id, new logfiles can
+               get tossed as 'duplicates'
+   HBASE-1794  recovered log files are not inserted into the storefile map
+   HBASE-1824  [stargate] default timestamp should be LATEST_TIMESTAMP
+   HBASE-1740  ICV has a subtle race condition only visible under high load
+   HBASE-1808  [stargate] fix how columns are specified for scanners
+   HBASE-1828  CompareFilters are broken from client-side
+   HBASE-1836  test of indexed hbase broken
+   HBASE-1838  [javadoc] Add javadoc to Delete explaining behavior when no
+               timestamp provided
+   HBASE-1821  Filtering by SingleColumnValueFilter bug
+   HBASE-1840  RowLock fails when used with IndexTable
+               (Keith Thomas via Stack)
+   HBASE-818   HFile code review and refinement (Schubert Zhang via Stack)
+   HBASE-1830  HbaseObjectWritable methods should allow null HBCs
+               for when Writable is not Configurable (Stack via jgray)
+   HBASE-1847  Delete latest of a null qualifier when non-null qualifiers
+               exist throws a RuntimeException 
+   HBASE-1850  src/examples/mapred do not compile after HBASE-1822
+   HBASE-1853  Each time around the regionserver core loop, we clear the
+               messages to pass master, even if we failed to deliver them
+   HBASE-1815  HBaseClient can get stuck in an infinite loop while attempting
+               to contact a failed regionserver
+   HBASE-1856  HBASE-1765 broke MapReduce when using Result.list()
+               (Lars George via Stack)
+   HBASE-1857  WrongRegionException when setting region online after .META.
+               split (Cosmin Lehene via Stack)
+   HBASE-1809  NPE thrown in BoundedRangeFileInputStream
+   HBASE-1859  Misc shell fixes patch (Kyle Oba via Stack)
+   HBASE-1865  0.20.0 TableInputFormatBase NPE
+   HBASE-1866  Scan(Scan) copy constructor does not copy value of
+               cacheBlocks
+   HBASE-1869  IndexedTable delete fails when used in conjunction with
+               RowLock (Keith Thomas via Stack)
+   HBASE-1858  Master can't split logs created by THBase (Clint Morgan via
+               Andrew Purtell)
+   HBASE-1871  Wrong type used in TableMapReduceUtil.initTableReduceJob()
+               (Lars George via Stack)
+   HBASE-1883  HRegion passes the wrong minSequenceNumber to
+               doReconstructionLog (Clint Morgan via Stack)
+   HBASE-1878  BaseScanner results can't be trusted at all (Related to
+               hbase-1784)
+   HBASE-1831  Scanning API must be reworked to allow for fully functional
+               Filters client-side
+   HBASE-1890  hbase-1506 where assignment is done at regionserver doesn't
+               work
+   HBASE-1889  ClassNotFoundException on trunk for REST
+   HBASE-1905  Remove unused config. hbase.hstore.blockCache.blockSize
+   HBASE-1906  FilterList of prefix and columnvalue not working properly with
+               deletes and multiple values
+   HBASE-1896  WhileMatchFilter.reset should call encapsulated filter reset
+   HBASE-1912  When adding a secondary index to an existing table, it will
+               cause NPE during re-indexing (Mingjui Ray Liao via Andrew
+               Purtell)
+   HBASE-1916  FindBugs and javac warnings cleanup
+   HBASE-1908  ROOT not reassigned if only one regionserver left
+   HBASE-1915  HLog.sync is called way too often, needs to be only called one
+               time per RPC
+   HBASE-1777  column length is not checked before saved to memstore
+   HBASE-1925  IllegalAccessError: Has not been initialized (getMaxSequenceId)
+   HBASE-1929  If hbase-default.xml is not in CP, zk session timeout is 10
+               seconds!
+   HBASE-1927  Scanners not closed properly in certain circumstances
+   HBASE-1934  NullPointerException in ClientScanner (Andrew Purtell via Stack)
+   HBASE-1946  Unhandled exception at regionserver (Dmitriy Lyfar via Stack)
+   HBASE-1682  IndexedRegion does not properly handle deletes
+               (Andrew McCall via Clint Morgan and Stack)
+   HBASE-1953  Overhaul of overview.html (html fixes, typos, consistency) -
+               no content changes (Lars Francke via Stack)
+   HBASE-1954  Transactional scans do not see newest put (Clint Morgan via
+               Stack)
+   HBASE-1919  code: HRS.delete seems to ignore exceptions it shouldn't
+   HBASE-1951  Stack overflow when calling HTable.checkAndPut() 
+               when deleting a lot of values
+   HBASE-1781  Weird behavior of WildcardColumnTracker.checkColumn(), 
+               looks like recursive loop
+   HBASE-1949  KeyValue expiration by Time-to-Live during major compaction is
+               broken (Gary Helmling via Stack)
+   HBASE-1957  Get-s can't set a Filter
+   HBASE-1928  ROOT and META tables stay in transition state (making the system
+               not usable) if the designated regionServer dies before the
+               assignment is complete (Yannis Pavlidis via Stack)
+   HBASE-1962  Bulk loading script makes regions incorrectly (loadtable.rb)
+   HBASE-1966  Apply the fix from site/ to remove the forrest dependency on
+               Java 5
+   HBASE-1967  [Transactional] client.TestTransactions.testPutPutScan fails
+               sometimes -- Temporary fix
+   HBASE-1841  If multiple of same key in an hfile and they span blocks, may
+               miss the earlier keys on a lookup
+               (Schubert Zhang via Stack)
+   HBASE-1977  Add ts and allow setting VERSIONS when scanning in shell
+   HBASE-1979  MurmurHash does not yield the same results as the reference C++
+               implementation when size % 4 >= 2 (Olivier Gillet via Andrew
+               Purtell)
+   HBASE-1999  When HTable goes away, close zk session in shutdown hook or
+               something...
+   HBASE-1997  zk tick time bounds maximum zk session time
+   HBASE-2003  [shell] deleteall ignores column if specified
+   HBASE-2018  Updates to .META. blocked under high MemStore load
+   HBASE-1994  Master will lose hlog entries while splitting if region has
+               empty oldlogfile.log (Lars George via Stack)
+   HBASE-2022  NPE in housekeeping kills RS
+   HBASE-2034  [Bulk load tools] loadtable.rb calls an undefined method
+               'descendingIterator' (Ching-Shen Chen via Stack)
+   HBASE-2033  Shell scan 'limit' is off by one
+   HBASE-2040  Fixes to group commit
+   HBASE-2047  Example command in the "Getting Started" 
+               documentation doesn't work (Benoit Sigoure via JD)
+   HBASE-2048  Small inconsistency in the "Example API Usage"
+               (Benoit Sigoure via JD)
+   HBASE-2044  HBASE-1822 removed not-deprecated APIs
+   HBASE-1960  Master should wait for DFS to come up when creating
+               hbase.version
+   HBASE-2054  memstore size 0 is >= than blocking -2.0g size
+   HBASE-2064  Cannot disable a table if at the same the Master is moving 
+               its regions around
+   HBASE-2065  Cannot disable a table if any of its region is opening 
+               at the same time
+   HBASE-2026  NPE in StoreScanner on compaction
+   HBASE-2072  fs.automatic.close isn't passed to FileSystem
+   HBASE-2075  Master requires HDFS superuser privileges due to waitOnSafeMode
+   HBASE-2077  NullPointerException with an open scanner that expired causing 
+               an immediate region server shutdown (Sam Pullara via JD)
+   HBASE-2078  Add JMX settings as commented out lines to hbase-env.sh
+               (Lars George via JD)
+   HBASE-2082  TableInputFormat is ignoring input scan's stop row setting
+               (Scott Wang via Andrew Purtell)
+   HBASE-2068  MetricsRate is missing "registry" parameter
+               (Lars George and Gary Helmling via Stack)
+   HBASE-2093  [stargate] RowSpec parse bug
+   HBASE-2114  Can't start HBase in trunk (JD and Kay Kay via JD)
+   HBASE-2115  ./hbase shell would not launch due to missing jruby dependency
+               (Kay Kay via JD)
+   HBASE-2101  KeyValueSortReducer collapses all values to last passed
+   HBASE-2119  Fix top-level NOTICES.txt file. It's stale.
+   HBASE-2120  [stargate] Unable to delete column families (Greg Lu via Andrew
+               Purtell)
+   HBASE-2123  Remove 'master' command-line option from PE
+   HBASE-2024  [stargate] Deletes not working as expected (Greg Lu via Andrew
+               Purtell)
+   HBASE-2122  [stargate] Initializing scanner column families doesn't work
+               (Greg Lu via Andrew Purtell)
+   HBASE-2124  Useless exception in HMaster on startup
+   HBASE-2127  randomWrite mode of PerformanceEvaluation benchmark program
+               writes only to a small range of keys (Kannan Muthukkaruppan
+               via Stack)
+   HBASE-2126  Fix build break - ec2 (Kay Kay via JD)
+   HBASE-2134  Ivy nit regarding checking with latest snapshots (Kay Kay via
+               Andrew Purtell)
+   HBASE-2138  unknown metrics type (Stack via JD)
+   HBASE-2137  javadoc warnings from 'javadoc' target (Kay Kay via Stack)
+   HBASE-2135  ant javadoc complains about missing classes (Kay Kay via Stack)
+   HBASE-2130  bin/* scripts - not to include lib/test/**/*.jar
+               (Kay Kay via Stack)
+   HBASE-2140  findbugs issues - 2 performance warnings as suggested by
+               findbugs (Kay Kay via Stack)
+   HBASE-2139  findbugs task in build.xml (Kay Kay via Stack)
+   HBASE-2147  run zookeeper in the same jvm as master during non-distributed
+               mode
+   HBASE-65    Thrift Server should have an option to bind to ip address
+               (Lars Francke via Stack)
+   HBASE-2146  RPC related metrics are missing in 0.20.3 since recent changes
+               (Gary Helmling via Lars George)
+   HBASE-2150  Deprecated HBC(Configuration) constructor doesn't call this()
+   HBASE-2154  Fix Client#next(int) javadoc
+   HBASE-2152  Add default jmxremote.{access|password} files into conf
+               (Lars George and Gary Helmling via Stack)
+   HBASE-2156  HBASE-2037 broke Scan - only a test for trunk
+   HBASE-2057  Cluster won't stop (Gary Helmling and JD via JD)
+   HBASE-2160  Can't put with ts in shell
+   HBASE-2144  Now does \x20 for spaces
+   HBASE-2163  ZK dependencies - explicitly add them until ZK artifacts are
+               published to mvn repository (Kay Kay via Stack)
+   HBASE-2164  Ivy nit - clean up configs (Kay Kay via Stack)
+   HBASE-2184  Calling HTable.getTableDescriptor().* on a full cluster takes
+               a long time (Cristian Ivascu via Stack)
+   HBASE-2193  Better readability of - hbase.regionserver.lease.period
+               (Kay Kay via Stack)
+   HBASE-2199  hbase.client.tableindexed.IndexSpecification, lines 72-73
+               should be reversed (Adrian Popescu via Stack)
+   HBASE-2224  Broken build: TestGetRowVersions.testGetRowMultipleVersions
+   HBASE-2129  ant tar build broken since switch to Ivy (Kay Kay via Stack)
+   HBASE-2226  HQuorumPeerTest doesn't run because it doesn't start with the
+               word Test
+   HBASE-2230  SingleColumnValueFilter has an unguarded debug log message
+   HBASE-2258  The WhileMatchFilter doesn't delegate the call to filterRow()
+   HBASE-2259  StackOverflow in ExplicitColumnTracker when row has many columns
+   HBASE-2268  [stargate] Failed tests and DEBUG output is dumped to console
+               since move to Mavenized build 
+   HBASE-2276  Hbase Shell hcd() method is broken by the replication scope 
+               parameter (Alexey Kovyrin via Lars George)
+   HBASE-2244  META gets inconsistent in a number of crash scenarios
+   HBASE-2284  fsWriteLatency metric may be incorrectly reported 
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-2063  For hfileoutputformat, on timeout/failure/kill clean up
+               half-written hfile (Ruslan Salyakhov via Stack)
+   HBASE-2281  Hbase shell does not work when started from the build dir
+               (Alexey Kovyrin via Stack)
+   HBASE-2293  CME in RegionManager#isMetaServer
+   HBASE-2261  The javadoc in WhileMatchFilter and its tests in TestFilter
+               are not accurate/wrong
+   HBASE-2299  [EC2] mapreduce fixups for PE
+   HBASE-2295  Row locks may deadlock with themselves
+               (dhruba borthakur via Stack)
+   HBASE-2308  Fix the bin/rename_table.rb script, make it work again
+   HBASE-2307  hbase-2295 changed hregion size, testheapsize broke... fix it
+   HBASE-2269  PerformanceEvaluation "--nomapred" may assign duplicate random
+               seed over multiple testing threads (Tatsuya Kawano via Stack) 
+   HBASE-2287  TypeError in shell (Alexey Kovyrin via Stack)
+   HBASE-2023  Client sync block can cause 1 thread of a multi-threaded client
+               to block all others (Karthik Ranganathan via Stack)
+   HBASE-2305  Client port for ZK has no default (Suraj Varma via Stack)
+   HBASE-2323  filter.RegexStringComparator does not work with certain bytes
+               (Benoit Sigoure via Stack)
+   HBASE-2313  Nit-pick about hbase-2279 shell fixup, if you do get with
+               non-existent column family, throws lots of exceptions
+               (Alexey Kovyrin via Stack)
+   HBASE-2334  Slimming of Maven dependency tree - improves assembly build
+               speed (Paul Smith via Stack)
+   HBASE-2336  Fix build broken with HBASE-2334 (Lars Francke via Lars George)
+   HBASE-2283  row level atomicity (Kannan Muthukkaruppan via Stack)
+   HBASE-2355  Unsynchronized logWriters map is mutated from several threads in
+               HLog splitting (Todd Lipcon via Andrew Purtell)
+   HBASE-2358  Store doReconstructionLog will fail if oldlogfile.log is empty
+               and won't load region (Cosmin Lehene via Stack)
+   HBASE-2370  saveVersion.sh doesn't properly grab the git revision
+   HBASE-2373  Remove confusing log message of how "BaseScanner GET got
+               different address/startcode than SCAN"
+   HBASE-2361  WALEdit broke replication scope
+   HBASE-2365  Double-assignment around split
+   HBASE-2398  NPE in HLog.append when calling writer.getLength
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-2410  spurious warnings from util.Sleeper
+   HBASE-2335  mapred package docs don't say zookeeper jar is a dependent
+   HBASE-2416  HCM.locateRootRegion fails hard on "Connection refused"
+   HBASE-2346  Usage of FilterList slows down scans
+   HBASE-2341  ZK settings for initLimit/syncLimit should not have been removed
+               from hbase-default.xml
+   HBASE-2439  HBase can get stuck if updates to META are blocked
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-2451  .META. by-passes cache; BLOCKCACHE=>'false'
+   HBASE-2453  Revisit compaction policies after HBASE-2248 commit
+               (Jonathan Gray via Stack)
+   HBASE-2458  Client stuck in TreeMap.remove (Todd Lipcon via Stack)
+   HBASE-2460  add_table.rb deletes any tables for which the target table name
+               is a prefix (Todd Lipcon via Stack)
+   HBASE-2463  Various Bytes.* functions silently ignore invalid arguments
+               (Benoit Sigoure via Stack)
+   HBASE-2443  IPC client can throw NPE if socket creation fails
+               (Todd Lipcon via Stack)
+   HBASE-2447  LogSyncer.addToSyncQueue doesn't check if syncer is still
+               running before waiting (Todd Lipcon via Stack)
+   HBASE-2494  Does not apply new.name parameter to CopyTable
+               (Yoonsik Oh via Stack)
+   HBASE-2481  Client is not getting UnknownScannerExceptions; they are
+               being eaten (Jean-Daniel Cryans via Stack)
+   HBASE-2448  Scanner threads are interrupted without acquiring lock properly
+               (Todd Lipcon via Stack)
+   HBASE-2491  master.jsp uses absolute links to table.jsp. This broke when
+               master.jsp moved under webapps/master (Cristian Ivascu via Stack)
+   HBASE-2487  Uncaught exceptions in receiving IPC responses orphan clients
+               (Todd Lipcon via Stack)
+   HBASE-2497  ProcessServerShutdown throws NullPointerException for offline
+               regions (Miklos Kurucz via Stack)
+   HBASE-2499  Race condition when disabling a table leaves regions in transition
+   HBASE-2489  Make the "Filesystem needs to be upgraded" error message more
+               useful (Benoit Sigoure via Stack)
+   HBASE-2482  regions in transition do not get reassigned by master when RS
+               crashes (Todd Lipcon via Stack)
+   HBASE-2513  hbase-2414 added bug where we'd tight-loop if no root available
+   HBASE-2503  PriorityQueue isn't thread safe, KeyValueHeap uses it that way
+   HBASE-2431  Master does not respect generation stamps, may result in meta
+               getting permanently offlined
+   HBASE-2515  ChangeTableState considers split&&offline regions as being served
+   HBASE-2544  Forward port branch 0.20 WAL to TRUNK
+   HBASE-2546  Specify default filesystem in both the new and old way (needed
+               if we are to run on 0.20 and 0.21 hadoop)
+   HBASE-1895  HConstants.MAX_ROW_LENGTH is incorrectly 64k, should be 32k      
+   HBASE-1968  Give clients access to the write buffer      
+   HBASE-2028  Add HTable.incrementColumnValue support to shell
+               (Lars George via Andrew Purtell)  
+   HBASE-2138  unknown metrics type
+   HBASE-2551  Forward port fixes that are in branch but not in trunk (part of
+               the merge of old 0.20 into TRUNK task) -- part 1.
+   HBASE-2474  Bug in HBASE-2248 - mixed version reads (not allowed by spec)
+   HBASE-2509  NPEs in various places, HRegion.get, HRS.close
+   HBASE-2344  InfoServer and hence HBase Master doesn't fully start if you
+               have HADOOP-6151 patch (Kannan Muthukkaruppan via Stack)
+   HBASE-2382  Don't rely on fs.getDefaultReplication() to roll HLogs
+               (Nicolas Spiegelberg via Stack)  
+   HBASE-2415  Disable META splitting in 0.20 (Todd Lipcon via Stack)
+   HBASE-2421  Put hangs for 10 retries on failed region servers
+   HBASE-2442  Log lease recovery catches IOException too widely
+               (Todd Lipcon via Stack)
+   HBASE-2457  RS gets stuck compacting region ad infinitum
+   HBASE-2562  bin/hbase doesn't work in-situ in maven
+               (Todd Lipcon via Stack)
+   HBASE-2449  Local HBase does not stop properly
+   HBASE-2539  Cannot start ZK before the rest in tests anymore
+   HBASE-2561  Scanning .META. while split in progress yields
+               IllegalArgumentException (Todd Lipcon via Stack)
+   HBASE-2572  hbase/bin/set_meta_block_caching.rb:72: can't convert
+               Java::JavaLang::String into String (TypeError) - little
+               issue with script
+   HBASE-2483  Some tests do not use ephemeral ports
+   HBASE-2573  client.HConnectionManager$TableServers logs non-printable
+               binary bytes (Benoît Sigoure via Stack)
+   HBASE-2576  TestHRegion.testDelete_mixed() failing on hudson
+   HBASE-2581  Bloom commit broke some tests... fix
+   HBASE-2582  TestTableSchemaModel not passing after commit of blooms
+   HBASE-2583  Make webapps work in distributed mode again and make webapps
+               deploy at / instead of at /webapps/master/master.jsp
+   HBASE-2590  Failed parse of branch element in saveVersion.sh
+   HBASE-2591  HBASE-2587 hardcoded the port that dfscluster runs on
+   HBASE-2519  StoreFileScanner.seek swallows IOEs (Todd Lipcon via Stack)
+   HBASE-2516  Ugly IOE when region is being closed; rather, should NSRE
+               (Daniel Ploeg via Stack)
+   HBASE-2589  TestHRegion.testWritesWhileScanning flaky on trunk
+               (Todd Lipcon via Stack)
+   HBASE-2590  Failed parse of branch element in saveVersion.sh
+               (Benoît Sigoure via Stack)
+   HBASE-2586  Move hbase webapps to a hbase-webapps dir (Todd Lipcon via
+               Andrew Purtell)
+   HBASE-2610  ValueFilter copy pasted javadoc from QualifierFilter
+   HBASE-2619  HBase shell 'alter' command cannot set table properties to False
+               (Christo Wilson via Stack)
+   HBASE-2621  Fix bad link to HFile documentation in javadoc
+               (Jeff Hammerbacher via Todd Lipcon)
+   HBASE-2371  Fix 'list' command in shell (Alexey Kovyrin via Todd Lipcon)
+   HBASE-2620  REST tests don't use ephemeral ports
+   HBASE-2635  ImmutableBytesWritable ignores offset in several cases
+   HBASE-2654  Add additional maven repository temporarily to fetch Guava
+   HBASE-2560  Fix IllegalArgumentException when manually splitting table
+               from web UI
+   HBASE-2657  TestTableResource is broken in trunk
+   HBASE-2662  TestScannerResource.testScannerResource broke in trunk
+   HBASE-2667  TestHLog.testSplit failing in trunk (Cosmin and Stack)
+   HBASE-2614  killing server in TestMasterTransitions causes NPEs and test deadlock
+   HBASE-2615  M/R on bulk imported tables
+   HBASE-2676  TestInfoServers should use ephemeral ports
+   HBASE-2616  TestHRegion.testWritesWhileGetting flaky on trunk
+   HBASE-2684  TestMasterWrongRS flaky in trunk
+   HBASE-2691  LeaseStillHeldException totally ignored by RS, wrongly named
+   HBASE-2703  ui not working in distributed context
+   HBASE-2710  Shell should use default terminal width when autodetection fails
+               (Kannan Muthukkaruppan via Todd Lipcon)
+   HBASE-2712  Cached region location that went stale won't recover if 
+               asking for first row
+   HBASE-2732  TestZooKeeper was broken, HBASE-2691 showed it
+   HBASE-2670  Provide atomicity for readers even when new insert has
+               same timestamp as current row.
+   HBASE-2733  Replacement of LATEST_TIMESTAMP with real timestamp was broken
+               by HBASE-2353.
+   HBASE-2734  TestFSErrors should catch all types of exceptions, not just RTE
+   HBASE-2738  TestTimeRangeMapRed updated now that we keep multiple cells with
+               same timestamp in MemStore
+   HBASE-2725  Shutdown hook management is gone in trunk; restore
+   HBASE-2740  NPE in ReadWriteConsistencyControl
+   HBASE-2752  Don't retry forever when waiting on too many store files
+   HBASE-2737  CME in ZKW introduced in HBASE-2694 (Karthik Ranganathan via JD)
+   HBASE-2756  MetaScanner.metaScan doesn't take configurations
+   HBASE-2656  HMaster.getRegionTableClosest should not return null for closed
+               regions
+   HBASE-2760  Fix MetaScanner TableNotFoundException when scanning starting at
+               the first row in a table.
+   HBASE-1025  Reconstruction log playback has no bounds on memory used
+   HBASE-2757  Fix flaky TestFromClientSide test by forcing region assignment
+   HBASE-2741  HBaseExecutorService needs to be multi-cluster friendly
+               (Karthik Ranganathan via JD)
+   HBASE-2769  Fix typo in warning message for HBaseConfiguration
+   HBASE-2768  Fix teardown order in TestFilter
+   HBASE-2763  Cross-port HADOOP-6833 IPC parameter leak bug
+   HBASE-2758  META region stuck in RS2ZK_REGION_OPENED state
+               (Karthik Ranganathan via jgray)
+   HBASE-2767  Fix reflection in tests that was made incompatible by HDFS-1209
+   HBASE-2617  Load balancer falls into pathological state if one server under
+               average - slop; endless churn
+   HBASE-2729  Interrupted or failed memstore flushes should not corrupt the
+               region
+   HBASE-2772  Scan doesn't recover from region server failure
+   HBASE-2775  Update of hadoop jar in HBASE-2771 broke TestMultiClusters
+   HBASE-2774  Spin in ReadWriteConsistencyControl eating CPU (load > 40) and
+               no progress running YCSB on clean cluster startup                     
+   HBASE-2785  TestScannerTimeout.test2772 is flaky
+   HBASE-2787  PE is confused about flushCommits
+   HBASE-2707  Can't recover from a dead ROOT server if any exceptions happens
+               during log splitting
+   HBASE-2501  Refactor StoreFile Code
+   HBASE-2806  DNS hiccups cause uncaught NPE in HServerAddress#getBindAddress
+               (Benoit Sigoure via Stack)
+   HBASE-2806  (small compile fix via jgray)
+   HBASE-2797  Another NPE in ReadWriteConsistencyControl
+   HBASE-2831  Fix '$bin' path duplication in setup scripts
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2781  ZKW.createUnassignedRegion doesn't make sure existing znode is 
+               in the right state (Karthik Ranganathan via JD)
+   HBASE-2727  Splits writing one file only is untenable; need dir of recovered
+               edits ordered by sequenceid
+   HBASE-2843  Readd bloomfilter test over zealously removed by HBASE-2625 
+   HBASE-2846  Make rest server be same as thrift and avro servers
+   HBASE-1511  Pseudo distributed mode in LocalHBaseCluster
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2851  Remove testDynamicBloom() unit test
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2853  TestLoadIncrementalHFiles fails on TRUNK
+   HBASE-2854  broken tests on trunk         
+   HBASE-2859  Cleanup deprecated stuff in TestHLog (Alex Newman via Stack)
+   HBASE-2858  TestReplication.queueFailover fails half the time
+   HBASE-2863  HBASE-2553 removed an important edge case
+   HBASE-2866  Region permanently offlined
+   HBASE-2849  HBase clients cannot recover when their ZooKeeper session
+               becomes invalid (Benoît Sigoure via Stack)
+   HBASE-2876  HBase hbck: false positive error reported for parent regions
+               that are in offline state in meta after a split
+   HBASE-2815  not able to run the test suite in background because TestShell
+               gets suspended on tty output (Alexey Kovyrin via Stack)
+   HBASE-2852  Bloom filter NPE (pranav via jgray)
+   HBASE-2820  hbck throws an error if HBase root dir isn't on the default FS
+   HBASE-2884  TestHFileOutputFormat flaky when map tasks generate identical
+               data
+   HBASE-2890  Initialize RPC JMX metrics on startup (Gary Helmling via Stack)
+   HBASE-2755  Duplicate assignment of a region after region server recovery
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-2892  Replication metrics aren't updated
+   HBASE-2461  Split doesn't handle IOExceptions when creating new region
+               reference files
+   HBASE-2871  Make "start|stop" commands symmetric for Master & Cluster
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2901  HBASE-2461 broke build
+   HBASE-2823  Entire Row Deletes not stored in Row+Col Bloom
+               (Alexander Georgiev via Stack)
+   HBASE-2897  RowResultGenerator should handle NoSuchColumnFamilyException
+   HBASE-2905  NPE when inserting mass data via REST interface (Sandy Yin via
+               Andrew Purtell)
+   HBASE-2908  Wrong order of null-check [in TIF] (Libor Dener via Stack)
+   HBASE-2909  SoftValueSortedMap is broken, can generate NPEs
+   HBASE-2919  initTableReducerJob: Unused method parameter
+               (Libor Dener via Stack)
+   HBASE-2923  Deadlock between HRegion.internalFlushCache and close
+   HBASE-2927  BaseScanner gets stale HRegionInfo in some race cases
+   HBASE-2928  Fault in logic in BinaryPrefixComparator leads to
+               ArrayIndexOutOfBoundsException (pranav via jgray)
+   HBASE-2924  TestLogRolling doesn't use the right HLog half the time
+   HBASE-2931  Do not throw RuntimeExceptions in RPC/HbaseObjectWritable
+               code, ensure we log and rethrow as IOE
+               (Karthik Ranganathan via Stack)
+   HBASE-2915  Deadlock between HRegion.ICV and HRegion.close
+   HBASE-2920  HTable.checkAndPut/Delete doesn't handle null values
+   HBASE-2944  cannot alter bloomfilter setting for a column family from
+               hbase shell (Kannan via jgray)
+   HBASE-2948  bin/hbase shell broken (after hbase-2692)
+               (Sebastian Bauer via Stack)
+   HBASE-2954  Fix broken build caused by hbase-2692 commit
+   HBASE-2918  SequenceFileLogWriter doesn't make it clear if there is no
+               append by config or by missing lib/feature
+   HBASE-2799  "Append not enabled" warning should not show if hbase
+               root dir isn't on DFS
+   HBASE-2943  major_compact (and other admin commands) broken for .META.
+   HBASE-2643  Figure how to deal with eof splitting logs
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2925  LRU of HConnectionManager.HBASE_INSTANCES breaks if
+               HBaseConfiguration is changed
+               (Robert Mahfoud via Stack)
+   HBASE-2964  Deadlock when RS tries to RPC to itself inside SplitTransaction
+   HBASE-1485  Wrong or indeterminate behavior when there are duplicate
+               versions of a column (pranav via jgray)
+   HBASE-2967  Failed split: IOE 'File is Corrupt!' -- sync length not being
+               written out to SequenceFile
+   HBASE-2969  missing sync in HTablePool.getTable()
+               (Guilherme Mauro Germoglio Barbosa via Stack)
+   HBASE-2973  NPE in LogCleaner
+   HBASE-2974  LoadBalancer ArithmeticException: / by zero
+   HBASE-2975  DFSClient names in master and RS should be unique
+   HBASE-2978  LoadBalancer IndexOutOfBoundsException
+   HBASE-2983  TestHLog unit test is mis-comparing an assertion
+               (Alex Newman via Todd Lipcon)
+   HBASE-2986  multi writable can npe causing client hang
+   HBASE-2979  Fix failing TestMultParrallel in hudson build
+   HBASE-2899  hfile.min.blocksize.size ignored/documentation wrong
+   HBASE-3006  Reading compressed HFile blocks causes way too many DFS RPC
+               calls severely impacting performance
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-3010  Can't start/stop/start... cluster using new master
+   HBASE-3015  recovered.edits files not deleted if it only contain edits that
+               have already been flushed; hurts perf for all future opens of
+               the region
+   HBASE-3018  Bulk assignment on startup runs serially through the cluster
+               servers assigning in bulk to one at a time
+   HBASE-3023  NPE processing server crash in MetaReader.getServerUserRegions
+   HBASE-3024  NPE processing server crash in MetaEditor.addDaughter
+   HBASE-3026  Fixup of "missing" daughters on split is too aggressive
+   HBASE-3003  ClassSize constants don't use 'final'
+   HBASE-3002  Fix zookeepers.sh to work properly with strange JVM options
+   HBASE-3028  No basescanner means no GC'ing of split, offlined parent regions
+   HBASE-2989  [replication] RSM won't cleanup after locking if 0 peers
+   HBASE-2992  [replication] MalformedObjectNameException in ReplicationMetrics
+   HBASE-3037  When new master joins running cluster does "Received report from
+               unknown server -- telling it to STOP_REGIONSERVER."
+   HBASE-3039  Stuck in regionsInTransition because rebalance came in at same
+               time as a split
+   HBASE-3042  Use LOG4J in SequenceFileLogReader
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2995  Incorrect dependency on Log class from Jetty
+   HBASE-3038  WALReaderFSDataInputStream.getPos() fails if Filesize > MAX_INT
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3047  If new master crashes, restart is messy
+   HBASE-3054  Remove TestEmptyMetaInfo; it doesn't make sense any more.
+   HBASE-3056  Fix ordering in ZKWatcher constructor to prevent weird race
+               condition
+   HBASE-3057  Race condition when closing regions that causes flakiness in
+               TestRestartCluster
+   HBASE-3058  Fix REST tests on trunk
+   HBASE-3068  IllegalStateException when new server comes online, is given
+               200 regions to open and 200th region gets timed out of regions
+               in transition
+   HBASE-3064  Long sleeping in HConnectionManager after thread is interrupted
+               (Bruno Dumon via Stack)
+   HBASE-2753  Remove sorted() methods from Result now that Gets are Scans
+   HBASE-3059  TestReadWriteConsistencyControl occasionally hangs (Hairong 
+               via Ryan)
+   HBASE-2906  [rest/stargate] URI decoding in RowResource
+   HBASE-3008  Memstore.updateColumnValue passes wrong flag to heapSizeChange
+               (Causes memstore size to go negative)
+   HBASE-3089  REST tests are broken locally and up in hudson
+   HBASE-3062  ZooKeeper KeeperException$ConnectionLossException is a
+               "recoverable" exception; we should retry a while on server
+               startup at least.
+   HBASE-3074  Zookeeper test failing on hudson
+   HBASE-3085  TestSchemaResource broken on TRUNK up on HUDSON
+   HBASE-3080  TestAdmin hanging on hudson
+   HBASE-3063  TestThriftServer failing in TRUNK
+   HBASE-3094  Fixes for miscellaneous broken tests
+   HBASE-3060  [replication] Reenable replication on trunk with unit tests
+   HBASE-3041  [replication] ReplicationSink shouldn't kill the whole RS when
+               it fails to replicate
+   HBASE-3044  [replication] ReplicationSource won't cleanup logs if there's
+               nothing to replicate
+   HBASE-3113  Don't reassign regions if cluster is being shutdown
+   HBASE-2933  Skip EOF Errors during Log Recovery
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3081  Log Splitting & Replay: Distinguish between Network IOE and
+               Parsing IOE (Nicolas Spiegelberg via Stack)
+   HBASE-3098  TestMetaReaderEditor is broken in TRUNK; hangs
+   HBASE-3110  TestReplicationSink failing in TRUNK up on Hudson
+   HBASE-3101  bin assembly doesn't include -tests or -source jars
+   HBASE-3121  [rest] Do not perform cache control when returning results
+   HBASE-2669  HCM.shutdownHook causes data loss with
+               hbase.client.write.buffer != 0
+   HBASE-2985  HRegionServer.multi() no longer calls HRegion.put(List) when 
+               possible
+   HBASE-3031  CopyTable MR job named "Copy Table" in Driver
+   HBASE-2658  REST (stargate) TableRegionModel Regions need to be updated to
+               work w/ new region naming convention from HBASE-2531
+   HBASE-3140  Rest schema modification throw null pointer exception
+               (David Worms via Stack)
+   HBASE-2998  rolling-restart.sh shouldn't rely on zoo.cfg
+   HBASE-3145  importtsv fails when the line contains no data
+               (Kazuki Ohta via Todd Lipcon)
+   HBASE-2984  [shell] Altering a family shouldn't reset to default unchanged
+               attributes
+   HBASE-3143  Adding the tests' hbase-site.xml to the jar breaks some clients
+   HBASE-3139  Server shutdown processor stuck because meta not online
+   HBASE-3136  Stale reads from ZK can break the atomic CAS operations we
+               have in ZKAssign
+   HBASE-3147  Regions stuck in transition after rolling restart, perpetual
+               timeout handling but nothing happens
+   HBASE-3158  Bloom File Writes Broken if keySize is large
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3155  HFile.appendMetaBlock() uses wrong comparator
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3012  TOF doesn't take zk client port for remote clusters
+   HBASE-3159  Double play of OpenedRegionHandler for a single region
+               and assorted fixes around this + TestRollingRestart added
+   HBASE-3160  Use more intelligent priorities for PriorityCompactionQueue
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3172  Reverse order of AssignmentManager and MetaNodeTracker in
+               ZooKeeperWatcher
+   HBASE-2406  Define semantics of cell timestamps/versions
+   HBASE-3175  Commit of HBASE-3160 broke TestPriorityCompactionQueue up on
+               hudson (nicolas via jgray)
+   HBASE-3163  If we timeout PENDING_CLOSE and send another closeRegion RPC,
+               need to handle NSRE from RS (comes as a RemoteException)
+   HBASE-3164  Handle case where we open META, ROOT has been closed but
+               znode location not deleted yet, and try to update META
+               location in ROOT
+   HBASE-2006  Documentation of hbase-site.xml parameters
+   HBASE-2672  README.txt should contain basic information like how to run
+               or build HBase
+   HBASE-3179  Enable ReplicationLogsCleaner only if replication is,
+               and fix its test
+   HBASE-3185  User-triggered compactions are triggering splits!
+   HBASE-1932  Encourage use of 'lzo' compression... add the wiki page to
+               getting started
+   HBASE-3151  NPE when trying to read regioninfo from .META.
+   HBASE-3191  FilterList with MUST_PASS_ONE and SCVF isn't working
+               (Stefan Seelmann via Stack)
+   HBASE-2471  Splitting logs, we'll make an output file though the
+               region no longer exists
+   HBASE-3095  Client needs to reconnect if it expires its zk session
+   HBASE-2935  Refactor "Corrupt Data" Tests in TestHLogSplit
+               (Alex Newman via Stack)
+   HBASE-3202  Closing a region, if we get a ConnectException, handle
+               it rather than abort
+   HBASE-3198  Log rolling archives files prematurely
+   HBASE-3203  We can get an order to open a region while shutting down
+               and it'll hold up regionserver shutdown
+   HBASE-3204  Reenable deferred log flush
+   HBASE-3195  [rest] Fix TestTransform breakage on Hudson
+   HBASE-3205  TableRecordReaderImpl.restart NPEs when first next is restarted
+   HBASE-3208  HLog.findMemstoresWithEditsOlderThan needs to look for edits
+               that are equal to too
+   HBASE-3141  Master RPC server needs to be started before an RS can check in
+   HBASE-3112  Enable and disable of table needs a bit of loving in new master
+   HBASE-3207  If we get IOException when closing a region, we should still
+               remove it from online regions and complete the close in ZK
+   HBASE-3199  large response handling: some fixups and cleanups
+   HBASE-3212  More testing of enable/disable uncovered base condition not in
+               place; i.e. that only one enable/disable runs at a time
+   HBASE-2898  MultiPut makes proper error handling impossible and leads to 
+               corrupted data
+   HBASE-3213  If do abort of backup master will get NPE instead of graceful
+               abort
+   HBASE-3214  TestMasterFailover.testMasterFailoverWithMockedRITOnDeadRS is
+               failing (Gary via jgray)
+   HBASE-3216  Move HBaseFsck from client to util
+   HBASE-3219  Split parents are reassigned on restart and on disable/enable
+   HBASE-3222  Regionserver region listing in UI is no longer ordered
+   HBASE-3221  Race between splitting and disabling
+   HBASE-3224  NPE in KeyValue$KVComparator.compare when compacting
+   HBASE-3233  Fix Long Running Stats
+   HBASE-3232  Fix KeyOnlyFilter + Add Value Length (Nicolas via Ryan)
+   HBASE-3235  Intermittent incrementColumnValue failure in TestHRegion 
+               (Gary via Ryan)
+   HBASE-3241  check to see if we exceeded hbase.regionserver.maxlogs limit is
+               incorrect (Kannan Muthukkaruppan via JD)
+   HBASE-3239  Handle null regions to flush in HLog.cleanOldLogs (Kannan
+               Muthukkaruppan via JD)
+   HBASE-3237  Split request accepted -- BUT CURRENTLY A NOOP
+   HBASE-3253  Thrift's missing from all the repositories in pom.xml
+   HBASE-3252  TestZooKeeperNodeTracker sometimes fails due to a race condition
+               in test notification (Gary Helmling via Andrew Purtell)
+   HBASE-3264  Remove unnecessary Guava Dependency
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3249  Typing 'help shutdown' in the shell shouldn't shutdown the cluster
+   HBASE-3262  TestHMasterRPCException uses non-ephemeral port for master
+   HBASE-3272  Remove no longer used options
+   HBASE-3269  HBase table truncate semantics seems broken as "disable" table
+               is now async by default.
+   HBASE-3275  [rest] No gzip/deflate content encoding support
+   HBASE-3261  NPE out of HRS.run at startup when clock is out of sync
+   HBASE-3277  HBase Shell zk_dump command broken
+   HBASE-3280  YouAreDeadException being swallowed in HRS getMaster
+   HBASE-3267  close_region shell command breaks region
+   HBASE-3282  Need to retain DeadServers to ensure we don't allow
+               previously expired RS instances to rejoin cluster
+   HBASE-3283  NPE in AssignmentManager if processing shutdown of RS who
+               doesn't have any regions assigned to it
+   HBASE-3265  Regionservers waiting for ROOT while Master waiting for RegionServers
+   HBASE-3263  Stack overflow in AssignmentManager
+   HBASE-3234  hdfs-724 "breaks" TestHBaseTestingUtility multiClusters
+   HBASE-3286  Master passes IP and not hostname back to region server
+   HBASE-3297  If rows in .META. with no HRegionInfo cell, then hbck fails read
+               of .META.
+   HBASE-3294  WARN org.apache.hadoop.hbase.regionserver.Store: Not in set
+               (double-remove?) org.apache.hadoop.hbase.regionserver.StoreScanner@76607d3d
+   HBASE-3299  If failed open, we don't output the IOE
+   HBASE-3291  If split happens while regionserver is going down, we can stick open.
+   HBASE-3295  Dropping a 1k+ regions table likely ends in a client socket timeout
+               and it's very confusing
+   HBASE-3301  Treat java.net.SocketTimeoutException same as ConnectException
+               assigning/unassigning region
+   HBASE-3296  Newly created table ends up disabled instead of assigned
+   HBASE-3304  Get spurious master fails during bootup
+   HBASE-3298  Regionserver can close during a split causing double assignment
+   HBASE-3309  "Not running balancer because dead regionserver processing" is a lie
+   HBASE-3310  Failing creating/altering table with compression argument from
+               the HBase shell (Igor Ranitovic via Stack)
+   HBASE-3314  [shell] 'move' is broken
+   HBASE-3315  Add debug output for when balancer makes bad balance
+   HBASE-3278  AssertionError in LoadBalancer
+   HBASE-3173  HBase 2984 breaks ability to specify BLOOMFILTER & COMPRESSION
+               via shell
+   HBASE-3318  Split rollback leaves parent with writesEnabled=false
+   HBASE-3334  Refresh our hadoop jar because of HDFS-1520
+   HBASE-3347  Can't truncate/disable table that has rows in .META. that have empty
+               info:regioninfo column
+   HBASE-3321  Replication.join shouldn't clear the logs znode
+   HBASE-3352  enabling a non-existent table from shell prints no error
+   HBASE-3353  table.jsp doesn't handle entries in META without server info
+   HBASE-3351  ReplicationZookeeper goes to ZK every time a znode is modified
+   HBASE-3326  Replication state's znode should be created else it
+               defaults to false
+   HBASE-3337  Restore HBCK fix of unassignment and dupe assignment for new
+               master
+   HBASE-3332  Regions stuck in transition after RS failure
+   HBASE-3355  Stopping a stopped cluster leaks an HMaster
+   HBASE-3356  Add more checks in replication if RS is stopped
+   HBASE-3358  Recovered replication queues wait on themselves when terminating
+   HBASE-3359  LogRoller not added as a WAL listener when replication is enabled
+   HBASE-3360  ReplicationLogCleaner is enabled by default in 0.90 -- causes NPE
+   HBASE-3363  ReplicationSink should batch delete
+   HBASE-3365  EOFE contacting crashed RS causes Master abort
+   HBASE-3362  If .META. offline between OPENING and OPENED, then wrong server
+               location in .META. is possible
+   HBASE-3368  Split message can come in before region opened message; results
+               in 'Region has been PENDING_CLOSE for too long' cycle
+   HBASE-3366  WALObservers should be notified before the lock
+   HBASE-3367  Failed log split not retried 
+   HBASE-3370  ReplicationSource.openReader fails to locate HLogs when they
+               aren't split yet
+   HBASE-3371  Race in TestReplication can make it fail
+   HBASE-3323  OOME in master splitting logs
+   HBASE-3374  Our jruby jar has *GPL jars in it; fix
+   HBASE-3343  Server not shutting down after losing log lease
+   HBASE-3381  Interrupt of a region open comes across as a successful open
+   HBASE-3380  Master failover can split logs of live servers
+   HBASE-3386  NPE in TableRecordReaderImpl.restart
+   HBASE-3388  NPE processRegionInTransition(AssignmentManager.java:264)
+               doing rolling-restart.sh
+   HBASE-3383  [0.90RC1] bin/hbase script displays "no such file" warning on
+               target/cached_classpath.txt
+
+
+  IMPROVEMENTS
+   HBASE-1760  Cleanup TODOs in HTable
+   HBASE-1759  Ability to specify scanner caching on a per-scan basis
+               (Ken Weiner via jgray)
+   HBASE-1763  Put writeToWAL methods do not have proper getter/setter names
+               (second commit to fix compile error in hregion)
+   HBASE-1770  HTable.setWriteBufferSize does not flush the writeBuffer when
+               its size is set to a value lower than its current size.
+               (Mathias via jgray)
+   HBASE-1771  PE sequentialWrite is 7x slower because of
+               MemStoreFlusher#checkStoreFileCount
+   HBASE-1758  Extract interface out of HTable (Vaibhav Puranik via Andrew
+               Purtell)
+   HBASE-1776  Make rowcounter enum public
+   HBASE-1276  [testing] Upgrade to JUnit 4.x and use @BeforeClass
+               annotations to optimize tests
+   HBASE-1800  Too many ZK connections
+   HBASE-1819  Update to 0.20.1 hadoop and zk 3.2.1
+   HBASE-1820  Update jruby from 1.2 to 1.3.1
+   HBASE-1687  bin/hbase script doesn't allow for different memory settings
+               for each daemon type
+   HBASE-1823  Ability for Scanners to bypass the block cache
+   HBASE-1827  Add disabling block cache scanner flag to the shell
+   HBASE-1835  Add more delete tests
+   HBASE-1574  Client and server APIs to do batch deletes
+   HBASE-1833  hfile.main fixes
+   HBASE-1684  Backup (Export/Import) contrib tool for 0.20
+   HBASE-1860  Change HTablePool#createHTable from private to protected
+   HBASE-48    Bulk load tools
+   HBASE-1855  HMaster web application doesn't show the region end key in the
+               table detail page (Andrei Dragomir via Stack)
+   HBASE-1870  Bytes.toFloat(byte[], int) is marked private
+   HBASE-1874  Client Scanner mechanism that is used for HbaseAdmin methods
+               (listTables, tableExists), is very slow if the client is far
+               away from the HBase cluster (Andrei Dragomir via Stack)
+   HBASE-1879  ReadOnly transactions generate WAL activity (Clint Morgan via
+               Stack)
+   HBASE-1875  Compression test utility
+   HBASE-1832  Faster enable/disable/delete
+   HBASE-1481  Add fast row key only scanning
+   HBASE-1506  [performance] Make splits faster
+   HBASE-1722  Add support for exporting HBase metrics via JMX
+               (Gary Helmling via Stack)
+   HBASE-1899  Use scanner caching in shell count
+   HBASE-1887  Update hbase trunk to latests on hadoop 0.21 branch so we can
+               all test sync/append
+   HBASE-1902  Let PerformanceEvaluation support setting tableName and compression
+               algorithm (Schubert Zhang via Stack)
+   HBASE-1885  Simplify use of IndexedTable outside Java API
+               (Kevin Patterson via Stack)
+   HBASE-1903  Enable DEBUG by default
+   HBASE-1907  Version all client writables
+   HBASE-1914  hlog should be able to set replication level for the log
+               independently from any other files
+   HBASE-1537  Intra-row scanning
+   HBASE-1918  Don't do DNS resolving in .META. scanner for each row
+   HBASE-1756  Refactor HLog (changing package first)
+   HBASE-1926  Remove unused xmlenc jar from trunk
+   HBASE-1936  HLog group commit
+   HBASE-1921  When the Master's session times out and there's only one,
+               cluster is wedged
+   HBASE-1942  Update hadoop jars in trunk; update to r831142
+   HBASE-1943  Remove AgileJSON; unused
+   HBASE-1944  Add a "deferred log flush" attribute to HTD
+   HBASE-1945  Remove META and ROOT memcache size bandaid 
+   HBASE-1947  If HBase starts/stops often in less than 24 hours, 
+               you end up with lots of store files
+   HBASE-1829  Make use of start/stop row in TableInputFormat
+               (Lars George via Stack)
+   HBASE-1867  Tool to regenerate an hbase table from the data files
+   HBASE-1904  Add tutorial for installing HBase on Windows using Cygwin as
+               a test and development environment (Wim Van Leuven via Stack)
+   HBASE-1963  Output to multiple tables from Hadoop MR without use of HTable
+               (Kevin Peterson via Andrew Purtell)
+   HBASE-1975  SingleColumnValueFilter: Add ability to match the value of
+               previous versions of the specified column
+               (Jeremiah Jacquet via Stack)
+   HBASE-1971  Unit test the full WAL replay cycle
+   HBASE-1970  Export does one version only; make it configurable how many
+               it does
+   HBASE-1987  The Put object has no simple read methods for checking what
+               has already been added (Ryan Smith via Stack)
+   HBASE-1985  change HTable.delete(ArrayList) to HTable.delete(List)
+   HBASE-1958  Remove "# TODO: PUT BACK !!! "${HADOOP_HOME}"/bin/hadoop
+               dfsadmin -safemode wait"
+   HBASE-2011  Add zktop like output to HBase's master UI (Lars George via
+               Andrew Purtell)
+   HBASE-1995  Add configurable max value size check (Lars George via Andrew
+               Purtell)
+   HBASE-2017  Set configurable max value size check to 10MB
+   HBASE-2029  Reduce shell exception dump on console
+               (Lars George and J-D via Stack)
+   HBASE-2027  HConnectionManager.HBASE_INSTANCES leaks TableServers
+               (Dave Latham via Stack)
+   HBASE-2013  Add useful helpers to HBaseTestingUtility.java (Lars George
+               via J-D)
+   HBASE-2031  When starting HQuorumPeer, try to match on more than 1 address
+   HBASE-2043  Shell's scan broken
+   HBASE-2044  HBASE-1822 removed not-deprecated APIs
+   HBASE-2049  Cleanup HLog binary log output (Dave Latham via Stack)
+   HBASE-2052  Make hbase more 'live' when comes to noticing table creation,
+               splits, etc., for 0.20.3
+   HBASE-2059  Break out WAL reader and writer impl from HLog
+   HBASE-2060  Missing closing tag in mapreduce package info (Lars George via
+               Andrew Purtell)
+   HBASE-2028  Add HTable.incrementColumnValue support to shell (Lars George
+               via Andrew Purtell)
+   HBASE-2062  Metrics documentation outdated (Lars George via JD)
+   HBASE-2045  Update trunk and branch zk to just-release 3.2.2.
+   HBASE-2074  Improvements to the hadoop-config script (Bassam Tabbara via
+               Stack)
+   HBASE-2076  Many javadoc warnings
+   HBASE-2068  MetricsRate is missing "registry" parameter (Lars George via JD)
+   HBASE-2025  0.20.2 accessed from older client throws 
+               UndeclaredThrowableException; frustrates rolling upgrade
+   HBASE-2081  Set the retries higher in shell since client pause is lower
+   HBASE-1956  Export HDFS read and write latency as a metric
+   HBASE-2036  Use Configuration instead of HBaseConfiguration (Enis Soztutar
+               via Stack)
+   HBASE-2085  StringBuffer -> StringBuilder - conversion of references as
+               necessary (Kay Kay via Stack)
+   HBASE-2052  Upper bound of outstanding WALs can be overrun
+   HBASE-2086  Job(configuration,String) deprecated (Kay Kay via Stack)
+   HBASE-1996  Configure scanner buffer in bytes instead of number of rows
+               (Erik Rozendaal and Dave Latham via Stack)
+   HBASE-2090  findbugs issues (Kay Kay via Stack)
+   HBASE-2089  HBaseConfiguration() ctor. deprecated (Kay Kay via Stack)
+   HBASE-2035  Binary values are formatted wrong in shell
+   HBASE-2095  TIF should support more confs for the scanner (Bassam Tabbara
+               via Andrew Purtell)
+   HBASE-2107  Upgrading Lucene 2.2 to Lucene 3.0.0 (Kay Kay via Stack)
+   HBASE-2111  Move to ivy broke our being able to run in-place; i.e.
+               ./bin/start-hbase.sh in a checkout
+   HBASE-2136  Forward-port the old mapred package
+   HBASE-2133  Increase default number of client handlers
+   HBASE-2109  status 'simple' should show total requests per second, also 
+               the requests/sec is wrong as is
+   HBASE-2151  Remove onelab and include generated thrift classes in javadoc
+               (Lars Francke via Stack)
+   HBASE-2149  hbase.regionserver.global.memstore.lowerLimit is too low
+   HBASE-2157  LATEST_TIMESTAMP not replaced by current timestamp in KeyValue
+               (bulk loading)
+   HBASE-2153  Publish generated HTML documentation for Thrift on the website
+               (Lars Francke via Stack)
+   HBASE-1373  Update Thrift to use compact/framed protocol (Lars Francke via
+               Stack)
+   HBASE-2172  Add constructor to Put for row key and timestamp
+               (Lars Francke via Stack)
+   HBASE-2178  Hooks for replication
+   HBASE-2180  Bad random read performance from synchronizing
+               hfile.fddatainputstream
+   HBASE-2194  HTable - put(Put), put(List<Put>) code duplication (Kay Kay via
+               Stack)
+   HBASE-2185  Add html version of default hbase-site.xml (Kay Kay via Stack)
+   HBASE-2198  SingleColumnValueFilter should be able to find the column value
+               even when it's not specifically added as input on the sc
+               (Ferdy via Stack)
+   HBASE-2189  HCM trashes meta cache even when not needed
+   HBASE-2190  HRS should report to master when HMsg are available
+   HBASE-2209  Support of List [ ] in HBaseOutputWritable for serialization
+               (Kay Kay via Stack)
+   HBASE-2177  Add timestamping to gc logging option
+   HBASE-2066  Perf: parallelize puts
+   HBASE-2222  Improve log "Trying to contact region server Some server for
+               region, row 'ip_info_100,,1263329969690', but failed after
+               11 attempts".
+   HBASE-2220  Add a binary comparator that only compares up to the length
+               of the supplied byte array (Bruno Dumon via Stack)
+   HBASE-2211  Add a new Filter that checks a single column value but does not
+               emit it. (Ferdy via Stack)
+   HBASE-2241  Change balancer sloppyness from 0.1 to 0.3
+   HBASE-2250  typo in the maven pom
+   HBASE-2254  Improvements to the Maven POMs (Lars Francke via Stack)
+   HBASE-2262  ZKW.ensureExists should check for existence
+   HBASE-2264  Adjust the contrib apps to the Maven project layout 
+               (Lars Francke via Lars George)
+   HBASE-2245  Unnecessary call to syncWal(region); in HRegionServer 
+               (Benoit Sigoure via JD)
+   HBASE-2246  Add a getConfiguration method to HTableInterface
+               (Benoit Sigoure via JD)
+   HBASE-2282  More directories should be ignored when using git for
+               development (Alexey Kovyrin via Stack)
+   HBASE-2267  More improvements to the Maven build (Lars Francke via Stack)
+   HBASE-2174  Stop from resolving HRegionServer addresses to names using DNS
+               on every heartbeat (Karthik Ranganathan via Stack) 
+   HBASE-2302  Optimize M-R by bulk excluding regions - less InputSplit-s to
+               avoid traffic on region servers when performing M-R on a subset
+               of the table (Kay Kay via Stack) 
+   HBASE-2309  Add apache releases to pom (list of) repositories
+               (Kay Kay via Stack)
+   HBASE-2279  Hbase Shell does not have any tests (Alexey Kovyrin via Stack)
+   HBASE-2314  [shell] Support for getting counters (Alexey Kovyrin via Stack)
+   HBASE-2324  Refactoring of TableRecordReader (mapred / mapreduce) for reuse
+               outside the scope of InputSplit / RecordReader (Kay Kay via
+               Stack)
+   HBASE-2313  Nit-pick about hbase-2279 shell fixup, if you do get with
+               non-existent column family, throws lots of exceptions
+               (Alexey Kovyrin via Stack)
+   HBASE-2331  [shell] count command needs a way to specify scan caching
+               (Alexey Kovyrin via Stack)
+   HBASE-2364  Ignore Deprecations during build (Paul Smith via Stack)
+   HBASE-2338  log recovery: deleted items may be resurrected
+               (Aravind Menon via Stack)
+   HBASE-2359  WALEdit doesn't implement HeapSize
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-2348  [stargate] Stargate needs both JAR and WAR artifacts (Paul Smith
+               via Andrew Purtell)
+   HBASE-2389  HTable - delete / put unnecessary sync (Kay Kay via Stack)
+   HBASE-2385  Debug Message "Received report from unknown server" should be
+               INFO or WARN
+   HBASE-2374  TableInputFormat - Configurable parameter to add column families
+               (Kay Kay via Stack)
+   HBASE-2388  Give a very explicit message when we figure a big GC pause
+   HBASE-2270  Improve how we handle recursive calls in ExplicitColumnTracker 
+               and WildcardColumnTracker
+   HBASE-2402  [stargate] set maxVersions on gets
+   HBASE-2087  The wait on compaction because "Too many store files" 
+               holds up all flushing
+   HBASE-2252  Mapping a very big table kills region servers
+   HBASE-2412  [stargate] PerformanceEvaluation
+   HBASE-2419  Remove from RS logs the fat NotServingRegionException stack
+   HBASE-2286  [Transactional Contrib] Correctly handle or avoid cases where 
+               writes occur in same millisecond (Clint Morgan via J-D)
+   HBASE-2360  Make sure we have all the hadoop fixes in our copy of its rpc
+               (Todd Lipcon via Stack)
+   HBASE-2423  Update 'Getting Started' for 0.20.4 including making
+               "important configurations more visiable"
+   HBASE-2435  HTablePool - method to release resources after use
+               (Kay Kay via Stack)
+   HBASE-1933  Upload Hbase jars to a public maven repository
+               (Kay Kay via Stack)
+   HBASE-2440  Master UI should check against known bad JDK versions and
+               warn the user (Todd Lipcon via Stack)
+   HBASE-2430  Disable frag display in trunk, let HBASE-2165 replace it
+   HBASE-1892  [performance] make hbase splits run faster
+   HBASE-2456  deleteChangedReaderObserver spitting warnings after HBASE-2248
+   HBASE-2452  Fix our Maven dependencies (Lars Francke via Stack)
+   HBASE-2490  Improve the javadoc of the client API for HTable
+               (Benoit Sigoure via Stack)
+   HBASE-2488  Master should warn more loudly about unexpected events
+               (Todd Lipcon via Stack)
+   HBASE-2393  ThriftServer instantiates a new HTable per request
+               (Bogdan DRAGU via Stack)
+   HBASE-2496  Less ArrayList churn on the scan path
+   HBASE-2414  Enhance test suite to be able to specify distributed scenarios
+   HBASE-2518  Kill all the trailing whitespaces in the code base
+               (Benoit Sigoure via Stack)
+   HBASE-2528  ServerManager.ServerMonitor isn't daemonized
+   HBASE-2537  Change ordering of maven repos listed in pom.xml to have
+               ibiblio first
+   HBASE-2540  Make QueryMatcher.MatchCode public (Clint Morgan via Stack)
+   HBASE-2524  Unresponsive region server, potential deadlock
+               (Todd Lipcon via Stack)
+   HBASE-2547  [mvn] assembly:assembly does not include hbase-X.X.X-test.jar
+               (Paul Smith via Stack)
+   HBASE-2037  The core elements of HBASE-2037: refactoring flushing, and adding 
+               configurability in which HRegion subclass is instantiated
+   HBASE-2248  Provide new non-copy mechanism to assure atomic reads in get and scan
+   HBASE-2523  Add check for licenses before rolling an RC, add to
+               how-to-release doc. and check for inlining a tool that does
+               this for us
+   HBASE-2234  Roll Hlog if any datanode in the write pipeline dies
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2340  Add end-to-end test of sync/flush (Forward-port from branch)
+   HBASE-2555  Get rid of HColumnDescriptor.MAPFILE_INDEX_INTERVAL
+   HBASE-2520  Cleanup arrays vs Lists of scanners (Todd Lipcon via Stack)
+   HBASE-2551  Forward port fixes that are in branch but not in trunk (part
+               of the merge of old 0.20 into TRUNK task)
+   HBASE-2466  Improving filter API to allow for modification of keyvalue list 
+               by filter (Juhani Connolly via Ryan)
+   HBASE-2566  Remove 'lib' dir; it only has libthrift and that is being
+               pulled from http://people.apache.org/~rawson/repo/....
+   HBASE-2534  Recursive deletes and misc improvements to ZKW
+   HBASE-2577  Remove 'core' maven module; move core up a level
+   HBASE-2587  Corral where tests write data when running and make sure clean
+               target removes all written
+   HBASE-2580  Make the hlog file names unique
+   HBASE-2594  Narrow pattern used finding unit tests to run -- make it same
+               as we had in 0.20
+   HBASE-2538  Work on repository order in pom (adding fbmirror to top,
+               ibiblio on bottom)
+   HBASE-2613  Remove the code around MSG_CALL_SERVER_STARTUP
+   HBASE-2599  BaseScanner says "Current assignment of X is not valid" over
+               and over for same region
+   HBASE-2630  HFile should use toStringBinary in various places
+   HBASE-2632  Shell should autodetect terminal width
+   HBASE-2636  Upgrade Jetty to 6.1.24
+   HBASE-2437  Refactor HLog splitLog (Cosmin Lehene via Stack)
+   HBASE-2638  Speed up REST tests
+   HBASE-2653  Remove unused DynamicBloomFilter (especially as its tests are
+               failing hudson on occasion)
+   HBASE-2651  Allow alternate column separators to be specified for ImportTsv
+   HBASE-2661  Add test case for row atomicity guarantee
+   HBASE-2578  Add ability for tests to override server-side timestamp 
+               setting (currentTimeMillis) (Daniel Ploeg via Ryan Rawson)
+   HBASE-2558  Our javadoc overview -- "Getting Started", requirements, etc. --
+               is not carried across by mvn javadoc:javadoc target
+   HBASE-2618  Don't inherit from HConstants (Benoit Sigoure via Stack)
+   HBASE-2208  TableServers # processBatchOfRows - converts from List to [ ]
+               - Expensive copy 
+   HBASE-2694  Move RS to Master region open/close messaging into ZooKeeper
+   HBASE-2716  Make HBase's maven artifacts configurable with -D
+               (Alex Newman via Stack)
+   HBASE-2718  Update .gitignore for trunk after removal of contribs
+               (Lars Francke via Stack)
+   HBASE-2468  Improvements to prewarm META cache on clients
+               (Mingjie Lai via Stack)
+   HBASE-2353  Batch puts should sync HLog as few times as possible
+   HBASE-2726  Region Server should never abort without an informative log
+               message
+   HBASE-2724  Update to new release of Guava library
+   HBASE-2735  Make HBASE-2694 replication-friendly
+   HBASE-2683  Make it obvious in the documentation that ZooKeeper needs 
+               permanent storage
+   HBASE-2764  Force all Chore tasks to have a thread name
+   HBASE-2762  Add warning to master if running without append enabled
+   HBASE-2779  Build a -src tgz to sit beside our -bin tgz when you call
+               maven assembly:assembly
+   HBASE-2783  Quick edit of 'Getting Started' for development release 0.89.x
+   HBASE-2345  Add Test in 0.20 to Check for proper HDFS-200 append/sync support
+               (Nicolas Spiegelberg via JD)
+   HBASE-2786  TestHLog.testSplit hangs (Nicolas Spiegelberg via JD)
+   HBASE-2790  Purge apache-forrest from TRUNK
+   HBASE-2793  Add ability to extract a specified list of versions of a column 
+               in a single roundtrip (Kannan via Ryan)
+   HBASE-2828  HTable unnecessarily coupled with HMaster
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2265  HFile and Memstore should maintain minimum and maximum timestamps
+               (Pranav via Ryan)
+   HBASE-2836  Speed mvn site building by removing generation of useless reports
+   HBASE-2808  Document the implementation of replication
+   HBASE-2517  During reads when passed the specified time range, seek to
+               next column (Pranav via jgray)
+   HBASE-2835  Update hadoop jar to head of branch-0.20-append to catch three
+               added patches
+   HBASE-2840  Remove the final remnants of the old Get code - the query matchers 
+               and other helper classes
+   HBASE-2845  Small edit of shell main help page cutting down some on white
+               space and text
+   HBASE-2850  slf4j version needs to be reconciled in pom: thrift wants 1.5.x
+               and hadoop/avro 1.4.x
+   HBASE-2865  Cleanup of LRU logging; its hard to read, uses custom MB'maker,
+               repeats info, too many numbers after the point, etc.
+   HBASE-2869  Regularize how we log sequenceids -- sometimes its myseqid,
+               other times its sequence id, etc.
+   HBASE-2873  Minor clean up in basescanner; fix a log and make deletes of
+               region processing run in order
+   HBASE-2830  NotServingRegionException shouldn't log a stack trace
+   HBASE-2874  Unnecessary double-synchronization in ZooKeeperWrapper
+               (Benoît Sigoure via Stack)
+   HBASE-2879  Offer ZK CLI outside of HBase Shell
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2886  Add search box to site (Alex Baranau via Stack)
+   HBASE-2792  Create a better way to chain log cleaners
+               (Chongxin Li via Stack)
+   HBASE-2844  Capping the number of regions (Pranav Khaitan via Stack)
+   HBASE-2870  Add Backup CLI Option to HMaster (Nicolas Spiegelberg via Stack)
+   HBASE-2868  Do some small cleanups in org.apache.hadoop.hbase.regionserver.wal
+               (Alex Newman via Stack)
+   HBASE-1660  script to handle rolling restarts
+               (Nicolas Spiegelberg via Stack)
+   HBASE-1517  Implement inexpensive seek operations in HFile (Pranav via Ryan)
+   HBASE-2903  ColumnPrefix filtering (Pranav via Ryan)
+   HBASE-2904  Smart seeking using filters (Pranav via Ryan)
+   HBASE-2922  HLog preparation and cleanup are done under the updateLock, 
+               major slowdown
+   HBASE-1845  MultiGet, MultiDelete, and MultiPut - batched to the 
+               appropriate region servers (Marc Limotte via Ryan)
+   HBASE-2867  Have master show its address using hostname rather than IP
+   HBASE-2696  ZooKeeper cleanup and refactor
+   HBASE-2695  HMaster cleanup and refactor
+   HBASE-2692  Open daughters immediately on parent's regionserver
+   HBASE-2405  Close, split, open of regions in RegionServer are run by a single
+               thread only.
+   HBASE-1676  load balancing on a large cluster doesn't work very well
+   HBASE-2953  Edit of hbase-default.xml removing stale configs.
+   HBASE-2857  HBaseAdmin.tableExists() should not require a full meta scan
+   HBASE-2962  Add missing methods to HTableInterface (and HTable)
+               (Lars Francke via Stack)
+   HBASE-2942  Custom filters should not require registration in 
+               HBaseObjectWritable (Gary Helmling via Andrew Purtell)
+   HBASE-2976  Running HFile tool passing fully-qualified filename I get
+               'IllegalArgumentException: Wrong FS'
+   HBASE-2977  Refactor master command line to a new class
+   HBASE-2980  Refactor region server command line to a new class
+   HBASE-2988  Support alternate compression for major compactions
+   HBASE-2941  port HADOOP-6713 - threading scalability for RPC reads - to HBase
+   HBASE-2782  QOS for META table access
+   HBASE-3017  More log pruning
+   HBASE-3022  Change format of enum messages in o.a.h.h.executor package
+   HBASE-3001  Ship dependency jars to the cluster for all jobs
+   HBASE-3033  [replication] ReplicationSink.replicateEntries improvements
+   HBASE-3040  BlockIndex readIndex too slowly in heavy write scenario
+               (Andy Chen via Stack)
+   HBASE-3030  The return code of many filesystem operations are not checked
+               (dhruba borthakur via Stack)
+   HBASE-2646  Compaction requests should be prioritized to prevent blocking
+               (Jeff Whiting via Stack)
+   HBASE-3019  Make bulk assignment on cluster startup run faster
+   HBASE-3066  We don't put the port for hregionserver up into znode since
+               new master
+   HBASE-2825  Scans respect row locks
+   HBASE-3070  Add to hbaseadmin means of shutting down a regionserver
+   HBASE-2996  Fix and clean up Maven (Lars Francke via Stack)
+   HBASE-2917  Reseek directly to next row (Pranav Khaitan)
+   HBASE-2907  [rest/stargate] Improve error response when trying to create a
+               scanner on a nonexistent table
+   HBASE-3092  Replace deprecated "new HBaseConfiguration(...)" calls
+               (Lars Francke)
+   HBASE-2968  No standard family filter provided (Andrey Stepachev)
+   HBASE-3088  TestAvroServer and TestThriftServer broken because use same
+               table in all tests and tests enable/disable/delete
+   HBASE-3097  Merge in hbase-1200 doc on bloomfilters into hbase book
+   HBASE-2700  Test of: Handle master failover for regions in transition
+   HBASE-3115  HBaseClient wastes 1 TCP packet per RPC
+   HBASE-3076  Allow to disable automatic shipping of dependency jars
+               for mapreduce jobs (Bruno Dumon)
+   HBASE-3128  On assign, if ConnectException, reassign another server
+   HBASE-3133  Only log compaction requests when a request is actually added
+               to the queue
+   HBASE-3132  Print TimestampRange and BloomFilters in HFile pretty print
+   HBASE-2514  RegionServer should refuse to be assigned a region that uses
+               LZO when LZO isn't available
+   HBASE-3082  For ICV gets, first look in MemStore before reading StoreFiles
+               (prakash via jgray)
+   HBASE-3167  HBase Export: Add ability to export specific Column Family;
+               Turn Block Cache off during export; improve usage doc
+               (Kannan Muthukkaruppan via Stack)
+   HBASE-3102  Enhance HBase rMetrics for Long-running Stats
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3169  NPE when master joins running cluster if a RIT references
+               a RS no longer present
+   HBASE-3174  Add ability for Get operations to enable/disable use of block
+               caching
+   HBASE-3162  Add TimeRange support into Increment to optimize for counters
+               that are partitioned on time
+   HBASE-2253  Show Block cache hit ratio for requests where
+               cacheBlocks=true
+   HBASE-3126  Force use of 'mv -f' when moving aside hbase logfiles
+   HBASE-3176  Remove compile warnings in HRegionServer
+   HBASE-3154  HBase RPC should support timeout (Hairong via jgray)
+   HBASE-3184  Xmx setting in pom to use for tests/surefire does not appear
+               to work
+   HBASE-3120  [rest] Content transcoding
+   HBASE-3181  Review, document, and fix up Regions-in-Transition timeout
+               logic
+   HBASE-3180  Review periodic master logging, especially ServerManager once
+               a minute
+   HBASE-3189  Stagger Major Compactions (Nicolas Spiegelberg via Stack)
+   HBASE-2564  [rest] Tests use deprecated foundation
+   HBASE-2819  hbck should have the ability to repair basic problems
+   HBASE-3200  Make it so we can disable DEBUG logging on HConnectionImplementation
+               without losing important messages
+   HBASE-3201  Add accounting of empty regioninfo_qualifier rows in meta to
+               hbasefsck.
+   HBASE-3048  unify code for major/minor compactions (Amit via jgray)
+   HBASE-3083  Major compaction check should use new timestamp meta
+               information in HFiles (rather than dfs timestamp) along with
+               TTL to allow major even if single file
+   HBASE-3194  HBase should run on both secure and vanilla versions of Hadoop 0.20
+               (Gary Helmling via Stack)
+   HBASE-3209  New Compaction Algorithm
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3168  Sanity date and time check when a region server joins the
+               cluster (Jeff Whiting and jgray)
+   HBASE-3090  Don't include hbase-default in conf/ assembly
+   HBASE-3161  Provide option for Stargate to only serve GET requests
+               (Bennett Neale via Stack)
+   HBASE-3218  Shell help cleanup/cosmetics/edit
+   HBASE-3079  Shell displaying uninformative exceptions
+   HBASE-3227  Edit of log messages before branching.
+   HBASE-3230  Refresh our hadoop jar and update zookeeper to
+               just-released 3.3.2
+   HBASE-3231  Update to zookeeper 3.3.2.
+   HBASE-3273  Set the ZK default timeout to 3 minutes
+   HBASE-3279  [rest] Filter for gzip content encoding that wraps both input
+               and output side.
+   HBASE-3223  Get VersionInfo for Running HBase Process
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3303  Lower hbase.regionserver.handler.count from 25 back to 10
+   HBASE-3292  Expose block cache hit/miss/evict counts into region server
+               metrics
+   HBASE-2467  Concurrent flushers in HLog sync using HDFS-895
+   HBASE-3320  Compaction parameter minCompactSize should be configurable
+   HBASE-3308  SplitTransaction.splitStoreFiles slows splits a lot
+   HBASE-3349  Pass HBase configuration to HttpServer
+   HBASE-3372  HRS shouldn't print a full stack for ServerNotRunningException
+   HBASE-3377  Upgrade Jetty to 6.1.26
+
+
+  NEW FEATURES
+   HBASE-1961  HBase EC2 scripts
+   HBASE-1982  [EC2] Handle potentially large and uneven instance startup times
+   HBASE-2009  [EC2] Support mapreduce
+   HBASE-2012  [EC2] LZO support
+   HBASE-2019  [EC2] remember credentials if not configured
+   HBASE-2080  [EC2] Support multivolume local instance storage
+   HBASE-2083  [EC2] HDFS DataNode no longer required on master
+   HBASE-2084  [EC2] JAVA_HOME handling broken
+   HBASE-2100  [EC2] Adjust fs.file-max
+   HBASE-2103  [EC2] pull version from build
+   HBASE-2131  [EC2] Mount data volumes as xfs, noatime
+   HBASE-1901  "General" partitioner for "hbase-48" bulk (behind the api, write
+               hfiles direct) uploader
+   HBASE-1433  Update hbase build to match core, use ivy, publish jars to maven
+               repo, etc. (Kay Kay via Stack)
+   HBASE-2129  Simple Master/Slave replication
+   HBASE-2070  Collect HLogs and delete them after a period of time
+   HBASE-2221  MR to copy a table
+   HBASE-2257  [stargate] multiuser mode
+   HBASE-2263  [stargate] multiuser mode: authenticator for zookeeper
+   HBASE-2273  [stargate] export metrics via Hadoop metrics, JMX, and zookeeper
+   HBASE-2274  [stargate] filter support: JSON descriptors
+   HBASE-2316  Need an ability to run shell tests w/o invoking junit
+               (Alexey Kovyrin via Stack)
+   HBASE-2327  [EC2] Allocate elastic IP addresses for ZK and master nodes
+   HBASE-2319  [stargate] multiuser mode: request shaping
+   HBASE-2403  [stargate] client HTable interface to REST connector
+   HBASE-2438  Addition of a Column Pagination Filter (Paul Kist via Stack)
+   HBASE-2473  Add to admin create table start and end key params and
+               desired number of regions
+   HBASE-2529  Make OldLogsCleaner easier to extend
+   HBASE-2527  Add the ability to easily extend some HLog actions
+   HBASE-2559  Set hbase.hregion.majorcompaction to 0 to disable
+   HBASE-1200  Add bloomfilters (Nicolas Spiegelberg via Stack)
+   HBASE-2588  Add easier way to ship HBase dependencies to MR cluster within Job
+   HBASE-1923  Bulk incremental load into an existing table
+   HBASE-2579  Add atomic checkAndDelete support (Michael Dalton via Stack)
+   HBASE-2400  new connector for Avro RPC access to HBase cluster
+               (Jeff Hammerbacher via Ryan Rawson)
+   HBASE-7     Provide a HBase checker and repair tool similar to fsck
+               (dhruba borthakur via Stack)
+   HBASE-2223  Handle 10min+ network partitions between clusters
+   HBASE-2862  Name DFSClient for Improved Debugging
+               (Nicolas Spiegelberg via Stack)
+   HBASE-2838  Replication metrics
+   HBASE-3000  Add "hbase classpath" command to dump classpath
+   HBASE-3043  'hbase-daemon.sh stop regionserver' should kill compactions
+               that are in progress
+               (Nicolas Spiegelberg via Stack)
+   HBASE-3073  New APIs for Result, faster implementation for some calls
+   HBASE-3053  Add ability to have multiple Masters LocalHBaseCluster for
+               test writing
+   HBASE-2201  JRuby shell for replication
+   HBASE-2946  Increment multiple columns in a row at once
+   HBASE-3013  Tool to verify data in two clusters
+   HBASE-2896  Retain assignment information between cluster
+               shutdown/startup
+   HBASE-3211  Key (Index) Only Fetches
+
+
+  OPTIMIZATIONS
+   HBASE-410   [testing] Speed up the test suite
+   HBASE-2041  Change WAL default configuration values
+   HBASE-2997  Performance fixes - profiler driven
+   HBASE-2450  For single row reads of specific columns, seek to the 
+               first column in HFiles rather than start of row
+               (Pranav via Ryan, some Ryan)
+
+
+Release 0.20.0 - Tue Sep  8 12:53:05 PDT 2009
+  INCOMPATIBLE CHANGES
+   HBASE-1147  Modify the scripts to use Zookeeper
+   HBASE-1144  Store the ROOT region location in Zookeeper
+               (Nitay Joffe via Stack)
+   HBASE-1146  Replace the HRS leases with Zookeeper
+   HBASE-61    Create an HBase-specific MapFile implementation
+               (Ryan Rawson via Stack)
+   HBASE-1145  Ensure that there is only 1 Master with Zookeeper (Removes
+               hbase.master) (Nitay Joffe via Stack)
+   HBASE-1289  Remove "hbase.fully.distributed" option and update docs
+               (Nitay Joffe via Stack)
+   HBASE-1234  Change HBase StoreKey format
+   HBASE-1348  Move 0.20.0 targeted TRUNK to 0.20.0 hadoop
+               (Ryan Rawson and Stack)
+   HBASE-1342  Add to filesystem info needed to rebuild .META.
+   HBASE-1361  Disable bloom filters
+   HBASE-1367  Get rid of Thrift exception 'NotFound'
+   HBASE-1381  Remove onelab and bloom filters files from hbase
+   HBASE-1411  Remove HLogEdit.
+   HBASE-1357  If one sets the hbase.master to 0.0.0.0 non local regionservers
+               can't find the master
+   HBASE-1304  New client server implementation of how gets and puts are
+               handled (holstad, jgray, rawson, stack)
+   HBASE-1582  Translate ColumnValueFilter and RowFilterSet to the new
+               Filter interface (Clint Morgan and Stack)
+   HBASE-1599  Fix TestFilterSet, broken up on hudson (Jon Gray via Stack)
+   HBASE-1799  deprecate o.a.h.h.rest in favor of stargate
+
+  BUG FIXES
+   HBASE-1140  "ant clean test" fails (Nitay Joffe via Stack)
+   HBASE-1129  Master won't go down; stuck joined on rootScanner
+   HBASE-1136  HashFunction inadvertently destroys some randomness
+               (Jonathan Ellis via Stack)
+   HBASE-1138  Test that readers opened after a sync can see all data up to the
+               sync (temporary until HADOOP-4379 is resolved)
+   HBASE-1121  Cluster confused about where -ROOT- is
+   HBASE-1148  Always flush HLog on root or meta region updates
+   HBASE-1181  src/saveVersion.sh bails on non-standard Bourne shells
+               (e.g. dash) (K M via Jean-Daniel Cryans)
+   HBASE-1175  HBA administrative tools do not work when specifying region
+               name (Jonathan Gray via Andrew Purtell)
+   HBASE-1190  TableInputFormatBase with row filters scan too far (Dave
+               Latham via Andrew Purtell)
+   HBASE-1198  OOME in IPC server does not trigger abort behavior
+   HBASE-1209  Make port displayed the same as is used in URL for RegionServer
+               table in UI (Lars George via Stack)
+   HBASE-1217  add new compression and hfile blocksize to HColumnDescriptor
+   HBASE-859   HStoreKey needs a reworking
+   HBASE-1211  NPE in retries exhausted exception
+   HBASE-1233  Transactional fixes: Overly conservative scan read-set,
+               potential CME (Clint Morgan via Stack)
+   HBASE-1239  In the REST interface does not correctly clear the character
+               buffer each iteration
+   HBASE-1185  wrong request/sec in the gui reporting wrong
+               (Brian Beggs via Stack)
+   HBASE-1245  hfile meta block handling bugs (Ryan Rawson via Stack)
+   HBASE-1238  Under upload, region servers are unable
+               to compact when loaded with hundreds of regions
+   HBASE-1247  checkAndSave doesn't Write Ahead Log
+   HBASE-1243  oldlogfile.dat is screwed, so is its region
+   HBASE-1169  When a shutdown is requested, stop scanning META regions
+               immediately
+   HBASE-1251  HConnectionManager.getConnection(HBaseConfiguration) returns 
+               same HConnection for different HBaseConfigurations 
+   HBASE-1157, HBASE-1156 If we do not take start code as a part of region
+               server recovery, we could inadvertently try to reassign regions
+               assigned to a restarted server with a different start code;
+               Improve lease handling
+   HBASE-1267  binary keys broken in trunk (again) -- part 2 and 3
+               (Ryan Rawson via Stack)
+   HBASE-1268  ZooKeeper config parsing can break HBase startup
+               (Nitay Joffe via Stack)
+   HBASE-1270  Fix TestInfoServers (Nitay Joffe via Stack)
+   HBASE-1277  HStoreKey: Wrong comparator logic (Evgeny Ryabitskiy)
+   HBASE-1275  TestTable.testCreateTable broken (Ryan Rawson via Stack)
+   HBASE-1274  TestMergeTable is broken in Hudson (Nitay Joffe via Stack)
+   HBASE-1283  thrift's package description needs updating for start/stop
+               procedure (Rong-en Fan via Stack)
+   HBASE-1284  drop table drops all disabled tables
+   HBASE-1290  table.jsp either 500s out or doesnt list the regions (Ryan
+               Rawson via Andrew Purtell)
+   HBASE-1293  hfile doesn't recycle decompressors (Ryan Rawson via Andrew
+               Purtell)
+   HBASE-1150  HMsg carries safemode flag; remove (Nitay Joffe via Stack)
+   HBASE-1232  zookeeper client wont reconnect if there is a problem (Nitay
+               Joffe via Andrew Purtell)
+   HBASE-1303  Secondary index configuration prevents HBase from starting
+               (Ken Weiner via Stack)
+   HBASE-1298  master.jsp & table.jsp do not URI Encode table or region
+               names in links (Lars George via Stack)
+   HBASE-1310  Off by one error in Bytes.vintToBytes
+   HBASE-1202  getRow does not always work when specifying number of versions
+   HBASE-1324  hbase-1234 broke testget2 unit test (and broke the build)
+   HBASE-1321  hbase-1234 broke TestCompaction; fix and reenable
+   HBASE-1330  binary keys broken on trunk (Ryan Rawson via Stack)
+   HBASE-1332  regionserver carrying .META. starts sucking all cpu, drives load
+               up - infinite loop? (Ryan Rawson via Stack)
+   HBASE-1334  .META. region running into hfile errors (Ryan Rawson via Stack)
+   HBASE-1338  lost use of compaction.dir; we were compacting into live store
+               subdirectory
+   HBASE-1058  Prevent runaway compactions
+   HBASE-1292  php thrift's getRow() would throw an exception if the row does
+               not exist (Rong-en Fan via Stack)
+   HBASE-1340  Fix new javadoc warnings (Evgeny Ryabitskiy via Stack)
+   HBASE-1287  Partitioner class not used in TableMapReduceUtil
+               .initTableReduceJob() (Lars George and Billy Pearson via Stack)
+   HBASE-1320  hbase-1234 broke filter tests
+   HBASE-1355  [performance] Cache family maxversions; we were calculating on
+               each access
+   HBASE-1358  Bug in reading from Memcache method (read only from snapshot)
+               (Evgeny Ryabitskiy via Stack)
+   HBASE-1322  hbase-1234 broke TestAtomicIncrement; fix and reenable
+               (Evgeny Ryabitskiy and Ryan Rawson via Stack)
+   HBASE-1347  HTable.incrementColumnValue does not take negative 'amount'
+               (Evgeny Ryabitskiy via Stack)
+   HBASE-1365  Typo in TableInputFormatBase.setInputColums (Jon Gray via Stack)
+   HBASE-1279  Fix the way hostnames and IPs are handled
+   HBASE-1368  HBASE-1279 broke the build
+   HBASE-1264  Wrong return values of comparators for ColumnValueFilter
+               (Thomas Schneider via Andrew Purtell)
+   HBASE-1374  NPE out of ZooKeeperWrapper.loadZooKeeperConfig
+   HBASE-1336  Splitting up the compare of family+column into 2 different
+               compare 
+   HBASE-1377  RS address is null in master web UI
+   HBASE-1344  WARN IllegalStateException: Cannot set a region as open if it
+               has not been pending
+   HBASE-1386  NPE in housekeeping
+   HBASE-1396  Remove unused sequencefile and mapfile config. from
+               hbase-default.xml
+   HBASE-1398  TableOperation doesnt format keys for meta scan properly
+               (Ryan Rawson via Stack)
+   HBASE-1399  Can't drop tables since HBASE-1398 (Ryan Rawson via Andrew
+               Purtell)
+   HBASE-1311  ZooKeeperWrapper: Failed to set watcher on ZNode /hbase/master
+               (Nitay Joffe via Stack)
+   HBASE-1391  NPE in TableInputFormatBase$TableRecordReader.restart if zoo.cfg
+               is wrong or missing on task trackers
+   HBASE-1323  hbase-1234 broke TestThriftServer; fix and reenable
+   HBASE-1425  ColumnValueFilter and WhileMatchFilter fixes on trunk
+               (Clint Morgan via Stack)
+   HBASE-1431  NPE in HTable.checkAndSave when row doesn't exist (Guilherme
+               Mauro Germoglio Barbosa via Andrew Purtell)
+   HBASE-1421  Processing a regionserver message -- OPEN, CLOSE, SPLIT, etc. --
+               and if we're carrying more than one message in payload, if
+               exception, all messages that follow are dropped on floor
+   HBASE-1434  Duplicate property in hbase-default.xml (Lars George via Andrew
+               Purtell)
+   HBASE-1435  HRegionServer is using wrong info bind address from
+               hbase-site.xml (Lars George via Stack)
+   HBASE-1438  HBASE-1421 broke the build (#602 up on hudson)
+   HBASE-1440  master won't go down because joined on a rootscanner that is
+               waiting for ever
+   HBASE-1441  NPE in ProcessRegionStatusChange#getMetaRegion
+   HBASE-1162  CME in Master in RegionManager.applyActions
+   HBASE-1010  IOE on regionserver shutdown because hadn't opened an HLog
+   HBASE-1415  Stuck on memcache flush
+   HBASE-1257  base64 encoded values are not contained in quotes during the
+               HBase REST JSON serialization (Brian Beggs via Stack)
+   HBASE-1436  Killing regionserver can make corrupted hfile
+   HBASE-1272  Unreadable log messages -- "... to the only server
+               localhost_1237525439599_56094" <- You'd have to be perverse
+               to recognize that as a hostname, startcode, and port
+   HBASE-1395  InfoServers no longer put up a UI
+   HBASE-1302  When a new master comes up, regionservers should continue with
+               their region assignments from the last master
+   HBASE-1457  Taking down ROOT/META regionserver can result in cluster
+               becoming in-operational (Ryan Rawson via Stack)
+   HBASE-1471  During cluster shutdown, deleting zookeeper regionserver nodes
+               causes exceptions
+   HBASE-1483  HLog split loses track of edits (Clint Morgan via Stack)
+   HBASE-1484  commit log split writes files with newest edits first
+               (since hbase-1430); should be other way round
+   HBASE-1493  New TableMapReduceUtil methods should be static (Billy Pearson
+               via Andrew Purtell)
+   HBASE-1486  BLOCKCACHE always on even when disabled (Lars George via Stack)
+   HBASE-1491  ZooKeeper errors: "Client has seen zxid 0xe our last zxid
+               is 0xd"
+   HBASE-1499  Fix javadoc warnings after HBASE-1304 commit (Lars George via
+               Stack)
+   HBASE-1504  Remove left-over debug from 1304 commit
+   HBASE-1518  Delete Trackers using compareRow, should just use raw
+               binary comparator (Jon Gray via Stack)
+   HBASE-1500  KeyValue$KeyComparator array overrun
+   HBASE-1513  Compactions too slow
+   HBASE-1516  Investigate if StoreScanner will not return the next row if 
+               earlied-out of previous row (Jon Gray)
+   HBASE-1520  StoreFileScanner catches and ignore IOExceptions from HFile
+   HBASE-1522  We delete splits before their time occasionally
+   HBASE-1523  NPE in BaseScanner
+   HBASE-1525  HTable.incrementColumnValue hangs()
+   HBASE-1526  mapreduce fixup
+   HBASE-1503  hbase-1304 dropped updating list of store files on flush
+               (jgray via stack)
+   HBASE-1480  compaction file not cleaned up after a crash/OOME server
+               (Evgeny Ryabitskiy via Stack)
+   HBASE-1529  familyMap not invalidated when a Result is (re)read as a
+               Writable
+   HBASE-1528  Ensure scanners work across memcache snapshot
+   HBASE-1447  Take last version of the hbase-1249 design doc. and make
+               documentation out of it
+   HBASE-1206  Scanner spins when there are concurrent inserts to column family
+   HBASE-1536  Controlled crash of regionserver not hosting meta/root leaves
+               master in spinning state, regions not reassigned
+   HBASE-1543  Unnecessary toString during scanning costs us some CPU
+   HBASE-1544  Cleanup HTable (Jonathan Gray via Stack)
+   HBASE-1488  After 1304 goes in, fix and reenable test of thrift, mr indexer,
+               and merge tool
+   HBASE-1531  Change new Get to use new filter API
+   HBASE-1549  in zookeeper.sh, use localhost instead of 127.0.0.1
+   HBASE-1534  Got ZooKeeper event, state: Disconnected on HRS and then NPE on
+               reinit
+   HBASE-1387  Before release verify all object sizes using Ryan's instrumented
+               JVM trick (Erik Holstad via Stack)
+   HBASE-1545  atomicIncrements creating new values with Long.MAX_VALUE
+   HBASE-1547  atomicIncrement doesnt increase hregion.memcacheSize
+   HBASE-1553  ClassSize missing in trunk
+   HBASE-1561  HTable Mismatch between javadoc and what it actually does
+   HBASE-1558  deletes use 'HConstants.LATEST_TIMESTAMP' but no one translates
+               that into 'now'
+   HBASE-1508  Shell "close_region" reveals a Master<>HRS problem, regions are
+               not reassigned
+   HBASE-1568  Client doesnt consult old row filter interface in
+               filterSaysStop() - could result in NPE or excessive scanning
+   HBASE-1564  in UI make host addresses all look the same -- not IP sometimes
+               and host at others
+   HBASE-1567  cant serialize new filters
+   HBASE-1585  More binary key/value log output cleanup
+               (Lars George via Stack)
+   HBASE-1563  incrementColumnValue does not write to WAL (Jon Gray via Stack)
+   HBASE-1569  rare race condition can take down a regionserver
+   HBASE-1450  Scripts passed to hbase shell do not have shell context set up
+               for them
+   HBASE-1566  using Scan(startRow,stopRow) will cause you to iterate the
+               entire table
+   HBASE-1560  TIF can't seem to find one region
+   HBASE-1580  Store scanner does not consult filter.filterRow at end of scan
+               (Clint Morgan via Stack)
+   HBASE-1437  broken links in hbase.org
+   HBASE-1582  Translate ColumnValueFilter and RowFilterSet to the new Filter
+               interface
+   HBASE-1594  Fix scan addcolumns after hbase-1385 commit (broke hudson build)
+   HBASE-1595  hadoop-default.xml and zoo.cfg in hbase jar
+   HBASE-1602  HRegionServer won't go down since we added in new LruBlockCache
+   HBASE-1608  TestCachedBlockQueue failing on some jvms (Jon Gray via Stack)
+   HBASE-1615  HBASE-1597 introduced a bug when compacting after a split
+               (Jon Gray via Stack)
+   HBASE-1616  Unit test of compacting referenced StoreFiles (Jon Gray via
+               Stack)
+   HBASE-1618  Investigate further into the MemStoreFlusher StoreFile limit
+               (Jon Gray via Stack)
+   HBASE-1625  Adding check to Put.add(KeyValue), to see that it has the same
+               row as when instantiated (Erik Holstad via Stack)
+   HBASE-1629  HRS unable to contact master
+   HBASE-1633  Can't delete in TRUNK shell; makes it hard doing admin repairs
+   HBASE-1641  Stargate build.xml causes error in Eclipse
+   HBASE-1627  TableInputFormatBase#nextKeyValue catches the wrong exception
+               (Doğacan Güney via Stack)
+   HBASE-1644  Result.row is cached in getRow; this breaks MapReduce
+               (Doğacan Güney via Stack)
+   HBASE-1639  clean checkout with empty hbase-site.xml, zk won't start
+   HBASE-1646  Scan-s can't set a Filter (Doğacan Güney via Stack)
+   HBASE-1649  ValueFilter may not reset its internal state
+               (Doğacan Güney via Stack)
+   HBASE-1651  client is broken, it requests ROOT region location from ZK too
+               much
+   HBASE-1650  HBASE-1551 broke the ability to manage non-regionserver
+               start-up/shut down. ie: you cant start/stop thrift on a cluster
+               anymore
+   HBASE-1658  Remove UI refresh -- its annoying
+   HBASE-1659  merge tool doesnt take binary regions with \x escape format
+   HBASE-1663  Request compaction only once instead of every time 500ms each
+               time we cycle the hstore.getStorefilesCount() >
+               this.blockingStoreFilesNumber loop
+   HBASE-1058  Disable 1058 on catalog tables
+   HBASE-1583  Start/Stop of large cluster untenable
+   HBASE-1668  hbase-1609 broke TestHRegion.testScanSplitOnRegion unit test
+   HBASE-1669  need dynamic extensibility of HBaseRPC code maps and interface
+               lists (Clint Morgan via Stack)
+   HBASE-1359  After truncating a large table HBase becomes unresponsive
+   HBASE-1215  0.19.0 -> 0.20.0 migration (hfile, HCD changes, HSK changes)
+   HBASE-1689  Fix javadoc warnings and add overview on client classes to
+               client package
+   HBASE-1680  FilterList writable only works for HBaseObjectWritable
+               defined types (Clint Morgan via Stack and Jon Gray)
+   HBASE-1607  transactions / indexing fixes: trx deletes not handled, index
+               scan can't specify stopRow (Clint Morgan via Stack)
+   HBASE-1693  NPE close_region ".META." in shell
+   HBASE-1706  META row with missing HRI breaks UI
+   HBASE-1709  Thrift getRowWithColumns doesn't accept column-family only
+               (Mathias Lehmann via Stack)
+   HBASE-1692  Web UI is extremely slow / freezes up if you have many tables
+   HBASE-1686  major compaction can create empty store files, causing AIOOB
+               when trying to read
+   HBASE-1705  Thrift server: deletes in mutateRow/s don't delete
+               (Tim Sell and Ryan Rawson via Stack)
+   HBASE-1703  ICVs across/during a flush can cause multiple keys with the
+               same TS (bad)
+   HBASE-1671  HBASE-1609 broke scanners riding across splits
+   HBASE-1717  Put on client-side uses passed-in byte[]s rather than always
+               using copies
+   HBASE-1647  Filter#filterRow is called too often, filters rows it shouldn't
+               have (Doğacan Güney via Ryan Rawson and Stack)
+   HBASE-1718  Reuse of KeyValue during log replay could cause the wrong
+               data to be used
+   HBASE-1573  Holes in master state change; updated startcode and server
+               go into .META. but catalog scanner just got old values (redux)
+   HBASE-1725  Old TableMap interface's definitions are not generic enough
+               (Doğacan Güney via Stack)
+   HBASE-1732  Flag to disable regionserver restart
+   HBASE-1727  HTD and HCD versions need update
+   HBASE-1604  HBaseClient.getConnection() may return a broken connection
+               without throwing an exception (Eugene Kirpichov via Stack)
+   HBASE-1737  Regions unbalanced when adding new node
+   HBASE-1739  hbase-1683 broke splitting; only split three logs no matter
+               what N was
+   HBASE-1745  [tools] Tool to kick region out of inTransition
+   HBASE-1757  REST server runs out of fds
+   HBASE-1768  REST server has upper limit of 5k PUT
+   HBASE-1766  Add advanced features to HFile.main() to be able to analyze
+               storefile problems
+   HBASE-1761  getclosest doesn't understand delete family; manifests as
+               "HRegionInfo was null or empty in .META" A.K.A the BS problem
+   HBASE-1738  Scanner doesnt reset when a snapshot is created, could miss
+               new updates into the 'kvset' (active part)
+   HBASE-1767  test zookeeper broken in trunk and 0.20 branch; broken on
+               hudson too
+   HBASE-1780  HTable.flushCommits clears write buffer in finally clause
+   HBASE-1784  Missing rows after medium intensity insert
+   HBASE-1809  NPE thrown in BoundedRangeFileInputStream
+   HBASE-1810  ConcurrentModificationException in region assignment
+               (Mathias Herberts via Stack)
+   HBASE-1804  Puts are permitted (and stored) when including an appended colon
+   HBASE-1715  Compaction failure in ScanWildcardColumnTracker.checkColumn
+   HBASE-2352  Small values for hbase.client.retries.number and
+               ipc.client.connect.max.retries breaks long ops in hbase shell
+               (Alexey Kovyrin via Stack)
+   HBASE-2531  32-bit encoding of regionnames waaaaaaayyyyy too susceptible to
+               hash clashes (Kannan Muthukkaruppan via Stack)
+
+  IMPROVEMENTS
+   HBASE-1089  Add count of regions on filesystem to master UI; add percentage
+               online as difference between whats open and whats on filesystem
+               (Samuel Guo via Stack)
+   HBASE-1130  PrefixRowFilter (Michael Gottesman via Stack)
+   HBASE-1139  Update Clover in build.xml
+   HBASE-876   There are a large number of Java warnings in HBase; part 1,
+               part 2, part 3, part 4, part 5, part 6, part 7 and part 8
+               (Evgeny Ryabitskiy via Stack)
+   HBASE-896   Update jruby from 1.1.2 to 1.1.6
+   HBASE-1031  Add the Zookeeper jar
+   HBASE-1142  Cleanup thrift server; remove Text and profuse DEBUG messaging
+               (Tim Sell via Stack)
+   HBASE-1064  HBase REST xml/json improvements (Brian Beggs working off
+               initial Michael Gottesman work via Stack)
+   HBASE-5121  Fix shell usage for format.width
+   HBASE-845   HCM.isTableEnabled doesn't really tell if it is, or not
+   HBASE-903   [shell] Can't set table descriptor attributes when I alter a
+               table
+   HBASE-1166  saveVersion.sh doesn't work with git (Nitay Joffe via Stack)
+   HBASE-1167  JSP doesn't work in a git checkout (Nitay Joffe via Andrew
+               Purtell)
+   HBASE-1178  Add shutdown command to shell
+   HBASE-1184  HColumnDescriptor is too restrictive with family names
+               (Toby White via Andrew Purtell)
+   HBASE-1180  Add missing import statements to SampleUploader and remove
+               unnecessary @Overrides (Ryan Smith via Andrew Purtell)
+   HBASE-1191  ZooKeeper ensureParentExists calls fail 
+               on absolute path (Nitay Joffe via Jean-Daniel Cryans)
+   HBASE-1187  After disabling/enabling a table, the regions seems to 
+               be assigned to only 1-2 region servers
+   HBASE-1210  Allow truncation of output for scan and get commands in shell
+               (Lars George via Stack)
+   HBASE-1221  When using ant -projecthelp to build HBase not all the important
+               options show up (Erik Holstad via Stack)
+   HBASE-1189  Changing the map type used internally for HbaseMapWritable
+               (Erik Holstad via Stack)
+   HBASE-1188  Memory size of Java Objects - Make cacheable objects implement
+               HeapSize (Erik Holstad via Stack)
+   HBASE-1230  Document installation of HBase on Windows
+   HBASE-1241  HBase additions to ZooKeeper part 1 (Nitay Joffe via JD)
+   HBASE-1231  Today, going from a RowResult to a BatchUpdate requires some
+               data processing even though they are pretty much the same thing
+               (Erik Holstad via Stack)
+   HBASE-1240  Would be nice if RowResult could be comparable
+               (Erik Holstad via Stack)
+   HBASE-803   Atomic increment operations (Ryan Rawson and Jon Gray via Stack)
+               Part 1 and part 2 -- fix for a crash.
+   HBASE-1252  Make atomic increment perform a binary increment
+               (Jonathan Gray via Stack)
+   HBASE-1258,1259 ganglia metrics for 'requests' is confusing
+               (Ryan Rawson via Stack)
+   HBASE-1265  HLogEdit static constants should be final (Nitay Joffe via
+               Stack)
+   HBASE-1244  ZooKeeperWrapper constants cleanup (Nitay Joffe via Stack)
+   HBASE-1262  Eclipse warnings, including performance related things like
+               synthetic accessors (Nitay Joffe via Stack)
+   HBASE-1273  ZooKeeper WARN spits out lots of useless messages
+               (Nitay Joffe via Stack)
+   HBASE-1285  Forcing compactions should be available via thrift
+               (Tim Sell via Stack)
+   HBASE-1186  Memory-aware Maps with LRU eviction for cell cache 
+               (Jonathan Gray via Andrew Purtell)
+   HBASE-1205  RegionServers should find new master when a new master comes up
+               (Nitay Joffe via Andrew Purtell)
+   HBASE-1309  HFile rejects key in Memcache with empty value
+   HBASE-1331  Lower the default scanner caching value
+   HBASE-1235  Add table enabled status to shell and UI
+               (Lars George via Stack)
+   HBASE-1333  RowCounter updates
+   HBASE-1195  If HBase directory exists but version file is inexistent, still
+               proceed with bootstrapping (Evgeny Ryabitskiy via Stack)
+   HBASE-1301  HTable.getRow() returns null if the row does not exist
+               (Rong-en Fan via Stack)
+   HBASE-1176  Javadocs in HBA should be clear about which functions are
+               asynchronous and which are synchronous
+               (Evgeny Ryabitskiy via Stack)
+   HBASE-1260  Bytes utility class changes: remove usage of ByteBuffer and
+               provide additional ByteBuffer primitives (Jon Gray via Stack)
+   HBASE-1183  New MR splitting algorithm and other new features need a way to
+               split a key range in N chunks (Jon Gray via Stack)
+   HBASE-1350  New method in HTable.java to return start and end keys for
+               regions in a table (Vimal Mathew via Stack)
+   HBASE-1271  Allow multiple tests to run on one machine
+               (Evgeny Ryabitskiy via Stack)
+   HBASE-1112  we will lose data if the table name happens to be the logs' dir
+               name (Samuel Guo via Stack)
+   HBASE-889   The current Thrift API does not allow a new scanner to be
+               created without supplying a column list unlike the other APIs.
+               (Tim Sell via Stack)
+   HBASE-1341  HTable pooler
+   HBASE-1379  re-enable LZO using hadoop-gpl-compression library
+               (Ryan Rawson via Stack)
+   HBASE-1383  hbase shell needs to warn on deleting multi-region table
+   HBASE-1286  Thrift should support next(nbRow) like functionality
+               (Alex Newman via Stack)
+   HBASE-1392  change how we build/configure lzocodec (Ryan Rawson via Stack)
+   HBASE-1397  Better distribution in the PerformanceEvaluation MapReduce
+               when rows run to the Billions
+   HBASE-1393  Narrow synchronization in HLog
+   HBASE-1404  minor edit of regionserver logging messages
+   HBASE-1405  Threads.shutdown has unnecessary branch
+   HBASE-1407  Changing internal structure of ImmutableBytesWritable
+               constructor (Erik Holstad via Stack)
+   HBASE-1345  Remove distributed mode from MiniZooKeeper (Nitay Joffe via
+               Stack)
+   HBASE-1414  Add server status logging chore to ServerManager
+   HBASE-1379  Make KeyValue implement Writable
+               (Erik Holstad and Jon Gray via Stack)
+   HBASE-1380  Make KeyValue implement HeapSize
+               (Erik Holstad and Jon Gray via Stack)
+   HBASE-1413  Fall back to filesystem block size default if HLog blocksize is
+               not specified
+   HBASE-1417  Cleanup disorientating RPC message
+   HBASE-1424  have shell print regioninfo and location on first load if
+               DEBUG enabled
+   HBASE-1008  [performance] The replay of logs on server crash takes way too
+               long
+   HBASE-1394  Uploads sometimes fall to 0 requests/second (Binding up on
+               HLog#append?)
+   HBASE-1429  Allow passing of a configuration object to HTablePool
+   HBASE-1432  LuceneDocumentWrapper is not public
+   HBASE-1401  close HLog (and open new one) if there haven't been edits in N
+               minutes/hours
+   HBASE-1420  add ability to add and remove (table) indexes on existing
+               tables (Clint Morgan via Stack)
+   HBASE-1430  Read the logs in batches during log splitting to avoid OOME
+   HBASE-1017  Region balancing does not bring newly added node within
+               acceptable range (Evgeny Ryabitskiy via Stack)
+   HBASE-1454  HBaseAdmin.getClusterStatus
+   HBASE-1236  Improve readability of table descriptions in the UI
+               (Lars George and Alex Newman via Stack)
+   HBASE-1455  Update DemoClient.py for thrift 1.0 (Tim Sell via Stack)
+   HBASE-1464  Add hbase.regionserver.logroll.period to hbase-default
+   HBASE-1192  LRU-style map for the block cache (Jon Gray and Ryan Rawson
+               via Stack)
+   HBASE-1466  Binary keys are not first class citizens
+               (Ryan Rawson via Stack)
+   HBASE-1445  Add the ability to start a master from any machine
+   HBASE-1474  Add zk attributes to list of attributes 
+               in master and regionserver UIs
+   HBASE-1448  Add a node in ZK to tell all masters to shutdown
+   HBASE-1478  Remove hbase master options from shell (Nitay Joffe via Stack)
+   HBASE-1462  hclient still seems to depend on master
+   HBASE-1143  region count erratic in master UI
+   HBASE-1490  Update ZooKeeper library
+   HBASE-1489  Basic git ignores for people who use git and eclipse
+   HBASE-1453  Add HADOOP-4681 to our bundled hadoop, add to 'getting started'
+               recommendation that hbase users backport 
+   HBASE-1507  iCMS as default JVM
+   HBASE-1509  Add explanation to shell "help" command on how to use binarykeys
+               (Lars George via Stack)
+   HBASE-1514  hfile inspection tool
+   HBASE-1329  Visibility into ZooKeeper
+   HBASE-867   If millions of columns in a column family, hbase scanner won't
+               come up (Jonathan Gray via Stack)
+   HBASE-1538  Up zookeeper timeout from 10 seconds to 30 seconds to cut down
+               on hbase-user traffic
+   HBASE-1539  prevent aborts due to missing zoo.cfg
+   HBASE-1488  Fix TestThriftServer and re-enable it
+   HBASE-1541  Scanning multiple column families in the presence of deleted 
+               families results in bad scans
+   HBASE-1540  Client delete unit test, define behavior
+               (Jonathan Gray via Stack)
+   HBASE-1552  provide version running on cluster via getClusterStatus
+   HBASE-1550  hbase-daemon.sh stop should provide more information when stop
+               command fails
+   HBASE-1515  Address part of config option hbase.regionserver unnecessary
+   HBASE-1532  UI Visibility into ZooKeeper
+   HBASE-1572  Zookeeper log4j property set to ERROR on default, same output
+               when cluster working and not working (Jon Gray via Stack)
+   HBASE-1576  TIF needs to be able to set scanner caching size for smaller
+               row tables & performance
+   HBASE-1577  Move memcache to ConcurrentSkipListMap from
+               ConcurrentSkipListSet
+   HBASE-1578  Change the name of the in-memory updates from 'memcache' to
+               'memtable' or....
+   HBASE-1562  How to handle the setting of 32 bit versus 64 bit machines
+               (Erik Holstad via Stack)
+   HBASE-1584  Put add methods should return this for ease of use (Be
+               consistent with Get) (Clint Morgan via Stack)
+   HBASE-1581  Run major compaction on .META. when table is dropped or
+               truncated
+   HBASE-1587  Update ganglia config and doc to account for ganglia 3.1 and
+               hadoop-4675
+   HBASE-1589  Up zk maxClientCnxns from default of 10 to 20 or 30 or so
+   HBASE-1385  Revamp TableInputFormat, needs updating to match hadoop 0.20.x
+               AND remove bit where we can make < maps than regions
+               (Lars George via Stack)
+   HBASE-1596  Remove WatcherWrapper and have all users of Zookeeper provide a
+               Watcher
+   HBASE-1597  Prevent unnecessary caching of blocks during compactions
+               (Jon Gray via Stack)
+   HBASE-1607  Redo MemStore heap sizing to be accurate, testable, and more
+               like new LruBlockCache (Jon Gray via Stack)
+   HBASE-1218  Implement in-memory column (Jon Gray via Stack)
+   HBASE-1606  Remove zoo.cfg, put config options into hbase-site.xml
+   HBASE-1575  HMaster does not handle ZK session expiration
+   HBASE-1620  Need to use special StoreScanner constructor for major
+               compactions (passed sf, no caching, etc) (Jon Gray via Stack)
+   HBASE-1624  Don't sort Puts if only one in list in HCM#processBatchOfRows
+   HBASE-1626  Allow emitting Deletes out of new TableReducer
+               (Lars George via Stack)
+   HBASE-1551  HBase should manage multiple node ZooKeeper quorum
+   HBASE-1637  Delete client class methods should return itself like Put, Get,
+               Scan (Jon Gray via Nitay)
+   HBASE-1640  Allow passing arguments to jruby script run when run by hbase
+               shell
+   HBASE-698   HLog recovery is not performed after master failure
+   HBASE-1643  ScanDeleteTracker takes comparator but it is unused
+   HBASE-1603  MR failed "RetriesExhaustedException: Trying to contact region
+               server Some server for region TestTable..." -- debugging
+   HBASE-1470  hbase and HADOOP-4379, dhruba's flush/sync
+   HBASE-1632  Write documentation for configuring/managing ZooKeeper
+   HBASE-1662  Tool to run major compaction on catalog regions when hbase is
+               shutdown
+   HBASE-1665  expose more load information to the client side
+   HBASE-1609  We wait on leases to expire before regionserver goes down.
+               Rather, just let client fail
+   HBASE-1655  Usability improvements to HTablePool (Ken Weiner via jgray)
+   HBASE-1688  Improve javadocs in Result and KeyValue
+   HBASE-1694  Add TOC to 'Getting Started', add references to THBase and
+               ITHBase
+   HBASE-1699  Remove hbrep example as it's too out of date
+               (Tim Sell via Stack)
+   HBASE-1683  OOME on master splitting logs; stuck, won't go down
+   HBASE-1704  Better zk error when failed connect
+   HBASE-1714  Thrift server: prefix scan API
+   HBASE-1719  hold a reference to the region in stores instead of only the
+               region info
+   HBASE-1743  [debug tool] Add regionsInTransition list to ClusterStatus
+               detailed output
+   HBASE-1772  Up the default ZK session timeout from 30 seconds to 60 seconds
+   HBASE-2625  Make testDynamicBloom()'s "randomness" deterministic
+               (Nicolas Spiegelberg via Stack)
+
+  OPTIMIZATIONS
+   HBASE-1412  Change values for delete column and column family in KeyValue
+   HBASE-1535  Add client ability to perform mutations without the WAL
+               (Jon Gray via Stack)
+   HBASE-1460  Concurrent LRU Block Cache (Jon Gray via Stack)
+   HBASE-1635  PerformanceEvaluation should use scanner prefetching
+
+Release 0.19.0 - 01/21/2009
+  INCOMPATIBLE CHANGES
+   HBASE-885   TableMap and TableReduce should be interfaces
+               (Doğacan Güney via Stack)
+   HBASE-905   Remove V5 migration classes from 0.19.0 (Jean-Daniel Cryans via
+               Jim Kellerman)
+   HBASE-852   Cannot scan all families in a row with a LIMIT, STARTROW, etc.
+               (Izaak Rubin via Stack)
+   HBASE-953   Enable BLOCKCACHE by default [WAS -> Reevaluate HBASE-288 block
+               caching work....?] -- Update your hbase-default.xml file!
+   HBASE-636   java6 as a requirement
+   HBASE-994   IPC interfaces with different versions can cause problems
+   HBASE-1028  If key does not exist, return null in getRow rather than an
+               empty RowResult
+   HBASE-1134  OOME in HMaster when HBaseRPC is older than 0.19
+
+  BUG FIXES
+   HBASE-891   HRS.validateValuesLength throws IOE, gets caught in the retries
+   HBASE-892   Cell iteration is broken (Doğacan Güney via Jim Kellerman)
+   HBASE-898   RowResult.containsKey(String) doesn't work
+               (Doğacan Güney via Jim Kellerman)
+   HBASE-906   [shell] Truncates output
+   HBASE-912   PE is broken when other tables exist
+   HBASE-853   [shell] Cannot describe meta tables (Izaak Rubin via Stack)
+   HBASE-844   Can't pass script to hbase shell 
+   HBASE-837   Add unit tests for ThriftServer.HBaseHandler (Izaak Rubin via
+               Stack)
+   HBASE-913   Classes using log4j directly
+   HBASE-914   MSG_REPORT_CLOSE has a byte array for a message
+   HBASE-918   Region balancing during startup makes cluster unstable
+   HBASE-921   region close and open processed out of order; makes for 
+               disagreement between master and regionserver on region state
+   HBASE-925   HRS NPE on way out if no master to connect to
+   HBASE-928   NPE throwing RetriesExhaustedException
+   HBASE-924   Update hadoop in lib on 0.18 hbase branch to 0.18.1
+   HBASE-929   Clarify that ttl in HColumnDescriptor is seconds
+   HBASE-930   RegionServer stuck: HLog: Could not append. Requesting close of
+               log java.io.IOException: Could not get block locations
+   HBASE-926   If no master, regionservers should hang out rather than fail on
+               connection and shut themselves down
+   HBASE-919   Master and Region Server need to provide root region location if
+               they are using HTable
+               With J-D's one line patch, test cases now appear to work and
+               PerformanceEvaluation works as before.
+   HBASE-939   NPE in HStoreKey
+   HBASE-945   Be consistent in use of qualified/unqualified mapfile paths
+   HBASE-946   Row with 55k deletes times out scanner lease
+   HBASE-950   HTable.commit no longer works with existing RowLocks though it's
+               still in API
+   HBASE-952   Deadlock in HRegion.batchUpdate
+   HBASE-954   Don't reassign root region until ProcessServerShutdown has split
+               the former region server's log
+   HBASE-957   PerformanceEvaluation tests if table exists by comparing
+               descriptors
+   HBASE-728,  HBASE-956, HBASE-955 Address thread naming, which threads are
+               Chores, vs Threads, make HLog manager the write ahead log and
+               not extend it to provide optional HLog sync operations.
+   HBASE-970   Update the copy/rename scripts to go against change API
+   HBASE-966   HBASE-748 misses some writes
+   HBASE-971   Fix the failing tests on Hudson
+   HBASE-973   [doc] In getting started, make it clear that hbase needs to
+               create its directory in hdfs
+   HBASE-963   Fix the retries in HTable.flushCommit
+   HBASE-969   Won't when storefile > 2G.
+   HBASE-976   HADOOP 0.19.0 RC0 is broke; replace with HEAD of branch-0.19
+   HBASE-977   Arcane HStoreKey comparator bug
+   HBASE-979   REST web app is not started automatically
+   HBASE-980   Undo core of HBASE-975, caching of start and end row
+   HBASE-982   Deleting a column in MapReduce fails (Doğacan Güney via
+               Stack)
+   HBASE-984   Fix javadoc warnings
+   HBASE-985   Fix javadoc warnings
+   HBASE-951   Either shut down master or let it finish cleanup
+   HBASE-964   Startup stuck "waiting for root region"
+   HBASE-964, HBASE-678 provide for safe-mode without locking up HBase "waiting
+               for root region"
+   HBASE-990   NoSuchElementException in flushSomeRegions; took two attempts.
+   HBASE-602   HBase Crash when network card has an IPv6 address
+   HBASE-996   Migration script to up the versions in catalog tables
+   HBASE-991   Update the mapred package document examples so they work with
+               TRUNK/0.19.0.
+   HBASE-1003  If cell exceeds TTL but not VERSIONs, will not be removed during
+               major compaction
+   HBASE-1005  Regex and string comparison operators for ColumnValueFilter
+   HBASE-910   Scanner misses columns / rows when the scanner is obtained
+               during a memcache flush
+   HBASE-1009  Master stuck in loop wanting to assign but regions are closing
+   HBASE-1016  Fix example in javadoc overview
+   HBASE-1021  hbase metrics FileContext not working
+   HBASE-1023  Check global flusher
+   HBASE-1036  HBASE-1028 broke Thrift
+   HBASE-1037  Some test cases failing on Windows/Cygwin but not UNIX/Linux
+   HBASE-1041  Migration throwing NPE
+   HBASE-1042  OOME but we don't abort; two part commit.
+   HBASE-927   We don't recover if HRS hosting -ROOT-/.META. goes down
+   HBASE-1029  REST wiki documentation incorrect
+               (Sishen Freecity via Stack)
+   HBASE-1043  Removing @Override attributes where they are no longer needed.
+               (Ryan Smith via Jim Kellerman)
+   HBASE-927   We don't recover if HRS hosting -ROOT-/.META. goes down -
+               (fix bug in createTable which caused tests to fail)
+   HBASE-1039  Compaction fails if bloomfilters are enabled
+   HBASE-1027  Make global flusher check work with percentages rather than
+               hard-coded memory sizes
+   HBASE-1000  Sleeper.sleep does not go back to sleep when interrupted
+               and no stop flag given.
+   HBASE-900   Regionserver memory leak causing OOME during relatively
+               modest bulk importing; part 1 and part 2
+   HBASE-1054  Index NPE on scanning (Clint Morgan via Andrew Purtell)
+   HBASE-1052  Stopping a HRegionServer with unflushed cache causes data loss
+               from org.apache.hadoop.hbase.DroppedSnapshotException
+   HBASE-1059  ConcurrentModificationException in notifyChangedReadersObservers
+   HBASE-1063  "File separator problem on Windows" (Max Lehn via Stack)
+   HBASE-1068  TestCompaction broken on hudson
+   HBASE-1067  TestRegionRebalancing broken by running of hdfs shutdown thread
+   HBASE-1070  Up default index interval in TRUNK and branch
+   HBASE-1045  Hangup by regionserver causes write to fail
+   HBASE-1079  Dumb NPE in ServerCallable hides the RetriesExhausted exception
+   HBASE-782   The DELETE key in the hbase shell deletes the wrong character
+               (Tim Sell via Stack)
+   HBASE-543,  HBASE-1046, HBASE-1051 A region's state is kept in several places
+               in the master opening the possibility for race conditions
+   HBASE-1087  DFS failures did not shutdown regionserver
+   HBASE-1072  Change Thread.join on exit to a timed Thread.join
+   HBASE-1098  IllegalStateException: Cannot set a region to be closed if it
+               was not already marked as closing
+   HBASE-1100  HBASE-1062 broke TestForceSplit
+   HBASE-1191  shell tools -> close_region does not work for regions that did
+               not deploy properly on startup
+   HBASE-1093  NPE in HStore#compact
+   HBASE-1097  SequenceFile.Reader keeps around buffer whose size is that of
+               largest item read -> results in lots of dead heap
+   HBASE-1107  NPE in HStoreScanner.updateReaders
+   HBASE-1083  Will keep scheduling major compactions if last time one ran, we
+               didn't.
+   HBASE-1101  NPE in HConnectionManager$TableServers.processBatchOfRows
+   HBASE-1099  Regions assigned while master is splitting logs of recently
+               crashed server; regionserver tries to execute incomplete log
+   HBASE-1104, HBASE-1098, HBASE-1096: Doubly-assigned regions redux,
+               IllegalStateException: Cannot set a region to be closed if it was
+               not already marked as closing, Does not recover if HRS carrying 
+               -ROOT- goes down
+   HBASE-1114  Weird NPEs compacting
+   HBASE-1116  generated web.xml and svn don't play nice together
+   HBASE-1119  ArrayOutOfBoundsException in HStore.compact
+   HBASE-1121  Cluster confused about where -ROOT- is
+   HBASE-1125  IllegalStateException: Cannot set a region to be closed if it was
+               not already marked as pending close
+   HBASE-1124  Balancer kicks in way too early
+   HBASE-1127  OOME running randomRead PE
+   HBASE-1132  Can't append to HLog, can't roll log, infinite cycle (another
+               spin on HBASE-930)
+
+  IMPROVEMENTS
+   HBASE-901   Add a limit to key length, check key and value length on client side
+   HBASE-890   Alter table operation and also related changes in REST interface
+               (Sishen Freecity via Stack)
+   HBASE-894   [shell] Should be able to copy-paste table description to create
+               new table (Sishen Freecity via Stack)
+   HBASE-886, HBASE-895 Sort the tables in the web UI, [shell] 'list' command
+               should emit a sorted list of tables (Krzysztof Szlapinski via Stack)
+   HBASE-884   Double and float converters for Bytes class
+               (Doğacan Güney via Stack)
+   HBASE-908   Add approximate counting to CountingBloomFilter
+               (Andrzej Bialecki via Stack)
+   HBASE-920   Make region balancing sloppier
+   HBASE-902   Add force compaction and force split operations to UI and Admin
+   HBASE-942   Add convenience methods to RowFilterSet
+               (Clint Morgan via Stack)
+   HBASE-943   to ColumnValueFilter: add filterIfColumnMissing property, add
+               SubString operator (Clint Morgan via Stack)
+   HBASE-937   Thrift getRow does not support specifying columns
+               (Doğacan Güney via Stack)
+   HBASE-959   Be able to get multiple RowResult at one time from client side
+               (Sishen Freecity via Stack)
+   HBASE-936   REST Interface: enable get number of rows from scanner interface
+               (Sishen Freecity via Stack)
+   HBASE-960   REST interface: more generic column family configure and also
+               get Rows using offset and limit (Sishen Freecity via Stack)
+   HBASE-817   Hbase/Shell Truncate
+   HBASE-949   Add an HBase Manual
+   HBASE-839   Update hadoop libs in hbase; move hbase TRUNK on to an hadoop
+               0.19.0 RC
+   HBASE-785   Remove InfoServer, use HADOOP-3824 StatusHttpServer 
+               instead (requires hadoop 0.19)
+   HBASE-81    When a scanner lease times out, throw a more "user friendly" exception
+   HBASE-978   Remove BloomFilterDescriptor. It is no longer used.
+   HBASE-975   Improve MapFile performance for start and end key
+   HBASE-961   Delete multiple columns by regular expression
+               (Samuel Guo via Stack)
+   HBASE-722   Shutdown and Compactions
+   HBASE-983   Declare Perl namespace in Hbase.thrift
+   HBASE-987   We need a Hbase Partitioner for TableMapReduceUtil.initTableReduceJob
+               MR Jobs (Billy Pearson via Stack)
+   HBASE-993   Turn off logging of every catalog table row entry on every scan
+   HBASE-992   Up the versions kept by catalog tables; currently 1. Make it 10?
+   HBASE-998   Narrow getClosestRowBefore by passing column family
+   HBASE-999   Up versions on historian and keep history of deleted regions for a
+               while rather than delete immediately
+   HBASE-938   Major compaction period is not checked periodically
+   HBASE-947   [Optimization] Major compaction should remove deletes as well as
+               the deleted cell
+   HBASE-675   Report correct server hosting a table split for assignment to
+               MR Jobs
+   HBASE-927   We don't recover if HRS hosting -ROOT-/.META. goes down
+   HBASE-1013  Add debugging around commit log cleanup
+   HBASE-972   Update hbase trunk to use released hadoop 0.19.0
+   HBASE-1022  Add storefile index size to hbase metrics
+   HBASE-1026  Tests in mapred are failing
+   HBASE-1020  Regionserver OOME handler should dump vital stats
+   HBASE-1018  Regionservers should report detailed health to master
+   HBASE-1034  Remove useless TestToString unit test
+   HBASE-1030  Bit of polish on HBASE-1018
+   HBASE-847   new API: HTable.getRow with numVersion specified
+               (Doğacan Güney via Stack)
+   HBASE-1048  HLog: Found 0 logs to remove out of total 1450; oldest
+               outstanding seqnum is 162297053 from region -ROOT-,,0
+   HBASE-1055  Better vm stats on startup
+   HBASE-1065  Minor logging improvements in the master
+   HBASE-1053  bring recent rpc changes down from hadoop
+   HBASE-1056  [migration] enable blockcaching on .META. table
+   HBASE-1069  Show whether HRegion major compacts or not in INFO level
+   HBASE-1066  Master should support close/open/reassignment/enable/disable
+               operations on individual regions
+   HBASE-1062  Compactions at (re)start on a large table can overwhelm DFS
+   HBASE-1102  boolean HTable.exists()
+   HBASE-1105  Remove duplicated code in HCM, add javadoc to RegionState, etc.
+   HBASE-1106  Expose getClosestRowBefore in HTable
+               (Michael Gottesman via Stack)
+   HBASE-1082  Administrative functions for table/region maintenance
+   HBASE-1090  Atomic Check And Save in HTable (Michael Gottesman via Stack)
+   HBASE-1137  Add note on xceivers count to overview documentation
+
+  NEW FEATURES
+   HBASE-875   Use MurmurHash instead of JenkinsHash [in bloomfilters]
+               (Andrzej Bialecki via Stack)
+   HBASE-625   Metrics support for cluster load history: emissions and graphs
+   HBASE-883   Secondary indexes (Clint Morgan via Andrew Purtell)
+   HBASE-728   Support for HLog appends
+
+  OPTIMIZATIONS
+   HBASE-748   Add an efficient way to batch update many rows
+   HBASE-887   Fix a hotspot in scanners
+   HBASE-967   [Optimization] Cache cell maximum length (HCD.getMaxValueLength);
+               it's used when checking batch size
+   HBASE-940   Make the TableOutputFormat batching-aware
+   HBASE-576   Investigate IPC performance
+
+Release 0.18.0 - September 21st, 2008
+
+  INCOMPATIBLE CHANGES
+   HBASE-697   Thrift idl needs update/edit to match new 0.2 API (and to fix bugs)
+               (Tim Sell via Stack)
+   HBASE-822   Update thrift README and HBase.thrift to use thrift 20080411
+               Updated all other languages examples (only python went in)
+
+  BUG FIXES
+   HBASE-881   Fixed bug when Master tries to reassign split or offline regions
+               from a dead server
+   HBASE-860   Fixed bug in IndexTableReduce when writing lucene index
+               fields.
+   HBASE-805   Remove unnecessary getRow overloads in HRS (Jonathan Gray via
+               Jim Kellerman) (Fix whitespace diffs in HRegionServer)
+   HBASE-811   HTD is not fully copyable (Andrew Purtell via Jim Kellerman)
+   HBASE-729   Client region/metadata cache should have a public method for
+               invalidating entries (Andrew Purtell via Stack)
+   HBASE-819   Remove DOS-style ^M carriage returns from all code where found
+               (Jonathan Gray via Jim Kellerman)
+   HBASE-818   Deadlock running 'flushSomeRegions' (Andrew Purtell via Stack)
+   HBASE-820   Need mainline to flush when 'Blocking updates' goes up.
+               (Jean-Daniel Cryans via Stack)
+   HBASE-821   UnknownScanner happens too often (Jean-Daniel Cryans via Stack)
+   HBASE-813   Add a row counter in the new shell (Jean-Daniel Cryans via Stack)
+   HBASE-824   Bug in HLog we print array of bytes for region name
+               (Billy Pearson via Stack)
+   HBASE-825   Master logs showing byte [] in place of string in logging
+               (Billy Pearson via Stack)
+   HBASE-808,809 MAX_VERSIONS not respected, and Deleteall doesn't and inserts
+               after delete don't work as expected
+               (Jean-Daniel Cryans via Stack)
+   HBASE-831   committing BatchUpdate with no row should complain
+               (Andrew Purtell via Jim Kellerman)
+   HBASE-833   Doing an insert with an unknown family throws a NPE in HRS
+   HBASE-810   Prevent temporary deadlocks when, during a scan with write
+               operations, the region splits (Jean-Daniel Cryans via Jim
+               Kellerman)
+   HBASE-843   Deleting and recreating a table in a single process does not work
+               (Jonathan Gray via Jim Kellerman)
+   HBASE-849   Speed improvement in JenkinsHash (Andrzej Bialecki via Stack)
+   HBASE-552   Bloom filter bugs (Andrzej Bialecki via Jim Kellerman)
+   HBASE-762   deleteFamily takes timestamp, should only take row and family.
+               Javadoc describes both cases but only implements the timestamp
+               case. (Jean-Daniel Cryans via Jim Kellerman)
+   HBASE-768   This message 'java.io.IOException: Install 0.1.x of hbase and run
+               its migration first' is useless (Jean-Daniel Cryans via Jim
+               Kellerman)
+   HBASE-826   Delete table followed by recreation results in honked table
+   HBASE-834   'Major' compactions and upper bound on files we compact at any
+               one time (Billy Pearson via Stack)
+   HBASE-836   Update thrift examples to work with changed IDL (HBASE-697)
+               (Toby White via Stack)
+   HBASE-854   hbase-841 broke build on hudson? - makes sure that proxies are
+               closed. (Andrew Purtell via Jim Kellerman)
+   HBASE-855   compaction can return fewer versions than we should in some cases
+               (Billy Pearson via Stack)
+   HBASE-832   Problem with row keys beginning with characters less than ','
+               and the region location cache
+   HBASE-864   Deadlock in regionserver
+   HBASE-865   Fix javadoc warnings (Rong-En Fan via Jim Kellerman)
+   HBASE-872   Getting exceptions in shell when creating/disabling tables
+   HBASE-868   Incrementing binary rows cause strange behavior once table
+               splits (Jonathan Gray via Stack)
+   HBASE-877   HCM is unable to find table with multiple regions which contains
+               binary (Jonathan Gray via Stack)
+
+  IMPROVEMENTS
+   HBASE-801  When a table hasn't been disabled, shell could respond in a
+              "user friendly" way.
+   HBASE-816  TableMap should survive USE (Andrew Purtell via Stack)
+   HBASE-812  Compaction needs little better skip algo (Daniel Leffel via Stack)
+   HBASE-806  Change HbaseMapWritable and RowResult to implement SortedMap
+              instead of Map (Jonathan Gray via Stack)
+   HBASE-795  More Table operation in TableHandler for REST interface: part 1
+              (Sishen Freecity via Stack)
+   HBASE-795  More Table operation in TableHandler for REST interface: part 2
+              (Sishen Freecity via Stack)
+   HBASE-830  Debugging HCM.locateRegionInMeta is painful
+   HBASE-784  Base hbase-0.3.0 on hadoop-0.18
+   HBASE-841  Consolidate multiple overloaded methods in HRegionInterface,
+              HRegionServer (Jean-Daniel Cryans via Jim Kellerman)
+   HBASE-840  More options on the row query in REST interface
+              (Sishen Freecity via Stack)
+   HBASE-874  deleting a table kills client rpc; no subsequent communication if
+              shell or thrift server, etc. (Jonathan Gray via Jim Kellerman)
+   HBASE-871  Major compaction periodicity should be specifiable at the column
+              family level, not cluster wide (Jonathan Gray via Stack)
+   HBASE-465  Fix javadoc for all public declarations
+   HBASE-882  The BatchUpdate class provides put(col, cell) and delete(col)
+              but no get() (Ryan Smith via Stack and Jim Kellerman)
+
+  NEW FEATURES
+   HBASE-787  Postgresql to HBase table replication example (Tim Sell via Stack)
+   HBASE-798  Provide Client API to explicitly lock and unlock rows (Jonathan
+              Gray via Jim Kellerman)
+   HBASE-798  Add missing classes: UnknownRowLockException and RowLock which
+              were present in previous versions of the patches for this issue,
+              but not in the version that was committed. Also fix a number of
+              compilation problems that were introduced by patch.
+   HBASE-669  MultiRegion transactions with Optimistic Concurrency Control
+              (Clint Morgan via Stack)
+   HBASE-842  Remove methods that have Text as a parameter and were deprecated
+              in 0.2.1 (Jean-Daniel Cryans via Jim Kellerman)
+
+  OPTIMIZATIONS
+
+Release 0.2.0 - August 8, 2008.
+
+  INCOMPATIBLE CHANGES
+   HBASE-584   Names in the filter interface are confusing (Clint Morgan via
+               Jim Kellerman) (API change for filters)
+   HBASE-601   Just remove deprecated methods in HTable; 0.2 is not backward
+               compatible anyways
+   HBASE-82    Row keys should be array of bytes
+   HBASE-76    Purge servers of Text (Done as part of HBASE-82 commit).
+   HBASE-487   Replace hql w/ a hbase-friendly jirb or jython shell
+               Part 1: purge of hql and added raw jirb in its place.
+   HBASE-521   Improve client scanner interface
+   HBASE-288   Add in-memory caching of data. Required update of hadoop to 
+               0.17.0-dev.2008-02-07_12-01-58. (Tom White via Stack) 
+   HBASE-696   Make bloomfilter true/false and self-sizing
+   HBASE-720   clean up inconsistencies around deletes (Izaak Rubin via Stack)
+   HBASE-796   Deprecates Text methods from HTable
+               (Michael Gottesman via Stack)
+
+  BUG FIXES
+   HBASE-574   HBase does not load hadoop native libs (Rong-En Fan via Stack)
+   HBASE-598   Logging, no .log file; all goes into .out
+   HBASE-622   Remove StaticTestEnvironment and put a log4j.properties in src/test
+   HBASE-624   Master will shut down if number of active region servers is zero
+               even if shutdown was not requested
+   HBASE-629   Split reports incorrect elapsed time
+   HBASE-623   Migration script for hbase-82
+   HBASE-630   Default hbase.rootdir is garbage
+   HBASE-589   Remove references to deprecated methods in Hadoop once
+               hadoop-0.17.0 is released
+   HBASE-638   Purge \r from src
+   HBASE-644   DroppedSnapshotException but RegionServer doesn't restart
+   HBASE-641   Improve master split logging
+   HBASE-642   Splitting log in a hostile environment -- bad hdfs -- we drop
+               write-ahead-log edits
+   HBASE-646   EOFException opening HStoreFile info file (spin on HBASE-645 and 550)
+   HBASE-648   If mapfile index is empty, run repair
+   HBASE-640   TestMigrate failing on hudson
+   HBASE-651   Table.commit should throw NoSuchColumnFamilyException if column
+               family doesn't exist
+   HBASE-649   API polluted with default and protected access data members and methods
+   HBASE-650   Add String versions of get, scanner, put in HTable
+   HBASE-656   Do not retry exceptions such as unknown scanner or illegal argument
+   HBASE-659   HLog#cacheFlushLock not cleared; hangs a region
+   HBASE-663   Incorrect sequence number for cache flush
+   HBASE-655   Need programmatic way to add column family: need programmatic way
+               to enable/disable table
+   HBASE-654   API HTable.getMetadata().addFamily shouldn't be exposed to user
+   HBASE-666   UnmodifyableHRegionInfo gives the wrong encoded name
+   HBASE-668   HBASE-533 broke build
+   HBASE-670   Historian deadlocks if regionserver is at global memory boundary
+               and is hosting .META.
+   HBASE-665   Server side scanner doesn't honor stop row
+   HBASE-662   UI in table.jsp gives META locations, not the table's regions
+               location (Jean-Daniel Cryans via Stack)
+   HBASE-676   Bytes.getInt returns a long (Clint Morgan via Stack)
+   HBASE-680   Config parameter hbase.io.index.interval  should be
+               hbase.index.interval, according to HBaseMapFile.HbaseWriter
+               (LN via Stack)
+   HBASE-682   Unnecessary iteration in HMemcache.internalGet? got much better
+               reading performance after breaking it (LN via Stack)
+   HBASE-686   MemcacheScanner didn't return the first row (if it exists),
+               because HScannerInterface's output is incorrect (LN via Jim Kellerman)
+   HBASE-691   get* and getScanner are different in how they treat column parameter
+   HBASE-694   HStore.rowAtOrBeforeFromMapFile() fails to locate the row if # of mapfiles >= 2
+               (Rong-En Fan via Bryan)
+   HBASE-652   dropping table fails silently if table isn't disabled
+   HBASE-683   cannot get svn revision # at build time if locale is not English
+               (Rong-En Fan via Stack)
+   HBASE-699   Fix TestMigrate up on Hudson
+   HBASE-615   Region balancer oscillates during cluster startup
+   HBASE-613   Timestamp-anchored scanning fails to find all records
+   HBASE-681   NPE in Memcache
+   HBASE-701   Showing bytes in log when should be String
+   HBASE-702   deleteall doesn't
+   HBASE-704   update new shell docs and commands on help menu
+   HBASE-709   Deadlock while rolling WAL-log while finishing flush
+   HBASE-710   If clocks are way off, then we can have daughter split come
+               before rather than after its parent in .META.
+   HBASE-714   Showing bytes in log when should be string (2)
+   HBASE-627   Disable table doesn't work reliably
+   HBASE-716   TestGet2.testGetClosestBefore fails with hadoop-0.17.1
+   HBASE-715   Base HBase 0.2 on Hadoop 0.17.1
+   HBASE-718   hbase shell help info
+   HBASE-717   alter table broke with new shell returns InvalidColumnNameException
+   HBASE-573   HBase does not read hadoop-*.xml for dfs configuration after 
+               moving out hadoop/contrib
+   HBASE-11    Unexpected exits corrupt DFS
+   HBASE-12    When hbase regionserver restarts, it says "impossible state for
+               createLease()"
+   HBASE-575   master dies with stack overflow error if rootdir isn't qualified
+   HBASE-582   HBase 554 forgot to clear results on each iteration caused by a filter
+               (Clint Morgan via Stack)
+   HBASE-532   Odd interaction between HRegion.get, HRegion.deleteAll and compactions
+   HBASE-10    HRegionServer hangs upon exit due to DFSClient Exception
+   HBASE-595   RowFilterInterface.rowProcessed() is called *before* the final
+               filtering decision is made (Clint Morgan via Stack)
+   HBASE-586   HRegion runs HStore memcache snapshotting -- fix it so only HStore
+               knows about workings of memcache
+   HBASE-588   Still a 'hole' in scanners, even after HBASE-532
+   HBASE-604   Don't allow CLASSPATH from environment to pollute the hbase CLASSPATH
+   HBASE-608   HRegionServer::getThisIP() checks hadoop config var for dns interface name
+               (Jim R. Wilson via Stack)
+   HBASE-609   Master doesn't see regionserver edits because of clock skew
+   HBASE-607   MultiRegionTable.makeMultiRegionTable is not deterministic enough
+               for regression tests
+   HBASE-405   TIF and TOF use log4j directly rather than apache commons-logging
+   HBASE-618   We always compact if 2 files, regardless of the compaction threshold setting
+   HBASE-619   Fix 'logs' link in UI
+   HBASE-478   offlining of table does not run reliably
+   HBASE-453   undeclared throwable exception from HTable.get
+   HBASE-620   testmergetool failing in branch and trunk since hbase-618 went in
+   HBASE-550   EOF trying to read reconstruction log stops region deployment
+   HBASE-551   Master stuck splitting server logs in shutdown loop; on each
+               iteration, edits are aggregated up into the millions
+   HBASE-505   Region assignments should never time out so long as the region
+               server reports that it is processing the open request
+   HBASE-561   HBase package does not include LICENSE.txt nor build.xml
+   HBASE-563   TestRowFilterAfterWrite erroneously sets master address to
+               0.0.0.0:60100 rather than relying on conf
+   HBASE-507   Use Callable pattern to sleep between retries
+   HBASE-564   Don't do a cache flush if there are zero entries in the cache.
+   HBASE-554   filters generate StackOverflowException
+   HBASE-567   Reused BatchUpdate instances accumulate BatchOperations
+   HBASE-577   NPE getting scanner
+   HBASE-19    CountingBloomFilter can overflow its storage
+               (Stu Hood and Bryan Duxbury via Stack)
+   HBASE-28    thrift put/mutateRow methods need to throw IllegalArgument
+               exceptions (Dave Simpson via Bryan Duxbury via Stack)
+   HBASE-2     hlog numbers should wrap around when they reach 999
+               (Bryan Duxbury via Stack)
+   HBASE-421   TestRegionServerExit broken
+   HBASE-426   hbase can't find remote filesystem
+   HBASE-437   Clear Command should use system.out (Edward Yoon via Stack)
+   HBASE-434, HBASE-435 TestTableIndex and TestTableMapReduce failed in Hudson builds
+   HBASE-446   Fully qualified hbase.rootdir doesn't work
+   HBASE-438   XMLOutputter state should be initialized. (Edward Yoon via Stack)
+   HBASE-8     Delete table does not remove the table directory in the FS
+   HBASE-428   Under continuous upload of rows, WrongRegionExceptions are thrown
+               that reach the client even after retries
+   HBASE-460   TestMigrate broken when HBase moved to subproject   
+   HBASE-462   Update migration tool
+   HBASE-473   When a table is deleted, master sends multiple close messages to
+               the region server
+   HBASE-490   Doubly-assigned .META.; master uses one and clients another
+   HBASE-492   hbase TRUNK does not build against hadoop TRUNK
+   HBASE-496   impossible state for createLease writes 400k lines in about 15mins
+   HBASE-472   Passing on edits, we dump all to log
+   HBASE-495   No server address listed in .META.
+   HBASE-433 HBASE-251 Region server should delete restore log after successful
+               restore, Stuck replaying the edits of crashed machine.
+   HBASE-27    hregioninfo cell empty in meta table
+   HBASE-501   Empty region server address in info:server entry and a
+               startcode of -1 in .META.
+   HBASE-516   HStoreFile.finalKey does not update the final key if it is not
+               the top region of a split region
+   HBASE-525   HTable.getRow(Text) does not work (Clint Morgan via Bryan Duxbury)
+   HBASE-524   Problems with getFull
+   HBASE-528   table 'does not exist' when it does
+   HBASE-531   Merge tool won't merge two overlapping regions (port HBASE-483 to
+               trunk)
+   HBASE-537   Wait for hdfs to exit safe mode
+   HBASE-476   RegexpRowFilter behaves incorrectly when there are multiple store
+               files (Clint Morgan via Jim Kellerman)
+   HBASE-527   RegexpRowFilter does not work when there are columns from 
+               multiple families (Clint Morgan via Jim Kellerman)
+   HBASE-534   Double-assignment at SPLIT-time
+   HBASE-712   midKey found compacting is the first, not necessarily the optimal
+   HBASE-719   Find out why users have network problems in HBase and not in Hadoop
+               and HConnectionManager (Jean-Daniel Cryans via Stack)
+   HBASE-703   Invalid regions listed by regionserver.jsp (Izaak Rubin via Stack)
+   HBASE-674   Memcache size unreliable
+   HBASE-726   Unit tests won't run because of a typo (Sebastien Rainville via Stack)
+   HBASE-727   Client caught in an infinite loop when trying to connect to cached
+               server locations (Izaak Rubin via Stack)
+   HBASE-732   shell formatting error with the describe command
+               (Izaak Rubin via Stack)
+   HBASE-731   delete, deletefc in HBase shell do not work correctly
+               (Izaak Rubin via Stack)
+   HBASE-734   scan '.META.', {LIMIT => 10} crashes (Izaak Rubin via Stack)
+   HBASE-736   Should have HTable.deleteAll(String row) and HTable.deleteAll(Text row)
+               (Jean-Daniel Cryans via Stack)
+   HBASE-740   ThriftServer getting table names incorrectly (Tim Sell via Stack)
+   HBASE-742   Rename getMetainfo in HTable as getTableDescriptor
+   HBASE-739   HBaseAdmin.createTable() using old HTableDescription doesn't work
+               (Izaak Rubin via Stack)
+   HBASE-744   BloomFilter serialization/deserialization broken
+   HBASE-742   Column length limit is not enforced (Jean-Daniel Cryans via Stack)
+   HBASE-737   Scanner: every cell in a row has the same timestamp
+   HBASE-700   hbase.io.index.interval needs to be configurable in column family
+               (Andrew Purtell via Stack)
+   HBASE-62    Allow user to add arbitrary key/value pairs to table and column
+               descriptors (Andrew Purtell via Stack)
+   HBASE-34    Set memcache flush size per column (Andrew Purtell via Stack)
+   HBASE-42    Set region split size on table creation (Andrew Purtell via Stack)
+   HBASE-43    Add a read-only attribute to columns (Andrew Purtell via Stack)
+   HBASE-424   Should be able to enable/disable .META. table
+   HBASE-679   Regionserver addresses are still not right in the new tables page
+   HBASE-758   Throwing IOE read-only when should be throwing NSRE
+   HBASE-743   bin/hbase migrate upgrade fails when redo logs exist
+   HBASE-754   The JRuby shell documentation is wrong in "get" and "put"
+               (Jean-Daniel Cryans via Stack)
+   HBASE-756   In HBase shell, the put command doesn't process the timestamp
+               (Jean-Daniel Cryans via Stack)
+   HBASE-757   REST mangles table names (Sishen via Stack)
+   HBASE-706   On OOME, regionserver sticks around and doesn't go down with cluster
+               (Jean-Daniel Cryans via Stack)
+   HBASE-759   TestMetaUtils failing on hudson
+   HBASE-761   IOE: Stream closed exception all over logs
+   HBASE-763   ClassCastException from RowResult.get(String)
+               (Andrew Purtell via Stack)
+   HBASE-764   The name of column request has padding zero using REST interface
+               (Sishen Freecity via Stack)
+   HBASE-750   NPE caused by StoreFileScanner.updateReaders
+   HBASE-769   TestMasterAdmin fails throwing RegionOfflineException when we're
+               expecting IllegalStateException
+   HBASE-766   FileNotFoundException trying to load HStoreFile 'data'
+   HBASE-770   Update HBaseRPC to match hadoop 0.17 RPC
+   HBASE-780   Can't scan '.META.' from new shell
+   HBASE-424   Should be able to enable/disable .META. table
+   HBASE-771   Names legal in 0.1 are not in 0.2; breaks migration
+   HBASE-788   Div by zero in Master.jsp (Clint Morgan via Jim Kellerman)
+   HBASE-791   RowCount doesn't work (Jean-Daniel Cryans via Stack)
+   HBASE-751   dfs exception and regionserver stuck during heavy write load
+   HBASE-793   HTable.getStartKeys() ignores table names when matching columns
+               (Andrew Purtell and Dru Jensen via Stack)
+   HBASE-790   During import, single region blocks requests for >10 minutes,
+               thread dumps, throws out pending requests, and continues
+               (Jonathan Gray via Stack)
+   
+  IMPROVEMENTS
+   HBASE-559   MR example job to count table rows
+   HBASE-596   DemoClient.py (Ivan Begtin via Stack)
+   HBASE-581   Allow adding filters to TableInputFormat (At same time, ensure TIF
+               is subclassable) (David Alves via Stack)
+   HBASE-603   When an exception bubbles out of getRegionServerWithRetries, wrap 
+               the exception with a RetriesExhaustedException
+   HBASE-600   Filters have excessive DEBUG logging
+   HBASE-611   regionserver should do basic health check before reporting
+               alls-well to the master
+   HBASE-614   Retiring regions is not used; exploit or remove
+   HBASE-538   Improve exceptions that come out on client-side
+   HBASE-569   DemoClient.php (Jim R. Wilson via Stack)
+   HBASE-522   Where new Text(string) might be used in client side method calls,
+               add an overload that takes String (Done as part of HBASE-82)
+   HBASE-570   Remove HQL unit test (Done as part of HBASE-82 commit).
+   HBASE-626   Use Visitor pattern in MetaRegion to reduce code clones in HTable
+               and HConnectionManager (Jean-Daniel Cryans via Stack)
+   HBASE-621   Make MAX_VERSIONS work like TTL: In scans and gets, check
+               MAX_VERSIONs setting and return that many only rather than wait on
+               compaction (Jean-Daniel Cryans via Stack)
+   HBASE-504   Allow HMsgs to carry a payload: e.g. exception that happened over
+               on the remote side.
+   HBASE-583   RangeRowFilter/ColumnValueFilter to allow choice of rows based on
+               a (lexicographic) comparison to column's values
+               (Clint Morgan via Stack)
+   HBASE-579   Add hadoop 0.17.x
+   HBASE-660   [Migration] addColumn/deleteColumn functionality in MetaUtils
+   HBASE-632   HTable.getMetadata is very inefficient
+   HBASE-671   New UI page displaying all regions in a table should be sorted
+   HBASE-672   Sort regions in the regionserver UI
+   HBASE-677   Make HTable, HRegion, HRegionServer, HStore, and HColumnDescriptor
+               subclassable (Clint Morgan via Stack)
+   HBASE-682   Regularize toString
+   HBASE-672   Sort regions in the regionserver UI
+   HBASE-469   Streamline HStore startup and compactions
+   HBASE-544   Purge startUpdate from internal code and test cases
+   HBASE-557   HTable.getRow() should receive RowResult objects
+   HBASE-452   "region offline" should throw IOException, not IllegalStateException
+   HBASE-541   Update hadoop jars.
+   HBASE-523   package-level javadoc should have example client
+   HBASE-415   Rewrite leases to use DelayedBlockingQueue instead of polling
+   HBASE-35    Make BatchUpdate public in the API
+   HBASE-409   Add build path to svn:ignore list (Edward Yoon via Stack)
+   HBASE-408   Add .classpath and .project to svn:ignore list
+               (Edward Yoon via Stack)
+   HBASE-410   Speed up the test suite (make test timeout 5 instead of 15 mins).
+   HBASE-281   Shell should allow deletions in .META. and -ROOT- tables
+               (Edward Yoon & Bryan Duxbury via Stack)
+   HBASE-56    Unnecessary HQLClient Object creation in a shell loop
+               (Edward Yoon via Stack)
+   HBASE-3     rest server: configure number of threads for jetty
+               (Bryan Duxbury via Stack)
+   HBASE-416   Add apache-style logging to REST server and add setting log
+               level, etc.
+   HBASE-406   Remove HTable and HConnection close methods
+               (Bryan Duxbury via Stack)
+   HBASE-418   Move HMaster and related classes into master package
+               (Bryan Duxbury via Stack)
+   HBASE-410   Speed up the test suite - Apparently test timeout was too
+               aggressive for Hudson. TestLogRolling timed out even though it
+               was operating properly. Change test timeout to 10 minutes.
+   HBASE-436   website: http://hadoop.apache.org/hbase
+   HBASE-417   Factor TableOperation and subclasses into separate files from
+               HMaster (Bryan Duxbury via Stack)
+   HBASE-440   Add optional log roll interval so that log files are garbage
+               collected
+   HBASE-407   Keep HRegionLocation information in LRU structure 
+   HBASE-444   hbase is very slow at determining table is not present
+   HBASE-438   XMLOutputter state should be initialized.
+   HBASE-414   Move client classes into client package
+   HBASE-79    When HBase needs to be migrated, it should display a message on
+               stdout, not just in the logs
+   HBASE-461   Simplify leases.
+   HBASE-419   Move RegionServer and related classes into regionserver package
+   HBASE-457   Factor Master into Master, RegionManager, and ServerManager
+   HBASE-464   HBASE-419 introduced javadoc errors
+   HBASE-468   Move HStoreKey back to o.a.h.h
+   HBASE-442   Move internal classes out of HRegionServer
+   HBASE-466   Move HMasterInterface, HRegionInterface, and 
+               HMasterRegionInterface into o.a.h.h.ipc
+   HBASE-479   Speed up TestLogRolling
+   HBASE-480   Tool to manually merge two regions
+   HBASE-477   Add support for an HBASE_CLASSPATH
+   HBASE-443   Move internal classes out of HStore
+   HBASE-515   At least double default timeouts between regionserver and master
+   HBASE-529   RegionServer needs to recover if datanode goes down
+   HBASE-456   Clearly state which ports need to be opened in order to run HBase
+   HBASE-536   Remove MiniDFS startup from MiniHBaseCluster
+   HBASE-521   Improve client scanner interface
+   HBASE-562   Move Exceptions to subpackages (Jean-Daniel Cryans via Stack)
+   HBASE-631   HTable.getRow() for only a column family
+               (Jean-Daniel Cryans via Stack)
+   HBASE-731   Add a meta refresh tag to the Web ui for master and region server
+               (Jean-Daniel Cryans via Stack)
+   HBASE-735   hbase shell doesn't trap CTRL-C signal (Jean-Daniel Cryans via Stack)
+   HBASE-730   On startup, rinse STARTCODE and SERVER from .META.
+               (Jean-Daniel Cryans via Stack)
+   HBASE-738   overview.html in need of updating (Izaak Rubin via Stack)
+   HBASE-745   scaling of one regionserver, improving memory and cpu usage (partial)
+               (LN via Stack)
+   HBASE-746   Batching row mutations via thrift (Tim Sell via Stack)
+   HBASE-772   Up default lease period from 60 to 120 seconds
+   HBASE-779   Test changing hbase.hregion.memcache.block.multiplier to 2
+   HBASE-783   For single row, single family retrieval, getRow() works half
+               as fast as getScanner().next() (Jean-Daniel Cryans via Stack)
+   HBASE-789   add clover coverage report targets (Rong-en Fan via Stack)
+
+  NEW FEATURES
+   HBASE-47    Option to set TTL for columns in hbase
+               (Andrew Purtell via Bryan Duxbury and Stack)
+   HBASE-23    UI listing regions should be sorted by address and show additional
+               region state (Jean-Daniel Cryans via Stack)
+   HBASE-639   Add HBaseAdmin.getTableDescriptor function
+   HBASE-533   Region Historian
+   HBASE-487   Replace hql w/ a hbase-friendly jirb or jython shell
+   HBASE-548   Tool to online single region
+   HBASE-71    Master should rebalance region assignments periodically
+   HBASE-512   Add configuration for global aggregate memcache size
+   HBASE-40    Add a method of getting multiple (but not all) cells for a row
+               at once
+   HBASE-506   When an exception has to escape ServerCallable due to exhausted
+               retries, show all the exceptions that lead to this situation
+   HBASE-747   Add a simple way to do batch updates of many rows (Jean-Daniel
+               Cryans via JimK)
+   HBASE-733   Enhance Cell so that it can contain multiple values at multiple
+               timestamps
+   HBASE-511   Do exponential backoff in clients on NSRE, WRE, ISE, etc.
+               (Andrew Purtell via Jim Kellerman)
+   
+  OPTIMIZATIONS
+   HBASE-430   Performance: Scanners and getRow return maps with duplicate data
+
+Release 0.1.3 - 07/25/2008
+
+  BUG FIXES
+   HBASE-644   DroppedSnapshotException but RegionServer doesn't restart
+   HBASE-645   EOFException opening region (HBASE-550 redux)
+   HBASE-641   Improve master split logging
+   HBASE-642   Splitting log in a hostile environment -- bad hdfs -- we drop
+               write-ahead-log edits
+   HBASE-646   EOFException opening HStoreFile info file (spin on HBASE-645 and 550)
+   HBASE-648   If mapfile index is empty, run repair
+   HBASE-659   HLog#cacheFlushLock not cleared; hangs a region
+   HBASE-663   Incorrect sequence number for cache flush
+   HBASE-652   Dropping table fails silently if table isn't disabled 
+   HBASE-674   Memcache size unreliable
+   HBASE-665   server side scanner doesn't honor stop row
+   HBASE-681   NPE in Memcache (Clint Morgan via Jim Kellerman)
+   HBASE-680   config parameter hbase.io.index.interval should be
+               hbase.index.interval, according to HBaseMapFile.HbaseWriter
+               (LN via Stack)
+   HBASE-684   Unnecessary iteration in HMemcache.internalGet? got much better
+               reading performance after breaking it (LN via Stack)
+   HBASE-686   MemcacheScanner didn't return the first row (if it exists),
+               because HScannerInterface's output is incorrect (LN via Jim Kellerman)
+   HBASE-613   Timestamp-anchored scanning fails to find all records
+   HBASE-709   Deadlock while rolling WAL-log while finishing flush
+   HBASE-707   High-load import of data into single table/family never triggers split
+   HBASE-710   If clocks are way off, then we can have daughter split come
+               before rather than after its parent in .META.
+
+Release 0.1.2 - 05/13/2008
+
+  BUG FIXES
+   HBASE-577   NPE getting scanner
+   HBASE-574   HBase does not load hadoop native libs (Rong-En Fan via Stack).
+   HBASE-11    Unexpected exits corrupt DFS - best we can do until we have at
+               least a subset of HADOOP-1700
+   HBASE-573   HBase does not read hadoop-*.xml for dfs configuration after
+               moving out hadoop/contrib
+   HBASE-12    when hbase regionserver restarts, it says "impossible state for
+               createLease()"
+   HBASE-575   master dies with stack overflow error if rootdir isn't qualified
+   HBASE-500   Regionserver stuck on exit
+   HBASE-582   HBase 554 forgot to clear results on each iteration caused by a filter
+               (Clint Morgan via Stack)
+   HBASE-532   Odd interaction between HRegion.get, HRegion.deleteAll and compactions
+   HBASE-590   HBase migration tool does not get correct FileSystem or root
+               directory if configuration is not correct
+   HBASE-595   RowFilterInterface.rowProcessed() is called *before* the final
+               filtering decision is made (Clint Morgan via Stack)
+   HBASE-586   HRegion runs HStore memcache snapshotting -- fix it so only HStore
+               knows about workings of memcache
+   HBASE-572   Backport HBASE-512 to 0.1 branch
+   HBASE-588   Still a 'hole' in scanners, even after HBASE-532
+   HBASE-604   Don't allow CLASSPATH from environment to pollute the hbase CLASSPATH
+   HBASE-608   HRegionServer::getThisIP() checks hadoop config var for dns interface name
+               (Jim R. Wilson via Stack)
+   HBASE-609   Master doesn't see regionserver edits because of clock skew
+   HBASE-607   MultiRegionTable.makeMultiRegionTable is not deterministic enough
+               for regression tests
+   HBASE-478   offlining of table does not run reliably
+   HBASE-618   We always compact if 2 files, regardless of the compaction threshold setting
+   HBASE-619   Fix 'logs' link in UI
+   HBASE-620   testmergetool failing in branch and trunk since hbase-618 went in
+   
+  IMPROVEMENTS
+   HBASE-559   MR example job to count table rows
+   HBASE-578   Upgrade branch to 0.16.3 hadoop.
+   HBASE-596   DemoClient.py (Ivan Begtin via Stack)
+
+
+Release 0.1.1 - 04/11/2008
+
+  BUG FIXES
+   HBASE-550   EOF trying to read reconstruction log stops region deployment
+   HBASE-551   Master stuck splitting server logs in shutdown loop; on each
+               iteration, edits are aggregated up into the millions
+   HBASE-505   Region assignments should never time out so long as the region
+               server reports that it is processing the open request
+   HBASE-552   Fix bloom filter bugs (Andrzej Bialecki via Jim Kellerman)
+   HBASE-507   Add sleep between retries
+   HBASE-555   Only one Worker in HRS; on startup, if assigned tens of regions,
+               havoc of reassignments because open processing is done in series
+   HBASE-547   UI shows hadoop version, not hbase version
+   HBASE-561   HBase package does not include LICENSE.txt nor build.xml
+   HBASE-556   Add 0.16.2 to hbase branch -- if it works
+   HBASE-563   TestRowFilterAfterWrite erroneously sets master address to
+               0.0.0.0:60100 rather than relying on conf
+   HBASE-554   filters generate StackOverflowException (Clint Morgan via
+               Jim Kellerman)
+   HBASE-567   Reused BatchUpdate instances accumulate BatchOperations
+
+  NEW FEATURES
+   HBASE-548   Tool to online single region
+
+Release 0.1.0
+
+  INCOMPATIBLE CHANGES
+   HADOOP-2750 Deprecated methods startBatchUpdate, commitBatch, abortBatch, 
+               and renewLease have been removed from HTable (Bryan Duxbury via
+               Jim Kellerman)
+   HADOOP-2786 Move hbase out of hadoop core
+   HBASE-403   Fix build after move of hbase in svn
+   HBASE-494   Up IPC version on 0.1 branch so we cannot mistakenly connect
+               with a hbase from 0.16.0
+
+  NEW FEATURES
+   HBASE-506   When an exception has to escape ServerCallable due to exhausted retries, 
+               show all the exceptions that lead to this situation
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+   HADOOP-2731 Under load, regions become extremely large and eventually cause
+               region servers to become unresponsive
+   HADOOP-2693 NPE in getClosestRowBefore (Bryan Duxbury & Stack)
+   HADOOP-2599 Some minor improvements to changes in HADOOP-2443
+               (Bryan Duxbury & Stack)
+   HADOOP-2773 Master marks region offline when it is recovering from a region
+               server death
+   HBASE-425   Fix doc. so it accommodates new hbase untethered context
+   HBASE-421   TestRegionServerExit broken
+   HBASE-426   hbase can't find remote filesystem
+   HBASE-446   Fully qualified hbase.rootdir doesn't work
+   HBASE-428   Under continuous upload of rows, WrongRegionExceptions are
+               thrown that reach the client even after retries
+   HBASE-490   Doubly-assigned .META.; master uses one and clients another
+   HBASE-496   impossible state for createLease writes 400k lines in about 15mins
+   HBASE-472   Passing on edits, we dump all to log
+   HBASE-79    When HBase needs to be migrated, it should display a message on
+               stdout, not just in the logs
+   HBASE-495   No server address listed in .META.
+   HBASE-433 HBASE-251 Region server should delete restore log after successful
+               restore, Stuck replaying the edits of crashed machine.
+   HBASE-27    hregioninfo cell empty in meta table
+   HBASE-501   Empty region server address in info:server entry and a
+               startcode of -1 in .META.
+   HBASE-516   HStoreFile.finalKey does not update the final key if it is not
+               the top region of a split region
+   HBASE-524   Problems with getFull
+   HBASE-514   table 'does not exist' when it does
+   HBASE-537   Wait for hdfs to exit safe mode
+   HBASE-534   Double-assignment at SPLIT-time
+   
+  IMPROVEMENTS
+   HADOOP-2555 Refactor the HTable#get and HTable#getRow methods to avoid
+               repetition of retry-on-failure logic (thanks to Peter Dolan and
+               Bryan Duxbury)
+   HBASE-281   Shell should allow deletions in .META. and -ROOT- tables
+   HBASE-480   Tool to manually merge two regions
+   HBASE-477   Add support for an HBASE_CLASSPATH
+   HBASE-515   At least double default timeouts between regionserver and master
+   HBASE-482   package-level javadoc should have example client or at least 
+               point at the FAQ
+   HBASE-497   RegionServer needs to recover if datanode goes down
+   HBASE-456   Clearly state which ports need to be opened in order to run HBase
+   HBASE-483   Merge tool won't merge two overlapping regions
+   HBASE-476   RegexpRowFilter behaves incorrectly when there are multiple store
+               files (Clint Morgan via Jim Kellerman)
+   HBASE-527   RegexpRowFilter does not work when there are columns from 
+               multiple families (Clint Morgan via Jim Kellerman)
+              
+Release 0.16.0
+
+  2008/02/04   HBase is now a subproject of Hadoop. The first HBase release as
+               a subproject will be release 0.1.0 which will be equivalent to
+               the version of HBase included in Hadoop 0.16.0. In order to
+               accomplish this, the HBase portion of HBASE-288 (formerly 
+               HADOOP-1398) has been backed out. Once 0.1.0 is frozen (depending
+               mostly on changes to infrastructure due to becoming a sub project
+               instead of a contrib project), this patch will re-appear on HBase
+               trunk.
+
+  INCOMPATIBLE CHANGES
+   HADOOP-2056 A table with row keys containing colon fails to split regions
+   HADOOP-2079 Fix generated HLog, HRegion names
+   HADOOP-2495 Minor performance improvements: Slim-down BatchOperation, etc. 
+   HADOOP-2506 Remove the algebra package
+   HADOOP-2519 Performance improvements: Customized RPC serialization
+   HADOOP-2478 Restructure how HBase lays out files in the file system (phase 1)
+               (test input data)
+   HADOOP-2478 Restructure how HBase lays out files in the file system (phase 2)
+               Includes migration tool org.apache.hadoop.hbase.util.Migrate
+   HADOOP-2558 org.onelab.filter.BloomFilter class uses 8X the memory it should
+               be using
+
+  NEW FEATURES
+    HADOOP-2061 Add new Base64 dialects
+    HADOOP-2084 Add a LocalHBaseCluster
+    HADOOP-2068 RESTful interface (Bryan Duxbury via Stack)
+    HADOOP-2316 Run REST servlet outside of master
+                (Bryan Duxbury & Stack)
+    HADOOP-1550 No means of deleting a 'row' (Bryan Duxbury via Stack)
+    HADOOP-2384 Delete all members of a column family on a specific row
+                (Bryan Duxbury via Stack)
+    HADOOP-2395 Implement "ALTER TABLE ... CHANGE column" operation
+                (Bryan Duxbury via Stack)
+    HADOOP-2240 Truncate for hbase (Edward Yoon via Stack)
+    HADOOP-2389 Provide multiple language bindings for HBase (Thrift)
+                (David Simpson via Stack)
+
+  OPTIMIZATIONS
+   HADOOP-2479 Save on number of Text object creations
+   HADOOP-2485 Make mapfile index interval configurable (Set default to 32
+               instead of 128)
+   HADOOP-2553 Don't make Long objects when calculating hbase type hash codes
+   HADOOP-2377 Holding open MapFile.Readers is expensive, so use less of them
+   HADOOP-2407 Keeping MapFile.Reader open is expensive: Part 2
+   HADOOP-2533 Performance: Scanning, just creating MapWritable in next
+               consumes >20% CPU
+   HADOOP-2443 Keep lazy cache of regions in client rather than an
+               'authoritative' list (Bryan Duxbury via Stack)
+   HADOOP-2600 Performance: HStore.getRowKeyAtOrBefore should use
+               MapFile.Reader#getClosest (before)
+               (Bryan Duxbury via Stack)
+
+  BUG FIXES
+   HADOOP-2059 In tests, exceptions in min dfs shutdown should not fail test
+               (e.g. nightly #272)
+   HADOOP-2064 TestSplit assertion and NPE failures (Patch build #952 and #953)
+   HADOOP-2124 Use of `hostname` does not work on Cygwin in some cases
+   HADOOP-2083 TestTableIndex failed in #970 and #956
+   HADOOP-2109 Fixed race condition in processing server lease timeout.
+   HADOOP-2137 hql.jsp : The character 0x19 is not valid
+   HADOOP-2109 Fix another race condition in processing dead servers,
+               Fix error online meta regions: was using region name and not
+               startKey as key for map.put. Change TestRegionServerExit to
+               always kill the region server for the META region. This makes
+               the test more deterministic and getting META reassigned was
+               problematic.
+   HADOOP-2155 Method expecting HBaseConfiguration throws NPE when given Configuration
+   HADOOP-2156 BufferUnderflowException for un-named HTableDescriptors
+   HADOOP-2161 getRow() is orders of magnitudes slower than get(), even on rows
+               with one column (Clint Morgan and Stack)
+   HADOOP-2040 Hudson hangs AFTER test has finished
+   HADOOP-2274 Excess synchronization introduced by HADOOP-2139 negatively
+               impacts performance
+   HADOOP-2196 Fix how hbase sits in hadoop 'package' product
+   HADOOP-2276 Address regression caused by HADOOP-2274, fix HADOOP-2173 (When
+               the master times out a region servers lease, the region server
+               may not restart)
+   HADOOP-2253 getRow can return HBASE::DELETEVAL cells
+               (Bryan Duxbury via Stack)
+   HADOOP-2295 Fix assigning a region to multiple servers
+   HADOOP-2234 TableInputFormat erroneously aggregates map values
+   HADOOP-2308 null regioninfo breaks meta scanner
+   HADOOP-2304 Abbreviated symbol parsing error of dir path in jar command
+               (Edward Yoon via Stack)
+   HADOOP-2320 Committed TestGet2 is mangled (breaks build).
+   HADOOP-2322 getRow(row, TS) client interface not properly connected
+   HADOOP-2309 ConcurrentModificationException doing get of all region start keys
+   HADOOP-2321 TestScanner2 does not release resources which sometimes cause the
+               test to time out
+   HADOOP-2315 REST servlet doesn't treat / characters in row key correctly
+               (Bryan Duxbury via Stack)
+   HADOOP-2332 Meta table data selection in Hbase Shell
+               (Edward Yoon via Stack)
+   HADOOP-2347 REST servlet not thread safe but run in a threaded manner
+               (Bryan Duxbury via Stack)
+   HADOOP-2365 Result of HashFunction.hash() contains all identical values
+   HADOOP-2362 Leaking hdfs file handle on region split
+   HADOOP-2338 Fix NullPointerException in master server.
+   HADOOP-2380 REST servlet throws NPE when any value node has an empty string
+               (Bryan Duxbury via Stack)
+   HADOOP-2350 Scanner api returns null row names, or skips row names if
+               different column families do not have entries for some rows
+   HADOOP-2283 AlreadyBeingCreatedException (Was: Stuck replay of failed
+               regionserver edits)
+   HADOOP-2392 TestRegionServerExit has new failure mode since HADOOP-2338
+   HADOOP-2324 Fix assertion failures in TestTableMapReduce
+   HADOOP-2396 NPE in HMaster.cancelLease
+   HADOOP-2397 The only time that a meta scanner should try to recover a log is
+               when the master is starting
+   HADOOP-2417 Fix critical shutdown problem introduced by HADOOP-2338
+   HADOOP-2418 Fix assertion failures in TestTableMapReduce, TestTableIndex,
+               and TestTableJoinMapReduce
+   HADOOP-2414 Fix ArrayIndexOutOfBoundsException in bloom filters.
+   HADOOP-2430 Master will not shut down if there are no active region servers
+   HADOOP-2199 Add tools for going from hregion filename to region name in logs
+   HADOOP-2441 Fix build failures in TestHBaseCluster
+   HADOOP-2451 End key is incorrectly assigned in many region splits
+   HADOOP-2455 Error in Help-string of CREATE command (Edward Yoon via Stack)
+   HADOOP-2465 When split parent regions are cleaned up, not all the columns are
+               deleted
+   HADOOP-2468 TestRegionServerExit failed in Hadoop-Nightly #338
+   HADOOP-2467 scanner truncates resultset when > 1 column families
+   HADOOP-2503 REST Insert / Select encoding issue (Bryan Duxbury via Stack)
+   HADOOP-2505 formatter classes missing apache license
+   HADOOP-2504 REST servlet method for deleting a scanner was not properly
+               mapped (Bryan Duxbury via Stack)
+   HADOOP-2507 REST servlet does not properly base64 row keys and column names
+               (Bryan Duxbury via Stack)
+   HADOOP-2530 Missing type in new hbase custom RPC serializer
+   HADOOP-2490 Failure in nightly #346 (Added debugging of hudson failures).
+   HADOOP-2558 fixes for build up on hudson (part 1, part 2, part 3, part 4)
+   HADOOP-2500 Unreadable region kills region servers
+   HADOOP-2579 Initializing a new HTable object against a nonexistent table
+               throws a NoServerForRegionException instead of a
+               TableNotFoundException when a different table has been created
+               previously (Bryan Duxbury via Stack)
+   HADOOP-2587 Splits blocked by compactions cause region to be offline for
+               duration of compaction. 
+   HADOOP-2592 Scanning, a region can let out a row that it's not supposed
+               to have
+   HADOOP-2493 hbase will split on row when the start and end row is the
+               same, causing data loss (Bryan Duxbury via Stack)
+   HADOOP-2629 Shell digests garbage without complaint
+   HADOOP-2619 Compaction errors after a region splits
+   HADOOP-2621 Memcache flush flushing every 60 secs without considering
+               the max memcache size
+   HADOOP-2584 Web UI displays an IOException instead of the Tables
+   HADOOP-2650 Remove Writables.clone and use WritableUtils.clone from
+               hadoop instead
+   HADOOP-2668 Documentation and improved logging so the fact that hbase now
+               requires migration comes as less of a surprise
+   HADOOP-2686 Removed tables stick around in .META.
+   HADOOP-2688 IllegalArgumentException processing a shutdown stops
+               server going down and results in millions of lines of output
+   HADOOP-2706 HBase Shell crash
+   HADOOP-2712 under load, regions won't split
+   HADOOP-2675 Options not passed to rest/thrift
+   HADOOP-2722 Prevent unintentional thread exit in region server and master
+   HADOOP-2718 Copy Constructor HBaseConfiguration(Configuration) will override
+               hbase configurations if argument is not an instance of
+               HBaseConfiguration.
+   HADOOP-2753 Back out 2718; programmatic config works but hbase*xml conf
+               is overridden
+   HADOOP-2718 Copy Constructor HBaseConfiguration(Configuration) will override
+               hbase configurations if argument is not an instance of
+               HBaseConfiguration (Put it back again).
+   HADOOP-2631 2443 breaks HTable.getStartKeys when there is more than one
+               table or the table you are enumerating isn't the first table
+   Delete empty file: src/contrib/hbase/src/java/org/apache/hadoop/hbase/mapred/
+               TableOutputCollector.java per Nigel Daley
+   
+  IMPROVEMENTS
+   HADOOP-2401 Add convenience put method that takes writable
+               (Johan Oskarsson via Stack)
+   HADOOP-2074 Simple switch to enable DEBUG level-logging in hbase
+   HADOOP-2088 Make hbase runnable in $HADOOP_HOME/build(/contrib/hbase)
+   HADOOP-2126 Use Bob Jenkins' hash for bloom filters
+   HADOOP-2157 Make Scanners implement Iterable
+   HADOOP-2176 Htable.deleteAll documentation is ambiguous
+   HADOOP-2139 (phase 1) Increase parallelism in region servers.
+   HADOOP-2267 [Hbase Shell] Change the prompt's title from 'hbase' to 'hql'.
+               (Edward Yoon via Stack)
+   HADOOP-2139 (phase 2) Make region server more event driven
+   HADOOP-2289 Useless efforts of looking for the non-existent table in select
+               command.
+               (Edward Yoon via Stack)
+   HADOOP-2257 Show a total of all requests and regions on the web ui
+               (Paul Saab via Stack)
+   HADOOP-2261 HTable.abort no longer throws exception if there is no active update.
+   HADOOP-2287 Make hbase unit tests take less time to complete.
+   HADOOP-2262 Retry n times instead of n**2 times.
+   HADOOP-1608 Relational Algebra Operators
+               (Edward Yoon via Stack)
+   HADOOP-2198 HTable should have method to return table metadata
+   HADOOP-2296 hbase shell: phantom columns show up from select command
+   HADOOP-2297 System.exit() Handling in hbase shell jar command
+               (Edward Yoon via Stack)
+   HADOOP-2224 Add HTable.getRow(ROW, ts)
+               (Bryan Duxbury via Stack)
+   HADOOP-2339 Delete command with no WHERE clause
+               (Edward Yoon via Stack)
+   HADOOP-2299 Support inclusive scans (Bryan Duxbury via Stack)
+   HADOOP-2333 Client side retries happen at the wrong level
+   HADOOP-2357 Compaction cleanup; less deleting + prevent possible file leaks
+   HADOOP-2392 TestRegionServerExit has new failure mode since HADOOP-2338
+   HADOOP-2370 Allow column families with an unlimited number of versions
+               (Edward Yoon via Stack)
+   HADOOP-2047 Add an '--master=X' and '--html' command-line parameters to shell
+               (Edward Yoon via Stack)
+   HADOOP-2351 If select command returns no result, it doesn't need to show the
+               header information (Edward Yoon via Stack)
+   HADOOP-2285 Add being able to shutdown regionservers (Dennis Kubes via Stack)
+   HADOOP-2458 HStoreFile.writeSplitInfo should just call 
+               HStoreFile.Reference.write
+   HADOOP-2471 Add reading/writing MapFile to PerformanceEvaluation suite
+   HADOOP-2522 Separate MapFile benchmark from PerformanceEvaluation
+               (Tom White via Stack)
+   HADOOP-2502 Insert/Select timestamp, Timestamp data type in HQL
+               (Edward Yoon via Stack)
+   HADOOP-2450 Show version (and svn revision) in hbase web ui
+   HADOOP-2472 Range selection using filter (Edward Yoon via Stack)
+   HADOOP-2548 Make TableMap and TableReduce generic
+               (Frederik Hedberg via Stack)
+   HADOOP-2557 Shell count function (Edward Yoon via Stack)
+   HADOOP-2589 Change class/package names from Shell to hql
+               (Edward Yoon via Stack)
+   HADOOP-2545 hbase rest server should be started with hbase-daemon.sh
+   HADOOP-2525 Same 2 lines repeated 11 million times in HMaster log upon
+               HMaster shutdown
+   HADOOP-2616 hbase not splitting when the total size of region reaches max
+               region size * 1.5
+   HADOOP-2643 Make migration tool smarter.
+   
+Release 0.15.1
+Branch 0.15
+
+  INCOMPATIBLE CHANGES
+    HADOOP-1931 Hbase scripts take --ARG=ARG_VALUE when they should be like
+                hadoop and do --ARG ARG_VALUE
+
+  NEW FEATURES
+    HADOOP-1768 FS command using Hadoop FsShell operations
+                (Edward Yoon via Stack)
+    HADOOP-1784 Delete: Fix scanners and gets so they work properly in presence
+                of deletes. Added a deleteAll to remove all cells equal to or
+                older than passed timestamp.  Fixed compaction so deleted cells
+                do not make it out into compacted output.  Ensure also that
+                versions > column max are dropped compacting.
+    HADOOP-1720 Addition of HQL (Hbase Query Language) support in Hbase Shell.
+                The old shell syntax has been replaced by HQL, a small SQL-like
+                set of operators for creating, altering, and dropping tables and
+                for inserting, deleting, and selecting data in hbase.
+                (Inchul Song and Edward Yoon via Stack)
+    HADOOP-1913 Build a Lucene index on an HBase table
+                (Ning Li via Stack)
+    HADOOP-1957 Web UI with report on cluster state and basic browsing of tables
+
+  OPTIMIZATIONS
+
+  BUG FIXES
+    HADOOP-1527 Region server won't start because logdir exists
+    HADOOP-1723 If master asks region server to shut down, by-pass return of
+                shutdown message
+    HADOOP-1729 Recent renaming of META tables breaks hbase shell
+    HADOOP-1730 unexpected null value causes META scanner to exit (silently)
+    HADOOP-1747 On a cluster, on restart, regions multiply assigned
+    HADOOP-1776 Fix for sporadic compaction failures closing and moving
+                compaction result
+    HADOOP-1780 Regions are still being doubly assigned
+    HADOOP-1797 Fix NPEs in MetaScanner constructor
+    HADOOP-1799 Incorrect classpath in binary version of Hadoop
+    HADOOP-1805 Region server hang on exit
+    HADOOP-1785 TableInputFormat.TableRecordReader.next has a bug
+                (Ning Li via Stack)
+    HADOOP-1800 output should default utf8 encoding
+    HADOOP-1801 When hdfs is yanked out from under hbase, hbase should go down gracefully
+    HADOOP-1813 OOME makes zombie of region server
+    HADOOP-1814 TestCleanRegionServerExit fails too often on Hudson
+    HADOOP-1820 Regionserver creates hlogs without bound
+                (reverted 2007/09/25) (Fixed 2007/09/30)
+    HADOOP-1821 Replace all String.getBytes() with String.getBytes("UTF-8")
+    HADOOP-1832 listTables() returns duplicate tables
+    HADOOP-1834 Scanners ignore timestamp passed on creation
+    HADOOP-1847 Many HBase tests do not fail well.
+    HADOOP-1847 Many HBase tests do not fail well. (phase 2)
+    HADOOP-1870 Once file system failure has been detected, don't check it again
+                and get on with shutting down the hbase cluster.
+    HADOOP-1888 NullPointerException in HMemcacheScanner (reprise)
+    HADOOP-1903 Possible data loss if Exception happens between snapshot and
+                flush to disk.
+    HADOOP-1920 Wrapper scripts broken when hadoop in one location and hbase in
+                another
+    HADOOP-1923, HADOOP-1924 a) tests fail sporadically because set up and tear
+                 down is inconsistent b) TestDFSAbort failed in nightly #242
+    HADOOP-1929 Add hbase-default.xml to hbase jar
+    HADOOP-1941 StopRowFilter throws NPE when passed null row
+    HADOOP-1966 Make HBase unit tests more reliable in the Hudson environment.
+    HADOOP-1975 HBase tests failing with java.lang.NumberFormatException
+    HADOOP-1990 Regression test instability affects nightly and patch builds
+    HADOOP-1996 TestHStoreFile fails on windows if run multiple times
+    HADOOP-1937 When the master times out a region server's lease, it is too 
+                aggressive in reclaiming the server's log.
+    HADOOP-2004 webapp hql formatting bugs 
+    HADOOP-2011 Make hbase daemon scripts take args in same order as hadoop
+                daemon scripts
+    HADOOP-2017 TestRegionServerAbort failure in patch build #903 and
+                nightly #266
+    HADOOP-2029 TestLogRolling fails too often in patch and nightlies
+    HADOOP-2038 TestCleanRegionExit failed in patch build #927
+
+  IMPROVEMENTS
+    HADOOP-1737 Make HColumnDescriptor data members publicly settable
+    HADOOP-1746 Clean up findbugs warnings
+    HADOOP-1757 Bloomfilters: single argument constructor, use enum for bloom
+                filter types
+    HADOOP-1760 Use new MapWritable and SortedMapWritable classes from
+                org.apache.hadoop.io
+    HADOOP-1793 (Phase 1) Remove TestHClient (Phase2) remove HClient.
+    HADOOP-1794 Remove deprecated APIs
+    HADOOP-1802 Startup scripts should wait until hdfs has cleared 'safe mode'
+    HADOOP-1833 bin/stop_hbase.sh returns before it completes
+                (Izaak Rubin via Stack) 
+    HADOOP-1835 Updated Documentation for HBase setup/installation
+                (Izaak Rubin via Stack)
+    HADOOP-1868 Make default configuration more responsive
+    HADOOP-1884 Remove useless debugging log messages from hbase.mapred
+    HADOOP-1856 Add Jar command to hbase shell using Hadoop RunJar util
+                (Edward Yoon via Stack)
+    HADOOP-1928 Have master pass the regionserver the filesystem to use
+    HADOOP-1789 Output formatting
+    HADOOP-1960 If a region server cannot talk to the master before its lease
+                times out, it should shut itself down
+    HADOOP-2035 Add logo to webapps
+
+
+Below are the list of changes before 2007-08-18
+
+  1. HADOOP-1384. HBase omnibus patch. (jimk, Vuk Ercegovac, and Michael Stack)
+  2. HADOOP-1402. Fix javadoc warnings in hbase contrib. (Michael Stack)
+  3. HADOOP-1404. HBase command-line shutdown failing (Michael Stack)
+  4. HADOOP-1397. Replace custom hbase locking with 
+     java.util.concurrent.locks.ReentrantLock (Michael Stack)
+  5. HADOOP-1403. HBase reliability - make master and region server more fault
+     tolerant.
+  6. HADOOP-1418. HBase miscellaneous: unit test for HClient, client to do
+     'Performance Evaluation', etc.
+  7. HADOOP-1420, HADOOP-1423. Findbugs changes, remove reference to removed 
+     class HLocking.
+  8. HADOOP-1424. TestHBaseCluster fails with IllegalMonitorStateException. Fix
+     regression introduced by HADOOP-1397.
+  9. HADOOP-1426. Make hbase scripts executable + add test classes to CLASSPATH.
+ 10. HADOOP-1430. HBase shutdown leaves regionservers up.
+ 11. HADOOP-1392. Part1: includes create/delete table; enable/disable table;
+     add/remove column.
+ 12. HADOOP-1392. Part2: includes table compaction by merging adjacent regions
+     that have shrunk in size.
+ 13. HADOOP-1445 Support updates across region splits and compactions
+ 14. HADOOP-1460 On shutdown IOException with complaint 'Cannot cancel lease
+     that is not held'
+ 15. HADOOP-1421 Failover detection, split log files.
+     For the files modified, also clean up javadoc, class, field and method 
+     visibility (HADOOP-1466)
+ 16. HADOOP-1479 Fix NPE in HStore#get if store file only has keys < passed key.
+ 17. HADOOP-1476 Distributed version of 'Performance Evaluation' script
+ 18. HADOOP-1469 Asynchronous table creation
+ 19. HADOOP-1415 Integrate BSD licensed bloom filter implementation.
+ 20. HADOOP-1465 Add cluster stop/start scripts for hbase
+ 21. HADOOP-1415 Provide configurable per-column bloom filters - part 2.
+ 22. HADOOP-1498. Replace boxed types with primitives in many places.
+ 23. HADOOP-1509.  Made methods/inner classes in HRegionServer and HClient protected
+     instead of private for easier extension. Also made HRegion and HRegionInfo public too.
+     Added an hbase-default.xml property for specifying what HRegionInterface extension to use
+     for proxy server connection. (James Kennedy via Jim Kellerman)
+ 24. HADOOP-1534. [hbase] Memcache scanner fails if start key not present
+ 25. HADOOP-1537. Catch exceptions in testCleanRegionServerExit so we can see
+     what is failing.
+ 26. HADOOP-1543 [hbase] Add HClient.tableExists
+ 27. HADOOP-1519 [hbase] map/reduce interface for HBase.  (Vuk Ercegovac and
+     Jim Kellerman)
+ 28. HADOOP-1523 Hung region server waiting on write locks 
+ 29. HADOOP-1560 NPE in MiniHBaseCluster on Windows
+ 30. HADOOP-1531 Add RowFilter to HRegion.HScanner
+     Adds a row filtering interface and two implementations: a page scanner,
+     and a regex row/column-data matcher. (James Kennedy via Stack)
+ 31. HADOOP-1566 Key-making utility
+ 32. HADOOP-1415 Provide configurable per-column bloom filters. 
+     HADOOP-1466 Clean up visibility and javadoc issues in HBase.
+ 33. HADOOP-1538 Provide capability for client specified time stamps in HBase
+     HADOOP-1466 Clean up visibility and javadoc issues in HBase.
+ 34. HADOOP-1589 Exception handling in HBase is broken over client server connections
+ 35. HADOOP-1375 a simple parser for hbase (Edward Yoon via Stack)
+ 36. HADOOP-1600 Update license in HBase code
+ 37. HADOOP-1589 Exception handling in HBase is broken over client server
+ 38. HADOOP-1574 Concurrent creates of a table named 'X' all succeed
+ 39. HADOOP-1581 Un-openable tablename bug
+ 40. HADOOP-1607 [shell] Clear screen command (Edward Yoon via Stack)
+ 41. HADOOP-1614 [hbase] HClient does not protect itself from simultaneous updates
+ 42. HADOOP-1468 Add HBase batch update to reduce RPC overhead
+ 43. HADOOP-1616 Sporadic TestTable failures
+ 44. HADOOP-1615 Replacing thread notification-based queue with 
+     java.util.concurrent.BlockingQueue in HMaster, HRegionServer
+ 45. HADOOP-1606 Updated implementation of RowFilterSet, RowFilterInterface
+     (Izaak Rubin via Stack)
+ 46. HADOOP-1579 Add new WhileMatchRowFilter and StopRowFilter filters
+    (Izaak Rubin via Stack)
+ 47. HADOOP-1637 Fix to HScanner to Support Filters, Add Filter Tests to
+     TestScanner2 (Izaak Rubin via Stack)
+ 48. HADOOP-1516 HClient fails to readjust when ROOT or META redeployed on new
+     region server
+ 49. HADOOP-1646 RegionServer OOME's under sustained, substantial loading by
+     10 concurrent clients
+ 50. HADOOP-1468 Add HBase batch update to reduce RPC overhead (restrict batches
+     to a single row at a time)
+ 51. HADOOP-1528 HClient for multiple tables (phase 1) (James Kennedy & JimK)
+ 52. HADOOP-1528 HClient for multiple tables (phase 2) all HBase client side code
+     (except TestHClient and HBaseShell) have been converted to use the new client
+     side objects (HTable/HBaseAdmin/HConnection) instead of HClient.
+ 53. HADOOP-1528 HClient for multiple tables - expose close table function
+ 54. HADOOP-1466 Clean up warnings, visibility and javadoc issues in HBase.
+ 55. HADOOP-1662 Make region splits faster
+ 56. HADOOP-1678 On region split, master should designate which host should 
+     serve daughter splits. Phase 1: Master balances load for new regions and
+     when a region server fails.
+ 57. HADOOP-1678 On region split, master should designate which host should 
+     serve daughter splits. Phase 2: Master assigns children of split region
+     instead of HRegionServer serving both children.
+ 58. HADOOP-1710 All updates should be batch updates
+ 59. HADOOP-1711 HTable API should use interfaces instead of concrete classes as
+     method parameters and return values
+ 60. HADOOP-1644 Compactions should not block updates
+ 61. HADOOP-1672 HBase Shell should use new client classes
+     (Edward Yoon via Stack).
+ 62. HADOOP-1709 Make HRegionInterface more like that of HTable
+     HADOOP-1725 Client find of table regions should not include offlined, split parents
diff --git a/0.90/LICENSE.txt b/0.90/LICENSE.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/0.90/LICENSE.txt
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/0.90/NOTICE.txt b/0.90/NOTICE.txt
new file mode 100644
index 0000000..68cbf65
--- /dev/null
+++ b/0.90/NOTICE.txt
@@ -0,0 +1,8 @@
+This product includes software developed by The Apache Software
+Foundation (http://www.apache.org/).
+
+In addition, this product includes software developed by:
+
+Facebook, Inc. (http://developers.facebook.com/thrift/ -- Page includes the Thrift Software License)
+
+JUnit (http://www.junit.org/)
diff --git a/0.90/README.txt b/0.90/README.txt
new file mode 100644
index 0000000..7cc60da
--- /dev/null
+++ b/0.90/README.txt
@@ -0,0 +1,30 @@
+Apache HBase [1] is an open-source, distributed, versioned, column-oriented
+store modeled after Google's Bigtable: A Distributed Storage System for
+Structured Data by Chang et al.[2]  Just as Bigtable leverages the distributed
+data storage provided by the Google File System, HBase provides Bigtable-like
+capabilities on top of Apache Hadoop [3].
+
+To get started using HBase, the full documentation for this release can be
+found under the docs/ directory that accompanies this README.  Using a browser,
+open docs/index.html to view the project home page (or browse to [1]).
+The hbase 'book' at docs/book.html has a 'quick start' section and is where you
+should begin your exploration of the hbase project.
+
+The latest HBase can be downloaded from an Apache Mirror [4].
+
+The source code can be found at [5].
+
+The HBase issue tracker is at [6].
+
+Apache HBase is made available under the Apache License, version 2.0 [7].
+
+The HBase mailing lists and archives are listed here [8].
+
+1. http://hbase.apache.org
+2. http://labs.google.com/papers/bigtable.html
+3. http://hadoop.apache.org
+4. http://www.apache.org/dyn/closer.cgi/hbase/
+5. http://hbase.apache.org/docs/current/source-repository.html
+6. http://hbase.apache.org/docs/current/issue-tracking.html
+7. http://hbase.apache.org/docs/current/license.html
+8. http://hbase.apache.org/docs/current/mail-lists.html
diff --git a/0.90/bin/add_table.rb b/0.90/bin/add_table.rb
new file mode 100644
index 0000000..9bf6a8c
--- /dev/null
+++ b/0.90/bin/add_table.rb
@@ -0,0 +1,147 @@
+#
+# Copyright 2009 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Script adds a table back to a running hbase.
+# Currently only works if table data is in place.
+#
+# To see usage for this script, run:
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main add_table.rb
+#
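+# For example, assuming the table's data directory under hbase.rootdir is
+# named 'mytable' (an illustrative name, not part of this release), the
+# invocation might look like:
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main add_table.rb /hbase/mytable
+#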
+include Java
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.regionserver.HRegion
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.commons.logging.LogFactory
+
+# Name of this script
+NAME = "add_table"
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s.rb TABLE_DIR [alternate_tablename]' % NAME
+  exit!
+end
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at an hdfs location.
+c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# Get a logger.
+LOG = LogFactory.getLog(NAME)
+
+# Check arguments
+if ARGV.size < 1 || ARGV.size > 2
+  usage
+end
+
+# Get cmdline args.
+srcdir = fs.makeQualified(Path.new(java.lang.String.new(ARGV[0])))
+
+if not fs.exists(srcdir)
+  raise IOError.new("src dir " + srcdir.toString() + " doesn't exist!")
+end
+
+# Get table name
+tableName = nil
+if ARGV.size > 1
+  tableName = ARGV[1]
+  raise IOError.new("Not supported yet")
+else
+  # If none provided use dirname
+  tableName = srcdir.getName()
+end
+HTableDescriptor.isLegalTableName(tableName.to_java_bytes)
+
+# Figure locations under hbase.rootdir
+# Move directories into place; be careful not to overwrite.
+rootdir = FSUtils.getRootDir(c)
+tableDir = fs.makeQualified(Path.new(rootdir, tableName))
+
+# If a directory is currently in place, move it aside.
+if srcdir.equals(tableDir)
+  LOG.info("Source directory is in place under hbase.rootdir: " + srcdir.toString());
+elsif fs.exists(tableDir)
+  movedTableName = tableName + "." + java.lang.System.currentTimeMillis().to_s
+  movedTableDir = Path.new(rootdir, java.lang.String.new(movedTableName))
+  LOG.warn("Moving " + tableDir.toString() + " aside as " + movedTableDir.toString());
+  raise IOError.new("Failed move of " + tableDir.toString()) unless fs.rename(tableDir, movedTableDir)
+  LOG.info("Moving " + srcdir.toString() + " to " + tableDir.toString());
+  raise IOError.new("Failed move of " + srcdir.toString()) unless fs.rename(srcdir, tableDir)
+end
+
+# Clean mentions of table from .META.
+# Scan the .META. and remove all lines that begin with tablename
+LOG.info("Deleting mention of " + tableName + " from .META.")
+metaTable = HTable.new(c, HConstants::META_TABLE_NAME)
+tableNameMetaPrefix = tableName + HConstants::META_ROW_DELIMITER.chr
+scan = Scan.new((tableNameMetaPrefix + HConstants::META_ROW_DELIMITER.chr).to_java_bytes)
+scanner = metaTable.getScanner(scan)
+# Use java.lang.String doing compares.  Ruby String is a bit odd.
+tableNameStr = java.lang.String.new(tableName)
+while (result = scanner.next())
+  rowid = Bytes.toString(result.getRow())
+  rowidStr = java.lang.String.new(rowid)
+  if not rowidStr.startsWith(tableNameMetaPrefix)
+    # Gone too far, break
+    break
+  end
+  LOG.info("Deleting row from catalog: " + rowid);
+  d = Delete.new(result.getRow())
+  metaTable.delete(d)
+end
+scanner.close()
+
+# Now, walk the table and per region, add an entry
+LOG.info("Walking " + srcdir.toString() + " adding regions to catalog table")
+statuses = fs.listStatus(srcdir)
+for status in statuses
+  next unless status.isDir()
+  next if status.getPath().getName() == "compaction.dir"
+  regioninfofile =  Path.new(status.getPath(), HRegion::REGIONINFO_FILE)
+  unless fs.exists(regioninfofile)
+    LOG.warn("Missing .regioninfo: " + regioninfofile.toString())
+    next
+  end
+  is = fs.open(regioninfofile)
+  hri = HRegionInfo.new()
+  hri.readFields(is)
+  is.close()
+  # TODO: Need to redo table descriptor with passed table name and then recalculate the region encoded names.
+  p = Put.new(hri.getRegionName())
+  p.add(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER, Writables.getBytes(hri))
+  metaTable.put(p)
+  LOG.info("Added to catalog: " + hri.toString())
+end
diff --git a/0.90/bin/check_meta.rb b/0.90/bin/check_meta.rb
new file mode 100644
index 0000000..d874922
--- /dev/null
+++ b/0.90/bin/check_meta.rb
@@ -0,0 +1,160 @@
+# Script looks at hbase .META. table verifying its content is coherent.
+# 
+# To see usage for this script, run: 
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main check_meta.rb --help
+#
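+# As an illustration (invocations below are examples, not taken from this
+# release's docs), a typical check and a follow-up repair attempt would be:
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main check_meta.rb
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main check_meta.rb --fix
+#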
+
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+include Java
+import org.apache.commons.logging.LogFactory
+import org.apache.hadoop.hbase.util.VersionInfo
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.fs.FileSystem
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.client.Put
+
+# Name of this script
+NAME = 'check_meta'
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s.rb [--fix]' % NAME
+  puts ' --fix   Try to fix up meta issues'
+  puts 'Script checks consistency of the .META. table.  It reports if .META. has missing entries.'
+  puts 'If you pass "--fix", it will try looking in the filesystem for the dropped region and if it'
+  puts 'finds a likely candidate, it will try plugging the .META. hole.'
+  exit!
+end
+
+def isFixup
+  # Are we to do fixup during this run
+  usage if ARGV.size > 1
+  fixup = nil
+  if ARGV.size == 1
+    usage unless ARGV[0].downcase.match('--fix.*')
+    fixup = 1
+  end
+  return fixup
+end
+
+def getConfiguration
+  hbase_twenty = VersionInfo.getVersion().match('0\.20\..*')
+  # Get configuration to use.
+  if hbase_twenty
+    c = HBaseConfiguration.new()
+  else
+    c = HBaseConfiguration.create()
+  end
+  # Set hadoop filesystem configuration using the hbase.rootdir.
+  # Otherwise, we'll always use localhost though the hbase.rootdir
+  # might be pointing at an hdfs location. Do old and new key for fs.
+  c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+  c.set("fs.defaultFS", c.get(HConstants::HBASE_DIR))
+  return c
+end
+
+def fixup(leftEdge, rightEdge, metatable, fs, rootdir)
+  plugged = nil
+  # Try and fix the passed holes in meta.
+  tabledir = HTableDescriptor::getTableDir(rootdir, leftEdge.getTableDesc().getName())
+  statuses = fs.listStatus(tabledir) 
+  for status in statuses
+    next unless status.isDir()
+    next if status.getPath().getName() == "compaction.dir"
+    regioninfofile =  Path.new(status.getPath(), ".regioninfo")
+    unless fs.exists(regioninfofile)
+      LOG.warn("Missing .regioninfo: " + regioninfofile.toString())
+      next
+    end
+    is = fs.open(regioninfofile) 
+    hri = HRegionInfo.new()
+    hri.readFields(is)
+    is.close() 
+    next unless Bytes.equals(leftEdge.getEndKey(), hri.getStartKey())
+    # TODO: Check against right edge to make sure this addition does not overflow right edge. 
+    # TODO: Check that the schema matches both left and right edges schemas.
+    p = Put.new(hri.getRegionName())
+    p.add(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER, Writables.getBytes(hri))
+    metatable.put(p)
+    LOG.info("Plugged hole in .META. at: " + hri.toString())
+    plugged = true
+  end
+  return plugged
+end
+
+fixup = isFixup()
+
+# Get configuration
+conf = getConfiguration()
+
+# Filesystem
+fs = FileSystem.get(conf)
+
+# Rootdir
+rootdir = FSUtils.getRootDir(conf)
+
+# Get a logger.
+LOG = LogFactory.getLog(NAME)
+
+# Scan the .META. looking for holes
+metatable = HTable.new(conf, HConstants::META_TABLE_NAME)
+scan = Scan.new()
+scanner = metatable.getScanner(scan)
+oldHRI = nil
+bad = nil 
+while (result = scanner.next())
+  rowid = Bytes.toString(result.getRow())
+  rowidStr = java.lang.String.new(rowid)
+  bytes = result.getValue(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER)
+  hri = Writables.getHRegionInfo(bytes)
+  if oldHRI
+    if oldHRI.isOffline() && Bytes.equals(oldHRI.getStartKey(), hri.getStartKey())
+      # Presume offlined parent
+    elsif Bytes.equals(oldHRI.getEndKey(), hri.getStartKey())
+      # Start key of next matches end key of previous
+    else
+      LOG.warn("hole after " + oldHRI.toString())
+      if fixup
+        bad = 1 unless fixup(oldHRI, hri, metatable, fs, rootdir)
+      else
+        bad = 1
+      end
+    end
+  end 
+  oldHRI = hri
+end
+scanner.close()
+if bad
+  LOG.info(".META. has holes")
+else
+  LOG.info(".META. is healthy")
+end
+
+# Return 0 if meta is good, else non-zero.
+exit(bad ? 1 : 0)
diff --git a/0.90/bin/copy_table.rb b/0.90/bin/copy_table.rb
new file mode 100644
index 0000000..97c8756
--- /dev/null
+++ b/0.90/bin/copy_table.rb
@@ -0,0 +1,168 @@
+#
+# Copyright 2009 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Script that copies a table in hbase.  As written, it will not work for the
+# rare case where there is more than one region in the .META. table.  It
+# updates the hbase .META. and copies the directories in the filesystem.
+# HBase MUST be shut down when you run this script.
+#
+# To see usage for this script, run: 
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main copy_table.rb
+#
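+# For example, to copy a table named 'oldtable' to 'newtable' (illustrative
+# names) while the cluster is shut down:
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main copy_table.rb oldtable newtable
+#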
+include Java
+import org.apache.hadoop.hbase.util.MetaUtils
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HStoreKey
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.regionserver.HLogEdit
+import org.apache.hadoop.hbase.regionserver.HRegion
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.hadoop.fs.FileUtil
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+import java.util.TreeMap
+
+# Name of this script
+NAME = "copy_table"
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s.rb <OLD_NAME> <NEW_NAME>' % NAME
+  exit!
+end
+
+# Passed 'dir' exists and is a directory else exception
+def isDirExists(fs, dir)
+  raise IOError.new("Does not exit: " + dir.toString()) unless fs.exists(dir)
+  raise IOError.new("Not a directory: " + dir.toString()) unless fs.isDirectory(dir)
+end
+
+# Returns true if the region belongs to passed table
+def isTableRegion(tableName, hri)
+  return Bytes.equals(hri.getTableDesc().getName(), tableName)
+end
+
+# Create new HRI based off passed 'oldHRI'
+def createHRI(tableName, oldHRI)
+  htd = oldHRI.getTableDesc()
+  newHtd = HTableDescriptor.new(tableName)
+  for family in htd.getFamilies()
+    newHtd.addFamily(family)
+  end
+  return HRegionInfo.new(newHtd, oldHRI.getStartKey(), oldHRI.getEndKey(),
+    oldHRI.isSplit())
+end
+
+# Check arguments
+if ARGV.size != 2
+  usage
+end
+
+# Check good table names were passed.
+oldTableName = HTableDescriptor.isLegalTableName(ARGV[0].to_java_bytes)
+newTableName = HTableDescriptor.isLegalTableName(ARGV[1].to_java_bytes)
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at an hdfs location.
+c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# If the new table directory does not exist, create it.  Keep going if it
+# already exists because maybe we are rerunning the script after it failed the
+# first time.
+rootdir = FSUtils.getRootDir(c)
+oldTableDir = Path.new(rootdir, Path.new(Bytes.toString(oldTableName)))
+isDirExists(fs, oldTableDir)
+newTableDir = Path.new(rootdir, Bytes.toString(newTableName))
+if !fs.exists(newTableDir)
+  fs.mkdirs(newTableDir)
+end
+
+# Get a logger and a metautils instance.
+LOG = LogFactory.getLog(NAME)
+utils = MetaUtils.new(c)
+
+# Start.  Get all meta rows.
+begin
+  # Get list of all .META. regions that contain old table name
+  metas = utils.getMETARows(oldTableName)
+  index = 0
+  for meta in metas
+    # For each row we find, move its region from old to new table.
+    # Need to update the encoded name in the hri as we move.
+    # After move, delete old entry and create a new.
+    LOG.info("Scanning " + meta.getRegionNameAsString())
+    metaRegion = utils.getMetaRegion(meta)
+    scanner = metaRegion.getScanner(HConstants::COL_REGIONINFO_ARRAY, oldTableName,
+      HConstants::LATEST_TIMESTAMP, nil) 
+    begin
+      key = HStoreKey.new()
+      value = TreeMap.new(Bytes.BYTES_COMPARATOR)
+      while scanner.next(key, value)
+        index = index + 1
+        keyStr = key.toString()
+        oldHRI = Writables.getHRegionInfo(value.get(HConstants::COL_REGIONINFO))
+        if !oldHRI
+          raise IOError.new(index.to_s + " HRegionInfo is null for " + keyStr)
+        end
+        unless isTableRegion(oldTableName, oldHRI)
+          # If here, we have passed beyond the table.  Break.
+          break
+        end
+        oldRDir = Path.new(oldTableDir, Path.new(oldHRI.getEncodedName().to_s))
+        if !fs.exists(oldRDir)
+          LOG.warn(oldRDir.toString() + " does not exist -- region " +
+            oldHRI.getRegionNameAsString())
+        else
+           # Now make a new HRegionInfo to add to .META. for the new region.
+          newHRI = createHRI(newTableName, oldHRI)
+          newRDir = Path.new(newTableDir, Path.new(newHRI.getEncodedName().to_s))
+          # Move the region in filesystem
+          LOG.info("Copying " + oldRDir.toString() + " as " + newRDir.toString())
+          FileUtil.copy(fs, oldRDir, fs, newRDir, false, true, c)
+          # Create 'new' region
+          newR = HRegion.new(rootdir, utils.getLog(), fs, c, newHRI, nil)
+          # Add new row. NOTE: Presumption is that only one .META. region. If not,
+          # need to do the work to figure proper region to add this new region to.
+          LOG.info("Adding to meta: " + newR.toString())
+          HRegion.addRegionToMETA(metaRegion, newR)
+          LOG.info("Done copying: " + Bytes.toString(key.getRow()))
+        end
+        # Need to clear value else we keep appending values.
+        value.clear()
+      end
+    ensure
+      scanner.close()
+    end
+  end
+ensure
+  utils.shutdown()
+end
diff --git a/0.90/bin/hbase b/0.90/bin/hbase
new file mode 100755
index 0000000..afd7a2e
--- /dev/null
+++ b/0.90/bin/hbase
@@ -0,0 +1,272 @@
+#! /usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# The hbase command script.  Based on the hadoop command script putting
+# in hbase classes, libs and configurations ahead of hadoop's.
+#
+# TODO: Narrow the amount of duplicated code.
+#
+# Environment Variables:
+#
+#   JAVA_HOME        The java implementation to use.
+#
+#   HBASE_CLASSPATH  Extra Java CLASSPATH entries.
+#
+#   HBASE_HEAPSIZE   The maximum amount of heap to use, in MB. 
+#                    Default is 1000.
+#
+#   HBASE_OPTS       Extra Java runtime options.
+#
+#   HBASE_CONF_DIR   Alternate conf dir. Default is ${HBASE_HOME}/conf.
+#
+#   HBASE_ROOT_LOGGER The root appender. Default is INFO,console
+#
+#   MAVEN_HOME       Where mvn is installed.
+#
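+# As an illustration, these can be exported in the shell or set in
+# conf/hbase-env.sh before invoking this script (values below are examples
+# only):
+#
+#   export HBASE_HEAPSIZE=2000
+#   export HBASE_CONF_DIR=/etc/hbase/conf
+#   export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
+#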
+bin=`dirname "$0"`
+bin=`cd "$bin">/dev/null; pwd`
+
+# This will set HBASE_HOME, etc.
+. "$bin"/hbase-config.sh
+
+cygwin=false
+case "`uname`" in
+CYGWIN*) cygwin=true;;
+esac
+
+# Detect if we are in hbase sources dir
+in_dev_env=false
+if [ -d "${HBASE_HOME}/target" ]; then
+  in_dev_env=true
+fi
+
+# if no args specified, show usage
+if [ $# = 0 ]; then
+  echo "Usage: hbase <command>"
+  echo "where <command> is one of:"
+  echo "  shell            run the HBase shell"
+  echo "  zkcli            run the ZooKeeper shell"
+  echo "  master           run an HBase HMaster node" 
+  echo "  regionserver     run an HBase HRegionServer node" 
+  echo "  zookeeper        run a Zookeeper server"
+  echo "  rest             run an HBase REST server" 
+  echo "  thrift           run an HBase Thrift server" 
+  echo "  avro             run an HBase Avro server" 
+  echo "  migrate          upgrade an hbase.rootdir"
+  echo "  hbck             run the hbase 'fsck' tool"
+  echo "  classpath        dump hbase CLASSPATH"
+  echo " or"
+  echo "  CLASSNAME        run the class named CLASSNAME"
+  echo "Most commands print help when invoked w/o parameters."
+  exit 1
+fi
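+# For example, "bin/hbase shell" starts the interactive shell and
+# "bin/hbase classpath" prints the CLASSPATH assembled below (illustrative
+# invocations).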
+
+# get arguments
+COMMAND=$1
+shift
+
+JAVA=$JAVA_HOME/bin/java
+JAVA_HEAP_MAX=-Xmx1000m 
+
+MVN="mvn"
+if [ "$MAVEN_HOME" != "" ]; then
+  MVN=${MAVEN_HOME}/bin/mvn
+fi
+
+# check envvars which might override default args
+if [ "$HBASE_HEAPSIZE" != "" ]; then
+  #echo "run with heapsize $HBASE_HEAPSIZE"
+  JAVA_HEAP_MAX="-Xmx""$HBASE_HEAPSIZE""m"
+  #echo $JAVA_HEAP_MAX
+fi
+
+# so that filenames w/ spaces are handled correctly in loops below
+IFS=
+
+# CLASSPATH initially contains $HBASE_CONF_DIR
+CLASSPATH="${HBASE_CONF_DIR}"
+CLASSPATH=${CLASSPATH}:$JAVA_HOME/lib/tools.jar
+
+add_maven_deps_to_classpath() {
+  # Need to generate classpath from the maven pom. This is costly so generate
+  # it once and cache it. Save the file into our target dir so a 'mvn clean'
+  # will clean it up and force us to create a new one.
+  f="${HBASE_HOME}/target/cached_classpath.txt"
+  if [ ! -f "${f}" ]
+  then
+    ${MVN} -f "${HBASE_HOME}/pom.xml" dependency:build-classpath -Dmdep.outputFile="${f}" &> /dev/null
+  fi
+  CLASSPATH=${CLASSPATH}:`cat "${f}"`
+}
+
+add_maven_main_classes_to_classpath() {
+  if [ -d "$HBASE_HOME/target/classes" ]; then
+    CLASSPATH=${CLASSPATH}:$HBASE_HOME/target/classes
+  fi
+}
+
+add_maven_test_classes_to_classpath() {
+  # For developers, add hbase classes to CLASSPATH
+  f="$HBASE_HOME/target/test-classes"
+  if [ -d "${f}" ]; then
+    CLASSPATH=${CLASSPATH}:${f}
+  fi
+}
+
+# Add maven target directory
+if $in_dev_env; then
+  add_maven_deps_to_classpath
+  add_maven_main_classes_to_classpath
+  add_maven_test_classes_to_classpath
+fi
+
+# For releases, add hbase & webapps to CLASSPATH
+# Webapps must come first else it messes up Jetty
+if [ -d "$HBASE_HOME/hbase-webapps" ]; then
+  CLASSPATH=${CLASSPATH}:$HBASE_HOME
+fi
+if [ -d "$HBASE_HOME/target/hbase-webapps" ]; then
+  CLASSPATH="${CLASSPATH}:${HBASE_HOME}/target"
+fi
+for f in $HBASE_HOME/hbase*.jar; do
+  if [[ $f = *sources.jar ]]
+  then
+    : # Skip sources.jar
+  elif [ -f $f ]
+  then
+    CLASSPATH=${CLASSPATH}:$f;
+  fi
+done
+
+# Add libs to CLASSPATH
+for f in $HBASE_HOME/lib/*.jar; do
+  CLASSPATH=${CLASSPATH}:$f;
+done
+
+# Add user-specified CLASSPATH last
+if [ "$HBASE_CLASSPATH" != "" ]; then
+  CLASSPATH=${CLASSPATH}:${HBASE_CLASSPATH}
+fi
+
+# default log directory & file
+if [ "$HBASE_LOG_DIR" = "" ]; then
+  HBASE_LOG_DIR="$HBASE_HOME/logs"
+fi
+if [ "$HBASE_LOGFILE" = "" ]; then
+  HBASE_LOGFILE='hbase.log'
+fi
+
+# cygwin path translation
+if $cygwin; then
+  CLASSPATH=`cygpath -p -w "$CLASSPATH"`
+  HBASE_HOME=`cygpath -d "$HBASE_HOME"`
+  HBASE_LOG_DIR=`cygpath -d "$HBASE_LOG_DIR"`
+fi
+# setup 'java.library.path' for native-hadoop code if necessary
+JAVA_LIBRARY_PATH=''
+if [ -d "${HBASE_HOME}/build/native" -o -d "${HBASE_HOME}/lib/native" ]; then
+  JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
+
+  if [ -d "$HBASE_HOME/build/native" ]; then
+    JAVA_LIBRARY_PATH=${HBASE_HOME}/build/native/${JAVA_PLATFORM}/lib
+  fi
+
+  if [ -d "${HBASE_HOME}/lib/native" ]; then
+    if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+      JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:${HBASE_HOME}/lib/native/${JAVA_PLATFORM}
+    else
+      JAVA_LIBRARY_PATH=${HBASE_HOME}/lib/native/${JAVA_PLATFORM}
+    fi
+  fi
+fi
+
+# cygwin path translation
+if $cygwin; then
+  JAVA_LIBRARY_PATH=`cygpath -p "$JAVA_LIBRARY_PATH"`
+fi
+ 
+# restore ordinary behaviour
+unset IFS
+
+# figure out which class to run
+if [ "$COMMAND" = "shell" ] ; then
+  CLASS="org.jruby.Main ${HBASE_HOME}/bin/hirb.rb"
+elif [ "$COMMAND" = "master" ] ; then
+  CLASS='org.apache.hadoop.hbase.master.HMaster'
+  if [ "$1" != "stop" ] ; then
+    HBASE_OPTS="$HBASE_OPTS $HBASE_MASTER_OPTS"
+  fi
+elif [ "$COMMAND" = "regionserver" ] ; then
+  CLASS='org.apache.hadoop.hbase.regionserver.HRegionServer'
+  if [ "$1" != "stop" ] ; then
+    HBASE_OPTS="$HBASE_OPTS $HBASE_REGIONSERVER_OPTS"
+  fi
+elif [ "$COMMAND" = "thrift" ] ; then
+  CLASS='org.apache.hadoop.hbase.thrift.ThriftServer'
+  if [ "$1" != "stop" ] ; then
+    HBASE_OPTS="$HBASE_OPTS $HBASE_THRIFT_OPTS"
+  fi
+elif [ "$COMMAND" = "rest" ] ; then
+  CLASS='org.apache.hadoop.hbase.rest.Main'
+  if [ "$1" != "stop" ] ; then
+    HBASE_OPTS="$HBASE_OPTS $HBASE_REST_OPTS"
+  fi
+elif [ "$COMMAND" = "avro" ] ; then
+  CLASS='org.apache.hadoop.hbase.avro.AvroServer'
+  if [ "$1" != "stop" ] ; then
+    HBASE_OPTS="$HBASE_OPTS $HBASE_AVRO_OPTS"
+  fi
+elif [ "$COMMAND" = "migrate" ] ; then
+  CLASS='org.apache.hadoop.hbase.util.Migrate'
+elif [ "$COMMAND" = "hbck" ] ; then
+  CLASS='org.apache.hadoop.hbase.util.HBaseFsck'
+elif [ "$COMMAND" = "zookeeper" ] ; then
+  CLASS='org.apache.hadoop.hbase.zookeeper.HQuorumPeer'
+  if [ "$1" != "stop" ] ; then
+    HBASE_OPTS="$HBASE_OPTS $HBASE_ZOOKEEPER_OPTS"
+  fi
+elif [ "$COMMAND" = "zkcli" ] ; then
+  # ZooKeeperMainServerArg returns '-server HOST:PORT' or empty string.
+  SERVER_ARG=`"$bin"/hbase org.apache.hadoop.hbase.zookeeper.ZooKeeperMainServerArg` 
+  CLASS="org.apache.zookeeper.ZooKeeperMain ${SERVER_ARG}"
+elif [ "$COMMAND" = "classpath" ] ; then
+  echo $CLASSPATH
+  exit 0
+else
+  CLASS=$COMMAND
+fi
+
+# Have the JVM dump heap if we run out of memory.  Files land in the launch
+# directory and are named like the following: java_pid21612.hprof. Apparently
+# it doesn't 'cost' anything to have this flag enabled. It's a 1.6-only flag. See:
+# http://blogs.sun.com/alanb/entry/outofmemoryerror_looks_a_bit_better
+HBASE_OPTS="$HBASE_OPTS -Dhbase.log.dir=$HBASE_LOG_DIR"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.log.file=$HBASE_LOGFILE"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.home.dir=$HBASE_HOME"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.id.str=$HBASE_IDENT_STRING"
+HBASE_OPTS="$HBASE_OPTS -Dhbase.root.logger=${HBASE_ROOT_LOGGER:-INFO,console}"
+if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
+  HBASE_OPTS="$HBASE_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
+fi
+
+# run it
+exec "$JAVA" $JAVA_HEAP_MAX $HBASE_OPTS -classpath "$CLASSPATH" $CLASS "$@"
diff --git a/0.90/bin/hbase-config.sh b/0.90/bin/hbase-config.sh
new file mode 100644
index 0000000..5d13859
--- /dev/null
+++ b/0.90/bin/hbase-config.sh
@@ -0,0 +1,112 @@
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Included in all the hbase scripts with the source command.
+# It should not be executed directly, nor passed any arguments when sourced,
+# since we need the caller's original $*.
+# Modelled after $HADOOP_HOME/bin/hadoop-env.sh.
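+#
+# For example, bin/hbase and the daemon scripts pull it in with:
+#
+#   . "$bin"/hbase-config.sh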
+
+# resolve links - "${BASH_SOURCE-$0}" may be a softlink
+
+this="${BASH_SOURCE-$0}"
+while [ -h "$this" ]; do
+  ls=`ls -ld "$this"`
+  link=`expr "$ls" : '.*-> \(.*\)$'`
+  if expr "$link" : '.*/.*' > /dev/null; then
+    this="$link"
+  else
+    this=`dirname "$this"`/"$link"
+  fi
+done
+
+# convert relative path to absolute path
+bin=`dirname "$this"`
+script=`basename "$this"`
+bin=`cd "$bin">/dev/null; pwd`
+this="$bin/$script"
+
+# the root of the hbase installation
+if [ -z "$HBASE_HOME" ]; then
+  export HBASE_HOME=`dirname "$this"`/..
+fi
+
+# Check to see if the conf dir or hbase home are given as optional arguments.
+while [ $# -gt 1 ]
+do
+  if [ "--config" = "$1" ]
+  then
+    shift
+    confdir=$1
+    shift
+    HBASE_CONF_DIR=$confdir
+  elif [ "--hosts" = "$1" ]
+  then
+    shift
+    hosts=$1
+    shift
+    HBASE_REGIONSERVERS=$hosts
+  else
+    # Presume we are at end of options and break
+    break
+  fi
+done
+ 
+# Allow alternate hbase conf dir location.
+HBASE_CONF_DIR="${HBASE_CONF_DIR:-$HBASE_HOME/conf}"
+# List of hbase region servers.
+HBASE_REGIONSERVERS="${HBASE_REGIONSERVERS:-$HBASE_CONF_DIR/regionservers}"
+# List of hbase secondary masters.
+HBASE_BACKUP_MASTERS="${HBASE_BACKUP_MASTERS:-$HBASE_CONF_DIR/backup-masters}"
+
+# Source the hbase-env.sh.  Will have JAVA_HOME defined.
+if [ -f "${HBASE_CONF_DIR}/hbase-env.sh" ]; then
+  . "${HBASE_CONF_DIR}/hbase-env.sh"
+fi
+
+if [ -z "$JAVA_HOME" ]; then
+  for candidate in \
+    /usr/lib/jvm/java-6-sun \
+    /usr/lib/j2sdk1.6-sun \
+    /usr/java/jdk1.6* \
+    /usr/java/jre1.6* \
+    /Library/Java/Home ; do
+    if [ -e $candidate/bin/java ]; then
+      export JAVA_HOME=$candidate
+      break
+    fi
+  done
+  # if we didn't set it
+  if [ -z "$JAVA_HOME" ]; then
+    cat 1>&2 <<EOF
++======================================================================+
+|      Error: JAVA_HOME is not set and Java could not be found         |
++----------------------------------------------------------------------+
+| Please download the latest Sun JDK from the Sun Java web site        |
+|       > http://java.sun.com/javase/downloads/ <                      |
+|                                                                      |
+| HBase requires Java 1.6 or later.                                    |
+| NOTE: This script will find Sun Java whether you install using the   |
+|       binary or the RPM based installer.                             |
++======================================================================+
+EOF
+    exit 1
+  fi
+fi
diff --git a/0.90/bin/hbase-daemon.sh b/0.90/bin/hbase-daemon.sh
new file mode 100755
index 0000000..e441955
--- /dev/null
+++ b/0.90/bin/hbase-daemon.sh
@@ -0,0 +1,198 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# Runs an hbase command as a daemon.
+#
+# Environment Variables
+#
+#   HBASE_CONF_DIR   Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+#   HBASE_LOG_DIR    Where log files are stored.  Default is ${HBASE_HOME}/logs.
+#   HBASE_PID_DIR    Where pid files are stored.  /tmp by default.
+#   HBASE_IDENT_STRING   A string representing this instance of hbase. $USER by default
+#   HBASE_NICENESS The scheduling priority for daemons. Defaults to 0.
+#
+# Modelled after $HADOOP_HOME/bin/hadoop-daemon.sh
+
+usage="Usage: hbase-daemon.sh [--config <conf-dir>]\
+ (start|stop|restart) <hbase-command> \
+ <args...>"
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+  echo $usage
+  exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+# get arguments
+startStop=$1
+shift
+
+command=$1
+shift
+
+hbase_rotate_log ()
+{
+    log=$1;
+    num=5;
+    if [ -n "$2" ]; then
+      num=$2
+    fi
+    if [ -f "$log" ]; then # rotate logs
+      while [ $num -gt 1 ]; do
+        prev=`expr $num - 1`
+        [ -f "$log.$prev" ] && mv -f "$log.$prev" "$log.$num"
+        num=$prev
+      done
+      mv -f "$log" "$log.$num";
+    fi
+}
+
+wait_until_done ()
+{
+    p=$1
+    cnt=${HBASE_SLAVE_TIMEOUT:-300}
+    origcnt=$cnt
+    while kill -0 $p > /dev/null 2>&1; do
+      if [ $cnt -gt 1 ]; then
+        cnt=`expr $cnt - 1`
+        sleep 1
+      else
+        echo "Process did not complete after $origcnt seconds, killing."
+        kill -9 $p
+        exit 1
+      fi
+    done
+    return 0
+}
+
+# get log directory
+if [ "$HBASE_LOG_DIR" = "" ]; then
+  export HBASE_LOG_DIR="$HBASE_HOME/logs"
+fi
+mkdir -p "$HBASE_LOG_DIR"
+
+if [ "$HBASE_PID_DIR" = "" ]; then
+  HBASE_PID_DIR=/tmp
+fi
+
+if [ "$HBASE_IDENT_STRING" = "" ]; then
+  export HBASE_IDENT_STRING="$USER"
+fi
+
+# Some variables
+# Work out the java location so we can print the version into the log.
+if [ "$JAVA_HOME" != "" ]; then
+  #echo "run java in $JAVA_HOME"
+  JAVA_HOME=$JAVA_HOME
+fi
+if [ "$JAVA_HOME" = "" ]; then
+  echo "Error: JAVA_HOME is not set."
+  exit 1
+fi
+JAVA=$JAVA_HOME/bin/java
+export HBASE_LOGFILE=hbase-$HBASE_IDENT_STRING-$command-$HOSTNAME.log
+export HBASE_ROOT_LOGGER="INFO,DRFA"
+logout=$HBASE_LOG_DIR/hbase-$HBASE_IDENT_STRING-$command-$HOSTNAME.out  
+loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"
+pid=$HBASE_PID_DIR/hbase-$HBASE_IDENT_STRING-$command.pid
+
+# Set default scheduling priority
+if [ "$HBASE_NICENESS" = "" ]; then
+    export HBASE_NICENESS=0
+fi
+
+case $startStop in
+
+  (start)
+    mkdir -p "$HBASE_PID_DIR"
+    if [ -f $pid ]; then
+      if kill -0 `cat $pid` > /dev/null 2>&1; then
+        echo $command running as process `cat $pid`.  Stop it first.
+        exit 1
+      fi
+    fi
+
+    hbase_rotate_log $logout
+    echo starting $command, logging to $logout
+    # Add to the command log file vital stats on our environment.
+    echo "`date` Starting $command on `hostname`" >> $loglog
+    echo "ulimit -n `ulimit -n`" >> $loglog 2>&1
+    nohup nice -n $HBASE_NICENESS "$HBASE_HOME"/bin/hbase \
+        --config "${HBASE_CONF_DIR}" \
+        $command $startStop "$@" > "$logout" 2>&1 < /dev/null &
+    echo $! > $pid
+    sleep 1; head "$logout"
+    ;;
+
+  (stop)
+    if [ -f $pid ]; then
+      # kill -0 == see if the PID exists 
+      if kill -0 `cat $pid` > /dev/null 2>&1; then
+        echo -n stopping $command
+        if [ "$command" = "master" ]; then
+          echo "`date` Killing $command" >> $loglog
+          kill -9 `cat $pid` > /dev/null 2>&1
+        else
+          echo "`date` Killing $command" >> $loglog
+          kill `cat $pid` > /dev/null 2>&1
+        fi
+        while kill -0 `cat $pid` > /dev/null 2>&1; do
+          echo -n "."
+          sleep 1;
+        done
+        echo
+      else
+        retval=$?
+        echo no $command to stop because kill -0 of pid `cat $pid` failed with status $retval
+      fi
+    else
+      echo no $command to stop because no pid file $pid
+    fi
+    ;;
+
+  (restart)
+    thiscmd=$0
+    args=$@
+    # stop the command
+    $thiscmd --config "${HBASE_CONF_DIR}" stop $command $args &
+    wait_until_done $!
+    # wait a user-specified sleep period
+    sp=${HBASE_SLAVE_SLEEP:-3}
+    if [ $sp -gt 0 ]; then
+      sleep $sp
+    fi
+    # start the command
+    $thiscmd --config "${HBASE_CONF_DIR}" start $command $args &
+    wait_until_done $!
+    ;;
+
+  (*)
+    echo $usage
+    exit 1
+    ;;
+
+esac
diff --git a/0.90/bin/hbase-daemons.sh b/0.90/bin/hbase-daemons.sh
new file mode 100755
index 0000000..6a44755
--- /dev/null
+++ b/0.90/bin/hbase-daemons.sh
@@ -0,0 +1,55 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# Run a hbase command on all slave hosts.
+# Modelled after $HADOOP_HOME/bin/hadoop-daemons.sh
+
+usage="Usage: hbase-daemons.sh [--config <hbase-confdir>] \
+ [--hosts regionserversfile] [start|stop] command args..."
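+
+# Example (this is how start-hbase.sh drives it):
+#
+#   bin/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+#     --hosts "${HBASE_REGIONSERVERS}" start regionserver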
+
+# if no args specified, show usage
+if [ $# -le 1 ]; then
+  echo $usage
+  exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. $bin/hbase-config.sh
+
+remote_cmd="cd ${HBASE_HOME}; $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} $@"
+args="--config ${HBASE_CONF_DIR} $remote_cmd"
+
+command=$2
+case $command in
+  (zookeeper)
+    exec "$bin/zookeepers.sh" $args
+    ;;
+  (master-backup)
+    exec "$bin/master-backup.sh" $args
+    ;;
+  (*)
+    exec "$bin/regionservers.sh" $args
+    ;;
+esac
+
diff --git a/0.90/bin/hirb.rb b/0.90/bin/hirb.rb
new file mode 100644
index 0000000..d6892b5
--- /dev/null
+++ b/0.90/bin/hirb.rb
@@ -0,0 +1,183 @@
+#
+# Copyright 2009 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# File passed to org.jruby.Main by bin/hbase.  Pollutes jirb with hbase imports
+# and hbase commands and then loads jirb.  Outputs a banner that tells the user
+# where to find help and the shell version, and then loads up a custom hirb.
+
+# TODO: Add 'debug' support (client-side logs show in shell).  Add it as
+# command-line option and as command.
+# TODO: Interrupt a table creation or a connection to a bad master.  Currently
+# it has to time out.  Below we've turned down the retries for rpc and hbase,
+# but it can still be annoying (and there seem to be times when we'll retry
+# forever regardless).
+# TODO: Add support for listing and manipulating catalog tables, etc.
+# TODO: Encoding; need to know how to go from ruby String to UTF-8 bytes
+
+# Run the java magic include and import basic HBase types that will help ease
+# hbase hacking.
+include Java
+
+# Some goodies for hirb. Should these be left up to the user's discretion?
+require 'irb/completion'
+
+# Add the $HBASE_HOME/lib/ruby OR $HBASE_HOME/src/main/ruby/lib directory
+# to the ruby load path so I can load up my HBase ruby modules
+if File.exists?(File.join(File.dirname(__FILE__), "..", "lib", "ruby", "hbase.rb"))
+  $LOAD_PATH.unshift File.join(File.dirname(__FILE__), "..", "lib", "ruby")
+else
+  $LOAD_PATH.unshift File.join(File.dirname(__FILE__), "..", "src", "main", "ruby")
+end
+
+#
+# FIXME: Switch args processing to getopt
+#
+# See if there are args for this shell. If any, read and then strip from ARGV
+# so they don't go through to irb.  Output shell 'usage' if user types '--help'
+cmdline_help = <<HERE # HERE document output as shell usage
+HBase Shell command-line options:
+ --format=OPTION  Formatter for outputting results: console | html. Default: console
+ -d | --debug  Set DEBUG log levels.
+HERE
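+# Example (illustrative; 'bin/hbase shell' runs this file via org.jruby.Main, and
+# a script file plus options can be tacked on):
+#
+#   ${HBASE_HOME}/bin/hbase org.jruby.Main ${HBASE_HOME}/bin/hirb.rb -d myscript.rb
+#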
+found = []
+format = 'console'
+script2run = nil
+log_level = org.apache.log4j.Level::ERROR
+for arg in ARGV
+  if arg =~ /^--format=(.+)/i
+    format = $1
+    if format =~ /^html$/i
+      raise NoMethodError.new("Not yet implemented")
+    elsif format =~ /^console$/i
+      # This is default
+    else
+      raise ArgumentError.new("Unsupported format " + arg)
+    end
+    found.push(arg)
+  elsif arg == '-h' || arg == '--help'
+    puts cmdline_help
+    exit
+  elsif arg == '-d' || arg == '--debug'
+    log_level = org.apache.log4j.Level::DEBUG
+    $fullBackTrace = true
+    puts "Setting DEBUG log level..."
+  else
+    # Presume it is a script. Save it off for running later below
+    # after we've set up some environment.
+    script2run = arg
+    found.push(arg)
+    # Presume that any other args are meant for the script.
+    break
+  end
+end
+
+# Delete all processed args
+found.each { |arg| ARGV.delete(arg) }
+
+# Set logging level to avoid verboseness
+org.apache.log4j.Logger.getLogger("org.apache.zookeeper").setLevel(log_level)
+org.apache.log4j.Logger.getLogger("org.apache.hadoop.hbase").setLevel(log_level)
+
+# Require HBase now after setting log levels
+require 'hbase'
+
+# Load hbase shell
+require 'shell'
+
+# Require formatter
+require 'shell/formatter'
+
+# Presume console format.
+# Formatter takes an :output_stream parameter, if you don't want STDOUT.
+@formatter = Shell::Formatter::Console.new
+
+# Setup the HBase module.  Create a configuration.
+@hbase = Hbase::Hbase.new
+
+# Setup console
+@shell = Shell::Shell.new(@hbase, @formatter)
+
+# Add commands to this namespace
+@shell.export_commands(self)
+
+# Add help command
+def help(command = nil)
+  @shell.help(command)
+end
+
+# Backwards compatibility method
+def tools
+  @shell.help_group('tools')
+end
+
+# Debugging method
+def debug
+  if @shell.debug
+    @shell.debug = false
+    conf.back_trace_limit = 0
+  else
+    @shell.debug = true
+    conf.back_trace_limit = 100
+  end
+  debug?
+end
+
+def debug?
+  puts "Debug mode is #{@shell.debug ? 'ON' : 'OFF'}\n\n"
+  nil
+end
+
+# Include hbase constants
+include HBaseConstants
+
+# If script2run, try running it.  Will go on to run the shell unless
+# script calls 'exit' or 'exit 0' or 'exit errcode'.
+load(script2run) if script2run
+
+# Output a banner message that tells users where to go for help
+@shell.print_banner
+
+require "irb"
+require 'irb/hirb'
+
+module IRB
+  def self.start(ap_path = nil)
+    $0 = File::basename(ap_path, ".rb") if ap_path
+
+    IRB.setup(ap_path)
+    @CONF[:IRB_NAME] = 'hbase'
+    @CONF[:AP_NAME] = 'hbase'
+    @CONF[:BACK_TRACE_LIMIT] = 0 unless $fullBackTrace
+
+    if @CONF[:SCRIPT]
+      hirb = HIRB.new(nil, @CONF[:SCRIPT])
+    else
+      hirb = HIRB.new
+    end
+
+    @CONF[:IRB_RC].call(hirb.context) if @CONF[:IRB_RC]
+    @CONF[:MAIN_CONTEXT] = hirb.context
+
+    catch(:IRB_EXIT) do
+      hirb.eval_input
+    end
+  end
+end
+
+IRB.start
diff --git a/0.90/bin/loadtable.rb b/0.90/bin/loadtable.rb
new file mode 100644
index 0000000..7b9ced2
--- /dev/null
+++ b/0.90/bin/loadtable.rb
@@ -0,0 +1,150 @@
+#
+# Copyright 2009 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Script that picks up where org.apache.hadoop.hbase.mapreduce.HFileOutputFormat
+# leaves off.  Pass it the output directory of HFileOutputFormat.  It will read
+# the generated files, move them into place and update the catalog table
+# appropriately.  Warning: it will overwrite anything that already exists for
+# the passed table.  It expects hbase to be up and running so it can insert
+# table info.
+#
+# To see usage for this script, run: 
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main loadtable.rb
+#
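+# Example (illustrative table name and output path):
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main loadtable.rb mytable /tmp/hfof-output
+#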
+include Java
+import java.util.TreeMap
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.HColumnDescriptor
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.io.hfile.HFile
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.hadoop.mapred.OutputLogFilter
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+
+# Name of this script
+NAME = "loadtable"
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s.rb TABLENAME HFILEOUTPUTFORMAT_OUTPUT_DIR' % NAME
+  exit!
+end
+
+# Passed 'dir' exists and is a directory else exception
+def isDirExists(fs, dir)
+  raise IOError.new("Does not exit: " + dir.toString()) unless fs.exists(dir)
+  raise IOError.new("Not a directory: " + dir.toString()) unless fs.isDirectory(dir)
+end
+
+# Check arguments
+if ARGV.size != 2
+  usage
+end
+
+# Check good table names were passed.
+tableName = HTableDescriptor.isLegalTableName(ARGV[0].to_java_bytes)
+outputdir = Path.new(ARGV[1])
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+# Get a logger and a metautils instance.
+LOG = LogFactory.getLog(NAME)
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at hdfs location.
+c.set("fs.defaultFS", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# If hfiles directory does not exist, exit.
+isDirExists(fs, outputdir)
+# Create table dir if it doesn't exist.
+rootdir = FSUtils.getRootDir(c)
+tableDir = Path.new(rootdir, Path.new(Bytes.toString(tableName)))
+fs.mkdirs(tableDir) unless fs.exists(tableDir)
+
+# Start. Per hfile, move it, and insert an entry in catalog table.
+families = fs.listStatus(outputdir, OutputLogFilter.new())
+raise IOError.new("Can do one family only") if families.length > 1
+# Read meta on all files. Put in map keyed by start key.
+map = TreeMap.new(Bytes::ByteArrayComparator.new())
+family = families[0]
+# Make sure this subdir exists under table
+hfiles = fs.listStatus(family.getPath())
+LOG.info("Found " + hfiles.length.to_s + " hfiles");
+count = 0
+for hfile in hfiles
+  reader = HFile::Reader.new(fs, hfile.getPath(), nil, false)
+  begin
+    fileinfo = reader.loadFileInfo() 
+    firstkey = reader.getFirstKey()
+    # First key is row/column/ts.  We just want the row part.
+    rowlen = Bytes.toShort(firstkey)
+    firstkeyrow = firstkey[2, rowlen] 
+    LOG.info(count.to_s + " read firstkey of " +
+      Bytes.toString(firstkeyrow) + " from " + hfile.getPath().toString())
+    map.put(firstkeyrow, [hfile, fileinfo])
+    count = count + 1
+  ensure
+    reader.close()
+  end
+end
+# Now I have sorted list of fileinfo+paths.  Start insert.
+# Get a client on catalog table.
+meta = HTable.new(c, HConstants::META_TABLE_NAME)
+# I can't find out from the hfile how it's compressed.
+# Using all defaults. Change manually after loading if
+# something else is wanted in column or table attributes.
+familyName = family.getPath().getName()
+hcd = HColumnDescriptor.new(familyName)
+htd = HTableDescriptor.new(tableName)
+htd.addFamily(hcd)
+previouslastkey = HConstants::EMPTY_START_ROW
+count = map.size()
+for i in map.descendingKeySet().iterator()
+  tuple = map.get(i)
+  startkey = i
+  count = count - 1
+  # If last time through loop, set start row as EMPTY_START_ROW
+  startkey = HConstants::EMPTY_START_ROW unless count > 0
+  # Next time around, lastkey is this startkey
+  hri = HRegionInfo.new(htd, startkey, previouslastkey)  
+  previouslastkey = startkey 
+  LOG.info(hri.toString())
+  hfile = tuple[0].getPath()
+  rdir = Path.new(Path.new(tableDir, hri.getEncodedName().to_s), familyName)
+  fs.mkdirs(rdir)
+  tgt = Path.new(rdir, hfile.getName())
+  fs.rename(hfile, tgt)
+  LOG.info("Moved " + hfile.toString() + " to " + tgt.toString())
+  p = Put.new(hri.getRegionName())
+  p.add(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER, Writables.getBytes(hri))
+  meta.put(p)
+  LOG.info("Inserted " + hri.toString())
+end
diff --git a/0.90/bin/local-master-backup.sh b/0.90/bin/local-master-backup.sh
new file mode 100644
index 0000000..4c2f5bc
--- /dev/null
+++ b/0.90/bin/local-master-backup.sh
@@ -0,0 +1,36 @@
+#!/bin/sh
+# This is used for starting multiple masters on the same machine.
+# run it from hbase-dir/ just like 'bin/hbase'
+# Supports up to 10 masters (limitation = overlapping ports)
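+#
+# Example (illustrative): 'bin/local-master-backup.sh start 1' starts a backup
+# master on port 60001 with its info server on port 60011 (base port plus the
+# given offset, per run_master below).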
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin" >/dev/null && pwd`
+
+if [ $# -lt 2 ]; then
+  S=`basename "${BASH_SOURCE-$0}"`
+  echo "Usage: $S [start|stop] offset(s)"
+  echo ""
+  echo "    e.g. $S start 1"
+  exit
+fi
+
+# sanity check: make sure your master opts don't use ports (e.g. JMX/DBG)
+export HBASE_MASTER_OPTS=" "
+
+run_master () {
+  DN=$2
+  export HBASE_IDENT_STRING="$USER-$DN"
+  HBASE_MASTER_ARGS="\
+    --backup \
+    -D hbase.master.port=`expr 60000 + $DN` \
+    -D hbase.master.info.port=`expr 60010 + $DN`"
+  "$bin"/hbase-daemon.sh $1 master $HBASE_MASTER_ARGS
+}
+
+cmd=$1
+shift;
+
+for i in $*
+do
+  run_master  $cmd $i
+done
diff --git a/0.90/bin/local-regionservers.sh b/0.90/bin/local-regionservers.sh
new file mode 100644
index 0000000..5a497ce
--- /dev/null
+++ b/0.90/bin/local-regionservers.sh
@@ -0,0 +1,35 @@
+#!/bin/sh
+# This is used for starting multiple regionservers on the same machine.
+# run it from hbase-dir/ just like 'bin/hbase'
+# Supports up to 100 regionservers (limitation = overlapping ports)
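+#
+# Example (illustrative): 'bin/local-regionservers.sh start 1 2' starts two
+# regionservers on ports 60201/60301 and 60202/60302 (base port plus the given
+# offsets, per run_regionserver below).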
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin" >/dev/null && pwd`
+
+if [ $# -lt 2 ]; then
+  S=`basename "${BASH_SOURCE-$0}"`
+  echo "Usage: $S [start|stop] offset(s)"
+  echo ""
+  echo "    e.g. $S start 1 2"
+  exit
+fi
+
+# sanity check: make sure your regionserver opts don't use ports (e.g. JMX/DBG)
+export HBASE_REGIONSERVER_OPTS=" "
+
+run_regionserver () {
+  DN=$2
+  export HBASE_IDENT_STRING="$USER-$DN"
+  HBASE_REGIONSERVER_ARGS="\
+    -D hbase.regionserver.port=`expr 60200 + $DN` \
+    -D hbase.regionserver.info.port=`expr 60300 + $DN`"
+  "$bin"/hbase-daemon.sh $1 regionserver $HBASE_REGIONSERVER_ARGS
+}
+
+cmd=$1
+shift;
+
+for i in $*
+do
+  run_regionserver  $cmd $i
+done
diff --git a/0.90/bin/master-backup.sh b/0.90/bin/master-backup.sh
new file mode 100755
index 0000000..d20f579
--- /dev/null
+++ b/0.90/bin/master-backup.sh
@@ -0,0 +1,76 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2010 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# Run a shell command on all backup master hosts.
+#
+# Environment Variables
+#
+#   HBASE_BACKUP_MASTERS File naming remote hosts.
+#     Default is ${HBASE_CONF_DIR}/backup-masters
+#   HBASE_CONF_DIR  Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+#   HBASE_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+#   HBASE_SSH_OPTS Options passed to ssh when running remote commands.
+#
+# Modelled after $HADOOP_HOME/bin/slaves.sh.
+
+usage="Usage: $0 [--config <hbase-confdir>] command..."
+
+# if no args specified, show usage
+if [ $# -le 0 ]; then
+  echo $usage
+  exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+# If the master backup file is specified in the command line,
+# then it takes precedence over the definition in 
+# hbase-env.sh. Save it here.
+HOSTLIST=$HBASE_BACKUP_MASTERS
+
+if [ "$HOSTLIST" = "" ]; then
+  if [ "$HBASE_BACKUP_MASTERS" = "" ]; then
+    export HOSTLIST="${HBASE_CONF_DIR}/backup-masters"
+  else
+    export HOSTLIST="${HBASE_BACKUP_MASTERS}"
+  fi
+fi
+
+
+args=${@// /\\ }
+args=${args/master-backup/master}
+
+if [ -f $HOSTLIST ]; then
+  for hmaster in `cat "$HOSTLIST"`; do
+   ssh $HBASE_SSH_OPTS $hmaster $"$args --backup" \
+     2>&1 | sed "s/^/$hmaster: /" &
+   if [ "$HBASE_SLAVE_SLEEP" != "" ]; then
+     sleep $HBASE_SLAVE_SLEEP
+   fi
+  done
+fi 
+
+wait
diff --git a/0.90/bin/regionservers.sh b/0.90/bin/regionservers.sh
new file mode 100755
index 0000000..9759f2b
--- /dev/null
+++ b/0.90/bin/regionservers.sh
@@ -0,0 +1,75 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# Run a shell command on all regionserver hosts.
+#
+# Environment Variables
+#
+#   HBASE_REGIONSERVERS    File naming remote hosts.
+#     Default is ${HBASE_CONF_DIR}/regionservers
+#   HBASE_CONF_DIR  Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+#   HBASE_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+#   HBASE_SLAVE_PARALLEL Run the remote commands in parallel. Default is true.
+#   HBASE_SSH_OPTS Options passed to ssh when running remote commands.
+#
+# Modelled after $HADOOP_HOME/bin/slaves.sh.
+
+usage="Usage: regionservers [--config <hbase-confdir>] command..."
+
+# if no args specified, show usage
+if [ $# -le 0 ]; then
+  echo $usage
+  exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+# If the regionservers file is specified in the command line,
+# then it takes precedence over the definition in 
+# hbase-env.sh. Save it here.
+HOSTLIST=$HBASE_REGIONSERVERS
+
+if [ "$HOSTLIST" = "" ]; then
+  if [ "$HBASE_REGIONSERVERS" = "" ]; then
+    export HOSTLIST="${HBASE_CONF_DIR}/regionservers"
+  else
+    export HOSTLIST="${HBASE_REGIONSERVERS}"
+  fi
+fi
+
+for regionserver in `cat "$HOSTLIST"`; do
+  if ${HBASE_SLAVE_PARALLEL:-true}; then 
+    ssh $HBASE_SSH_OPTS $regionserver $"${@// /\\ }" \
+      2>&1 | sed "s/^/$regionserver: /" &
+  else # run each command serially 
+    ssh $HBASE_SSH_OPTS $regionserver $"${@// /\\ }" \
+      2>&1 | sed "s/^/$regionserver: /"
+  fi
+  if [ "$HBASE_SLAVE_SLEEP" != "" ]; then
+    sleep $HBASE_SLAVE_SLEEP
+  fi
+done
+
+wait
diff --git a/0.90/bin/rename_table.rb b/0.90/bin/rename_table.rb
new file mode 100644
index 0000000..bc53df4
--- /dev/null
+++ b/0.90/bin/rename_table.rb
@@ -0,0 +1,161 @@
+#
+# Copyright 2009 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# Script that renames a table in hbase.  As written, it will not work for the
+# rare case where there is more than one region in the .META. table (you'd
+# have to have a really massive hbase install).  It updates the hbase .META.
+# and moves the directories in the filesystem.  Use at your own risk.
+# Before running you must DISABLE YOUR TABLE:
+# 
+# hbase> disable "YOUR_TABLE_NAME"
+#
+# Enable the new table after the script completes.
+#
+# hbase> enable "NEW_TABLE_NAME"
+#
+# To see usage for this script, run: 
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main rename_table.rb
+#
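+# Example (illustrative names; disable the old table in the shell first):
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main rename_table.rb oldtable newtable
+#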
+include Java
+import org.apache.hadoop.hbase.util.MetaUtils
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable
+import org.apache.hadoop.hbase.regionserver.HRegion
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.commons.logging.Log
+import org.apache.commons.logging.LogFactory
+import java.util.TreeMap
+
+# Name of this script
+NAME = "rename_table"
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s.rb <OLD_NAME> <NEW_NAME>' % NAME
+  exit!
+end
+
+# Passed 'dir' exists and is a directory else exception
+def isDirExists(fs, dir)
+  raise IOError.new("Does not exit: " + dir.toString()) unless fs.exists(dir)
+  raise IOError.new("Not a directory: " + dir.toString()) unless fs.isDirectory(dir)
+end
+
+# Returns true if the region belongs to passed table
+def isTableRegion(tableName, hri)
+  return Bytes.equals(hri.getTableDesc().getName(), tableName)
+end
+
+# Create new HRI based off passed 'oldHRI'
+def createHRI(tableName, oldHRI)
+  htd = oldHRI.getTableDesc()
+  newHtd = HTableDescriptor.new(tableName)
+  for family in htd.getFamilies()
+    newHtd.addFamily(family)
+  end
+  return HRegionInfo.new(newHtd, oldHRI.getStartKey(), oldHRI.getEndKey(),
+    oldHRI.isSplit())
+end
+
+# Check arguments
+if ARGV.size != 2
+  usage
+end
+
+# Check good table names were passed.
+oldTableName = ARGV[0]
+newTableName = ARGV[1]
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at hdfs location.
+c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# If the new table directory does not exist, create it.  Keep going if it
+# already exists; maybe we are rerunning the script because it failed the
+# first time.  Otherwise we are overwriting a pre-existing table.
+rootdir = FSUtils.getRootDir(c)
+oldTableDir = fs.makeQualified(Path.new(rootdir, Path.new(oldTableName)))
+isDirExists(fs, oldTableDir)
+newTableDir = fs.makeQualified(Path.new(rootdir, newTableName))
+if !fs.exists(newTableDir)
+  fs.mkdirs(newTableDir)
+end
+
+# Get a logger and a metautils instance.
+LOG = LogFactory.getLog(NAME)
+
+# Run through the meta table moving region mentions from old to new table name.
+metaTable = HTable.new(c, HConstants::META_TABLE_NAME)
+# TODO: Start the scan at the old table offset in .META.
+scan = Scan.new()
+scanner = metaTable.getScanner(scan)
+while (result = scanner.next())
+  rowid = Bytes.toString(result.getRow())
+  oldHRI = Writables.getHRegionInfo(result.getValue(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER))
+  if !oldHRI
+    raise IOError.new("HRegionInfo is null for " + rowid)
+  end
+  next unless isTableRegion(oldTableName.to_java_bytes, oldHRI)
+  puts oldHRI.toString()
+  oldRDir = Path.new(oldTableDir, Path.new(oldHRI.getEncodedName().to_s))
+  if !fs.exists(oldRDir)
+    LOG.warn(oldRDir.toString() + " does not exist -- region " +
+      oldHRI.getRegionNameAsString())
+  else
+    # Now make a new HRegionInfo to add to .META. for the new region.
+    newHRI = createHRI(newTableName, oldHRI)
+    puts newHRI.toString()
+    newRDir = Path.new(newTableDir, Path.new(newHRI.getEncodedName().to_s))
+    # Move the region in filesystem
+    LOG.info("Renaming " + oldRDir.toString() + " as " + newRDir.toString())
+    fs.rename(oldRDir, newRDir)
+    # Removing old region from meta
+    LOG.info("Removing " + rowid + " from .META.")
+    d = Delete.new(result.getRow())
+    metaTable.delete(d)
+    # Create 'new' region
+    newR = HRegion.new(rootdir, nil, fs, c, newHRI, nil)
+    # Add new row. NOTE: Presumption is that only one .META. region. If not,
+    # need to do the work to figure proper region to add this new region to.
+    LOG.info("Adding to meta: " + newR.toString())
+    p = Put.new(newR.getRegionName())
+    p.add(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER, Writables.getBytes(newR.getRegionInfo()))
+    metaTable.put(p)
+  end
+end
+scanner.close()
+fs.delete(oldTableDir)
+LOG.info("DONE");
diff --git a/0.90/bin/replication/copy_tables_desc.rb b/0.90/bin/replication/copy_tables_desc.rb
new file mode 100644
index 0000000..ed85655
--- /dev/null
+++ b/0.90/bin/replication/copy_tables_desc.rb
@@ -0,0 +1,75 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Script to copy the table descriptions from one cluster and recreate the tables on another
+# To see usage for this script, run:
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main copy_tables_desc.rb
+#
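+# Example (illustrative quorum hosts, ports and znode parents):
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main copy_tables_desc.rb \
+#    zk-a1,zk-a2,zk-a3:2181:/hbase zk-b1,zk-b2,zk-b3:2181:/hbase
+#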
+
+include Java
+import org.apache.commons.logging.LogFactory
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.EmptyWatcher
+import org.apache.hadoop.hbase.client.HBaseAdmin
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWrapper
+
+# Name of this script
+NAME = "copy_tables_desc"
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s.rb master_zookeeper.quorum.peers:clientport:znode_parent slave_zookeeper.quorum.peers:clientport:znode_parent' % NAME
+  exit!
+end
+
+if ARGV.size != 2
+  usage
+end
+
+LOG = LogFactory.getLog(NAME)
+
+parts1 = ARGV[0].split(":")
+
+parts2 = ARGV[1].split(":")
+
+c1 = HBaseConfiguration.create()
+c1.set(HConstants::ZOOKEEPER_QUORUM, parts1[0])
+c1.set("hbase.zookeeper.property.clientPort", parts1[1])
+c1.set(HConstants::ZOOKEEPER_ZNODE_PARENT, parts1[2])
+
+admin1 = HBaseAdmin.new(c1)
+
+c2 = HBaseConfiguration.create()
+c2.set(HConstants::ZOOKEEPER_QUORUM, parts2[0])
+c2.set("hbase.zookeeper.property.clientPort", parts2[1])
+c2.set(HConstants::ZOOKEEPER_ZNODE_PARENT, parts2[2])
+
+admin2 = HBaseAdmin.new(c2)
+
+for t in admin1.listTables()
+  admin2.createTable(t)
+end
+
+
+puts "All descriptions were copied"
diff --git a/0.90/bin/rolling-restart.sh b/0.90/bin/rolling-restart.sh
new file mode 100755
index 0000000..ae7a9be
--- /dev/null
+++ b/0.90/bin/rolling-restart.sh
@@ -0,0 +1,105 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# Do a rolling restart of the hbase cluster: restart the masters first, then
+# roll the regionservers one at a time.
+#
+# Environment Variables
+#
+#   HBASE_REGIONSERVERS    File naming remote hosts.
+#     Default is ${HBASE_CONF_DIR}/regionservers
+#   HBASE_CONF_DIR  Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+#   HBASE_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+#   HBASE_SLAVE_TIMEOUT Seconds to wait before timing out a remote command.
+#   HBASE_SSH_OPTS Options passed to ssh when running remote commands.
+#
+# Modelled after $HADOOP_HOME/bin/slaves.sh.
+
+usage="Usage: $0 [--config <hbase-confdir>] commands..."
+
+bin=`dirname "$0"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+# start hbase daemons
+errCode=$?
+if [ $errCode -ne 0 ]
+then
+  exit $errCode
+fi
+
+# quick function to get a value from the HBase config file
+distMode=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed`
+if [ "$distMode" == 'false' ]; then
+  "$bin"/hbase-daemon.sh restart master
+else 
+  # stop all masters before re-start to avoid races for master znode
+  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" stop master 
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+    --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
+
+  # make sure the master znode has been deleted before continuing
+  zparent=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.parent`
+  if [ "$zparent" == "null" ]; then zparent="/hbase"; fi
+  zmaster=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.master`
+  if [ "$zmaster" == "null" ]; then zmaster="master"; fi
+  zmaster=$zparent/$zmaster
+  echo -n "Waiting for Master ZNode ${zmaster} to expire"
+  while $bin/hbase zkcli stat $zmaster >/dev/null 2>&1; do
+    echo -n "."
+    sleep 1
+  done
+  echo #force a newline
+
+  # all masters are down, now restart
+  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" start master 
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+    --hosts "${HBASE_BACKUP_MASTERS}" start master-backup
+
+  echo "Wait a minute for master to come up join cluster"
+  sleep 60
+
+  # A master joining the cluster will start by cleaning out regions in
+  # transition.  Wait until the master has cleaned them out before giving it
+  # a bunch of work to do; the master is vulnerable during startup.
+  zunassigned=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.unassigned`
+  if [ "$zunassigned" == "null" ]; then zunassigned="unassigned"; fi
+  zunassigned="$zparent/$zunassigned"
+  echo -n "Waiting for ${zunassigned} to empty"
+  while true ; do
+    unassigned=`$bin/hbase zkcli stat ${zunassigned} 2>&1 |grep -e 'numChildren = '|sed -e 's,numChildren = ,,'`
+    if test 0 -eq ${unassigned}
+    then
+      break
+    else
+      echo -n " ${unassigned}"
+    fi
+    sleep 1
+  done
+
+  # unlike the masters, roll all regionservers one-at-a-time
+  export HBASE_SLAVE_PARALLEL=false
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+    --hosts "${HBASE_REGIONSERVERS}" restart regionserver
+
+fi
diff --git a/0.90/bin/set_meta_block_caching.rb b/0.90/bin/set_meta_block_caching.rb
new file mode 100644
index 0000000..7c96392
--- /dev/null
+++ b/0.90/bin/set_meta_block_caching.rb
@@ -0,0 +1,81 @@
+# Set in_memory=true and blockcache=true on catalog tables.
+# The .META. and -ROOT- tables can be created with caching and
+# in_memory set to false.  You want them set to true so that
+# these hot tables make it into cache.  To see if the
+# .META. table has BLOCKCACHE set, in the shell do the following:
+#
+#   hbase> scan '-ROOT-'
+#
+# Look for the 'info' column family and check whether BLOCKCACHE => 'true'.
+# If not, run this script and it will set the value to true.
+# Setting cache to 'true' will only take effect on region restart,
+# or if you close the .META. region -- *disruptive* -- and have
+# it deploy elsewhere.  This script runs against an up and running
+# hbase instance.
+# 
+# To see usage for this script, run: 
+#
+#  ${HBASE_HOME}/bin/hbase org.jruby.Main set_meta_block_caching.rb
+#
+include Java
+import org.apache.hadoop.hbase.util.Bytes
+import org.apache.hadoop.hbase.HConstants
+import org.apache.hadoop.hbase.HRegionInfo
+import org.apache.hadoop.hbase.client.HTable
+import org.apache.hadoop.hbase.client.Delete
+import org.apache.hadoop.hbase.client.Put
+import org.apache.hadoop.hbase.client.Scan
+import org.apache.hadoop.hbase.HTableDescriptor
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.util.FSUtils
+import org.apache.hadoop.hbase.util.Writables
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.FileSystem
+import org.apache.commons.logging.LogFactory
+
+# Name of this script
+NAME = "set_meta_block_caching.rb"
+
+
+# Print usage for this script
+def usage
+  puts 'Usage: %s' % NAME
+  exit!
+end
+
+# Get configuration to use.
+c = HBaseConfiguration.new()
+
+# Set hadoop filesystem configuration using the hbase.rootdir.
+# Otherwise, we'll always use localhost though the hbase.rootdir
+# might be pointing at hdfs location.
+c.set("fs.default.name", c.get(HConstants::HBASE_DIR))
+fs = FileSystem.get(c)
+
+# Get a logger and a metautils instance.
+LOG = LogFactory.getLog(NAME)
+
+# Check arguments
+if ARGV.size > 0
+  usage
+end
+
+# Scan the -ROOT- table and, for each .META. region found, enable BLOCKCACHE
+# and IN_MEMORY on its catalog ('info') family, then write the edit back.
+metaTable = HTable.new(c, HConstants::ROOT_TABLE_NAME)
+scan = Scan.new()
+scan.addColumn(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER);
+scanner = metaTable.getScanner(scan)
+while (result = scanner.next())
+  rowid = Bytes.toString(result.getRow())
+  LOG.info("Setting BLOCKCACHE and IN_MEMORY on: " + rowid);
+  hriValue = result.getValue(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER)
+  hri = Writables.getHRegionInfo(hriValue)
+  family = hri.getTableDesc().getFamily(HConstants::CATALOG_FAMILY)
+  family.setBlockCacheEnabled(true)
+  family.setInMemory(true)
+  p = Put.new(result.getRow())
+  p.add(HConstants::CATALOG_FAMILY, HConstants::REGIONINFO_QUALIFIER, Writables.getBytes(hri));
+  metaTable.put(p)
+end
+scanner.close()
diff --git a/0.90/bin/start-hbase.sh b/0.90/bin/start-hbase.sh
new file mode 100755
index 0000000..15e952e
--- /dev/null
+++ b/0.90/bin/start-hbase.sh
@@ -0,0 +1,54 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Modelled after $HADOOP_HOME/bin/start-hbase.sh.
+
+# Start hbase daemons.
+# Run this on the master node.
+usage="Usage: start-hbase.sh"
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+# start hbase daemons
+errCode=$?
+if [ $errCode -ne 0 ]
+then
+  exit $errCode
+fi
+
+distMode=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed`
+
+
+if [ "$distMode" == 'false' ] 
+then
+  "$bin"/hbase-daemon.sh start master
+else
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" start zookeeper
+  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" start master 
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+    --hosts "${HBASE_REGIONSERVERS}" start regionserver
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+    --hosts "${HBASE_BACKUP_MASTERS}" start master-backup
+fi
diff --git a/0.90/bin/stop-hbase.sh b/0.90/bin/stop-hbase.sh
new file mode 100755
index 0000000..670ec21b
--- /dev/null
+++ b/0.90/bin/stop-hbase.sh
@@ -0,0 +1,71 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Modelled after $HADOOP_HOME/bin/stop-hbase.sh.
+
+# Stop hbase daemons.  Run this on the master node.
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+# variables needed for stop command
+if [ "$HBASE_LOG_DIR" = "" ]; then
+  export HBASE_LOG_DIR="$HBASE_HOME/logs"
+fi
+mkdir -p "$HBASE_LOG_DIR"
+
+if [ "$HBASE_IDENT_STRING" = "" ]; then
+  export HBASE_IDENT_STRING="$USER"
+fi
+
+export HBASE_LOGFILE=hbase-$HBASE_IDENT_STRING-master-$HOSTNAME.log
+logout=$HBASE_LOG_DIR/hbase-$HBASE_IDENT_STRING-master-$HOSTNAME.out  
+loglog="${HBASE_LOG_DIR}/${HBASE_LOGFILE}"
+pid=${HBASE_PID_DIR:-/tmp}/hbase-$HBASE_IDENT_STRING-master.pid
+
+echo -n stopping hbase
+echo "`date` Stopping hbase (via master)" >> $loglog
+
+nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
+   --config "${HBASE_CONF_DIR}" \
+   master stop "$@" > "$logout" 2>&1 < /dev/null &
+
+while kill -0 `cat $pid` > /dev/null 2>&1; do
+  echo -n "."
+  sleep 1;
+done
+# Add a CR after we're done w/ dots.
+echo
+
+# distributed == false means that the HMaster will kill ZK when it exits
+distMode=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.cluster.distributed`
+if [ "$distMode" == 'true' ] 
+then
+  # TODO: store backup masters in ZooKeeper and have the primary send them a shutdown message
+  # stop any backup masters
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
+    --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
+
+  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" stop zookeeper
+fi
diff --git a/0.90/bin/zookeepers.sh b/0.90/bin/zookeepers.sh
new file mode 100755
index 0000000..89a214e
--- /dev/null
+++ b/0.90/bin/zookeepers.sh
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+#
+#/**
+# * Copyright 2009 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+# 
+# Run a shell command on all zookeeper hosts.
+#
+# Environment Variables
+#
+#   HBASE_CONF_DIR  Alternate hbase conf dir. Default is ${HBASE_HOME}/conf.
+#   HBASE_SLAVE_SLEEP Seconds to sleep between spawning remote commands.
+#   HBASE_SSH_OPTS Options passed to ssh when running remote commands.
+#
+# Modelled after $HADOOP_HOME/bin/slaves.sh.
+
+usage="Usage: zookeepers [--config <hbase-confdir>] command..."
+
+# if no args specified, show usage
+if [ $# -le 0 ]; then
+  echo $usage
+  exit 1
+fi
+
+bin=`dirname "${BASH_SOURCE-$0}"`
+bin=`cd "$bin">/dev/null; pwd`
+
+. "$bin"/hbase-config.sh
+
+if [ "$HBASE_MANAGES_ZK" = "" ]; then
+  HBASE_MANAGES_ZK=true
+fi
+
+if [ "$HBASE_MANAGES_ZK" = "true" ]; then
+  hosts=`"$bin"/hbase org.apache.hadoop.hbase.zookeeper.ZKServerTool | grep '^ZK host:' | sed 's,^ZK host:,,'`
+  cmd=$"${@// /\\ }"
+  for zookeeper in $hosts; do
+   ssh $HBASE_SSH_OPTS $zookeeper $cmd 2>&1 | sed "s/^/$zookeeper: /" &
+   if [ "$HBASE_SLAVE_SLEEP" != "" ]; then
+     sleep $HBASE_SLAVE_SLEEP
+   fi
+  done
+fi
+
+wait
diff --git a/0.90/conf/hadoop-metrics.properties b/0.90/conf/hadoop-metrics.properties
new file mode 100644
index 0000000..046a369
--- /dev/null
+++ b/0.90/conf/hadoop-metrics.properties
@@ -0,0 +1,58 @@
+# See http://wiki.apache.org/hadoop/GangliaMetrics
+# Make sure you know whether you are using ganglia 3.0 or 3.1.
+# If 3.1, you will have to patch your hadoop instance with HADOOP-4675.
+# And, yes, this file is named hadoop-metrics.properties rather than
+# hbase-metrics.properties because we're leveraging the hadoop metrics
+# package and hadoop-metrics.properties is a hardcoded name, at least
+# for the moment.
+#
+# See also http://hadoop.apache.org/hbase/docs/current/metrics.html
+
+# Configuration of the "hbase" context for null
+hbase.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "hbase" context for file
+# hbase.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+# hbase.period=10
+# hbase.fileName=/tmp/metrics_hbase.log
+
+# HBase-specific configuration to reset long-running stats (e.g. compactions)
+# If this variable is left out, then the default is no expiration.
+hbase.extendedperiod = 3600
+
+# Configuration of the "hbase" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# hbase.period=10
+# hbase.servers=GMETADHOST_IP:8649
+
+# Configuration of the "jvm" context for null
+jvm.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "jvm" context for file
+# jvm.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+# jvm.period=10
+# jvm.fileName=/tmp/metrics_jvm.log
+
+# Configuration of the "jvm" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# jvm.period=10
+# jvm.servers=GMETADHOST_IP:8649
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContext
+
+# Configuration of the "rpc" context for file
+# rpc.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+# rpc.period=10
+# rpc.fileName=/tmp/metrics_rpc.log
+
+# Configuration of the "rpc" context for ganglia
+# Pick one: Ganglia 3.0 (former) or Ganglia 3.1 (latter)
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+# rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
+# rpc.period=10
+# rpc.servers=GMETADHOST_IP:8649
diff --git a/0.90/conf/hbase-env.sh b/0.90/conf/hbase-env.sh
new file mode 100644
index 0000000..718ada4
--- /dev/null
+++ b/0.90/conf/hbase-env.sh
@@ -0,0 +1,76 @@
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+# Set environment variables here.
+
+# The java implementation to use.  Java 1.6 required.
+# export JAVA_HOME=/usr/java/jdk1.6.0/
+
+# Extra Java CLASSPATH elements.  Optional.
+# export HBASE_CLASSPATH=
+
+# The maximum amount of heap to use, in MB. Default is 1000.
+# export HBASE_HEAPSIZE=1000
+
+# Extra Java runtime options.
+# Below are what we set by default.  May only work with SUN JVM.
+# For more on why as well as other possible settings,
+# see http://wiki.apache.org/hadoop/PerformanceTuning
+export HBASE_OPTS="-ea -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
+
+# Uncomment below to enable java garbage collection logging.
+# export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log" 
+
+# Uncomment and adjust to enable JMX exporting
+# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
+# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
+#
+# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
+# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
+# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
+# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
+
+# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
+# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
+
+# Extra ssh options.  Empty by default.
+# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
+
+# Where log files are stored.  $HBASE_HOME/logs by default.
+# export HBASE_LOG_DIR=${HBASE_HOME}/logs
+
+# A string representing this instance of hbase. $USER by default.
+# export HBASE_IDENT_STRING=$USER
+
+# The scheduling priority for daemon processes.  See 'man nice'.
+# export HBASE_NICENESS=10
+
+# The directory where pid files are stored. /tmp by default.
+# export HBASE_PID_DIR=/var/hadoop/pids
+
+# Seconds to sleep between slave commands.  Unset by default.  This
+# can be useful in large clusters, where, e.g., slave rsyncs can
+# otherwise arrive faster than the master can service them.
+# export HBASE_SLAVE_SLEEP=0.1
+
+# Tell HBase whether it should manage its own instance of ZooKeeper or not.
+# export HBASE_MANAGES_ZK=true
diff --git a/0.90/conf/hbase-site.xml b/0.90/conf/hbase-site.xml
new file mode 100644
index 0000000..af4c300
--- /dev/null
+++ b/0.90/conf/hbase-site.xml
@@ -0,0 +1,25 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
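+  <!--
+    Example only (commented out): point hbase.rootdir at a durable location instead of
+    the default /tmp/hbase-${user.name}.  The path below is purely illustrative.
+
+  <property>
+    <name>hbase.rootdir</name>
+    <value>file:///DIRECTORY/hbase</value>
+  </property>
+  -->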
+</configuration>
diff --git a/0.90/conf/log4j.properties b/0.90/conf/log4j.properties
new file mode 100644
index 0000000..10873e3
--- /dev/null
+++ b/0.90/conf/log4j.properties
@@ -0,0 +1,55 @@
+# Define some default values that can be overridden by system properties
+hbase.root.logger=INFO,console
+hbase.log.dir=.
+hbase.log.file=hbase.log
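+# Note: the daemon launch scripts may override hbase.root.logger (for example
+# with INFO,DRFA) so long-running daemons use the rolling file appender below.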
+
+# Define the root logger to the system property "hbase.root.logger".
+log4j.rootLogger=${hbase.root.logger}
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this 
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+# Custom Logging levels
+
+log4j.logger.org.apache.zookeeper=INFO
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+log4j.logger.org.apache.hadoop.hbase=DEBUG
+# Make these two classes INFO-level. Make them DEBUG to see more zk debug.
+log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO
+log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO
+#log4j.logger.org.apache.hadoop.dfs=DEBUG
+# Set this class to log INFO only; otherwise it's over-the-top verbose
+
+# Uncomment the below if you want to remove logging of client region caching
+# and scan of .META. messages
+# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO
+# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO
diff --git a/0.90/conf/regionservers b/0.90/conf/regionservers
new file mode 100644
index 0000000..2fbb50c
--- /dev/null
+++ b/0.90/conf/regionservers
@@ -0,0 +1 @@
+localhost
diff --git a/0.90/pom.xml b/0.90/pom.xml
new file mode 100644
index 0000000..b74b38f
--- /dev/null
+++ b/0.90/pom.xml
@@ -0,0 +1,857 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <groupId>org.apache</groupId>
+    <artifactId>apache</artifactId>
+    <version>8</version>
+  </parent>
+
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase</artifactId>
+  <packaging>jar</packaging>
+  <version>0.90.0</version>
+  <name>HBase</name>
+  <description>
+    HBase is the &lt;a href="http://hadoop.apache.org"&gt;Hadoop&lt;/a&gt; database. Use it when you need
+    random, realtime read/write access to your Big Data.
+    This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters
+    of commodity hardware.
+  </description>
+  <url>http://hbase.apache.org</url>
+
+  <scm>
+    <connection>scm:svn:http://svn.apache.org/repos/asf/hbase/trunk</connection>
+    <developerConnection>scm:svn:https://svn.apache.org/repos/asf/hbase/trunk</developerConnection>
+    <url>http://svn.apache.org/viewvc/hbase/trunk</url>
+  </scm>
+
+  <issueManagement>
+    <system>JIRA</system>
+    <url>http://issues.apache.org/jira/browse/HBASE</url>
+  </issueManagement>
+
+  <ciManagement>
+    <system>hudson</system>
+    <url>http://hudson.zones.apache.org/hudson/view/HBase/job/HBase-TRUNK/</url>
+  </ciManagement>
+
+  <mailingLists>
+    <mailingList>
+      <name>User List</name>
+      <subscribe>user-subscribe@hbase.apache.org</subscribe>
+      <unsubscribe>user-unsubscribe@hbase.apache.org</unsubscribe>
+      <post>user@hbase.apache.org</post>
+      <archive>http://mail-archives.apache.org/mod_mbox/hbase-user/</archive>
+      <otherArchives>
+        <otherArchive>http://dir.gmane.org/gmane.comp.java.hadoop.hbase.user</otherArchive>
+        <otherArchive>http://search-hadoop.com/?q=&amp;fc_project=HBase</otherArchive>
+      </otherArchives>
+    </mailingList>
+    <mailingList>
+      <name>Developer List</name>
+      <subscribe>dev-subscribe@hbase.apache.org</subscribe>
+      <unsubscribe>dev-unsubscribe@hbase.apache.org</unsubscribe>
+      <post>dev@hbase.apache.org</post>
+      <archive>http://mail-archives.apache.org/mod_mbox/hbase-dev/</archive>
+      <otherArchives>
+        <otherArchive>http://dir.gmane.org/gmane.comp.java.hadoop.hbase.devel</otherArchive>
+        <otherArchive>http://search-hadoop.com/?q=&amp;fc_project=HBase</otherArchive>
+      </otherArchives>
+    </mailingList>
+    <mailingList>
+      <name>Commits List</name>
+      <subscribe>commits-subscribe@hbase.apache.org</subscribe>
+      <unsubscribe>commits-unsubscribe@hbase.apache.org</unsubscribe>
+      <archive>http://mail-archives.apache.org/mod_mbox/hbase-commits/</archive>
+    </mailingList>
+    <mailingList>
+      <name>Issues List</name>
+      <subscribe>issues-subscribe@hbase.apache.org</subscribe>
+      <unsubscribe>issues-unsubscribe@hbase.apache.org</unsubscribe>
+      <archive>http://mail-archives.apache.org/mod_mbox/hbase-issues/</archive>
+    </mailingList>
+  </mailingLists>
+
+  <developers>
+    <developer>
+      <id>apurtell</id>
+      <name>Andrew Purtell</name>
+      <email>apurtell@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>Trend Micro</organization>
+      <organizationUrl>http://www.trendmicro.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>jdcryans</id>
+      <name>Jean-Daniel Cryans</name>
+      <email>jdcryans@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>StumbleUpon</organization>
+      <organizationUrl>http://www.stumbleupon.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>jgray</id>
+      <name>Jonathan Gray</name>
+      <email>jgray@streamy.com</email>
+      <timezone>-8</timezone>
+      <organization>Facebook</organization>
+      <organizationUrl>http://www.facebook.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>larsgeorge</id>
+      <name>Lars George</name>
+      <email>larsgeorge@apache.org</email>
+      <timezone>+1</timezone>
+      <organization>WorldLingo</organization>
+      <organizationUrl>http://www.worldlingo.com/</organizationUrl>
+    </developer>
+    <developer>
+      <id>rawson</id>
+      <name>Ryan Rawson</name>
+      <email>rawson@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>StumbleUpon</organization>
+      <organizationUrl>http://www.stumbleupon.com</organizationUrl>
+    </developer>
+    <developer>
+      <id>stack</id>
+      <name>Michael Stack</name>
+      <email>stack@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>StumbleUpon</organization>
+      <organizationUrl>http://www.stumbleupon.com/</organizationUrl>
+    </developer>
+    <developer>
+      <id>todd</id>
+      <name>Todd Lipcon</name>
+      <email>todd@apache.org</email>
+      <timezone>-8</timezone>
+      <organization>Cloudera</organization>
+      <organizationUrl>http://www.cloudera.com</organizationUrl>
+    </developer>
+  </developers>
+
+  <repositories>
+    <repository>
+      <id>temp-thrift</id>
+      <name>Thrift 0.2.0</name>
+      <url>http://people.apache.org/~rawson/repo/</url>
+      <snapshots>
+        <enabled>false</enabled>
+      </snapshots>
+      <releases>
+        <enabled>true</enabled>
+      </releases>
+    </repository>
+    <repository>
+      <id>java.net</id>
+      <name>Java.Net</name>
+      <url>http://download.java.net/maven/2/</url>
+      <snapshots>
+        <enabled>false</enabled>
+      </snapshots>
+      <releases>
+        <enabled>true</enabled>
+      </releases>
+    </repository>
+    <repository>
+      <id>codehaus</id>
+      <name>Codehaus Public</name>
+      <url>http://repository.codehaus.org/</url>
+      <snapshots>
+        <enabled>false</enabled>
+      </snapshots>
+      <releases>
+        <enabled>true</enabled>
+      </releases>
+    </repository>
+  </repositories>
+
+  <build>
+    <!-- Some plugins (javadoc for example) can be used in both the normal build and the site phase.
+         These plugins inherit their options from the <reporting> section below. These settings
+         can be overwritten here. -->
+    <pluginManagement>
+      <plugins>
+        <plugin>
+          <artifactId>maven-compiler-plugin</artifactId>
+          <configuration>
+            <source>${compileSource}</source>
+            <target>${compileSource}</target>
+            <showWarnings>true</showWarnings>
+            <showDeprecation>false</showDeprecation>
+          </configuration>
+        </plugin>
+        <plugin>
+          <artifactId>maven-surefire-plugin</artifactId>
+          <configuration>
+            <forkedProcessTimeoutInSeconds>900</forkedProcessTimeoutInSeconds>
+            <argLine>-enableassertions -Xmx1400m</argLine>
+            <redirectTestOutputToFile>true</redirectTestOutputToFile>
+          </configuration>
+        </plugin>
+        <plugin>
+          <artifactId>maven-clean-plugin</artifactId>
+          <configuration>
+            <filesets>
+              <fileset>
+                <!--dfs tests have build dir hardcoded. Clean it as part of
+               clean target-->
+                <directory>build</directory>
+              </fileset>
+            </filesets>
+          </configuration>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.rat</groupId>
+          <artifactId>apache-rat-plugin</artifactId>
+          <version>0.6</version>
+        </plugin>
+      </plugins>
+    </pluginManagement>
+
+    <resources>
+      <resource>
+        <directory>src/main/resources/</directory>
+        <includes>
+          <include>hbase-default.xml</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${project.build.directory}</directory>
+        <includes>
+          <include>hbase-webapps/**</include>
+        </includes>
+      </resource>
+    </resources>
+
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>xml-maven-plugin</artifactId>
+        <version>1.0-beta-3</version>
+        <executions>
+          <execution>
+            <goals>
+              <goal>transform</goal>
+            </goals>
+            <phase>pre-site</phase>
+          </execution>
+        </executions>
+        <configuration>
+          <transformationSets>
+            <transformationSet>
+              <dir>${basedir}/src/main/resources/</dir>
+              <includes>
+                <include>hbase-default.xml</include>
+              </includes>
+              <stylesheet>${basedir}/src/main/xslt/configuration_to_docbook_section.xsl</stylesheet>
+              <outputDir>${basedir}/target/site/</outputDir>
+            </transformationSet>
+          </transformationSets>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>com.agilejava.docbkx</groupId>
+        <artifactId>docbkx-maven-plugin</artifactId>
+        <version>2.0.11</version>
+        <executions>
+          <execution>
+            <goals>
+              <goal>generate-html</goal>
+            </goals>
+            <phase>pre-site</phase>
+          </execution>
+        </executions>
+        <dependencies>
+          <dependency>
+            <groupId>org.docbook</groupId>
+            <artifactId>docbook-xml</artifactId>
+            <version>4.4</version>
+            <scope>runtime</scope>
+          </dependency>
+        </dependencies>
+        <configuration>
+          <xincludeSupported>true</xincludeSupported>
+          <chunkedOutput>true</chunkedOutput>
+          <useIdAsFilename>true</useIdAsFilename>
+          <sectionAutolabelMaxDepth>100</sectionAutolabelMaxDepth>
+          <sectionAutolabel>true</sectionAutolabel>
+          <sectionLabelIncludesComponentLabel>true</sectionLabelIncludesComponentLabel>
+          <targetDirectory>${basedir}/target/site/</targetDirectory>
+          <htmlStylesheet>css/freebsd_docbook.css</htmlStylesheet>
+        </configuration>
+      </plugin>
+      <plugin>
+        <artifactId>maven-assembly-plugin</artifactId>
+        <configuration>
+          <tarLongFileMode>gnu</tarLongFileMode>
+          <appendAssemblyId>false</appendAssemblyId>
+          <descriptors>
+            <descriptor>src/assembly/all.xml</descriptor>
+          </descriptors>
+        </configuration>
+      </plugin>
+
+      <!-- Run with -Dmaven.test.skip.exec=true to build -tests.jar without running tests (this is needed for upstream projects whose tests need this jar simply for compilation)-->
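+      <!-- e.g. (illustrative invocation): mvn -Dmaven.test.skip.exec=true package -->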
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-jar-plugin</artifactId>
+        <executions>
+          <execution>
+            <goals>
+              <goal>test-jar</goal>
+            </goals>
+          </execution>
+        </executions>
+        <configuration>
+          <archive>
+            <manifest>
+              <mainClass>org.apache.hadoop.hbase.mapreduce.Driver</mainClass>
+            </manifest>
+          </archive>
+          <!-- Exclude these 2 packages, because their dependency _binary_ files include the sources, and Maven 2.2 appears to add them to the sources to compile, weird-->
+          <excludes>
+            <exclude>org/apache/jute/**</exclude>
+            <exclude>org/apache/zookeeper/**</exclude>
+            <exclude>**/*.jsp</exclude>
+            <exclude>**/hbase-site.xml</exclude>
+          </excludes>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-source-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>attach-sources</id>
+            <phase>package</phase>
+            <goals>
+              <goal>jar-no-fork</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <configuration>
+          <forkMode>always</forkMode>
+          <includes>
+            <include>**/Test*.java</include>
+          </includes>
+          <excludes>
+            <exclude>**/*$*</exclude>
+          </excludes>
+        </configuration>
+      </plugin>
+      <plugin>
+        <artifactId>maven-antrun-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>generate</id>
+            <phase>generate-sources</phase>
+            <configuration>
+              <tasks>
+                <property name="build.webapps"
+                          location="${project.build.directory}/hbase-webapps"/>
+                <property name="src.webapps"
+                          location="${basedir}/src/main/resources/hbase-webapps"/>
+                <property name="generated.sources"
+                          location="${project.build.directory}/generated-sources"/>
+
+                <mkdir dir="${build.webapps}"/>
+                <copy todir="${build.webapps}">
+                  <fileset dir="${src.webapps}">
+                    <exclude name="**/*.jsp"/>
+                    <exclude name="**/.*"/>
+                    <exclude name="**/*~"/>
+                  </fileset>
+                </copy>
+
+                <!--The compile.classpath is passed in by maven-->
+                <taskdef classname="org.apache.jasper.JspC" name="jspcompiler" classpathref="maven.compile.classpath"/>
+
+                <mkdir dir="${build.webapps}/master/WEB-INF"/>
+                <jspcompiler uriroot="${src.webapps}/master"
+                             outputdir="${generated.sources}"
+                             package="org.apache.hadoop.hbase.generated.master"
+                             webxml="${build.webapps}/master/WEB-INF/web.xml"/>
+
+                <mkdir dir="${build.webapps}/regionserver/WEB-INF"/>
+                <jspcompiler uriroot="${src.webapps}/regionserver"
+                             outputdir="${generated.sources}"
+                             package="org.apache.hadoop.hbase.generated.regionserver"
+                             webxml="${build.webapps}/regionserver/WEB-INF/web.xml"/>
+
+                <exec executable="sh">
+                  <arg line="${basedir}/src/saveVersion.sh ${project.version} ${generated.sources}"/>
+                </exec>
+              </tasks>
+            </configuration>
+            <goals>
+              <goal>run</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>build-helper-maven-plugin</artifactId>
+        <version>1.5</version>
+        <executions>
+          <execution>
+            <id>add-jspc-source</id>
+            <phase>generate-sources</phase>
+            <goals>
+              <goal>add-source</goal>
+            </goals>
+            <configuration>
+              <sources>
+                <source>${basedir}/target/jspc</source>
+              </sources>
+            </configuration>
+          </execution>
+          <execution>
+            <id>add-package-info</id>
+            <phase>generate-sources</phase>
+            <goals>
+              <goal>add-source</goal>
+            </goals>
+            <configuration>
+              <sources>
+                <source>${project.build.directory}/generated-sources</source>
+              </sources>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
+
+  <properties>
+    <compileSource>1.6</compileSource>
+
+    <!-- Dependencies -->
+    <avro.version>1.3.3</avro.version>
+    <commons-cli.version>1.2</commons-cli.version>
+    <commons-codec.version>1.4</commons-codec.version>
+    <commons-httpclient.version>3.1</commons-httpclient.version><!-- pretty outdated -->
+    <commons-lang.version>2.5</commons-lang.version>
+    <commons-logging.version>1.1.1</commons-logging.version>
+    <commons-math.version>2.1</commons-math.version>
+    <guava.version>r06</guava.version>
+    <!--The below was made by patching branch-0.20-append
+    at revision 1034499 with this hdfs-895 patch:
+    https://issues.apache.org/jira/secure/attachment/12459473/hdfs-895-branch-20-append.txt
+    -->
+    <hadoop.version>0.20-append-r1044525</hadoop.version>
+    <jasper.version>5.5.23</jasper.version>
+    <jaxb-api.version>2.1</jaxb-api.version>
+    <jetty.version>6.1.26</jetty.version>
+    <jetty.jspapi.version>6.1.14</jetty.jspapi.version>
+    <jersey.version>1.4</jersey.version>
+    <!--JRuby > 1.0.x has *GPL jars in it so we can't upgrade. See HBASE-3374-->
+    <jruby.version>1.0.3</jruby.version>
+    <jsr311.version>1.1.1</jsr311.version>
+    <junit.version>4.8.1</junit.version>
+    <log4j.version>1.2.16</log4j.version>
+    <mockito-all.version>1.8.5</mockito-all.version>
+    <protobuf.version>2.3.0</protobuf.version>
+    <slf4j.version>1.5.8</slf4j.version><!-- newer version available -->
+    <stax-api.version>1.0.1</stax-api.version>
+    <thrift.version>0.2.0</thrift.version><!-- newer version available -->
+    <zookeeper.version>3.3.2</zookeeper.version>
+  </properties>
+
+  <!-- Sorted by groups of dependencies then groupId and artifactId -->
+  <dependencies>
+    <!--
+      Note: There are a few exclusions to prevent duplicate code in different jars to be included:
+        * org.mortbay.jetty:servlet-api, javax.servlet:servlet-api: These are excluded because they are
+          the same implementations. I chose org.mortbay.jetty:servlet-api-2.5 instead, which is a third
+          implementation of the same, because Hadoop also uses this version
+        * javax.servlet:jsp-api in favour of org.mortbay.jetty:jsp-api-2.1
+        * javax.xml.stream:stax-api in favour of stax:stax-api
+
+      Note: Both org.apache.avro:avro and com.sun.jersey:jersey-json depend on Jackson so the version
+        is chosen which comes first in the list of dependencies (jersey in this case)
+    -->
+
+    <!-- General dependencies -->
+    <dependency>
+      <groupId>com.google.guava</groupId>
+      <artifactId>guava</artifactId>
+      <version>${guava.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-cli</groupId>
+      <artifactId>commons-cli</artifactId>
+      <version>${commons-cli.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-codec</groupId>
+      <artifactId>commons-codec</artifactId>
+      <version>${commons-codec.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-httpclient</groupId>
+      <artifactId>commons-httpclient</artifactId>
+      <version>${commons-httpclient.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-lang</groupId>
+      <artifactId>commons-lang</artifactId>
+      <version>${commons-lang.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>commons-logging</groupId>
+      <artifactId>commons-logging</artifactId>
+      <version>${commons-logging.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>log4j</groupId>
+      <artifactId>log4j</artifactId>
+      <version>${log4j.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>avro</artifactId>
+      <version>${avro.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-core</artifactId>
+      <version>${hadoop.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.zookeeper</groupId>
+      <artifactId>zookeeper</artifactId>
+      <version>${zookeeper.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.thrift</groupId>
+      <artifactId>thrift</artifactId>
+      <version>${thrift.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>org.slf4j</groupId>
+          <artifactId>slf4j-simple</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.jruby</groupId>
+      <artifactId>jruby-complete</artifactId>
+      <version>${jruby.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
+      <artifactId>jetty</artifactId>
+      <version>${jetty.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>org.mortbay.jetty</groupId>
+          <artifactId>servlet-api</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
+      <artifactId>jetty-util</artifactId>
+      <version>${jetty.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
+      <artifactId>jsp-2.1</artifactId>
+      <version>${jetty.jspapi.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
+      <artifactId>jsp-api-2.1</artifactId>
+      <version>${jetty.jspapi.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.mortbay.jetty</groupId>
+      <artifactId>servlet-api-2.5</artifactId>
+      <version>${jetty.jspapi.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-api</artifactId>
+      <version>${slf4j.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>org.slf4j</groupId>
+      <artifactId>slf4j-log4j12</artifactId>
+      <version>${slf4j.version}</version>
+      <scope>runtime</scope>
+    </dependency>
+    <dependency>
+      <!--If this is not in the runtime lib, we get odd
+      "2009-02-27 11:38:39.504::WARN:  failed jsp
+       java.lang.NoSuchFieldError: IS_SECURITY_ENABLED"
+       exceptions out of jetty deploying webapps.
+       St.Ack Thu May 20 01:04:41 PDT 2010
+      -->
+      <groupId>tomcat</groupId>
+      <artifactId>jasper-compiler</artifactId>
+      <version>${jasper.version}</version>
+      <scope>runtime</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>javax.servlet</groupId>
+          <artifactId>jsp-api</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>javax.servlet</groupId>
+          <artifactId>servlet-api</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>tomcat</groupId>
+      <artifactId>jasper-runtime</artifactId>
+      <version>${jasper.version}</version>
+      <scope>runtime</scope>
+      <exclusions>
+        <exclusion>
+          <groupId>javax.servlet</groupId>
+          <artifactId>servlet-api</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+
+    <!-- REST dependencies -->
+    <dependency>
+      <groupId>com.google.protobuf</groupId>
+      <artifactId>protobuf-java</artifactId>
+      <version>${protobuf.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-core</artifactId>
+      <version>${jersey.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-json</artifactId>
+      <version>${jersey.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>com.sun.jersey</groupId>
+      <artifactId>jersey-server</artifactId>
+      <version>${jersey.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>javax.ws.rs</groupId>
+      <artifactId>jsr311-api</artifactId>
+      <version>${jsr311.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>javax.xml.bind</groupId>
+      <artifactId>jaxb-api</artifactId>
+      <version>${jaxb-api.version}</version>
+      <exclusions>
+        <exclusion>
+          <groupId>javax.xml.stream</groupId>
+          <artifactId>stax-api</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>stax</groupId>
+      <artifactId>stax-api</artifactId>
+      <version>${stax-api.version}</version>
+    </dependency>
+
+    <!-- Test dependencies -->
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>${junit.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <version>${mockito-all.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.commons</groupId>
+      <artifactId>commons-math</artifactId>
+      <version>${commons-math.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-test</artifactId>
+      <version>${hadoop.version}</version>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+
+  <!--
+  To publish, use the following settings.xml file (placed in ~/.m2/settings.xml):
+
+ <settings>
+  <servers>
+    <server>
+      <id>apache.releases.https</id>
+      <username>hbase_committer</username>
+      <password>********</password>
+    </server>
+
+    <server>
+      <id>apache.snapshots.https</id>
+      <username>hbase_committer</username>
+      <password>********</password>
+    </server>
+
+  </servers>
+ </settings>
+
+  $ mvn deploy
+(or)
+  $ mvn -s /my/path/settings.xml deploy
+
+  -->
+
+  <!-- See http://jira.codehaus.org/browse/MSITE-443 why the settings need to be here and not in pluginManagement. -->
+  <reporting>
+    <plugins>
+      <plugin>
+        <artifactId>maven-project-info-reports-plugin</artifactId>
+        <version>2.1.2</version>
+        <reportSets>
+          <reportSet>
+            <reports>
+              <report>project-team</report>
+              <report>mailing-list</report>
+              <report>cim</report>
+              <report>issue-tracking</report>
+              <report>license</report>
+              <report>scm</report>
+              <report>index</report>
+            </reports>
+          </reportSet>
+        </reportSets>
+
+      </plugin>
+      <!-- Disabled for now.
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-pmd-plugin</artifactId>
+        <version>2.4</version>
+        <configuration>
+          <targetJdk>${compileSource}</targetJdk>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-checkstyle-plugin</artifactId>
+        <version>2.5</version>
+      </plugin>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>findbugs-maven-plugin</artifactId>
+        <version>2.3.1</version>
+        <configuration>
+          <findbugsXmlOutput>true</findbugsXmlOutput>
+          <findbugsXmlWithMessages>true</findbugsXmlWithMessages>
+          <xmlOutput>true</xmlOutput>
+        </configuration>
+      </plugin>
+      -->
+      <plugin>
+        <artifactId>maven-site-plugin</artifactId>
+        <version>2.1</version>
+        <configuration>
+          <inputEncoding>UTF-8</inputEncoding>
+          <outputEncoding>UTF-8</outputEncoding>
+          <templateFile>src/site/site.vm</templateFile>
+        </configuration>
+      </plugin>
+      <plugin>
+        <artifactId>maven-javadoc-plugin</artifactId>
+        <version>2.6.1</version>
+        <configuration>
+          <docfilessubdirs>true</docfilessubdirs>
+        </configuration>
+        <reportSets>
+          <reportSet>
+            <id>default</id>
+            <reports>
+              <report>javadoc</report>
+            </reports>
+          </reportSet>
+        </reportSets>
+        <!--
+          This is probably not needed, given the smallness of the HBase source code, but left here in case
+          <minmemory>128m</minmemory>
+          <maxmemory>1024m</maxmemory>
+        -->
+      </plugin>
+      <!--Disabled for now.
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>cobertura-maven-plugin</artifactId>
+        <version>2.3</version>
+      </plugin>
+      -->
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-jxr-plugin</artifactId>
+        <version>2.1</version>
+      </plugin>
+      <!-- Disabled for now
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>jdepend-maven-plugin</artifactId>
+        <version>2.0-beta-2</version>
+      </plugin>
+      <plugin>
+        <artifactId>maven-changes-plugin</artifactId>
+        <version>2.3</version>
+        <configuration>
+          <issueLinkTemplate>%URL%/browse/%ISSUE%</issueLinkTemplate>
+        </configuration>
+        <reportSets>
+          <reportSet>
+            <reports>
+              <report>changes-report</report>
+            </reports>
+          </reportSet>
+        </reportSets>
+      </plugin>
+      <plugin>
+        <groupId>com.atlassian.maven.plugins</groupId>
+        <artifactId>maven-clover2-plugin</artifactId>
+        <version>2.6.3</version>
+      </plugin>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>taglist-maven-plugin</artifactId>
+        <version>2.4</version>
+      </plugin>
+      <plugin>
+        <artifactId>maven-surefire-report-plugin</artifactId>
+        <version>2.5</version>
+      </plugin>
+      <plugin>
+        <artifactId>maven-dependency-plugin</artifactId>
+        <version>2.1</version>
+      </plugin>
+      -->
+      <plugin>
+        <groupId>org.apache.rat</groupId>
+        <artifactId>apache-rat-plugin</artifactId>
+        <version>0.6</version>
+      </plugin>
+    </plugins>
+  </reporting>
+</project>
diff --git a/0.90/src/assembly/all.xml b/0.90/src/assembly/all.xml
new file mode 100644
index 0000000..53d5638
--- /dev/null
+++ b/0.90/src/assembly/all.xml
@@ -0,0 +1,61 @@
+<?xml version="1.0"?>
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
+          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
+  <!--This 'all' id is not appended to the produced bundle because we do this:
+    http://maven.apache.org/plugins/maven-assembly-plugin/faq.html#required-classifiers
+  -->
+  <id>all</id>
+  <formats>
+    <format>tar.gz</format>
+  </formats>
+  <fileSets>
+    <fileSet>
+      <includes>
+        <include>${basedir}/*.txt</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <includes>
+        <include>pom.xml</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>src</directory>
+    </fileSet>
+    <fileSet>
+      <directory>conf</directory>
+    </fileSet>
+    <fileSet>
+      <directory>bin</directory>
+      <fileMode>755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>src/main/ruby</directory>
+      <outputDirectory>lib/ruby</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>target</directory>
+      <outputDirectory>/</outputDirectory>
+      <includes>
+          <include>hbase-${project.version}.jar</include>
+          <include>hbase-${project.version}-tests.jar</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>target/hbase-webapps</directory>
+      <outputDirectory>hbase-webapps</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>target/site</directory>
+      <outputDirectory>docs</outputDirectory>
+    </fileSet>
+  </fileSets>
+  <dependencySets>
+    <dependencySet>
+      <outputDirectory>/lib</outputDirectory>
+      <unpack>false</unpack>
+      <scope>runtime</scope>
+    </dependencySet>
+  </dependencySets>
+</assembly>
diff --git a/0.90/src/docbkx/book.xml b/0.90/src/docbkx/book.xml
new file mode 100644
index 0000000..3e688fb
--- /dev/null
+++ b/0.90/src/docbkx/book.xml
@@ -0,0 +1,1926 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<book version="5.0" xmlns="http://docbook.org/ns/docbook"
+      xmlns:xlink="http://www.w3.org/1999/xlink"
+      xmlns:xi="http://www.w3.org/2001/XInclude"
+      xmlns:svg="http://www.w3.org/2000/svg"
+      xmlns:m="http://www.w3.org/1998/Math/MathML"
+      xmlns:html="http://www.w3.org/1999/xhtml"
+      xmlns:db="http://docbook.org/ns/docbook">
+  <info>
+    <title>The Apache <link xlink:href="http://www.hbase.org">HBase</link>
+    Book</title>
+      <copyright><year>2010</year><holder>Apache Software Foundation</holder></copyright>
+      <abstract>
+    <para>This is the official book of
+    <link xlink:href="http://www.hbase.org">Apache HBase</link>,
+    a distributed, versioned, column-oriented database built on top of
+    <link xlink:href="http://hadoop.apache.org/">Apache Hadoop</link> and
+    <link xlink:href="http://zookeeper.apache.org/">Apache ZooKeeper</link>.
+      </para>
+      </abstract>
+
+    <revhistory>
+      <revision>
+        <date />
+
+        <revdescription>Adding first cuts at Configuration, Getting Started, Data Model</revdescription>
+        <revnumber>
+          <?eval ${project.version}?>
+        </revnumber>
+      </revision>
+      <revision>
+        <date>
+        5 October 2010
+        </date>
+        <authorinitials>stack</authorinitials>
+        <revdescription>Initial layout</revdescription>
+        <revnumber>
+          0.89.20100924
+        </revnumber>
+      </revision>
+    </revhistory>
+  </info>
+
+  <preface xml:id="preface">
+    <title>Preface</title>
+
+    <para>This book aims to be the official guide for the <link
+    xlink:href="http://hbase.apache.org/">HBase</link> version it ships with.
+    This document describes HBase version <emphasis><?eval ${project.version}?></emphasis>.
+    Herein you will find either the definitive documentation on an HBase topic
+    as of its standing when the referenced HBase version shipped, or 
+    this book will point to the location in <link
+    xlink:href="http://hbase.apache.org/docs/current/api/index.html">javadoc</link>,
+    <link xlink:href="https://issues.apache.org/jira/browse/HBASE">JIRA</link>
+    or <link xlink:href="http://wiki.apache.org/hadoop/Hbase">wiki</link>
+    where the pertinent information can be found.</para>
+
+    <para>This book is a work in progress. It is lacking in many areas but we
+    hope to fill in the holes with time. Feel free to add to this book by
+    attaching a patch to an issue in the HBase <link
+    xlink:href="https://issues.apache.org/jira/browse/HBASE">JIRA</link>.</para>
+  </preface>
+
+  <chapter xml:id="getting_started">
+    <title>Getting Started</title>
+    <section >
+      <title>Introduction</title>
+      <para>
+          <link linkend="quickstart">Quick Start</link> will get you up and running
+          on a single-node instance of HBase using the local filesystem.
+          The <link linkend="notsoquick">Not-so-quick Start Guide</link> 
+          describes setup of HBase in distributed mode running on top of HDFS.
+      </para>
+    </section>
+
+    <section xml:id="quickstart">
+      <title>Quick Start</title>
+
+          <para>This guide describes setup of a standalone HBase
+              instance that uses the local filesystem.  It leads you
+              through creating a table, inserting rows via the
+          <link linkend="shell">HBase Shell</link>, and then cleaning up and shutting
+          down your standalone HBase instance.
+          The below exercise should take no more than
+          ten minutes (not including download time).
+      </para>
+          
+          <section>
+            <title>Download and unpack the latest stable release.</title>
+
+            <para>Choose a download site from this list of <link
+            xlink:href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache
+            Download Mirrors</link>. Click on the suggested top link. This will take you to a
+            mirror of <emphasis>HBase Releases</emphasis>. Click on
+            the folder named <filename>stable</filename> and then download the
+            file that ends in <filename>.tar.gz</filename> to your local filesystem;
+            e.g. <filename>hbase-<?eval ${project.version}?>.tar.gz</filename>.</para>
+
+            <para>Decompress and untar your download and then change into the
+            unpacked directory.</para>
+
+            <para><programlisting>$ tar xfz hbase-<?eval ${project.version}?>.tar.gz
+$ cd hbase-<?eval ${project.version}?>
+</programlisting></para>
+
+<para>
+   At this point, you are ready to start HBase. But before starting it,
+   you might want to edit <filename>conf/hbase-site.xml</filename>
+   and set the directory you want HBase to write to,
+   <varname>hbase.rootdir</varname>.
+   <programlisting>
+<![CDATA[
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>hbase.rootdir</name>
+    <value>file:///DIRECTORY/hbase</value>
+  </property>
+</configuration>
+]]>
+</programlisting>
+Replace <varname>DIRECTORY</varname> in the above with a path to a directory where you want
+HBase to store its data.  By default, <varname>hbase.rootdir</varname> is
+set to <filename>/tmp/hbase-${user.name}</filename> 
+which means you'll lose all your data whenever your server reboots
+(most operating systems clear <filename>/tmp</filename> on restart).
+</para>
+</section>
+<section xml:id="start_hbase">
+<title>Start HBase</title>
+
+            <para>Now start HBase:<programlisting>$ ./bin/start-hbase.sh
+starting Master, logging to logs/hbase-user-master-example.org.out</programlisting></para>
+
+            <para>You should
+            now have a running standalone HBase instance. In standalone mode, HBase runs
+            all daemons in a single JVM; i.e. both the HBase and ZooKeeper daemons.
+            HBase logs can be found in the <filename>logs</filename> subdirectory. Check them
+            out especially if HBase had trouble starting.</para>
+
+            <note>
+            <title>Is <application>java</application> installed?</title>
+            <para>All of the above presumes a 1.6 version of Oracle
+            <application>java</application> is installed on your
+            machine and available on your path; i.e. when you type
+            <application>java</application>, you see output that describes the options
+            the java program takes (HBase requires java 6).  If this is
+            not the case, HBase will not start.
+            Install java, edit <filename>conf/hbase-env.sh</filename>, and uncomment the
+            <envar>JAVA_HOME</envar> line, pointing it at your java install.  Then
+            retry the steps above.</para>
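+            <para>For example, the relevant line in <filename>conf/hbase-env.sh</filename>
+            might look like the following (the path is illustrative; use your own install
+            location):
+            <programlisting>export JAVA_HOME=/usr/java/jdk1.6.0/</programlisting></para>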
+            </note>
+            </section>
+            
+
+      <section xml:id="shell_exercises">
+          <title>Shell Exercises</title>
+            <para>Connect to your running HBase via the 
+          <link linkend="shell">HBase Shell</link>.</para>
+
+            <para><programlisting>$ ./bin/hbase shell
+HBase Shell; enter 'help&lt;RETURN&gt;' for list of supported commands.
+Type "exit&lt;RETURN&gt;" to leave the HBase Shell
+Version: 0.89.20100924, r1001068, Fri Sep 24 13:55:42 PDT 2010
+
+hbase(main):001:0&gt; </programlisting></para>
+
+            <para>Type <command>help</command> and then <command>&lt;RETURN&gt;</command>
+            to see a listing of shell
+            commands and options. Browse at least the paragraphs at the end of
+            the help emission for the gist of how variables and command
+            arguments are entered into the
+            HBase shell; in particular note how table names, rows, and
+            columns, etc., must be quoted.</para>
+
+            <para>Create a table named <varname>test</varname> with a single
+            <link linkend="columnfamily">column family</link> named <varname>cf</varname>.
+            Verify its creation by listing all tables and then insert some
+            values.</para>
+            <para><programlisting>hbase(main):003:0&gt; create 'test', 'cf'
+0 row(s) in 1.2200 seconds
+hbase(main):003:0&gt; list 'test'
+test
+1 row(s) in 0.0550 seconds
+hbase(main):004:0&gt; put 'test', 'row1', 'cf:a', 'value1'
+0 row(s) in 0.0560 seconds
+hbase(main):005:0&gt; put 'test', 'row2', 'cf:b', 'value2'
+0 row(s) in 0.0370 seconds
+hbase(main):006:0&gt; put 'test', 'row3', 'cf:c', 'value3'
+0 row(s) in 0.0450 seconds</programlisting></para>
+
+            <para>Above we inserted 3 values, one at a time. The first insert is at
+            <varname>row1</varname>, column <varname>cf:a</varname> with a value of
+            <varname>value1</varname>.
+            Columns in HBase are comprised of a
+            <link linkend="columnfamily">column family</link> prefix
+            -- <varname>cf</varname> in this example -- followed by
+            a colon and then a column qualifier suffix (<varname>a</varname> in this case).
+            </para>
+
+            <para>Verify the data insert.</para>
+
+            <para>Run a scan of the table by doing the following</para>
+
+            <para><programlisting>hbase(main):007:0&gt; scan 'test'
+ROW        COLUMN+CELL
+row1       column=cf:a, timestamp=1288380727188, value=value1
+row2       column=cf:b, timestamp=1288380738440, value=value2
+row3       column=cf:c, timestamp=1288380747365, value=value3
+3 row(s) in 0.0590 seconds</programlisting></para>
+
+            <para>Get a single row as follows</para>
+
+            <para><programlisting>hbase(main):008:0&gt; get 'test', 'row1'
+COLUMN      CELL
+cf:a        timestamp=1288380727188, value=value1
+1 row(s) in 0.0400 seconds</programlisting></para>
+
+            <para>Now, disable and drop your table. This will clean up everything
+            done above.</para>
+
+            <para><programlisting>hbase(main):012:0&gt; disable 'test'
+0 row(s) in 1.0930 seconds
+hbase(main):013:0&gt; drop 'test'
+0 row(s) in 0.0770 seconds </programlisting></para>
+
+            <para>Exit the shell by typing exit.</para>
+
+            <para><programlisting>hbase(main):014:0&gt; exit</programlisting></para>
+            </section>
+
+          <section>
+          <title>Stopping HBase</title>
+            <para>Stop your hbase instance by running the stop script.</para>
+
+            <para><programlisting>$ ./bin/stop-hbase.sh
+stopping hbase...............</programlisting></para>
+          </section>
+
+      <section><title>Where to go next
+      </title>
+      <para>The above described standalone setup is good for testing and experiments only.
+      Move on to the next section, the <link linkend="notsoquick">Not-so-quick Start Guide</link>,
+      where we'll go into depth on the different HBase run modes, requirements, and the critical
+      configurations needed for setting up a distributed HBase deployment.
+      </para>
+      </section>
+    </section>
+
+    <section xml:id="notsoquick">
+      <title>Not-so-quick Start Guide</title>
+      
+      <section xml:id="requirements"><title>Requirements</title>
+      <para>HBase has the following requirements.  Please read the
+      section below carefully and ensure that all requirements have been
+      satisfied.  Failure to do so will cause you (and us) grief debugging
+      strange errors and/or data loss.
+      </para>
+
+  <section xml:id="java"><title>java</title>
+<para>
+  Just like Hadoop, HBase requires java 6 from <link xlink:href="http://www.java.com/download/">Oracle</link>.
+Usually you'll want to use the latest version available, but avoid the problematic u18 (u22 is the latest version as of this writing).</para>
+</section>
+
+  <section xml:id="hadoop"><title><link xlink:href="http://hadoop.apache.org">hadoop</link></title>
+<para>This version of HBase will only run on <link xlink:href="http://hadoop.apache.org/common/releases.html">Hadoop 0.20.x</link>.
+    It will not run on Hadoop 0.21.x (nor 0.22.x) as of this writing.
+    HBase will lose data unless it is running on an HDFS that has a durable <code>sync</code>.
+ Currently only the <link xlink:href="http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/">branch-0.20-append</link>
+ branch has this attribute.  No official releases have been made from this branch as of this writing
+ so you will have to build your own Hadoop from the tip of this branch <footnote>
+     <para>Scroll down in the Hadoop <link xlink:href="http://wiki.apache.org/hadoop/HowToRelease">How To Release</link> to the section
+         Build Requirements for instructions on how to build Hadoop.
+     </para>
+ </footnote> or you could use
+ Cloudera's <link xlink:href="http://archive.cloudera.com/docs/">CDH3</link>.
+ CDH has the 0.20-append patches needed to add a durable sync (as of this writing
+ CDH3 is still in beta; either CDH3b2 or CDH3b3 will suffice).
+ See <link xlink:href="http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/CHANGES.txt">CHANGES.txt</link>
+ in branch-0.20-append to see the list of patches involved.</para>
+ <para>Because HBase depends on Hadoop, it bundles a Hadoop instance under its <filename>lib</filename> directory.
+ The bundled Hadoop was made from the Apache branch-0.20-append branch.
+ If you want to run HBase on a Hadoop cluster running a version other than one made from branch-0.20-append,
+ you must replace the hadoop jar found in the HBase <filename>lib</filename> directory with the
+ hadoop jar you are running on your cluster to avoid version mismatch issues.
+ For example, versions of CDH do not have HDFS-724 whereas
+ Hadoop's branch-0.20-append branch does have HDFS-724. This
+ patch changes the RPC version because the protocol was changed.
+ Version mismatch issues have various manifestations, but often everything just looks hung up.
+ </para>
+ <note><title>Hadoop Security</title>
+     <para>HBase will run on any Hadoop 0.20.x that incorporates Hadoop security features -- e.g. Y! 0.20S or CDH3B3 -- as long
+         as you do as suggested above and replace the Hadoop jar that ships with HBase with the secure version.
+  </para>
+  </note>
+  </section>
+<section xml:id="ssh"> <title>ssh</title>
+<para><command>ssh</command> must be installed and <command>sshd</command> must
+be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons.
+   You must be able to ssh to all nodes, including your local node, using passwordless login (Google "ssh passwordless login").
+  </para>
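+<para>As a rough sketch (assuming the default key locations, that
+<command>ssh-copy-id</command> is available on your system, and that
+<varname>node1.example.org</varname> stands in for one of your hosts),
+passwordless login can be set up along these lines:</para>
+<programlisting>
+# generate a key with an empty passphrase, then copy the public key to each node
+$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
+$ ssh-copy-id node1.example.org
+# this should now log in without prompting for a password
+$ ssh node1.example.org
+</programlisting>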
+</section>
+  <section xml:id="dns"><title>DNS</title>
+    <para>HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolution should work.</para>
+    <para>If your machine has multiple interfaces, HBase will use the interface that the primary hostname resolves to.</para>
+    <para>If this is insufficient, you can set <varname>hbase.regionserver.dns.interface</varname> to indicate the primary interface.
+    This only works if your cluster
+    configuration is consistent and every host has the same network interface configuration.</para>
+    <para>Another alternative is setting <varname>hbase.regionserver.dns.nameserver</varname> to choose a different nameserver than the
+    system wide default.</para>
+</section>
+  <section xml:id="ntp"><title>NTP</title>
+<para>
+    The clocks on cluster members should be in basic alignment. Some skew is tolerable, but
+    wild skew can generate odd behaviors. Run <link xlink:href="http://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</link>
+    on your cluster, or an equivalent.
+  </para>
+    <para>If you are having problems querying data, or "weird" cluster operations, check system time!</para>
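+<para>A quick, rough way to eyeball clock skew (assuming passwordless ssh is set up and
+your hosts are listed in <filename>conf/regionservers</filename>) is to print the time on
+every node and compare:</para>
+<programlisting>
+$ for host in $(cat conf/regionservers); do ssh $host date; done
+</programlisting>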
+</section>
+
+
+      <section xml:id="ulimit">
+      <title><varname>ulimit</varname></title>
+      <para>HBase is a database; it uses a lot of files at the same time.
+      The default ulimit -n of 1024 on *nix systems is insufficient.
+      Any significant amount of loading will lead you to 
+      <link xlink:href="http://wiki.apache.org/hadoop/Hbase/FAQ#A6">FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs?</link>.
+      You may also notice errors such as
+      <programlisting>
+      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
+      2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
+      </programlisting>
+      Do yourself a favor and change the upper bound on the number of file descriptors.
+      Set it to north of 10k.  See the above referenced FAQ for how.</para>
+      <para>To be clear, upping the file descriptors for the user who is
+      running the HBase process is an operating system configuration, not an
+      HBase configuration. Also, a common mistake is that administrators
+      will up the file descriptors for a particular user but, for whatever reason,
+      HBase ends up running as someone else.  HBase prints the ulimit it is
+      seeing as the first line of its logs.  Ensure it is correct.
+    <footnote>
+    <para>A useful read on setting configuration on your Hadoop cluster is Aaron Kimball's
+    <link xlink:href="http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/">Configuration Parameters: What can you just ignore?</link>
+    </para>
+    </footnote>
+      </para>
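+<para>To check the limit currently in effect, run <command>ulimit -n</command> as the user
+that will run HBase (the 1024 shown below is the typical unraised default):</para>
+<programlisting>
+$ ulimit -n
+1024
+</programlisting>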
+        <section xml:id="ulimit_ubuntu">
+          <title><varname>ulimit</varname> on Ubuntu</title>
+        <para>
+          If you are on Ubuntu you will need to make the following changes:</para>
+        <para>
+          In the file <filename>/etc/security/limits.conf</filename> add a line like:
+          <programlisting>hadoop  -       nofile  32768</programlisting>
+          Replace <varname>hadoop</varname>
+          with whatever user is running Hadoop and HBase. If you have
+          separate users, you will need 2 entries, one for each user.
+        </para>
+        <para>
+          In the file <filename>/etc/pam.d/common-session</filename> add as the last line in the file:
+          <programlisting>session required  pam_limits.so</programlisting>
+          Otherwise the changes in <filename>/etc/security/limits.conf</filename> won't be applied.
+        </para>
+        <para>
+          Don't forget to log out and back in again for the changes to take effect!
+        </para>
+          </section>
+      </section>
+
+      <section xml:id="dfs.datanode.max.xcievers">
+      <title><varname>dfs.datanode.max.xcievers</varname></title>
+      <para>
+      A Hadoop HDFS datanode has an upper bound on the number of files
+      that it will serve at any one time.
+      The upper bound parameter is called
+      <varname>xcievers</varname> (yes, this is misspelled). Again, before
+      doing any loading, make sure you have configured
+      Hadoop's <filename>conf/hdfs-site.xml</filename>,
+      setting the <varname>xcievers</varname> value to at least the following:
+      <programlisting>
+      &lt;property&gt;
+        &lt;name&gt;dfs.datanode.max.xcievers&lt;/name&gt;
+        &lt;value&gt;4096&lt;/value&gt;
+      &lt;/property&gt;
+      </programlisting>
+      </para>
+      <para>Be sure to restart your HDFS after making the above
+      configuration.</para>
+      </section>
+
+<section xml:id="windows">
+<title>Windows</title>
+<para>
+HBase has been little tested running on Windows.
+Running a production install of HBase on top of
+Windows is not recommended.
+</para>
+<para>
+If you are running HBase on Windows, you must install
+<link xlink:href="http://cygwin.com/">Cygwin</link>
+to have a *nix-like environment for the shell scripts. The full details
+are explained in the <link xlink:href="cygwin.html">Windows Installation</link>
+guide.
+</para>
+</section>
+
+      </section>
+
+      <section><title>HBase run modes: Standalone and Distributed</title>
+          <para>HBase has two run modes: <link linkend="standalone">standalone</link>
+              and <link linkend="distributed">distributed</link>.
+              Out of the box, HBase runs in standalone mode.  To set up a
+              distributed deploy, you will need to configure HBase by editing
+              files in the HBase <filename>conf</filename> directory.</para>
+
+<para>Whatever your mode, you will need to edit <code>conf/hbase-env.sh</code>
+to tell HBase which <command>java</command> to use. In this file
+you set HBase environment variables such as the heapsize and other options
+for the <application>JVM</application>, the preferred location for log files, etc.
+Set <varname>JAVA_HOME</varname> to point at the root of your
+<command>java</command> install.</para>
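+<para>For example, the <varname>JAVA_HOME</varname> line in
+<filename>conf/hbase-env.sh</filename> might look like the following once edited
+(the path shown is only an example; point it at wherever your java 6 actually lives):</para>
+<programlisting>
+# The java implementation to use.
+export JAVA_HOME=/usr/lib/jvm/java-6-sun
+</programlisting>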
+
+      <section xml:id="standalone"><title>Standalone HBase</title>
+        <para>This is the default mode. Standalone mode is
+        what is described in the <link linkend="quickstart">quickstart</link>
+        section.  In standalone mode, HBase does not use HDFS -- it uses the local
+        filesystem instead -- and it runs all HBase daemons and a local ZooKeeper
+        all up in the same JVM.  ZooKeeper binds to a well-known port so clients may
+        talk to HBase.
+      </para>
+      </section>
+      <section><title>Distributed</title>
+          <para>Distributed mode can be subdivided into distributed but all daemons run on a
+          single node -- a.k.a. <emphasis>pseudo-distributed</emphasis> -- and
+          <emphasis>fully-distributed</emphasis> where the daemons 
+          are spread across all nodes in the cluster
+          <footnote><para>The pseudo-distributed vs fully-distributed nomenclature comes from Hadoop.</para></footnote>.</para>
+      <para>
+          Distributed modes require an instance of the
+          <emphasis>Hadoop Distributed File System</emphasis> (HDFS).  See the
+          Hadoop <link xlink:href="http://hadoop.apache.org/common/docs/current/api/overview-summary.html#overview_description">
+          requirements and instructions</link> for how to set up an HDFS.
+          Before proceeding, ensure you have an appropriate, working HDFS.
+      </para>
+      <para>Below we describe the different distributed setups.
+      Starting, verifying, and exploring your install, whether a
+      <emphasis>pseudo-distributed</emphasis> or <emphasis>fully-distributed</emphasis>
+      configuration, is described in the section that follows,
+      <link linkend="confirm">Running and Confirming your Installation</link>.
+      The same verification steps apply to both deploy types.</para>
+
+      <section xml:id="pseudo"><title>Pseudo-distributed</title>
+<para>A pseudo-distributed mode is simply a distributed mode run on a single host.
+Use this configuration for testing and prototyping on HBase.  Do not use this configuration
+for production or for evaluating HBase performance.
+</para>
+<para>Once you have confirmed your HDFS setup,
+edit <filename>conf/hbase-site.xml</filename>.  This is the file
+into which you add local customizations and overrides for 
+<link linkend="hbase_default_configurations">Default HBase Configurations</link>
+and <link linkend="hdfs_client_conf">HDFS Client Configurations</link>.
+Point HBase at the running Hadoop HDFS instance by setting the
+<varname>hbase.rootdir</varname> property.
+This property points HBase at the Hadoop filesystem instance to use.
+For example, adding the properties below to your
+<filename>hbase-site.xml</filename> says that HBase
+should use the <filename>/hbase</filename> 
+directory in the HDFS whose namenode is at port 9000 on your local machine, and that
+it should run with one replica only (recommended for pseudo-distributed mode):</para>
+<programlisting>
+&lt;configuration&gt;
+  ...
+  &lt;property&gt;
+    &lt;name&gt;hbase.rootdir&lt;/name&gt;
+    &lt;value&gt;hdfs://localhost:9000/hbase&lt;/value&gt;
+    &lt;description&gt;The directory shared by region servers.
+    &lt;/description&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;dfs.replication&lt;/name&gt;
+    &lt;value&gt;1&lt;/value&gt;
+    &lt;description&gt;The replication count for HLog &amp; HFile storage. Should not be greater than HDFS datanode count.
+    &lt;/description&gt;
+  &lt;/property&gt;
+  ...
+&lt;/configuration&gt;
+</programlisting>
+
+<note>
+<para>Let HBase create the <varname>hbase.rootdir</varname>
+directory. If you don't, you'll get a warning saying HBase
+needs a migration run because the directory is missing files
+expected by HBase (it'll create them if you let it).</para>
+</note>
+
+<note>
+<para>Above we bind to <varname>localhost</varname>.
+This means that a remote client cannot
+connect.  Amend accordingly if you want to
+connect from a remote location.</para>
+</note>
+
+<para>Now skip to <link linkend="confirm">Running and Confirming your Installation</link>
+for how to start and verify your pseudo-distributed install.
+
+<footnote>
+<para>See <link xlink:href="pseudo-distributed.html">Pseudo-distributed mode extras</link>
+for notes on how to start extra Masters and regionservers when running
+    pseudo-distributed.</para>
+</footnote>
+</para>
+
+</section>
+
+      <section xml:id="fully_dist"><title>Fully-distributed</title>
+
+<para>For running a fully-distributed operation on more than one host, make
+the following configurations.  In <filename>hbase-site.xml</filename>,
+add the property <varname>hbase.cluster.distributed</varname> 
+and set it to <varname>true</varname> and point the HBase
+<varname>hbase.rootdir</varname> at the appropriate
+HDFS NameNode and location in HDFS where you would like
+HBase to write data. For example, if your namenode were running
+at namenode.example.org on port 9000 and you wanted to home
+your HBase in HDFS at <filename>/hbase</filename>,
+make the following configuration.</para>
+<programlisting>
+&lt;configuration&gt;
+  ...
+  &lt;property&gt;
+    &lt;name&gt;hbase.rootdir&lt;/name&gt;
+    &lt;value&gt;hdfs://namenode.example.org:9000/hbase&lt;/value&gt;
+    &lt;description&gt;The directory shared by region servers.
+    &lt;/description&gt;
+  &lt;/property&gt;
+  &lt;property&gt;
+    &lt;name&gt;hbase.cluster.distributed&lt;/name&gt;
+    &lt;value&gt;true&lt;/value&gt;
+    &lt;description&gt;The mode the cluster will be in. Possible values are
+      false: standalone and pseudo-distributed setups with managed Zookeeper
+      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
+    &lt;/description&gt;
+  &lt;/property&gt;
+  ...
+&lt;/configuration&gt;
+</programlisting>
+
+<section><title><filename>regionservers</filename></title>
+<para>In addition, a fully-distributed mode requires that you
+modify <filename>conf/regionservers</filename>.
+The <filename><link linkend="regionservrers">regionservers</link></filename> file lists all hosts
+that you would have running <application>HRegionServer</application>s, one host per line
+(This file in HBase is like the Hadoop <filename>slaves</filename> file).  All servers
+listed in this file will be started and stopped when the HBase cluster start and stop scripts are run.</para>
+</section>
+
+<section xml:id="zookeeper"><title>ZooKeeper<indexterm><primary>ZooKeeper</primary></indexterm></title>
+<para>A distributed HBase depends on a running ZooKeeper cluster.
+All participating nodes and clients
+need to be able to access the running ZooKeeper ensemble.
+HBase by default manages a ZooKeeper "cluster" for you.
+It will start and stop the ZooKeeper ensemble as part of
+the HBase start/stop process.  You can also manage
+the ZooKeeper ensemble independent of HBase and 
+just point HBase at the cluster it should use.
+To toggle HBase management of ZooKeeper,
+use the <varname>HBASE_MANAGES_ZK</varname> variable in
+<filename>conf/hbase-env.sh</filename>.
+This variable, which defaults to <varname>true</varname>, tells HBase whether to
+start/stop the ZooKeeper ensemble servers as part of HBase start/stop.</para>
+
+<para>When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration
+using its native <filename>zoo.cfg</filename> file, or, the easier option
+is to just specify ZooKeeper options directly in <filename>conf/hbase-site.xml</filename>.
+A ZooKeeper configuration option can be set as a property in the HBase
+<filename>hbase-site.xml</filename>
+XML configuration file by prefacing the ZooKeeper option name with
+<varname>hbase.zookeeper.property</varname>.
+For example, the <varname>clientPort</varname> setting in ZooKeeper can be changed by
+setting the <varname>hbase.zookeeper.property.clientPort</varname> property.
+
+For all default values used by HBase, including ZooKeeper configuration,
+see the section
+<link linkend="hbase_default_configurations">Default HBase Configurations</link>.
+Look for the <varname>hbase.zookeeper.property</varname> prefix
+
+<footnote><para>For the full list of ZooKeeper configurations,
+see ZooKeeper's <filename>zoo.cfg</filename>.
+HBase does not ship with a <filename>zoo.cfg</filename> so you will need to
+browse the <filename>conf</filename> directory in an appropriate ZooKeeper download.
+</para>
+</footnote>
+</para>
+
+
+
+<para>You must at least list the ensemble servers in <filename>hbase-site.xml</filename>
+using the <varname>hbase.zookeeper.quorum</varname> property.
+This property defaults to a single ensemble member at
+<varname>localhost</varname> which is not suitable for a
+fully distributed HBase. (It binds to the local machine only and remote clients
+will not be able to connect).
+<note xml:id="how_many_zks">
+<title>How many ZooKeepers should I run?</title>
+<para>
+You can run a ZooKeeper ensemble that comprises 1 node only but
+in production it is recommended that you run a ZooKeeper ensemble of
+3, 5 or 7 machines; the more members an ensemble has, the more
+tolerant the ensemble is of host failures. Also, run an odd number of machines;
+an even-membered ensemble tolerates no more host failures than the next smaller
+odd-membered one, so the extra member buys you nothing.  Give each
+ZooKeeper server around 1GB of RAM, and if possible, its own dedicated disk
+(a dedicated disk is the best thing you can do to ensure a performant ZooKeeper
+ensemble).  For very heavily loaded clusters, run ZooKeeper servers on separate machines from
+RegionServers (DataNodes and TaskTrackers).</para>
+</note>
+</para>
+
+
+<para>For example, to have HBase manage a ZooKeeper quorum on nodes
+<emphasis>rs{1,2,3,4,5}.example.com</emphasis>, bound to port 2222 (the default is 2181),
+ensure <varname>HBASE_MANAGES_ZK</varname> is commented out or set to
+<varname>true</varname> in <filename>conf/hbase-env.sh</filename> and
+then edit <filename>conf/hbase-site.xml</filename> and set 
+<varname>hbase.zookeeper.property.clientPort</varname>
+and
+<varname>hbase.zookeeper.quorum</varname>.  You should also
+set
+<varname>hbase.zookeeper.property.dataDir</varname>
+to other than the default as the default has ZooKeeper persist data under
+<filename>/tmp</filename> which is often cleared on system restart.
+In the example below we have ZooKeeper persist to <filename>/usr/local/zookeeper</filename>.
+<programlisting>
+  &lt;configuration&gt;
+    ...
+    &lt;property&gt;
+      &lt;name&gt;hbase.zookeeper.property.clientPort&lt;/name&gt;
+      &lt;value&gt;2222&lt;/value&gt;
+      &lt;description&gt;Property from ZooKeeper's config zoo.cfg.
+      The port at which the clients will connect.
+      &lt;/description&gt;
+    &lt;/property&gt;
+    &lt;property&gt;
+      &lt;name&gt;hbase.zookeeper.quorum&lt;/name&gt;
+      &lt;value&gt;rs1.example.com,rs2.example.com,rs3.example.com,rs4.example.com,rs5.example.com&lt;/value&gt;
+      &lt;description&gt;Comma separated list of servers in the ZooKeeper Quorum.
+      For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
+      By default this is set to localhost for local and pseudo-distributed modes
+      of operation. For a fully-distributed setup, this should be set to a full
+      list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
+      this is the list of servers which we will start/stop ZooKeeper on.
+      &lt;/description&gt;
+    &lt;/property&gt;
+    &lt;property&gt;
+      &lt;name&gt;hbase.zookeeper.property.dataDir&lt;/name&gt;
+      &lt;value&gt;/usr/local/zookeeper&lt;/value&gt;
+      &lt;description>Property from ZooKeeper's config zoo.cfg.
+      The directory where the snapshot is stored.
+      &lt;/description&gt;
+    &lt;/property&gt;
+    ...
+  &lt;/configuration&gt;</programlisting>
+</para>
+
+<section><title>Using existing ZooKeeper ensemble</title>
+<para>To point HBase at an existing ZooKeeper cluster,
+one that is not managed by HBase,
+set <varname>HBASE_MANAGES_ZK</varname> in 
+<filename>conf/hbase-env.sh</filename> to false
+<programlisting>
+  ...
+  # Tell HBase whether it should manage it's own instance of Zookeeper or not.
+  export HBASE_MANAGES_ZK=false</programlisting>
+
+Next set ensemble locations and client port, if non-standard,
+in <filename>hbase-site.xml</filename>,
+or add a suitably configured <filename>zoo.cfg</filename> to HBase's <filename>CLASSPATH</filename>.
+HBase will prefer the configuration found in <filename>zoo.cfg</filename>
+over any settings in <filename>hbase-site.xml</filename>.
+</para>
+
+<para>When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part
+of the regular start/stop scripts. If you would like to run ZooKeeper yourself,
+independent of HBase start/stop, you would do the following</para>
+<programlisting>
+${HBASE_HOME}/bin/hbase-daemons.sh {start,stop} zookeeper
+</programlisting>
+
+<para>Note that you can use HBase in this manner to spin up a ZooKeeper cluster,
+unrelated to HBase. Just make sure to set <varname>HBASE_MANAGES_ZK</varname> to
+<varname>false</varname> if you want it to stay up across HBase restarts
+so that when HBase shuts down, it doesn't take ZooKeeper down with it.</para>
+
+<para>For more information about running a distinct ZooKeeper cluster, see
+the ZooKeeper <link xlink:href="http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html">Getting Started Guide</link>.
+</para>
+</section>
+</section>
+
+<section xml:id="hdfs_client_conf">
+<title>HDFS Client Configuration</title>
+<para>Of note, if you have made <emphasis>HDFS client configuration</emphasis> on your Hadoop cluster
+-- i.e. configuration you want HDFS clients to use as opposed to server-side configurations --
+HBase will not see this configuration unless you do one of the following:</para>
+<itemizedlist>
+  <listitem><para>Add a pointer to your <varname>HADOOP_CONF_DIR</varname>
+  to the <varname>HBASE_CLASSPATH</varname> environment variable
+  in <filename>hbase-env.sh</filename>.</para></listitem>
+  <listitem><para>Add a copy of <filename>hdfs-site.xml</filename>
+  (or <filename>hadoop-site.xml</filename>) or, better, symlinks,
+  under
+  <filename>${HBASE_HOME}/conf</filename>, or</para></listitem>
+  <listitem><para>if only a small set of HDFS client
+  configurations is needed, add them to <filename>hbase-site.xml</filename>.</para></listitem>
+</itemizedlist>
+
+<para>An example of such an HDFS client configuration is <varname>dfs.replication</varname>. If, for example,
+you want to run with a replication factor of 5, HBase will create files with the default of 3 unless
+you do the above to make the configuration available to HBase.</para>
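+<para>As a sketch of the first option above, the line added to
+<filename>hbase-env.sh</filename> might look like the following (assuming your Hadoop
+client configuration lives in <filename>/etc/hadoop/conf</filename>; adjust the path
+for your install):</para>
+<programlisting>
+# make the Hadoop client configuration visible to HBase
+export HBASE_CLASSPATH=/etc/hadoop/conf
+</programlisting>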
+</section>
+      </section>
+      </section>
+
+<section xml:id="confirm"><title>Running and Confirming Your Installation</title>
+<para>Make sure HDFS is running first.
+Start and stop the Hadoop HDFS daemons by running <filename>bin/start-dfs.sh</filename>
+over in the <varname>HADOOP_HOME</varname> directory.
+You can ensure it started properly by testing the <command>put</command> and
+<command>get</command> of files into the Hadoop filesystem.
+HBase does not normally use the mapreduce daemons.  These do not need to be started.</para>
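+<para>For example, a quick smoke test of HDFS might look like the following (the file and
+path names are arbitrary):</para>
+<programlisting>
+$ ${HADOOP_HOME}/bin/hadoop fs -put /etc/hosts /smoke-test
+$ ${HADOOP_HOME}/bin/hadoop fs -get /smoke-test /tmp/smoke-test
+$ ${HADOOP_HOME}/bin/hadoop fs -rm /smoke-test
+</programlisting>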
+
+<para><emphasis>If</emphasis> you are managing your own ZooKeeper, start it
+and confirm it is running; otherwise, HBase will start up ZooKeeper for you as part
+of its start process.</para>
+
+<para>Start HBase with the following command:</para>
+<programlisting>bin/start-hbase.sh</programlisting>
+<para>Run the above from the <varname>HBASE_HOME</varname> directory.</para>
+
+<para>You should now have a running HBase instance.
+HBase logs can be found in the <filename>logs</filename> subdirectory. Check them
+out especially if HBase had trouble starting.</para>
+
+<para>HBase also puts up a UI listing vital attributes. By default it is deployed on the Master host
+at port 60010 (HBase RegionServers listen on port 60020 by default and put up an informational
+http server at 60030). If the Master were running on a host named <varname>master.example.org</varname>
+on the default port, to see the Master's homepage you'd point your browser at
+<filename>http://master.example.org:60010</filename>.</para>
+
+<para>Once HBase has started, see the
+<link linkend="shell_exercises">Shell Exercises</link> section for how to
+create tables, add data, scan your insertions, and finally disable and
+drop your tables.
+</para>
+
+<para>To stop HBase after exiting the HBase shell enter
+<programlisting>$ ./bin/stop-hbase.sh
+stopping hbase...............</programlisting>
+Shutdown can take a moment to complete.  It can take longer if your cluster
+is composed of many machines.  If you are running a distributed operation,
+be sure to wait until HBase has shut down completely
+before stopping the Hadoop daemons.</para>
+
+
+
+</section>
+</section>
+
+
+
+
+
+
+    <section><title>Example Configurations</title>
+    <section><title>Basic Distributed HBase Install</title>
+    <para>Here is an example basic configuration for a distributed ten node cluster.
+    The nodes are named <varname>example0</varname>, <varname>example1</varname>, etc., through
+node <varname>example9</varname>  in this example.  The HBase Master and the HDFS namenode 
+are running on the node <varname>example0</varname>.  RegionServers run on nodes
+<varname>example1</varname>-<varname>example9</varname>.
+A 3-node ZooKeeper ensemble runs on <varname>example1</varname>,
+<varname>example2</varname>, and <varname>example3</varname> on the
+default ports. ZooKeeper data is persisted to the directory
+<filename>/export/zookeeper</filename>.
+Below we show what the main configuration files
+-- <filename>hbase-site.xml</filename>, <filename>regionservers</filename>, and
+<filename>hbase-env.sh</filename> -- found in the HBase
+<filename>conf</filename> directory might look like.
+</para>
+    <section xml:id="hbase_site"><title><filename>hbase-site.xml</filename></title>
+    <programlisting>
+<![CDATA[
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>hbase.zookeeper.quorum</name>
+    <value>example1,example2,example3</value>
+    <description>Comma separated list of servers in the ZooKeeper Quorum.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.dataDir</name>
+    <value>/export/zookeeper</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The directory where the snapshot is stored.
+    </description>
+  </property>
+  <property>
+    <name>hbase.rootdir</name>
+    <value>hdfs://example0:9000/hbase</value>
+    <description>The directory shared by region servers.
+    </description>
+  </property>
+  <property>
+    <name>hbase.cluster.distributed</name>
+    <value>true</value>
+    <description>The mode the cluster will be in. Possible values are
+      false: standalone and pseudo-distributed setups with managed Zookeeper
+      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
+    </description>
+  </property>
+</configuration>
+]]>
+    </programlisting>
+    </section>
+
+    <section xml:id="regionservers"><title><filename>regionservers</filename></title>
+    <para>In this file you list the nodes that will run regionservers.  In
+    our case we run regionservers on all but the head node
+    <varname>example0</varname>, which is
+    carrying the HBase Master and the HDFS namenode.</para>
+    <programlisting>
+    example1
+    example2
+    example3
+    example4
+    example5
+    example6
+    example7
+    example8
+    example9
+    </programlisting>
+    </section>
+
+    <section xml:id="hbase_env"><title><filename>hbase-env.sh</filename></title>
+    <para>Below we use a <command>diff</command> to show the differences from 
+    default in the <filename>hbase-env.sh</filename> file. Here we are setting
+the HBase heap to be 4G instead of the default 1G.
+    </para>
+    <programlisting>
+    <![CDATA[
+$ git diff hbase-env.sh
+diff --git a/conf/hbase-env.sh b/conf/hbase-env.sh
+index e70ebc6..96f8c27 100644
+--- a/conf/hbase-env.sh
++++ b/conf/hbase-env.sh
+@@ -31,7 +31,7 @@ export JAVA_HOME=/usr/lib//jvm/java-6-sun/
+ # export HBASE_CLASSPATH=
+ 
+ # The maximum amount of heap to use, in MB. Default is 1000.
+-# export HBASE_HEAPSIZE=1000
++export HBASE_HEAPSIZE=4096
+ 
+ # Extra Java runtime options.
+ # Below are what we set by default.  May only work with SUN JVM.
+]]>
+    </programlisting>
+
+    <para>Use <command>rsync</command> to copy the content of
+    the <filename>conf</filename> directory to
+    all nodes of the cluster.
+    </para>
+    </section>
+
+    </section>
+    
+    </section>
+    </section>
+  </chapter>
+
+  <chapter xml:id="configuration">
+    <title>Configuration</title>
+    <para>
+        HBase uses the same configuration system as Hadoop.
+        To configure a deploy, edit a file of environment variables
+        in <filename>conf/hbase-env.sh</filename> -- this configuration
+        is used mostly by the launcher shell scripts getting the cluster
+        off the ground -- and then add configuration to an XML file to
+        do things like override HBase defaults, tell HBase what Filesystem to
+        use, and the location of the ZooKeeper ensemble
+        <footnote>
+<para>
+Be careful editing XML.  Make sure you close all elements.
+Run your file through <command>xmllint</command> or similar
+to ensure well-formedness of your document after an edit session.
+</para>
+        </footnote>
+        .
+    </para>
+
+    <para>When running in distributed mode, after you make
+    an edit to an HBase configuration, make sure you copy the
+    content of the <filename>conf</filename> directory to
+    all nodes of the cluster.  HBase will not do this for you.
+    Use <command>rsync</command>.</para>
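+<para>A minimal sketch of such a copy, assuming passwordless ssh, that HBase lives at
+<filename>/usr/local/hbase</filename> on every node, and that all your RegionServer hosts
+are listed in <filename>conf/regionservers</filename>:</para>
+<programlisting>
+$ for host in $(cat conf/regionservers); do rsync -az conf/ $host:/usr/local/hbase/conf/; done
+</programlisting>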
+
+
+    <section>
+    <title><filename>hbase-site.xml</filename> and <filename>hbase-default.xml</filename></title>
+    <para>Just as in Hadoop where you add site-specific HDFS configuration
+    to the <filename>hdfs-site.xml</filename> file,
+    for HBase, site specific customizations go into
+    the file <filename>conf/hbase-site.xml</filename>.
+    For the list of configurable properties, see
+    <link linkend="hbase_default_configurations">Default HBase Configurations</link>
+    below or view the raw <filename>hbase-default.xml</filename>
+    source file in the HBase source code at
+    <filename>src/main/resources</filename>.
+    </para>
+    <para>
+    Not all configuration options make it out to
+    <filename>hbase-default.xml</filename>.  Configurations
+    that it is thought rare anyone would ever change exist only
+    in code; the only way to turn up such configurations is
+    by reading the source code itself.
+    </para>
+      <para>
+      Changes here will require a cluster restart for HBase to notice the change.
+      </para>
+    <!--The file hbase-default.xml is generated as part of
+    the build of the hbase site.  See the hbase pom.xml.
+    The generated file is a docbook section with a glossary
+    in it-->
+    <xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
+      href="../../target/site/hbase-default.xml" />
+    </section>
+
+      <section>
+      <title><filename>hbase-env.sh</filename></title>
+      <para>Set HBase environment variables in this file.
+      Examples include options to pass the JVM on start of
+      an HBase daemon such as heap size and garbage collector configs.
+      You also set configurations for HBase log directories,
+      niceness, ssh options, where to locate process pid files,
+      etc., via settings in this file. Open the file at
+      <filename>conf/hbase-env.sh</filename> and peruse its content.
+      Each option is fairly well documented.  Add your own environment
+      variables here if you want them read by HBase daemon startup.</para>
+      <para>
+      Changes here will require a cluster restart for HBase to notice the change.
+      </para>
+      </section>
+
+      <section xml:id="log4j">
+      <title><filename>log4j.properties</filename></title>
+      <para>Edit this file to change the rate at which HBase log files
+      are rolled and to change the level at which HBase logs messages.
+      </para>
+      <para>
+      Changes here will require a cluster restart for HBase to notice the change
+      though log levels can be changed for particular daemons via the HBase UI.
+      </para>
+      </section>
+
+      <section xml:id="important_configurations">
+      <title>The Important Configurations</title>
+      <para>Below we list the important configurations.  We've divided this section into
+      required configurations and worth-a-look recommended configs.
+      </para>
+
+
+      <section xml:id="required_configuration"><title>Required Configurations</title>
+      <para>See the <link linkend="requirements">Requirements</link> section.
+      It lists at least two required configurations needed to run HBase bearing
+      load: i.e. <link linkend="ulimit">file descriptors <varname>ulimit</varname></link> and
+      <link linkend="dfs.datanode.max.xcievers"><varname>dfs.datanode.max.xcievers</varname></link>.
+      </para>
+      </section>
+
+      <section xml:id="recommended_configurations"><title>Recommended Configuations</title>
+          <section xml:id="zookeeper.session.timeout"><title><varname>zookeeper.session.timeout</varname></title>
+          <para>The default timeout is three minutes (specified in milliseconds). This means
+              that if a server crashes, it will be three minutes before the Master notices
+              the crash and starts recovery. You might like to tune the timeout down to
+              a minute or even less so the Master notices failures sooner.
+              Before changing this value, be sure you have your JVM garbage collection
+              configuration under control; otherwise, a long garbage collection that lasts
+              beyond the ZooKeeper session timeout will take out
+              your RegionServer (you might be fine with this -- you probably want recovery to start
+          on the server if a RegionServer has been in GC for a long period of time).</para> 
+
+      <para>To change this configuration, edit <filename>hbase-site.xml</filename>,
+          copy the changed file around the cluster and restart.</para>
+
+          <para>We set this value high to save us having to field noob questions on the mailing lists asking
+              why a RegionServer went down during a massive import.  The usual cause is that their JVM is untuned and
+              they are running into long GC pauses.  Our thinking is that
+              while users are  getting familiar with HBase, we'd save them having to know all of its
+              intricacies.  Later when they've built some confidence, then they can play
+              with configuration such as this.
+          </para>
+      </section>
+          <section xml:id="hbase.regionserver.handler.count"><title><varname>hbase.regionserver.handler.count</varname></title>
+          <para>
+          This setting defines the number of threads that are kept open to answer
+          incoming requests to user tables. The default of 10 is rather low in order to
+          prevent users from killing their region servers when using large write buffers
+          with a high number of concurrent clients. The rule of thumb is to keep this
+          number low when the payload per request approaches the MB (big puts, scans using
+          a large cache) and high when the payload is small (gets, small puts, ICVs, deletes).
+          </para>
+          <para>
+          It is safe to set that number to the
+          maximum number of incoming clients if their payload is small, the typical example
+          being a cluster that serves a website since puts aren't typically buffered
+          and most of the operations are gets.
+          </para>
+          <para>
+          The reason why it is dangerous to keep this setting high is that the aggregate
+          size of all the puts that are currently happening in a region server may impose
+          too much pressure on its memory, or even trigger an OutOfMemoryError. A region server
+          running on low memory will trigger its JVM's garbage collector to run more frequently
+          up to a point where GC pauses become noticeable (the reason being that all the memory
+          used to keep all the requests' payloads cannot be trashed, no matter how hard the
+          garbage collector tries). After some time, the overall cluster
+          throughput is affected since every request that hits that region server will take longer,
+          which exacerbates the problem even more.
+          </para>
+          </section>
+      <section xml:id="big_memory">
+        <title>Configuration for large memory machines</title>
+        <para>
+          HBase ships with a reasonable, conservative configuration that will
+          work on nearly all
+          machine types that people might want to test with. If you have larger
+          machines you might find the following configuration options helpful.
+        </para>
+
+      </section>
+      <section xml:id="lzo">
+      <title>LZO compression</title>
+      <para>You should consider enabling LZO compression.  It is
+      near-frictionless and in almost all cases boosts performance.
+      </para>
+      <para>Unfortunately, HBase cannot ship with LZO because of
+      the licensing issues; HBase is Apache-licensed, LZO is GPL.
+      Therefore the LZO install must be done after the HBase install.
+      See the <link xlink:href="http://wiki.apache.org/hadoop/UsingLzoCompression">Using LZO Compression</link>
+      wiki page for how to make LZO work with HBase.
+      </para>
+      <para>A common problem users run into when using LZO is that while the initial
+      setup of the cluster runs smoothly, a month goes by and some sysadmin goes to
+      add a machine to the cluster, only to find they have forgotten to do the LZO
+      fixup on the new machine.  In versions since HBase 0.90.0, we should
+      fail in a way that makes it plain what the problem is, but maybe not.
+      Remember you read this paragraph<footnote><para>See
+      <link linkend="hbase.regionserver.codecs">hbase.regionserver.codecs</link>
+      for a feature to help protect against failed LZO install</para></footnote>.
+      </para>
+      </section>
+      </section>
+
+      </section>
+      <section xml:id="client_dependencies"><title>Client configuration and dependencies connecting to an HBase cluster</title>
+
+      <para>
+        Since the HBase Master may move around, clients bootstrap by looking in ZooKeeper.  Thus clients
+        require the ZooKeeper quorum information in a <filename>hbase-site.xml</filename> that
+        is on their <varname>CLASSPATH</varname>.</para>
+        <para>If you are configuring an IDE to run an HBase client, you should
+        include the <filename>conf/</filename> directory on your classpath.
+      </para>
+      <para>
+      Minimally, a client of HBase needs the hbase, hadoop, guava, and zookeeper jars
+      in its <varname>CLASSPATH</varname> when connecting to HBase.
+      </para>
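+      <para>
+      A hedged sketch of what setting up such a client <varname>CLASSPATH</varname> might
+      look like from the shell; the jar names, versions, and the MyHBaseClient class are
+      illustrative only -- use the jars that actually ship with your HBase and Hadoop:
+      </para>
+      <programlisting>
+$ export CLASSPATH=${HBASE_HOME}/conf:${HBASE_HOME}/hbase-0.90.0.jar:\
+${HBASE_HOME}/lib/zookeeper-3.3.2.jar:${HBASE_HOME}/lib/guava-r06.jar:\
+${HADOOP_HOME}/hadoop-core.jar
+$ java -cp ${CLASSPATH} MyHBaseClient
+      </programlisting>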
+        <para>
+          An example basic <filename>hbase-site.xml</filename> for client only
+          might look as follows:
+          <programlisting><![CDATA[
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>hbase.zookeeper.quorum</name>
+    <value>example1,example2,example3</value>
+    <description>Comma separated list of servers in the ZooKeeper Quorum.
+    </description>
+  </property>
+</configuration>
+]]>
+          </programlisting>
+        </para>
+    </section>
+
+  </chapter>
+
+  <chapter xml:id="shell">
+    <title>The HBase Shell</title>
+
+    <para>
+        The HBase Shell is <link xlink:href="http://jruby.org">(J)Ruby</link>'s
+        IRB with some HBase particular verbs added.  Anything you can do in
+        IRB, you should be able to do in the HBase Shell.</para>
+        <para>To run the HBase shell, 
+        do as follows:
+        <programlisting>$ ./bin/hbase shell</programlisting>
+        </para>
+            <para>Type <command>help</command> and then <command>&lt;RETURN&gt;</command>
+            to see a listing of shell
+            commands and options. Browse at least the paragraphs at the end of
+            the help emission for the gist of how variables and command
+            arguments are entered into the
+            HBase shell; in particular note how table names, rows, and
+            columns, etc., must be quoted.</para>
+            <para>See <link linkend="shell_exercises">Shell Exercises</link>
+            for example basic shell operation.</para>
+
+    <section><title>Scripting</title>
+        <para>For examples of scripting HBase, look in the
+            HBase <filename>bin</filename> directory.  Look at the files
+            that end in <filename>*.rb</filename>.  To run one of these
+            files, do as follows:
+            <programlisting>$ ./bin/hbase org.jruby.Main PATH_TO_SCRIPT</programlisting>
+        </para>
+    </section>
+
+    <section xml:id="shell_tricks"><title>Shell Tricks</title>
+        <section><title><filename>irbrc</filename></title>
+                <para>Create an <filename>.irbrc</filename> file for yourself in your
+                    home directory. Add customizations. A useful one is
+                    command history so commands are saved across Shell invocations:
+                    <programlisting>
+                        $ more .irbrc
+                        require 'irb/ext/save-history'
+                        IRB.conf[:SAVE_HISTORY] = 100
+                        IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"</programlisting>
+                See the <application>ruby</application> documentation of
+                <filename>.irbrc</filename> to learn about other possible
+                configurations.
+                </para>
+        </section>
+        <section><title>LOG data to timestamp</title>
+            <para>
+                To convert the date '08/08/16 20:56:29' from an hbase log into a timestamp, do:
+                <programlisting>
+                    hbase(main):021:0> import java.text.SimpleDateFormat
+                    hbase(main):022:0> import java.text.ParsePosition
+                    hbase(main):023:0> SimpleDateFormat.new("yy/MM/dd HH:mm:ss").parse("08/08/16 20:56:29", ParsePosition.new(0)).getTime() => 1218920189000</programlisting>
+            </para>
+            <para>
+                To go the other direction:
+                <programlisting>
+                    hbase(main):021:0> import java.util.Date
+                    hbase(main):022:0> Date.new(1218920189000).toString() => "Sat Aug 16 20:56:29 UTC 2008"</programlisting>
+            </para>
+            <para>
+                To output in a format that is exactly like that of the HBase log format will take a little messing with
+                <link xlink:href="http://download.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</link>.
+            </para>
+        </section>
+        <section><title>Debug</title>
+            <section><title>Shell debug switch</title>
+                <para>You can set a debug switch in the shell to see more output
+                    -- e.g. more of the stack trace on exception --
+                    when you run a command:
+                    <programlisting>hbase> debug &lt;RETURN&gt;</programlisting>
+                 </para>
+            </section>
+            <section><title>DEBUG log level</title>
+                <para>To enable DEBUG level logging in the shell,
+                    launch it with the <command>-d</command> option.
+                    <programlisting>$ ./bin/hbase shell -d</programlisting>
+               </para>
+            </section>
+         </section>
+    </section>
+  </chapter>
+
+  <chapter xml:id="mapreduce">
+  <title>HBase and MapReduce</title>
+  <para>See <link xlink:href="apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html#package_description">HBase and MapReduce</link>
+  up in javadocs.</para>
+  </chapter>
+
+  <chapter xml:id="hbase_metrics">
+  <title>Metrics</title>
+  <para>See <link xlink:href="metrics.html">Metrics</link>.
+  </para>
+  </chapter>
+
+  <chapter xml:id="cluster_replication">
+  <title>Cluster Replication</title>
+  <para>See <link xlink:href="replication.html">Cluster Replication</link>.
+  </para>
+  </chapter>
+
+  <chapter xml:id="datamodel">
+    <title>Data Model</title>
+  <para>In short, applications store data into HBase <link linkend="table">tables</link>.
+      Tables are made of <link linkend="row">rows</link> and <emphasis>columns</emphasis>.
+      All columns in HBase belong to a particular
+      <link linkend="columnfamily">Column Family</link>.
+      Table <link linkend="cell">cells</link> -- the intersection of row and column
+      coordinates -- are versioned.
+      A cell’s content is an uninterpreted array of bytes.
+  </para>
+      <para>Table row keys are also byte arrays so almost anything can
+      serve as a row key from strings to binary representations of longs or
+      even serialized data structures. Rows in HBase tables
+      are sorted by row key. The sort is byte-ordered. All table accesses are
+      via the table row key -- its primary key.
+</para>
+
+    <section xml:id="table">
+      <title>Table</title>
+      <para>
+      Tables are declared up front at schema definition time.
+      </para>
+    </section>
+
+    <section xml:id="row">
+      <title>Row</title>
+      <para>Row keys are uninterpreted bytes. Rows are
+      lexicographically sorted with the lowest order appearing first
+      in a table.  The empty byte array is used to denote both the
+      start and end of a table's namespace.</para>
+    </section>
+
+    <section xml:id="columnfamily">
+      <title>Column Family<indexterm><primary>Column Family</primary></indexterm></title>
+        <para>
+      Columns in HBase are grouped into <emphasis>column families</emphasis>.
+      All column members of a column family have a common prefix.  For example, the
+      columns <emphasis>courses:history</emphasis> and
+      <emphasis>courses:math</emphasis> are both members of the
+      <emphasis>courses</emphasis> column family.
+          The colon character (<literal
+          moreinfo="none">:</literal>) delimits the column family from the
+      <indexterm>column family <emphasis>qualifier</emphasis><primary>Column Family Qualifier</primary></indexterm>.
+        The column family prefix must be composed of
+      <emphasis>printable</emphasis> characters. The qualifying tail, the
+      column family <emphasis>qualifier</emphasis>, can be made of any
+      arbitrary bytes. Column families must be declared up front
+      at schema definition time whereas columns do not need to be
+      defined at schema time but can be conjured on the fly while
+      the table is up and running.</para>
+      <para>Physically, all column family members are stored together on the
+      filesystem.  Because tunings and
+      storage specifications are done at the column family level, it is
+      advised that all column family members have the same general access
+      pattern and size characteristics.</para>
+
+    </section>
+    <section>
+      <title>Cells<indexterm><primary>Cells</primary></indexterm></title>
+      <para>A <emphasis>{row, column, version} </emphasis>tuple exactly
+      specifies a <literal>cell</literal> in HBase. 
+      Cell content is uninterpreted bytes.</para>
+    </section>
+
+    <section xml:id="versions">
+      <title>Versions<indexterm><primary>Versions</primary></indexterm></title>
+
+      <para>A <emphasis>{row, column, version} </emphasis>tuple exactly
+      specifies a <literal>cell</literal> in HBase. It is possible to have an
+      unbounded number of cells where the row and column are the same but the
+      cell address differs only in its version dimension.</para>
+
+      <para>While rows and column keys are expressed as bytes, the version is
+      specified using a long integer. Typically this long contains time
+      instances such as those returned by
+      <code>java.util.Date.getTime()</code> or
+      <code>System.currentTimeMillis()</code>, that is: <quote>the difference,
+      measured in milliseconds, between the current time and midnight, January
+      1, 1970 UTC</quote>.</para>
+
+      <para>The HBase version dimension is stored in decreasing order, so that
+      when reading from a store file, the most recent values are found
+      first.</para>
+
+      <para>There is a lot of confusion over the semantics of
+      <literal>cell</literal> versions in HBase. In particular, a couple of
+      questions that often come up are:<itemizedlist>
+          <listitem>
+            <para>If multiple writes to a cell have the same version, are all
+            versions maintained or just the last?<footnote>
+                <para>Currently, only the last written is fetchable.</para>
+              </footnote></para>
+          </listitem>
+
+          <listitem>
+            <para>Is it OK to write cells in a non-increasing version
+            order?<footnote>
+                <para>Yes</para>
+              </footnote></para>
+          </listitem>
+        </itemizedlist></para>
+
+      <para>Below we describe how the version dimension in HBase currently
+      works<footnote>
+          <para>See <link
+          xlink:href="https://issues.apache.org/jira/browse/HBASE-2406">HBASE-2406</link>
+          for discussion of HBase versions. <link
+          xlink:href="http://outerthought.org/blog/417-ot.html">Bending time
+          in HBase</link> makes for a good read on the version, or time,
+          dimension in HBase. It has more detail on versioning than is
+          provided here. As of this writing, the limitation
+          <emphasis>Overwriting values at existing timestamps</emphasis>
+          mentioned in the article no longer holds in HBase. This section is
+          basically a synopsis of this article by Bruno Dumon.</para>
+        </footnote>.</para>
+
+      <section>
+        <title>Versions and HBase Operations</title>
+
+        <para>In this section we look at the behavior of the version dimension
+        for each of the core HBase operations.</para>
+
+        <section>
+          <title>Get/Scan</title>
+
+          <para>Gets are implemented on top of Scans. The below discussion of
+          Get applies equally to Scans.</para>
+
+          <para>By default, i.e. if you specify no explicit version, when
+          doing a <literal>get</literal>, the cell whose version has the
+          largest value is returned (which may or may not be the latest one
+          written, see later). The default behavior can be modified in the
+          following ways:</para>
+
+          <itemizedlist>
+            <listitem>
+              <para>to return more than one version, see <link
+              xlink:href="http://hbase.apache.org/docs/current/api/org/apache/hadoop/hbase/client/Get.html#setMaxVersions()">Get.setMaxVersions()</link></para>
+            </listitem>
+
+            <listitem>
+              <para>to return versions other than the latest, see <link
+              xlink:href="???">Get.setTimeRange()</link></para>
+
+              <para>To retrieve the latest version that is less than or equal
+              to a given value, thus giving the 'latest' state of the record
+              at a certain point in time, just use a range from 0 to the
+              desired version and set the max versions to 1.</para>
+            </listitem>
+          </itemizedlist>
+        </section>
+
+        <section>
+          <title>Put</title>
+
+          <para>Doing a put always creates a new version of a
+          <literal>cell</literal>, at a certain timestamp. By default the
+          system uses the server's <literal>currentTimeMillis</literal>, but
+          you can specify the version (= the long integer) yourself, on a
+          per-column level. This means you could assign a time in the past or
+          the future, or use the long value for non-time purposes.</para>
+
+          <para>To overwrite an existing value, do a put at exactly the same
+          row, column, and version as that of the cell you would
+          overshadow.</para>
+        </section>
+
+        <section>
+          <title>Delete</title>
+
+          <para>When performing a delete operation in HBase, there are two
+          ways to specify the versions to be deleted</para>
+
+          <itemizedlist>
+            <listitem>
+              <para>Delete all versions older than a certain timestamp</para>
+            </listitem>
+
+            <listitem>
+              <para>Delete the version at a specific timestamp</para>
+            </listitem>
+          </itemizedlist>
+
+          <para>A delete can apply to a complete row, a complete column
+          family, or to just one column. It is only in the last case that you
+          can delete explicit versions. For the deletion of a row or all the
+          columns within a family, it always works by deleting all cells older
+          than a certain version.</para>
+
+          <para>Deletes work by creating <emphasis>tombstone</emphasis>
+          markers. For example, let's suppose we want to delete a row. For
+          this you can specify a version, or else by default the
+          <literal>currentTimeMillis</literal> is used. What this means is
+          <quote>delete all cells where the version is less than or equal to
+          this version</quote>. HBase never modifies data in place, so for
+          example a delete will not immediately delete (or mark as deleted)
+          the entries in the storage file that correspond to the delete
+          condition. Rather, a so-called <emphasis>tombstone</emphasis> is
+          written, which will mask the deleted values<footnote>
+              <para>When HBase does a major compaction, the tombstones are
+              processed to actually remove the dead values, together with the
+              tombstones themselves.</para>
+            </footnote>. If the version you specified when deleting a row is
+          larger than the version of any value in the row, then you can
+          consider the complete row to be deleted.</para>
+        </section>
+      </section>
+
+      <section>
+        <title>Current Limitations</title>
+
+        <para>There are still some bugs (or at least 'undecided behavior')
+        with the version dimension that will be addressed by later HBase
+        releases.</para>
+
+        <section>
+          <title>Deletes mask Puts</title>
+
+          <para>Deletes mask puts, even puts that happened after the delete
+          was entered<footnote>
+              <para><link
+              xlink:href="https://issues.apache.org/jira/browse/HBASE-2256">HBASE-2256</link></para>
+            </footnote>. Remember that a delete writes a tombstone, which only
+          disappears after the next major compaction has run. Suppose you do
+          a delete of everything &lt;= T. After this you do a new put with a
+          timestamp &lt;= T. This put, even though it happened after the
+          delete, will be masked by the delete tombstone. Performing the put
+          will not fail, but when you do a get you will notice the put had no
+          effect. It will start working again only after the major compaction
+          has run. These issues should not be a problem if you use
+          always-increasing versions for new puts to a row. But they can occur
+          even if you do not care about time: just issue a delete and a put
+          immediately after each other, and there is some chance they happen
+          within the same millisecond.</para>
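+
+          <para>A sketch of the sequence just described (assuming an existing
+          <classname>Configuration</classname> <varname>conf</varname>; names
+          and timestamps are illustrative):</para>
+
+          <programlisting>
+// Sketch only: a put at a version at or below an earlier delete is masked until major compaction
+long T = 100L;
+HTable table = new HTable(conf, "myTable");
+Delete delete = new Delete(Bytes.toBytes("myRow"));
+delete.deleteColumns(Bytes.toBytes("myFamily"), Bytes.toBytes("myQualifier"), T);  // tombstone at or below T
+table.delete(delete);
+Put put = new Put(Bytes.toBytes("myRow"));
+put.add(Bytes.toBytes("myFamily"), Bytes.toBytes("myQualifier"), T, Bytes.toBytes("value"));
+table.put(put);                             // succeeds, but the tombstone masks it
+Result result = table.get(new Get(Bytes.toBytes("myRow")));
+// 'result' shows no value for the column until the next major compaction removes the tombstone
+          </programlisting>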
+        </section>
+
+        <section>
+          <title>Major compactions change query results</title>
+
+          <para><quote>...create three cell versions at t1, t2 and t3, with a
+          maximum-versions setting of 2. So when getting all versions, only
+          the values at t2 and t3 will be returned. But if you delete the
+          version at t2 or t3, the one at t1 will appear again. Obviously,
+          once a major compaction has run, such behavior will not be the case
+          anymore...<footnote>
+              <para>See <emphasis>Garbage Collection</emphasis> in <link
+              xlink:href="http://outerthought.org/blog/417-ot.html">Bending
+              time in HBase</link> </para>
+            </footnote></quote></para>
+        </section>
+      </section>
+    </section>
+  </chapter>
+
+
+
+  <chapter xml:id="architecture">
+    <title>Architecture</title>
+    <section>
+     <title>Daemons</title>
+     <section><title>Master</title>
+     </section>
+     <section><title>RegionServer</title>
+     </section>
+    </section>
+
+    <section>
+    <title>Regions</title>
+    <para>This section is all about Regions.</para>
+    <note>
+        <para>Regions are comprised of a Store per Column Family.
+        </para>
+    </note>
+
+    <section>
+      <title>Region Size</title>
+
+      <para>Region size is one of those tricky things, there are a few factors
+      to consider:</para>
+
+      <itemizedlist>
+        <listitem>
+          <para>Regions are the basic element of availability and
+          distribution.</para>
+        </listitem>
+
+        <listitem>
+          <para>HBase scales by having regions across many servers. Thus if
+          you have 2 regions for 16GB of data on a 20 node cluster, at most 2
+          nodes hold data and you are at a net loss there.</para>
+        </listitem>
+
+        <listitem>
+          <para>High region count has been known to make things slow; this is
+          getting better, but it is probably better to have 700 regions than
+          3000 for the same amount of data.</para>
+        </listitem>
+
+        <listitem>
+          <para>Low region count prevents the parallel scalability described
+          in the previous point. This really cannot be stressed enough, since
+          a common problem is loading 200MB of data into HBase and then
+          wondering why your awesome 10 node cluster is mostly idle.</para>
+        </listitem>
+
+        <listitem>
+          <para>There is not much memory footprint difference between 1 region
+          and 10 in terms of indexes, etc, held by the regionserver.</para>
+        </listitem>
+      </itemizedlist>
+
+      <para>It is probably best to stick to the default, perhaps going smaller
+      for hot tables (or manually splitting hot regions to spread the load over
+      the cluster), or going with a 1GB region size if your cell sizes tend to
+      be largish (100k and up).</para>
+    </section>
+
+      <section>
+        <title>Region Splits</title>
+
+        <para>Splits run unaided on the RegionServer; i.e. the Master does not
+        participate. The RegionServer splits a region, offlines the split
+        region and then adds the daughter regions to META, opens daughters on
+        the parent's hosting RegionServer and then reports the split to the
+        Master.</para>
+      </section>
+
+      <section>
+        <title>Region Load Balancer</title>
+
+        <para>
+        Periodically, and when there are not any regions in transition, a load balancer will run and move regions around to balance cluster load.
+        </para>
+      </section>
+
+      <section xml:id="store">
+          <title>Store</title>
+          <para>A Store hosts a MemStore and 0 or more StoreFiles.
+              StoreFiles are HFiles.
+          </para>
+    <section xml:id="hfile">
+      <title>HFile</title>
+      <section><title>HFile Format</title>
+          <para>The <emphasis>hfile</emphasis> file format is based on
+              the SSTable file described in the <link xlink:href="http://labs.google.com/papers/bigtable.html">BigTable [2006]</link> paper and on
+              Hadoop's <link xlink:href="http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/file/tfile/TFile.html">tfile</link>
+              (The unit test suite and the compression harness were taken directly from tfile). 
+              See Schubert Zhang's blog post on <link xlink:href="http://cloudepr.blogspot.com/2009/09/hfile-block-indexed-file-format-to.html">HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs</link> for a thorough introduction.
+          </para>
+      </section>
+
+      <section xml:id="hfile_tool">
+        <title>HFile Tool</title>
+
+        <para>To view a textualized version of hfile content, you can use
+        the <classname>org.apache.hadoop.hbase.io.hfile.HFile</classname>
+        tool. Type the following to see usage:<programlisting><code>$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile </code> </programlisting>For
+        example, to view the content of the file
+        <filename>hdfs://10.81.47.41:9000/hbase/TEST/1418428042/DSMP/4759508618286845475</filename>,
+        type the following:<programlisting> <code>$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:9000/hbase/TEST/1418428042/DSMP/4759508618286845475 </code> </programlisting>Leave
+        off the option -v to see just a summary of the hfile. See the usage
+        output for other things you can do with the <classname>HFile</classname>
+        tool.</para>
+      </section>
+      </section>
+      </section>
+
+    </section>
+  </chapter>
+
+  <chapter xml:id="wal">
+    <title >The WAL</title>
+
+    <subtitle>HBase's<link
+    xlink:href="http://en.wikipedia.org/wiki/Write-ahead_logging"> Write-Ahead
+    Log</link></subtitle>
+
+    <para>Each RegionServer adds updates to its Write-ahead Log (WAL)
+    first, and then to memory.</para>
+
+    <section>
+      <title>What is the purpose of the HBase WAL?</title>
+
+      <para>
+     See the Wikipedia
+     <link xlink:href="http://en.wikipedia.org/wiki/Write-ahead_logging">Write-Ahead
+    Log</link> article.
+
+      </para>
+    </section>
+
+    <section xml:id="wal_splitting">
+      <title>WAL splitting</title>
+
+      <subtitle>How edits are recovered from a crashed RegionServer</subtitle>
+
+      <para>When a RegionServer crashes, it will lose its ephemeral lease in
+      ZooKeeper...TODO</para>
+
+      <section>
+        <title><varname>hbase.hlog.split.skip.errors</varname></title>
+
+        <para>When set to <constant>true</constant>, the default, any error
+        encountered splitting will be logged, the problematic WAL will be
+        moved into the <filename>.corrupt</filename> directory under the hbase
+        <varname>rootdir</varname>, and processing will continue. If set to
+        <constant>false</constant>, the exception will be propagated and the
+        split logged as failed.<footnote>
+            <para>See <link
+            xlink:href="https://issues.apache.org/jira/browse/HBASE-2958">HBASE-2958
+            When hbase.hlog.split.skip.errors is set to false, we fail the
+            split but thats it</link>. We need to do more than just fail split
+            if this flag is set.</para>
+          </footnote></para>
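+
+        <para>For example, to have the split fail rather than skip problem
+        WALs, you might add the following to
+        <filename>hbase-site.xml</filename> (a sketch; the default is
+        <constant>true</constant>):</para>
+
+        <programlisting>
+&lt;property&gt;
+  &lt;name&gt;hbase.hlog.split.skip.errors&lt;/name&gt;
+  &lt;value&gt;false&lt;/value&gt;
+&lt;/property&gt;
+        </programlisting>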
+      </section>
+
+      <section>
+        <title>How EOFExceptions are treated when splitting a crashed
+        RegionServers' WALs</title>
+
+        <para>If we get an EOF while splitting logs, we proceed with the split
+        even when <varname>hbase.hlog.split.skip.errors</varname> ==
+        <constant>false</constant>. An EOF while reading the last log in the
+        set of files to split is near-guaranteed since the RegionServer likely
+        crashed mid-write of a record. But we'll continue even if we got an
+        EOF reading a file other than the last one in the set.<footnote>
+            <para>For background, see <link
+            xlink:href="https://issues.apache.org/jira/browse/HBASE-2643">HBASE-2643
+            Figure how to deal with eof splitting logs</link></para>
+          </footnote></para>
+      </section>
+    </section>
+
+  </chapter>
+
+  <chapter xml:id="blooms">
+    <title>Bloom Filters</title>
+
+    <para>Bloom filters were developed over in <link
+    xlink:href="https://issues.apache.org/jira/browse/HBASE-1200">HBase-1200
+    Add bloomfilters</link>.<footnote>
+        <para>For description of the development process -- why static blooms
+        rather than dynamic -- and for an overview of the unique properties
+        that pertain to blooms in HBase, as well as possible future
+        directions, see the <emphasis>Development Process</emphasis> section
+        of the document <link
+        xlink:href="https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf">BloomFilters
+        in HBase</link> attached to <link
+        xlink:href="https://issues.apache.org/jira/browse/HBASE-1200">HBase-1200</link>.</para>
+      </footnote><footnote>
+        <para>The bloom filters described here are actually version two of
+        blooms in HBase. In versions up to 0.19.x, HBase had a dynamic bloom
+        option based on work done by the <link
+        xlink:href="http://www.one-lab.org">European Commission One-Lab
+        Project 034819</link>. The core of the HBase bloom work was later
+        pulled up into Hadoop to implement org.apache.hadoop.io.BloomMapFile.
+        Version 1 of HBase blooms never worked that well. Version 2 is a
+        rewrite from scratch though again it starts with the one-lab
+        work.</para>
+      </footnote></para>
+
+    <section>
+      <title>Configurations</title>
+
+      <para>Blooms are enabled by specifying options on a column family in the
+      HBase shell or, in java code, via
+      <classname>org.apache.hadoop.hbase.HColumnDescriptor</classname>.</para>
+
+      <section>
+        <title><code>HColumnDescriptor</code> option</title>
+
+        <para>Use <code>HColumnDescriptor.setBloomFilterType(NONE | ROW |
+        ROWCOL)</code> to enable blooms per Column Family. Default =
+        <varname>NONE</varname> for no bloom filters. If
+        <varname>ROW</varname>, the hash of the row will be added to the bloom
+        on each insert. If <varname>ROWCOL</varname>, the hash of the row +
+        column family + column family qualifier will be added to the bloom on
+        each key insert.</para>
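+
+        <para>A minimal sketch in java (assuming an existing
+        <classname>Configuration</classname> <varname>conf</varname> and that
+        <classname>StoreFile.BloomType</classname> is the enum carrying the
+        <varname>NONE</varname> | <varname>ROW</varname> |
+        <varname>ROWCOL</varname> values named above; check the
+        <classname>HColumnDescriptor</classname> javadoc for your
+        release):</para>
+
+        <programlisting>
+// Sketch only: create a table whose column family carries a ROWCOL bloom
+HColumnDescriptor family = new HColumnDescriptor("myFamily");
+family.setBloomFilterType(StoreFile.BloomType.ROWCOL);   // assumed enum; ROW hashes the row key only
+HTableDescriptor desc = new HTableDescriptor("myTable");
+desc.addFamily(family);
+HBaseAdmin admin = new HBaseAdmin(conf);
+admin.createTable(desc);
+        </programlisting>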
+      </section>
+
+      <section>
+        <title><varname>io.hfile.bloom.enabled</varname> global kill
+        switch</title>
+
+        <para><code>io.hfile.bloom.enabled</code> in
+        <classname>Configuration</classname> serves as the kill switch in case
+        something goes wrong. Default = <varname>true</varname>.</para>
+      </section>
+
+      <section>
+        <title><varname>io.hfile.bloom.error.rate</varname></title>
+
+        <para><varname>io.hfile.bloom.error.rate</varname> = average false
+        positive rate. Default = 1%. Decrease rate by ½ (e.g. to .5%) == +1
+        bit per bloom entry.</para>
+      </section>
+
+      <section>
+        <title><varname>io.hfile.bloom.max.fold</varname></title>
+
+        <para><varname>io.hfile.bloom.max.fold</varname> = guaranteed minimum
+        fold rate. Most people should leave this alone. Default = 7, or can
+        collapse to at least 1/128th of original size. See the
+        <emphasis>Development Process</emphasis> section of the document <link
+        xlink:href="https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf">BloomFilters
+        in HBase</link> for more on what this option means.</para>
+      </section>
+    </section>
+
+    <section xml:id="bloom_footprint">
+      <title>Bloom StoreFile footprint</title>
+
+      <para>Bloom filters add an entry to the <classname>StoreFile</classname>
+      general <classname>FileInfo</classname> data structure and then two
+      extra entries to the <classname>StoreFile</classname> metadata
+      section.</para>
+
+      <section>
+        <title>BloomFilter in the <classname>StoreFile</classname>
+        <classname>FileInfo</classname> data structure</title>
+
+        <section>
+          <title><varname>BLOOM_FILTER_TYPE</varname></title>
+
+          <para><classname>FileInfo</classname> has a
+          <varname>BLOOM_FILTER_TYPE</varname> entry which is set to
+          <varname>NONE</varname>, <varname>ROW</varname> or
+          <varname>ROWCOL</varname>.</para>
+        </section>
+      </section>
+
+      <section>
+        <title>BloomFilter entries in <classname>StoreFile</classname>
+        metadata</title>
+
+        <section>
+          <title><varname>BLOOM_FILTER_META</varname></title>
+
+          <para><varname>BLOOM_FILTER_META</varname> holds Bloom Size, Hash
+          Function used, etc. It is small in size and is cached on
+          <classname>StoreFile.Reader</classname> load.</para>
+        </section>
+
+        <section>
+          <title><varname>BLOOM_FILTER_DATA</varname></title>
+
+          <para><varname>BLOOM_FILTER_DATA</varname> is the actual bloomfilter
+          data. It is obtained on demand and stored in the LRU cache, if the
+          cache is enabled (it is enabled by default).</para>
+        </section>
+      </section>
+    </section>
+  </chapter>
+
+  <appendix xml:id="tools">
+    <title >Tools</title>
+
+    <para>Here we list HBase tools for administration, analysis, fixup, and
+    debugging.</para>
+    <section xml:id="hbck">
+        <title>HBase <application>hbck</application></title>
+        <subtitle>An <emphasis>fsck</emphasis> for your HBase install</subtitle>
+        <para>To run <application>hbck</application> against your HBase cluster run
+        <programlisting>$ ./bin/hbase hbck</programlisting>
+        At the end of the command's output it prints <emphasis>OK</emphasis>
+        or <emphasis>INCONSISTENCY</emphasis>. If your cluster reports
+        inconsistencies, pass <command>-details</command> to see more detail emitted.
+        If there are inconsistencies, run <command>hbck</command> a few times because an
+        inconsistency may be transient (e.g. the cluster is starting up or a region is
+        splitting).
+        Passing <command>-fix</command> may correct the inconsistency (this is
+        an experimental feature).
+        </para>
+    </section>
+    <section><title>HFile Tool</title>
+        <para>See <link linkend="hfile_tool" >HFile Tool</link>.</para>
+    </section>
+    <section xml:id="wal_tools">
+      <title>WAL Tools</title>
+
+      <section xml:id="hlog_tool">
+        <title><classname>HLog</classname> tool</title>
+
+        <para>The main method on <classname>HLog</classname> offers manual
+        split and dump facilities. Pass it WALs or the product of a split, the
+        content of the <filename>recovered.edits</filename> directory.</para>
+
+        <para>You can get a textual dump of a WAL file content by doing the
+        following:<programlisting> <code>$ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.HLog --dump hdfs://example.org:9000/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012</code> </programlisting>The
+        return code will be non-zero if there are issues with the file, so you
+        can test the wholesomeness of a file by redirecting <varname>STDOUT</varname> to
+        <code>/dev/null</code> and testing the program's return code.</para>
+
+        <para>Similarly, you can force a split of a log file directory by
+        doing:<programlisting> <code>$ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.HLog --split hdfs://example.org:9000/hbase/.logs/example.org,60020,1283516293161/</code></programlisting></para>
+      </section>
+    </section>
+  </appendix>
+  <appendix xml:id="compression">
+    <title >Compression</title>
+
+    <para>TODO: Compression in hbase...</para>
+    <section>
+    <title>
+    LZO
+    </title>
+    <para>
+    Running with LZO enabled is recommended though HBase does not ship with
+    LZO because of licensing issues.  To install LZO and verify its installation
+    and that it is available to HBase, do the following...
+    </para>
+    </section>
+
+    <section id="hbase.regionserver.codecs">
+    <title>
+    <varname>
+    hbase.regionserver.codecs
+    </varname>
+    </title>
+    <para>
+    To have a RegionServer test a set of codecs and fail to start if any
+    codec is missing or misinstalled, add the configuration
+    <varname>
+    hbase.regionserver.codecs
+    </varname>
+    to your <filename>hbase-site.xml</filename> with a value of
+    codecs to test on startup.  For example if the 
+    <varname>
+    hbase.regionserver.codecs
+    </varname> value is <code>lzo,gz</code> and if lzo is not present
+    or improperly installed, the misconfigured RegionServer will fail
+    to start.
+    </para>
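+    <para>
+    For example (the value is illustrative), in <filename>hbase-site.xml</filename>:
+    </para>
+    <programlisting>
+&lt;property&gt;
+  &lt;name&gt;hbase.regionserver.codecs&lt;/name&gt;
+  &lt;value&gt;lzo,gz&lt;/value&gt;
+&lt;/property&gt;
+    </programlisting>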
+    <para>
+    Administrators might make use of this facility to guard against
+    the case where a new server is added to the cluster but the cluster
+    requires installation of a particular codec.
+    </para>
+
+    </section>
+  </appendix>
+
+  <appendix xml:id="faq">
+    <title >FAQ</title>
+    <qandaset defaultlabel='faq'>
+        <qandadiv><title>General</title>
+        <qandaentry>
+                <question><para>Are there other HBase FAQs?</para></question>
+            <answer>
+                <para>
+              See the FAQ that is up on the wiki, <link xlink:href="http://wiki.apache.org/hadoop/Hbase/FAQ">HBase Wiki FAQ</link>
+              as well as the <link xlink:href="http://wiki.apache.org/hadoop/Hbase/Troubleshooting">Troubleshooting</link> page and
+              the <link xlink:href="http://wiki.apache.org/hadoop/Hbase/FrequentlySeenErrors">Frequently Seen Errors</link> page.
+                </para>
+            </answer>
+        </qandaentry>
+    </qandadiv>
+    <qandadiv xml:id="ec2"><title>EC2</title>
+        <qandaentry>
+            <question><para>
+            Why doesn't my remote java connection into my ec2 cluster work?
+            </para></question>
+            <answer>
+                <para>
+          See Andrew's answer here, up on the user list: <link xlink:href="http://search-hadoop.com/m/sPdqNFAwyg2">Remote Java client connection into EC2 instance</link>.
+                </para>
+            </answer>
+        </qandaentry>
+    </qandadiv>
+        <qandadiv><title>Building HBase</title>
+        <qandaentry>
+            <question><para>
+When I build, why do I always get <code>Unable to find resource 'VM_global_library.vm'</code>?
+            </para></question>
+            <answer>
+                <para>
+                    Ignore it.  It's not an error.  It is <link xlink:href="http://jira.codehaus.org/browse/MSITE-286">officially ugly</link> though.
+                </para>
+            </answer>
+        </qandaentry>
+    </qandadiv>
+        <qandadiv><title>Upgrading your HBase</title>
+        <qandaentry>
+            <question xml:id="0_90_upgrade"><para>
+            What's involved in upgrading to HBase 0.90.x from 0.89.x or from 0.20.x?
+            </para></question>
+            <answer>
+          <para>This version of HBase, 0.90.x, can be started on data written by
+              HBase 0.20.x or HBase 0.89.x.  There is no need for a migration step.
+              HBase 0.89.x and 0.90.x do write out the names of region directories
+              differently -- they name them with an md5 hash of the region name rather
+              than a jenkins hash -- which means that once started, there is no
+              going back to HBase 0.20.x.
+          </para>
+            </answer>
+        </qandaentry>
+    </qandadiv>
+    </qandaset>
+  </appendix>
+
+
+
+
+  <index xml:id="book_index">
+  <title>Index</title>
+  </index>
+</book>
diff --git a/0.90/src/examples/README.txt b/0.90/src/examples/README.txt
new file mode 100644
index 0000000..009c3d8
--- /dev/null
+++ b/0.90/src/examples/README.txt
@@ -0,0 +1,11 @@
+Example code.
+
+* src/examples/thrift
+    Examples for interacting with HBase via Thrift from C++, PHP, Python and Ruby.
+* org.apache.hadoop.hbase.mapreduce.SampleUploader
+    Demonstrates uploading data from text files (presumably stored in HDFS) to HBase.
+* org.apache.hadoop.hbase.mapreduce.IndexBuilder
+    Demonstrates map/reduce with a table as the source and other tables as the sink.
+
+As of 0.20 there is no ant target for building the examples. You can easily build
+the Java examples by copying them to the right location in the main source hierarchy.
\ No newline at end of file
diff --git a/0.90/src/examples/mapreduce/index-builder-setup.rb b/0.90/src/examples/mapreduce/index-builder-setup.rb
new file mode 100644
index 0000000..141c1bb
--- /dev/null
+++ b/0.90/src/examples/mapreduce/index-builder-setup.rb
@@ -0,0 +1,15 @@
+# Set up sample data for IndexBuilder example
+create "people", "attributes"
+create "people-email", "INDEX"
+create "people-phone", "INDEX"
+create "people-name", "INDEX"
+
+[["1", "jenny", "jenny@example.com", "867-5309"],
+ ["2", "alice", "alice@example.com", "555-1234"],
+ ["3", "kevin", "kevinpet@example.com", "555-1212"]].each do |fields|
+  (id, name, email, phone) = *fields
+  put "people", id, "attributes:name", name
+  put "people", id, "attributes:email", email
+  put "people", id, "attributes:phone", phone
+end
+  
diff --git a/0.90/src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce/IndexBuilder.java b/0.90/src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce/IndexBuilder.java
new file mode 100644
index 0000000..31c1b38
--- /dev/null
+++ b/0.90/src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce/IndexBuilder.java
@@ -0,0 +1,154 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.util.HashMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.util.GenericOptionsParser;
+
+/**
+ * Example map/reduce job to construct index tables that can be used to quickly
+ * find a row based on the value of a column. It demonstrates:
+ * <ul>
+ * <li>Using TableInputFormat and TableMapReduceUtil to use an HTable as input
+ * to a map/reduce job.</li>
+ * <li>Passing values from main method to children via the configuration.</li>
+ * <li>Using MultiTableOutputFormat to output to multiple tables from a
+ * map/reduce job.</li>
+ * <li>A real use case of building a secondary index over a table.</li>
+ * </ul>
+ * 
+ * <h3>Usage</h3>
+ * 
+ * <p>
+ * Modify ${HADOOP_HOME}/conf/hadoop-env.sh to include the hbase jar, the
+ * zookeeper jar, the examples output directory, and the hbase conf directory in
+ * HADOOP_CLASSPATH, and then run
+ * <tt><strong>bin/hadoop org.apache.hadoop.hbase.mapreduce.IndexBuilder TABLE_NAME COLUMN_FAMILY ATTR [ATTR ...]</strong></tt>
+ * </p>
+ * 
+ * <p>
+ * To run with the sample data provided in index-builder-setup.rb, use the
+ * arguments <strong><tt>people attributes name email phone</tt></strong>.
+ * </p>
+ * 
+ * <p>
+ * This code was written against HBase 0.21 trunk.
+ * </p>
+ */
+public class IndexBuilder {
+  /** the column family containing the indexed row key */
+  public static final byte[] INDEX_COLUMN = Bytes.toBytes("INDEX");
+  /** the qualifier containing the indexed row key */
+  public static final byte[] INDEX_QUALIFIER = Bytes.toBytes("ROW");
+
+  /**
+   * Internal Mapper to be run by Hadoop.
+   */
+  public static class Map extends
+      Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, Writable> {
+    private byte[] family;
+    private HashMap<byte[], ImmutableBytesWritable> indexes;
+
+    @Override
+    protected void map(ImmutableBytesWritable rowKey, Result result, Context context)
+        throws IOException, InterruptedException {
+      for(java.util.Map.Entry<byte[], ImmutableBytesWritable> index : indexes.entrySet()) {
+        byte[] qualifier = index.getKey();
+        ImmutableBytesWritable tableName = index.getValue();
+        byte[] value = result.getValue(family, qualifier);
+        if (value != null) {
+          // original: row 123 attribute:phone 555-1212
+          // index: row 555-1212 INDEX:ROW 123
+          Put put = new Put(value);
+          put.add(INDEX_COLUMN, INDEX_QUALIFIER, rowKey.get());
+          context.write(tableName, put);
+        }
+      }
+    }
+
+    @Override
+    protected void setup(Context context) throws IOException,
+        InterruptedException {
+      Configuration configuration = context.getConfiguration();
+      String tableName = configuration.get("index.tablename");
+      String[] fields = configuration.getStrings("index.fields");
+      String familyName = configuration.get("index.familyname");
+      family = Bytes.toBytes(familyName);
+      indexes = new HashMap<byte[], ImmutableBytesWritable>();
+      for(String field : fields) {
+        // if the table is "people" and the field to index is "email", then the
+        // index table will be called "people-email"
+        indexes.put(Bytes.toBytes(field),
+            new ImmutableBytesWritable(Bytes.toBytes(tableName + "-" + field)));
+      }
+    }
+  }
+
+  /**
+   * Job configuration.
+   */
+  public static Job configureJob(Configuration conf, String [] args)
+  throws IOException {
+    String tableName = args[0];
+    String columnFamily = args[1];
+    System.out.println("****" + tableName);
+    conf.set(TableInputFormat.SCAN, TableMapReduceUtil.convertScanToString(new Scan()));
+    conf.set(TableInputFormat.INPUT_TABLE, tableName);
+    conf.set("index.tablename", tableName);
+    conf.set("index.familyname", columnFamily);
+    String[] fields = new String[args.length - 2];
+    for(int i = 0; i < fields.length; i++) {
+      fields[i] = args[i + 2];
+    }
+    conf.setStrings("index.fields", fields);
+    conf.set("index.familyname", "attributes");
+    Job job = new Job(conf, tableName);
+    job.setJarByClass(IndexBuilder.class);
+    job.setMapperClass(Map.class);
+    job.setNumReduceTasks(0);
+    job.setInputFormatClass(TableInputFormat.class);
+    job.setOutputFormatClass(MultiTableOutputFormat.class);
+    return job;
+  }
+
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
+    if(otherArgs.length < 3) {
+      System.err.println("Only " + otherArgs.length + " arguments supplied, required: 3");
+      System.err.println("Usage: IndexBuilder <TABLE_NAME> <COLUMN_FAMILY> <ATTR> [<ATTR> ...]");
+      System.exit(-1);
+    }
+    Job job = configureJob(conf, otherArgs);
+    System.exit(job.waitForCompletion(true) ? 0 : 1);
+  }
+}
diff --git a/0.90/src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce/SampleUploader.java b/0.90/src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce/SampleUploader.java
new file mode 100644
index 0000000..5629cca
--- /dev/null
+++ b/0.90/src/examples/mapreduce/org/apache/hadoop/hbase/mapreduce/SampleUploader.java
@@ -0,0 +1,148 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
+import org.apache.hadoop.util.GenericOptionsParser;
+
+/**
+ * Sample Uploader MapReduce
+ * <p>
+ * This is EXAMPLE code.  You will need to change it to work for your context.
+ * <p>
+ * Uses {@link TableReducer} to put the data into HBase. Change the InputFormat 
+ * to suit your data.  In this example, we are importing a CSV file.
+ * <p>
+ * <pre>row,family,qualifier,value</pre>
+ * <p>
+ * The table and columnfamily we're to insert into must preexist.
+ * <p>
+ * There is no reducer in this example as it is not necessary and adds 
+ * significant overhead.  If you need to do any massaging of data before
+ * inserting into HBase, you can do this in the map as well.
+ * <p>Do the following to start the MR job:
+ * <pre>
+ * ./bin/hadoop org.apache.hadoop.hbase.mapreduce.SampleUploader /tmp/input.csv TABLE_NAME
+ * </pre>
+ * <p>
+ * This code was written against HBase 0.21 trunk.
+ */
+public class SampleUploader {
+
+  private static final String NAME = "SampleUploader";
+  
+  static class Uploader 
+  extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
+
+    private long checkpoint = 100;
+    private long count = 0;
+    
+    @Override
+    public void map(LongWritable key, Text line, Context context)
+    throws IOException {
+      
+      // Input is a CSV file
+      // Each map() is a single line, where the key is the line number
+      // Each line is comma-delimited; row,family,qualifier,value
+            
+      // Split CSV line
+      String [] values = line.toString().split(",");
+      if(values.length != 4) {
+        return;
+      }
+      
+      // Extract each value
+      byte [] row = Bytes.toBytes(values[0]);
+      byte [] family = Bytes.toBytes(values[1]);
+      byte [] qualifier = Bytes.toBytes(values[2]);
+      byte [] value = Bytes.toBytes(values[3]);
+      
+      // Create Put
+      Put put = new Put(row);
+      put.add(family, qualifier, value);
+      
+      // Uncomment below to disable WAL. This will improve performance but means
+      // you will experience data loss in the case of a RegionServer crash.
+      // put.setWriteToWAL(false);
+      
+      try {
+        context.write(new ImmutableBytesWritable(row), put);
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+      
+      // Set status every checkpoint lines
+      if(++count % checkpoint == 0) {
+        context.setStatus("Emitting Put " + count);
+      }
+    }
+  }
+  
+  /**
+   * Job configuration.
+   */
+  public static Job configureJob(Configuration conf, String [] args)
+  throws IOException {
+    Path inputPath = new Path(args[0]);
+    String tableName = args[1];
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJarByClass(Uploader.class);
+    FileInputFormat.setInputPaths(job, inputPath);
+    // The mapper parses comma-delimited text lines, so use TextInputFormat.
+    job.setInputFormatClass(TextInputFormat.class);
+    job.setMapperClass(Uploader.class);
+    // No reducers.  Just write straight to table.  Call initTableReducerJob
+    // because it sets up the TableOutputFormat.
+    TableMapReduceUtil.initTableReducerJob(tableName, null, job);
+    job.setNumReduceTasks(0);
+    return job;
+  }
+
+  /**
+   * Main entry point.
+   * 
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
+    if(otherArgs.length != 2) {
+      System.err.println("Wrong number of arguments: " + otherArgs.length);
+      System.err.println("Usage: " + NAME + " <input> <tablename>");
+      System.exit(-1);
+    }
+    Job job = configureJob(conf, otherArgs);
+    System.exit(job.waitForCompletion(true) ? 0 : 1);
+  }
+}
diff --git a/0.90/src/examples/thrift/DemoClient.cpp b/0.90/src/examples/thrift/DemoClient.cpp
new file mode 100644
index 0000000..ac1972f
--- /dev/null
+++ b/0.90/src/examples/thrift/DemoClient.cpp
@@ -0,0 +1,300 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/*
+ * Instructions:
+ * 1. Run Thrift to generate the cpp module HBase
+ *    thrift --gen cpp ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+ * 2. Execute {make}.
+ * 3. Execute {./DemoClient}.
+ */ 
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/time.h>
+#include <poll.h>
+
+#include <iostream>
+#include <cassert>
+
+#include <boost/lexical_cast.hpp>
+
+#include <protocol/TBinaryProtocol.h>
+#include <transport/TSocket.h>
+#include <transport/TTransportUtils.h>
+
+#include "Hbase.h"
+
+using namespace facebook::thrift;
+using namespace facebook::thrift::protocol;
+using namespace facebook::thrift::transport;
+
+using namespace apache::hadoop::hbase::thrift;
+
+typedef std::vector<std::string> StrVec;
+typedef std::map<std::string,std::string> StrMap;
+typedef std::vector<ColumnDescriptor> ColVec;
+typedef std::map<std::string,ColumnDescriptor> ColMap;
+typedef std::vector<TCell> CellVec;
+typedef std::map<std::string,TCell> CellMap;
+
+
+static void
+printRow(const TRowResult &rowResult)
+{
+  std::cout << "row: " << rowResult.row << ", cols: ";
+  for (CellMap::const_iterator it = rowResult.columns.begin(); 
+      it != rowResult.columns.end(); ++it) {
+    std::cout << it->first << " => " << it->second.value << "; ";
+  }
+  std::cout << std::endl;
+}
+
+static void
+printVersions(const std::string &row, const CellVec &versions)
+{
+  std::cout << "row: " << row << ", values: ";
+  for (CellVec::const_iterator it = versions.begin(); it != versions.end(); ++it) {
+    std::cout << (*it).value << "; ";
+  }
+  std::cout << std::endl;
+}
+
+int 
+main(int argc, char** argv) 
+{
+  boost::shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
+  boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
+  boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
+  HbaseClient client(protocol);
+
+  try {
+    transport->open();
+
+    std::string t("demo_table");
+
+    //
+    // Scan all tables, look for the demo table and delete it.
+    //
+    std::cout << "scanning tables..." << std::endl;
+    StrVec tables;
+    client.getTableNames(tables);
+    for (StrVec::const_iterator it = tables.begin(); it != tables.end(); ++it) {
+      std::cout << "  found: " << *it << std::endl;
+      if (t == *it) {
+        if (client.isTableEnabled(*it)) {
+          std::cout << "    disabling table: " << *it << std::endl;
+          client.disableTable(*it);
+        }
+        std::cout << "    deleting table: " << *it << std::endl;
+        client.deleteTable(*it);
+      }
+    }
+
+    //
+    // Create the demo table with two column families, entry: and unused:
+    //
+    ColVec columns;
+    columns.push_back(ColumnDescriptor());
+    columns.back().name = "entry:";
+    columns.back().maxVersions = 10;
+    columns.push_back(ColumnDescriptor());
+    columns.back().name = "unused:";
+
+    std::cout << "creating table: " << t << std::endl;
+    try {
+      client.createTable(t, columns);
+    } catch (AlreadyExists &ae) {
+      std::cout << "WARN: " << ae.message << std::endl;
+    }
+
+    ColMap columnMap;
+    client.getColumnDescriptors(columnMap, t);
+    std::cout << "column families in " << t << ": " << std::endl;
+    for (ColMap::const_iterator it = columnMap.begin(); it != columnMap.end(); ++it) {
+      std::cout << "  column: " << it->second.name << ", maxVer: " << it->second.maxVersions << std::endl;
+    }
+
+    //
+    // Test UTF-8 handling
+    //
+    std::string invalid("foo-\xfc\xa1\xa1\xa1\xa1\xa1");
+    std::string valid("foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB");
+
+    // non-utf8 is fine for data
+    std::vector<Mutation> mutations;
+    mutations.push_back(Mutation());
+    mutations.back().column = "entry:foo";
+    mutations.back().value = invalid;
+    client.mutateRow(t, "foo", mutations);
+
+    // try empty strings
+    mutations.clear();
+    mutations.push_back(Mutation());
+    mutations.back().column = "entry:";
+    mutations.back().value = "";
+    client.mutateRow(t, "", mutations);
+
+    // this row name is valid utf8
+    mutations.clear();
+    mutations.push_back(Mutation());
+    mutations.back().column = "entry:foo";
+    mutations.back().value = valid;
+    client.mutateRow(t, valid, mutations);
+
+    // non-utf8 is not allowed in row names
+    try {
+      mutations.clear();
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:foo";
+      mutations.back().value = invalid;
+      client.mutateRow(t, invalid, mutations);
+      std::cout << "FATAL: shouldn't get here!" << std::endl;
+      exit(-1);
+    } catch (IOError e) {
+      std::cout << "expected error: " << e.message << std::endl;
+    }
+
+    // Run a scanner on the rows we just created
+    StrVec columnNames;
+    columnNames.push_back("entry:");
+
+    std::cout << "Starting scanner..." << std::endl;
+    int scanner = client.scannerOpen(t, "", columnNames);
+    try {
+      while (true) {
+        TRowResult value;
+        client.scannerGet(value, scanner);
+        printRow(value);
+      }
+    } catch (NotFound &nf) {
+      client.scannerClose(scanner);
+      std::cout << "Scanner finished" << std::endl;
+    }
+
+    //
+    // Run some operations on a bunch of rows.
+    //
+    for (int i = 100; i >= 0; --i) {
+      // format row keys as "00000" to "00100"
+      char buf[32];
+      sprintf(buf, "%0.5d", i);
+      std::string row(buf);
+      
+      TRowResult rowResult;
+
+      mutations.clear();
+      mutations.push_back(Mutation());
+      mutations.back().column = "unused:";
+      mutations.back().value = "DELETE_ME";
+      client.mutateRow(t, row, mutations);
+      client.getRow(rowResult, t, row);
+      printRow(rowResult);
+      client.deleteAllRow(t, row);
+
+      mutations.clear();
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:num";
+      mutations.back().value = "0";
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:foo";
+      mutations.back().value = "FOO";
+      client.mutateRow(t, row, mutations);
+      client.getRow(rowResult, t, row);
+      printRow(rowResult);
+
+      // sleep to force later timestamp 
+      poll(0, 0, 50);
+
+      mutations.clear();
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:foo";
+      mutations.back().isDelete = true;
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:num";
+      mutations.back().value = "-1";
+      client.mutateRow(t, row, mutations);
+      client.getRow(rowResult, t, row);
+      printRow(rowResult);
+
+      mutations.clear();
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:num";
+      mutations.back().value = boost::lexical_cast<std::string>(i);
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:sqr";
+      mutations.back().value = boost::lexical_cast<std::string>(i*i);
+      client.mutateRow(t, row, mutations);
+      client.getRow(rowResult, t, row);
+      printRow(rowResult);
+
+      mutations.clear();
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:num";
+      mutations.back().value = "-999";
+      mutations.push_back(Mutation());
+      mutations.back().column = "entry:sqr";
+      mutations.back().isDelete = true;
+      client.mutateRowTs(t, row, mutations, 1); // shouldn't override latest
+      client.getRow(rowResult, t, row);
+      printRow(rowResult);
+
+      CellVec versions;
+      client.getVer(versions, t, row, "entry:num", 10);
+      printVersions(row, versions);
+      assert(versions.size() == 4);
+      std::cout << std::endl;
+
+      try {
+        TCell value;
+        client.get(value, t, row, "entry:foo");
+        std::cout << "FATAL: shouldn't get here!" << std::endl;
+        exit(-1);
+      } catch (NotFound &nf) {
+        // blank
+      }
+    }
+
+    // scan all rows/columns
+
+    columnNames.clear();
+    client.getColumnDescriptors(columnMap, t);
+    for (ColMap::const_iterator it = columnMap.begin(); it != columnMap.end(); ++it) {
+      std::cout << "column with name: " + it->second.name << std::endl;
+      columnNames.push_back(it->second.name + ":");
+    }
+
+    std::cout << "Starting scanner..." << std::endl;
+    scanner = client.scannerOpenWithStop(t, "00020", "00040", columnNames);
+    try {
+      while (true) {
+        TRowResult value;
+        client.scannerGet(value, scanner);
+        printRow(value);
+      }
+    } catch (NotFound &nf) {
+      client.scannerClose(scanner);
+      std::cout << "Scanner finished" << std::endl;
+    }
+
+    transport->close();
+  } 
+  catch (TException &tx) {
+    printf("ERROR: %s\n", tx.what());
+  }
+
+}
diff --git a/0.90/src/examples/thrift/DemoClient.java b/0.90/src/examples/thrift/DemoClient.java
new file mode 100644
index 0000000..13562e3
--- /dev/null
+++ b/0.90/src/examples/thrift/DemoClient.java
@@ -0,0 +1,331 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import java.io.UnsupportedEncodingException;
+import java.nio.ByteBuffer;
+import java.nio.charset.CharacterCodingException;
+import java.nio.charset.Charset;
+import java.nio.charset.CharsetDecoder;
+import java.text.NumberFormat;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.SortedMap;
+
+import org.apache.hadoop.hbase.thrift.generated.AlreadyExists;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.thrift.generated.IOError;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.Mutation;
+import org.apache.hadoop.hbase.thrift.generated.NotFound;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+
+import com.facebook.thrift.TException;
+import com.facebook.thrift.protocol.TBinaryProtocol;
+import com.facebook.thrift.protocol.TProtocol;
+import com.facebook.thrift.transport.TSocket;
+import com.facebook.thrift.transport.TTransport;
+
+/*
+ * Instructions:
+ * 1. Run Thrift to generate the java module HBase
+ *    thrift --gen java ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+ * 2. Acquire a jar of compiled Thrift java classes.  As of this writing, HBase ships 
+ *    with this jar (libthrift-[VERSION].jar).  If this jar is not present, or it is 
+ *    out-of-date with your current version of thrift, you can compile the jar 
+ *    yourself by executing {ant} in {$THRIFT_HOME}/lib/java.
+ * 3. Compile and execute this file with both the libthrift jar and the gen-java/ 
+ *    directory in the classpath.  This can be done on the command-line with the 
+ *    following lines: (from the directory containing this file and gen-java/)
+ *    
+ *    javac -cp /path/to/libthrift/jar.jar:gen-java/ DemoClient.java
+ *    mv DemoClient.class gen-java/org/apache/hadoop/hbase/thrift/
+ *    java -cp /path/to/libthrift/jar.jar:gen-java/ org.apache.hadoop.hbase.thrift.DemoClient
+ * 
+ */
+public class DemoClient {
+  
+  protected int port = 9090;
+  CharsetDecoder decoder = null;
+
+  public static void main(String[] args) 
+  throws IOError, TException, NotFound, UnsupportedEncodingException, IllegalArgument, AlreadyExists {
+    DemoClient client = new DemoClient();
+    client.run();
+  }
+
+  DemoClient() {
+    decoder = Charset.forName("UTF-8").newDecoder();
+  }
+  
+  // Helper to translate byte[]'s to UTF8 strings
+  private String utf8(byte[] buf) {
+    try {
+      return decoder.decode(ByteBuffer.wrap(buf)).toString();
+    } catch (CharacterCodingException e) {
+      return "[INVALID UTF-8]";
+    }
+  }
+  
+  // Helper to translate strings to UTF8 bytes
+  private byte[] bytes(String s) {
+    try {
+      return s.getBytes("UTF-8");
+    } catch (UnsupportedEncodingException e) {
+      e.printStackTrace();
+      return null;
+    }
+  }
+  
+  private void run() throws IOError, TException, NotFound, IllegalArgument,
+      AlreadyExists {
+    
+    TTransport transport = new TSocket("localhost", port);
+    TProtocol protocol = new TBinaryProtocol(transport, true, true);
+    Hbase.Client client = new Hbase.Client(protocol);
+
+    transport.open();
+
+    byte[] t = bytes("demo_table");
+    
+    //
+    // Scan all tables, look for the demo table and delete it.
+    //
+    System.out.println("scanning tables...");
+    for (byte[] name : client.getTableNames()) {
+      System.out.println("  found: " + utf8(name));
+      if (utf8(name).equals(utf8(t))) {
+        if (client.isTableEnabled(name)) {
+          System.out.println("    disabling table: " + utf8(name));
+          client.disableTable(name);
+        }
+        System.out.println("    deleting table: " + utf8(name)); 
+        client.deleteTable(name);
+      }
+    }
+    
+    //
+    // Create the demo table with two column families, entry: and unused:
+    //
+    ArrayList<ColumnDescriptor> columns = new ArrayList<ColumnDescriptor>();
+    ColumnDescriptor col = null;
+    col = new ColumnDescriptor();
+    col.name = bytes("entry:");
+    col.maxVersions = 10;
+    columns.add(col);
+    col = new ColumnDescriptor();
+    col.name = bytes("unused:");
+    columns.add(col);
+
+    System.out.println("creating table: " + utf8(t));
+    try {
+      client.createTable(t, columns);
+    } catch (AlreadyExists ae) {
+      System.out.println("WARN: " + ae.message);
+    }
+    
+    System.out.println("column families in " + utf8(t) + ": ");
+    Map<byte[], ColumnDescriptor> columnMap = client.getColumnDescriptors(t);
+    for (ColumnDescriptor col2 : columnMap.values()) {
+      System.out.println("  column: " + utf8(col2.name) + ", maxVer: " + Integer.toString(col2.maxVersions));
+    }
+    
+    //
+    // Test UTF-8 handling
+    //
+    byte[] invalid = { (byte) 'f', (byte) 'o', (byte) 'o', (byte) '-', (byte) 0xfc, (byte) 0xa1, (byte) 0xa1, (byte) 0xa1, (byte) 0xa1 };
+    byte[] valid = { (byte) 'f', (byte) 'o', (byte) 'o', (byte) '-', (byte) 0xE7, (byte) 0x94, (byte) 0x9F, (byte) 0xE3, (byte) 0x83, (byte) 0x93, (byte) 0xE3, (byte) 0x83, (byte) 0xBC, (byte) 0xE3, (byte) 0x83, (byte) 0xAB};
+
+    ArrayList<Mutation> mutations;
+    // non-utf8 is fine for data
+    mutations = new ArrayList<Mutation>();
+    mutations.add(new Mutation(false, bytes("entry:foo"), invalid));
+    client.mutateRow(t, bytes("foo"), mutations);
+
+    // try empty strings
+    mutations = new ArrayList<Mutation>();
+    mutations.add(new Mutation(false, bytes("entry:"), bytes("")));
+    client.mutateRow(t, bytes(""), mutations);
+
+    // this row name is valid utf8
+    mutations = new ArrayList<Mutation>();
+    mutations.add(new Mutation(false, bytes("entry:foo"), valid));
+    client.mutateRow(t, valid, mutations);
+    
+    // non-utf8 is not allowed in row names
+    try {
+      mutations = new ArrayList<Mutation>();
+      mutations.add(new Mutation(false, bytes("entry:foo"), invalid));
+      client.mutateRow(t, invalid, mutations);
+      System.out.println("FATAL: shouldn't get here");
+      System.exit(-1);
+    } catch (IOError e) {
+      System.out.println("expected error: " + e.message);
+    }
+    
+    // Run a scanner on the rows we just created
+    ArrayList<byte[]> columnNames = new ArrayList<byte[]>();
+    columnNames.add(bytes("entry:"));
+    
+    System.out.println("Starting scanner...");
+    int scanner = client.scannerOpen(t, bytes(""), columnNames);
+    try {
+      while (true) {
+        TRowResult entry = client.scannerGet(scanner);
+        printRow(entry);
+      }
+    } catch (NotFound nf) {
+      client.scannerClose(scanner);
+      System.out.println("Scanner finished");
+    }
+    
+    //
+    // Run some operations on a bunch of rows
+    //
+    for (int i = 100; i >= 0; --i) {
+      // format row keys as "00000" to "00100"
+      NumberFormat nf = NumberFormat.getInstance();
+      nf.setMinimumIntegerDigits(5);
+      nf.setGroupingUsed(false);
+      byte[] row = bytes(nf.format(i));
+      
+      mutations = new ArrayList<Mutation>();
+      mutations.add(new Mutation(false, bytes("unused:"), bytes("DELETE_ME")));
+      client.mutateRow(t, row, mutations);
+      printRow(client.getRow(t, row));
+      client.deleteAllRow(t, row);
+
+      mutations = new ArrayList<Mutation>();
+      mutations.add(new Mutation(false, bytes("entry:num"), bytes("0")));
+      mutations.add(new Mutation(false, bytes("entry:foo"), bytes("FOO")));
+      client.mutateRow(t, row, mutations);
+      printRow(client.getRow(t, row));
+
+      Mutation m = null;
+      mutations = new ArrayList<Mutation>();
+      m = new Mutation();
+      m.column = bytes("entry:foo");
+      m.isDelete = true;
+      mutations.add(m);
+      m = new Mutation();
+      m.column = bytes("entry:num");
+      m.value = bytes("-1");
+      mutations.add(m);
+      client.mutateRow(t, row, mutations);
+      printRow(client.getRow(t, row));
+      
+      mutations = new ArrayList<Mutation>();
+      mutations.add(new Mutation(false, bytes("entry:num"), bytes(Integer.toString(i))));
+      mutations.add(new Mutation(false, bytes("entry:sqr"), bytes(Integer.toString(i * i))));
+      client.mutateRow(t, row, mutations);
+      printRow(client.getRow(t, row));
+
+      // sleep to force later timestamp 
+      try {
+        Thread.sleep(50);
+      } catch (InterruptedException e) {
+        // no-op
+      }
+      
+      mutations.clear();
+      m = new Mutation();
+      m.column = bytes("entry:num");
+      m.value = bytes("-999");
+      mutations.add(m);
+      m = new Mutation();
+      m.column = bytes("entry:sqr");
+      m.isDelete = true;
+      mutations.add(m);
+      client.mutateRowTs(t, row, mutations, 1); // shouldn't override latest
+      printRow(client.getRow(t, row));
+
+      List<TCell> versions = client.getVer(t, row, bytes("entry:num"), 10);
+      printVersions(row, versions);
+      if (versions.size() != 4) {
+        System.out.println("FATAL: wrong # of versions");
+        System.exit(-1);
+      }
+      
+      try {
+        client.get(t, row, bytes("entry:foo"));
+        System.out.println("FATAL: shouldn't get here");
+        System.exit(-1);
+      } catch (NotFound nf2) {
+        // blank
+      }
+
+      System.out.println("");
+    }
+    
+    // scan all rows/columnNames
+    
+    columnNames.clear();
+    for (ColumnDescriptor col2 : client.getColumnDescriptors(t).values()) {
+      System.out.println("column with name: " + new String(col2.name));
+      System.out.println(col2.toString());
+      columnNames.add((utf8(col2.name) + ":").getBytes());
+    }
+    
+    System.out.println("Starting scanner...");
+    scanner = client.scannerOpenWithStop(t, bytes("00020"), bytes("00040"),
+        columnNames);
+    try {
+      while (true) {
+        TRowResult entry = client.scannerGet(scanner);
+        printRow(entry);
+      }
+    } catch (NotFound nf) {
+      client.scannerClose(scanner);
+      System.out.println("Scanner finished");
+    }
+    
+    transport.close();
+  }
+  
+  private final void printVersions(byte[] row, List<TCell> versions) {
+    StringBuilder rowStr = new StringBuilder();
+    for (TCell cell : versions) {
+      rowStr.append(utf8(cell.value));
+      rowStr.append("; ");
+    }
+    System.out.println("row: " + utf8(row) + ", values: " + rowStr);
+  }
+  
+  private final void printRow(TRowResult rowResult) {
+    // copy values into a TreeMap to get them in sorted order
+    
+    TreeMap<String,TCell> sorted = new TreeMap<String,TCell>();
+    for (Map.Entry<byte[], TCell> column : rowResult.columns.entrySet()) {
+      sorted.put(utf8(column.getKey()), column.getValue());
+    }
+    
+    StringBuilder rowStr = new StringBuilder();
+    for (SortedMap.Entry<String, TCell> entry : sorted.entrySet()) {
+      rowStr.append(entry.getKey());
+      rowStr.append(" => ");
+      rowStr.append(utf8(entry.getValue().value));
+      rowStr.append("; ");
+    }
+    System.out.println("row: " + utf8(rowResult.row) + ", cols: " + rowStr);
+  }
+}
diff --git a/0.90/src/examples/thrift/DemoClient.php b/0.90/src/examples/thrift/DemoClient.php
new file mode 100644
index 0000000..b5ea551
--- /dev/null
+++ b/0.90/src/examples/thrift/DemoClient.php
@@ -0,0 +1,277 @@
+<?php
+/**
+ * Copyright 2008 The Apache Software Foundation
+ * 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+# Instructions:
+# 1. Run Thrift to generate the php module HBase
+#    thrift -php ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift 
+# 2. Modify the import string below to point to {$THRIFT_HOME}/lib/php/src.
+# 3. Execute {php DemoClient.php}.  Note that you must use php5 or higher.
+# 4. See {$THRIFT_HOME}/lib/php/README for additional help.
+
+# Change this to match your thrift root
+$GLOBALS['THRIFT_ROOT'] = '/Users/irubin/Thrift/thrift-20080411p1/lib/php/src';
+
+require_once( $GLOBALS['THRIFT_ROOT'].'/Thrift.php' );
+
+require_once( $GLOBALS['THRIFT_ROOT'].'/transport/TSocket.php' );
+require_once( $GLOBALS['THRIFT_ROOT'].'/transport/TBufferedTransport.php' );
+require_once( $GLOBALS['THRIFT_ROOT'].'/protocol/TBinaryProtocol.php' );
+
+# According to the thrift documentation, generated PHP thrift libraries should
+# reside under the THRIFT_ROOT/packages directory.  If these generated libraries
+# are not present in this directory, move them there from gen-php/.
+require_once( $GLOBALS['THRIFT_ROOT'].'/packages/Hbase/Hbase.php' );
+
+function printRow( $rowresult ) {
+  echo( "row: {$rowresult->row}, cols: \n" );
+  $values = $rowresult->columns;
+  asort( $values );
+  foreach ( $values as $k=>$v ) {
+    echo( "  {$k} => {$v->value}\n" );
+  }
+}
+
+$socket = new TSocket( 'localhost', 9090 );
+$socket->setSendTimeout( 10000 ); // Ten seconds (too long for production, but this is just a demo ;)
+$socket->setRecvTimeout( 20000 ); // Twenty seconds
+$transport = new TBufferedTransport( $socket );
+$protocol = new TBinaryProtocol( $transport );
+$client = new HbaseClient( $protocol );
+
+$transport->open();
+
+$t = 'demo_table';
+
+?><html>
+<head>
+<title>DemoClient</title>
+</head>
+<body>
+<pre>
+<?php
+
+#
+# Scan all tables, look for the demo table and delete it.
+#
+echo( "scanning tables...\n" );
+$tables = $client->getTableNames();
+sort( $tables );
+foreach ( $tables as $name ) {
+  echo( "  found: {$name}\n" );
+  if ( $name == $t ) {
+    if ($client->isTableEnabled( $name )) {
+      echo( "    disabling table: {$name}\n");
+      $client->disableTable( $name );
+    }
+    echo( "    deleting table: {$name}\n" );
+    $client->deleteTable( $name );
+  }
+}
+
+#
+# Create the demo table with two column families, entry: and unused:
+#
+$columns = array(
+  new ColumnDescriptor( array(
+    'name' => 'entry:',
+    'maxVersions' => 10
+  ) ),
+  new ColumnDescriptor( array(
+    'name' => 'unused:'
+  ) )
+);
+
+echo( "creating table: {$t}\n" );
+try {
+  $client->createTable( $t, $columns );
+} catch ( AlreadyExists $ae ) {
+  echo( "WARN: {$ae->message}\n" );
+}
+
+echo( "column families in {$t}:\n" );
+$descriptors = $client->getColumnDescriptors( $t );
+asort( $descriptors );
+foreach ( $descriptors as $col ) {
+  echo( "  column: {$col->name}, maxVer: {$col->maxVersions}\n" );
+}
+
+#
+# Test UTF-8 handling
+#
+$invalid = "foo-\xfc\xa1\xa1\xa1\xa1\xa1";
+$valid = "foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB";
+
+# non-utf8 is fine for data
+$mutations = array(
+  new Mutation( array(
+    'column' => 'entry:foo',
+    'value' => $invalid
+  ) ),
+);
+$client->mutateRow( $t, "foo", $mutations );
+
+# try empty strings
+$mutations = array(
+  new Mutation( array(
+    'column' => 'entry:',
+    'value' => ""
+  ) ),
+);
+$client->mutateRow( $t, "", $mutations );
+
+# this row name is valid utf8
+$mutations = array(
+  new Mutation( array(
+    'column' => 'entry:foo',
+    'value' => $valid
+  ) ),
+);
+$client->mutateRow( $t, $valid, $mutations );
+
+# non-utf8 is not allowed in row names
+try {
+  $mutations = array(
+    new Mutation( array(
+      'column' => 'entry:foo',
+      'value' => $invalid
+    ) ),
+  );
+  $client->mutateRow( $t, $invalid, $mutations );
+  throw new Exception( "shouldn't get here!" );
+} catch ( IOError $e ) {
+  echo( "expected error: {$e->message}\n" );
+}
+
+# Run a scanner on the rows we just created
+echo( "Starting scanner...\n" );
+$scanner = $client->scannerOpen( $t, "", array( "entry:" ) );
+try {
+  while (true) printRow( $client->scannerGet( $scanner ) );
+} catch ( NotFound $nf ) {
+  $client->scannerClose( $scanner );
+  echo( "Scanner finished\n" );
+}
+
+#
+# Run some operations on a bunch of rows.
+#
+for ($e=100; $e>=0; $e--) {
+
+  # format row keys as "00000" to "00100"
+  $row = str_pad( $e, 5, '0', STR_PAD_LEFT );
+
+  $mutations = array(
+    new Mutation( array(
+      'column' => 'unused:',
+      'value' => "DELETE_ME"
+    ) ),
+  );
+  $client->mutateRow( $t, $row, $mutations);
+  printRow( $client->getRow( $t, $row ));
+  $client->deleteAllRow( $t, $row );
+
+  $mutations = array(
+    new Mutation( array(
+      'column' => 'entry:num',
+      'value' => "0"
+    ) ),
+    new Mutation( array(
+      'column' => 'entry:foo',
+      'value' => "FOO"
+    ) ),
+  );
+  $client->mutateRow( $t, $row, $mutations );
+  printRow( $client->getRow( $t, $row ));
+
+  $mutations = array(
+    new Mutation( array(
+      'column' => 'entry:foo',
+      'isDelete' => 1
+    ) ),
+    new Mutation( array(
+      'column' => 'entry:num',
+      'value' => '-1'
+    ) ),
+  );
+  $client->mutateRow( $t, $row, $mutations );
+  printRow( $client->getRow( $t, $row ) );
+
+  $mutations = array(
+    new Mutation( array(
+      'column' => "entry:num",
+      'value' => $e
+    ) ),
+    new Mutation( array(
+      'column' => "entry:sqr",
+      'value' => $e * $e
+    ) ),
+  );
+  $client->mutateRow( $t, $row, $mutations );
+  printRow( $client->getRow( $t, $row ));
+  
+  $mutations = array(
+    new Mutation( array(
+      'column' => 'entry:num',
+      'value' => '-999'
+    ) ),
+    new Mutation( array(
+      'column' => 'entry:sqr',
+      'isDelete' => 1
+    ) ),
+  );
+  $client->mutateRowTs( $t, $row, $mutations, 1 ); # shouldn't override latest
+  printRow( $client->getRow( $t, $row ) );
+
+  $versions = $client->getVer( $t, $row, "entry:num", 10 );
+  echo( "row: {$row}, values: \n" );
+  foreach ( $versions as $v ) echo( "  {$v->value};\n" );
+  
+  try {
+    $client->get( $t, $row, "entry:foo");
+    throw new Exception ( "shouldn't get here! " );
+  } catch ( NotFound $nf ) {
+    # blank
+  }
+
+}
+
+$columns = array();
+foreach ( $client->getColumnDescriptors($t) as $col=>$desc ) {
+  echo("column with name: {$desc->name}\n");
+  $columns[] = $desc->name.":";
+}
+
+echo( "Starting scanner...\n" );
+$scanner = $client->scannerOpenWithStop( $t, "00020", "00040", $columns );
+try {
+  while (true) printRow( $client->scannerGet( $scanner ) );
+} catch ( NotFound $nf ) {
+  $client->scannerClose( $scanner );
+  echo( "Scanner finished\n" );
+}
+  
+$transport->close();
+
+?>
+</pre>
+</body>
+</html>
+
diff --git a/0.90/src/examples/thrift/DemoClient.py b/0.90/src/examples/thrift/DemoClient.py
new file mode 100755
index 0000000..e1b1f8a
--- /dev/null
+++ b/0.90/src/examples/thrift/DemoClient.py
@@ -0,0 +1,202 @@
+#!/usr/bin/python
+'''Copyright 2008 The Apache Software Foundation
+ 
+  Licensed to the Apache Software Foundation (ASF) under one
+  or more contributor license agreements.  See the NOTICE file
+  distributed with this work for additional information
+  regarding copyright ownership.  The ASF licenses this file
+  to you under the Apache License, Version 2.0 (the
+  "License"); you may not use this file except in compliance
+  with the License.  You may obtain a copy of the License at
+ 
+     http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+''' 
+# Instructions:
+# 1. Run Thrift to generate the python module HBase
+#    thrift --gen py ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift 
+# 2. Create a directory of your choosing that contains:
+#     a. This file (DemoClient.py).
+#     b. The directory gen-py/hbase (generated by instruction step 1).
+#     c. The directory {$THRIFT_HOME}/lib/py/build/lib.{YOUR_SYSTEM}/thrift.
+#    Or, modify the import statements below such that this file can access the 
+#    directories from steps 2b and 2c.
+# 3. Execute {python DemoClient.py}.
+
+import sys
+import time
+
+from thrift import Thrift
+from thrift.transport import TSocket, TTransport
+from thrift.protocol import TBinaryProtocol
+from hbase import ttypes
+from hbase.Hbase import Client, ColumnDescriptor, Mutation
+
+def printVersions(row, versions):
+  print "row: " + row + ", values: ",
+  for cell in versions:
+    print cell.value + "; ",
+  print
+
+def printRow(entry):
+  print "row: " + entry.row + ", cols:",
+  for k in sorted(entry.columns):
+    print k + " => " + entry.columns[k].value,
+  print
+
+# Make socket
+transport = TSocket.TSocket('localhost', 9090)
+
+# Buffering is critical. Raw sockets are very slow
+transport = TTransport.TBufferedTransport(transport)
+
+# Wrap in a protocol
+protocol = TBinaryProtocol.TBinaryProtocol(transport)
+
+# Create a client to use the protocol encoder
+client = Client(protocol)
+
+# Connect!
+transport.open()
+
+t = "demo_table"
+
+#
+# Scan all tables, look for the demo table and delete it.
+#
+print "scanning tables..."
+for table in client.getTableNames():
+  print "  found: %s" %(table)
+  if table == t:
+    if client.isTableEnabled(table):
+      print "    disabling table: %s"  %(t)
+      client.disableTable(table)
+    print "    deleting table: %s"  %(t)
+    client.deleteTable(table)
+
+columns = []
+col = ColumnDescriptor()
+col.name = 'entry:'
+col.maxVersions = 10
+columns.append(col)
+col = ColumnDescriptor()
+col.name = 'unused:'
+columns.append(col)
+
+try:
+  print "creating table: %s" %(t)
+  client.createTable(t, columns)
+except ttypes.AlreadyExists, ae:
+  print "WARN: " + ae.message
+
+cols = client.getColumnDescriptors(t)
+print "column families in %s" %(t)
+for col_name in cols.keys():
+  col = cols[col_name]
+  print "  column: %s, maxVer: %d" % (col.name, col.maxVersions)
+#
+# Test UTF-8 handling
+#
+invalid = "foo-\xfc\xa1\xa1\xa1\xa1\xa1"
+valid = "foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB";
+
+# non-utf8 is fine for data
+mutations = [Mutation(column="entry:foo",value=invalid)]
+print str(mutations)
+client.mutateRow(t, "foo", mutations)
+
+# try empty strings
+mutations = [Mutation(column="entry:", value="")]
+client.mutateRow(t, "", mutations)
+
+# this row name is valid utf8
+mutations = [Mutation(column="entry:foo", value=valid)]
+client.mutateRow(t, valid, mutations)
+
+# non-utf8 is not allowed in row names
+try:
+  mutations = [Mutation(column="entry:foo", value=invalid)]
+  client.mutateRow(t, invalid, mutations)
+except ttypes.IOError, e:
+  print 'expected exception: %s' %(e.message)
+
+# Run a scanner on the rows we just created
+print "Starting scanner..."
+scanner = client.scannerOpen(t, "", ["entry:"])
+
+r = client.scannerGet(scanner)
+while r:
+  printRow(r[0])
+  r = client.scannerGet(scanner)
+print "Scanner finished"
+
+#
+# Run some operations on a bunch of rows.
+#
+for e in range(100, 0, -1):
+  # format row keys as "00000" to "00100"
+  row = "%0.5d" % (e)
+
+  mutations = [Mutation(column="unused:", value="DELETE_ME")]
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row)[0])
+  client.deleteAllRow(t, row)
+
+  mutations = [Mutation(column="entry:num", value="0"),
+               Mutation(column="entry:foo", value="FOO")]
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row)[0]);
+
+  mutations = [Mutation(column="entry:foo",isDelete=True),
+               Mutation(column="entry:num",value="-1")]
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row)[0])
+
+  mutations = [Mutation(column="entry:num", value=str(e)),
+               Mutation(column="entry:sqr", value=str(e*e))]
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row)[0])
+
+  time.sleep(0.05)
+
+  mutations = [Mutation(column="entry:num",value="-999"),
+               Mutation(column="entry:sqr",isDelete=True)]
+  client.mutateRowTs(t, row, mutations, 1) # shouldn't override latest
+  printRow(client.getRow(t, row)[0])
+
+  versions = client.getVer(t, row, "entry:num", 10)
+  printVersions(row, versions)
+  if len(versions) != 4:
+    print("FATAL: wrong # of versions")
+    sys.exit(-1)
+
+  r = client.get(t, row, "entry:foo")
+  if not r:
+    print "yup, we didn't find entry:foo"
+  # just to be explicit: we get lists back; if the list is empty, there was no matching row.
+  if len(r) > 0:
+    raise Exception("shouldn't get here!")
+
+columnNames = []
+for (col, desc) in client.getColumnDescriptors(t).items():
+  print "column with name: "+desc.name
+  print desc
+  columnNames.append(desc.name+":")
+
+print "Starting scanner..."
+scanner = client.scannerOpenWithStop(t, "00020", "00040", columnNames)
+
+r = client.scannerGet(scanner)
+while r:
+  printRow(r[0])
+  r = client.scannerGet(scanner)
+
+client.scannerClose(scanner)
+print "Scanner finished"
+
+transport.close()
diff --git a/0.90/src/examples/thrift/DemoClient.rb b/0.90/src/examples/thrift/DemoClient.rb
new file mode 100644
index 0000000..84f8818
--- /dev/null
+++ b/0.90/src/examples/thrift/DemoClient.rb
@@ -0,0 +1,245 @@
+#!/usr/bin/ruby
+
+# Copyright 2008 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Instructions: 
+# 1. Run Thrift to generate the ruby module HBase
+#    thrift --gen rb ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift 
+# 2. Modify the import string below to point to {$THRIFT_HOME}/lib/rb/lib.
+# 3. Execute {ruby DemoClient.rb}.
+
+# You will need to modify this import string:
+$:.push('~/Thrift/thrift-20080411p1/lib/rb/lib')
+$:.push('./gen-rb')
+
+require 'thrift/transport/tsocket'
+require 'thrift/protocol/tbinaryprotocol'
+
+require 'Hbase'
+
+def printRow(rowresult)
+  print "row: #{rowresult.row}, cols: "
+  rowresult.columns.sort.each do |k,v|
+    print "#{k} => #{v.value}; "
+  end
+  puts ""
+end
+
+transport = TBufferedTransport.new(TSocket.new("localhost", 9090))
+protocol = TBinaryProtocol.new(transport)
+client = Apache::Hadoop::Hbase::Thrift::Hbase::Client.new(protocol)
+
+transport.open()
+
+t = "demo_table"
+
+#
+# Scan all tables, look for the demo table and delete it.
+#
+puts "scanning tables..."
+client.getTableNames().sort.each do |name|
+  puts "  found: #{name}"
+  if (name == t)
+    if (client.isTableEnabled(name))
+      puts "    disabling table: #{name}"
+      client.disableTable(name)
+    end
+    puts "    deleting table: #{name}" 
+    client.deleteTable(name)
+  end
+end
+
+#
+# Create the demo table with two column families, entry: and unused:
+#
+columns = []
+col = Apache::Hadoop::Hbase::Thrift::ColumnDescriptor.new
+col.name = "entry:"
+col.maxVersions = 10
+columns << col;
+col = Apache::Hadoop::Hbase::Thrift::ColumnDescriptor.new
+col.name = "unused:"
+columns << col;
+
+puts "creating table: #{t}"
+begin
+  client.createTable(t, columns)
+rescue Apache::Hadoop::Hbase::Thrift::AlreadyExists => ae
+  puts "WARN: #{ae.message}"
+end
+
+puts "column families in #{t}: "
+client.getColumnDescriptors(t).sort.each do |key, col|
+  puts "  column: #{col.name}, maxVer: #{col.maxVersions}"
+end
+
+#
+# Test UTF-8 handling
+#
+invalid = "foo-\xfc\xa1\xa1\xa1\xa1\xa1"
+valid = "foo-\xE7\x94\x9F\xE3\x83\x93\xE3\x83\xBC\xE3\x83\xAB";
+
+# non-utf8 is fine for data
+mutations = []
+m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+m.column = "entry:foo"
+m.value = invalid
+mutations << m
+client.mutateRow(t, "foo", mutations)
+
+# try empty strings
+mutations = []
+m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+m.column = "entry:"
+m.value = ""
+mutations << m
+client.mutateRow(t, "", mutations)
+
+# this row name is valid utf8
+mutations = []
+m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+m.column = "entry:foo"
+m.value = valid
+mutations << m
+client.mutateRow(t, valid, mutations)
+
+# non-utf8 is not allowed in row names
+begin
+  mutations = []
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:foo"
+  m.value = invalid
+  mutations << m
+  client.mutateRow(t, invalid, mutations)
+  raise "shouldn't get here!"
+rescue Apache::Hadoop::Hbase::Thrift::IOError => e
+  puts "expected error: #{e.message}"
+end
+
+# Run a scanner on the rows we just created
+puts "Starting scanner..."
+scanner = client.scannerOpen(t, "", ["entry:"])
+begin
+  while (true) 
+    printRow(client.scannerGet(scanner))
+  end
+rescue Apache::Hadoop::Hbase::Thrift::NotFound => nf
+  client.scannerClose(scanner)
+  puts "Scanner finished"
+end
+
+#
+# Run some operations on a bunch of rows.
+#
+(0..100).to_a.reverse.each do |e|
+  # format row keys as "00000" to "00100"
+  row = format("%0.5d", e)
+
+  mutations = []
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "unused:"
+  m.value = "DELETE_ME"
+  mutations << m
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row))
+  client.deleteAllRow(t, row)
+
+  mutations = []
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:num"
+  m.value = "0"
+  mutations << m
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:foo"
+  m.value = "FOO"
+  mutations << m
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row))
+
+  mutations = []
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:foo"
+  m.isDelete = 1
+  mutations << m
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:num"
+  m.value = "-1"
+  mutations << m
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row));
+
+  mutations = []
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:num"
+  m.value = e.to_s
+  mutations << m
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:sqr"
+  m.value = (e*e).to_s
+  mutations << m
+  client.mutateRow(t, row, mutations)
+  printRow(client.getRow(t, row))
+  
+  mutations = []
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:num"
+  m.value = "-999"
+  mutations << m
+  m = Apache::Hadoop::Hbase::Thrift::Mutation.new
+  m.column = "entry:sqr"
+  m.isDelete = 1
+  mutations << m
+  client.mutateRowTs(t, row, mutations, 1) # shouldn't override latest
+  printRow(client.getRow(t, row));
+
+  versions = client.getVer(t, row, "entry:num", 10)
+  print "row: #{row}, values: "
+  versions.each do |v|
+    print "#{v.value}; "
+  end
+  puts ""    
+  
+  begin
+    client.get(t, row, "entry:foo")
+    raise "shouldn't get here!"
+  rescue Apache::Hadoop::Hbase::Thrift::NotFound => nf
+    # blank
+  end
+
+  puts ""
+end 
+
+columns = []
+client.getColumnDescriptors(t).each do |col, desc|
+  puts "column with name: #{desc.name}"
+  columns << desc.name + ":"
+end
+
+puts "Starting scanner..."
+scanner = client.scannerOpenWithStop(t, "00020", "00040", columns)
+begin
+  while (true) 
+    printRow(client.scannerGet(scanner))
+  end
+rescue Apache::Hadoop::Hbase::Thrift::NotFound => nf
+  client.scannerClose(scanner)
+  puts "Scanner finished"
+end
+  
+transport.close()
diff --git a/0.90/src/examples/thrift/Makefile b/0.90/src/examples/thrift/Makefile
new file mode 100644
index 0000000..691a1e9
--- /dev/null
+++ b/0.90/src/examples/thrift/Makefile
@@ -0,0 +1,35 @@
+# Copyright 2008 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Makefile for C++ Hbase Thrift DemoClient
+# NOTE: run 'thrift -cpp Hbase.thrift' first
+
+THRIFT_DIR = /usr/local/include/thrift
+LIB_DIR = /usr/local/lib
+
+GEN_SRC = ./gen-cpp/Hbase.cpp \
+	  ./gen-cpp/Hbase_types.cpp \
+	  ./gen-cpp/Hbase_constants.cpp
+
+default: DemoClient
+
+DemoClient: DemoClient.cpp
+	g++ -o DemoClient -I${THRIFT_DIR}  -I./gen-cpp -L${LIB_DIR} -lthrift DemoClient.cpp ${GEN_SRC}
+
+clean:
+	rm -rf DemoClient
diff --git a/0.90/src/examples/thrift/README.txt b/0.90/src/examples/thrift/README.txt
new file mode 100644
index 0000000..c742f8d
--- /dev/null
+++ b/0.90/src/examples/thrift/README.txt
@@ -0,0 +1,16 @@
+HBase Thrift Client Examples
+============================
+
+Included in this directory are sample clients of the HBase ThriftServer.  They
+all perform the same actions but are implemented in C++, Java, Ruby, PHP, and
+Python.
+
+To run/compile these clients, you will first need to install the thrift package
+(from http://developers.facebook.com/thrift/) and then run thrift to generate
+the language files:
+
+thrift --gen cpp --gen java --gen rb --gen py --gen php \
+    ../../../src/java/org/apache/hadoop/hbase/thrift/Hbase.thrift
+
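+For example, once the C++ files have been generated, the bundled Makefile in
+this directory builds the C++ client via its default target (the include and
+library paths in the Makefile may need adjusting for your local Thrift
+install):
+
+make
+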
+See the individual DemoClient test files for more specific instructions on 
+running each test.
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/Abortable.java b/0.90/src/main/java/org/apache/hadoop/hbase/Abortable.java
new file mode 100644
index 0000000..b4fba88
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/Abortable.java
@@ -0,0 +1,37 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Interface to support the aborting of a given server or client.
+ * <p>
+ * This is used primarily for ZooKeeper usage when we could get an unexpected
+ * and fatal exception, requiring an abort.
+ * <p>
+ * Implemented by the Master, RegionServer, and TableServers (client).
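+ * <p>
+ * A minimal sketch of a possible implementation (the class name and body are
+ * hypothetical, for illustration only):
+ * <pre>
+ *   class LoggingAbortable implements Abortable {
+ *     public void abort(String why, Throwable e) {
+ *       System.err.println("Aborting because: " + why);
+ *       if (e != null) e.printStackTrace();
+ *     }
+ *   }
+ * </pre>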
+ */
+public interface Abortable {
+  /**
+   * Abort the server or client.
+   * @param why Why we're aborting.
+   * @param e Throwable that caused abort. Can be null.
+   */
+  public void abort(String why, Throwable e);
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/Chore.java b/0.90/src/main/java/org/apache/hadoop/hbase/Chore.java
new file mode 100644
index 0000000..df1514a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/Chore.java
@@ -0,0 +1,112 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.util.Sleeper;
+
+/**
+ * Chore is a task performed periodically in hbase.  The chore is run in its
+ * own thread. This abstract base class provides the while loop and sleeping
+ * facility.  If an unhandled exception occurs, the thread's exit is logged.
+ * Implementers just need to check whether there is work to be done and, if
+ * so, do it.  It is the base of most of the chore threads in hbase.
+ *
+ * <p>Don't subclass Chore if the task relies on being woken up for something to
+ * do, such as an entry being added to a queue, etc.
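+ *
+ * <p>A minimal sketch of a subclass (the class name and body are hypothetical,
+ * for illustration only):
+ * <pre>
+ *   class CleanupChore extends Chore {
+ *     CleanupChore(int period, Stoppable stopper) {
+ *       super("CleanupChore", period, stopper);
+ *     }
+ *     protected void chore() {
+ *       // look for work and, if any is found, do it; called once per period
+ *     }
+ *   }
+ * </pre>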
+ */
+public abstract class Chore extends Thread {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  private final Sleeper sleeper;
+  protected final Stoppable stopper;
+
+  /**
+   * @param p Period at which we should run.  Will be adjusted appropriately
+   * should we find work and it takes time to complete.
+   * @param stopper When {@link Stoppable#isStopped()} is true, this thread will
+   * cleanup and exit cleanly.
+   */
+  public Chore(String name, final int p, final Stoppable stopper) {
+    super(name);
+    this.sleeper = new Sleeper(p, stopper);
+    this.stopper = stopper;
+  }
+
+  /**
+   * @see java.lang.Thread#run()
+   */
+  @Override
+  public void run() {
+    try {
+      boolean initialChoreComplete = false;
+      while (!this.stopper.isStopped()) {
+        long startTime = System.currentTimeMillis();
+        try {
+          if (!initialChoreComplete) {
+            initialChoreComplete = initialChore();
+          } else {
+            chore();
+          }
+        } catch (Exception e) {
+          LOG.error("Caught exception", e);
+          if (this.stopper.isStopped()) {
+            continue;
+          }
+        }
+        this.sleeper.sleep(startTime);
+      }
+    } catch (Throwable t) {
+      LOG.fatal(getName() + " error", t);
+    } finally {
+      LOG.info(getName() + " exiting");
+    }
+  }
+
+  /**
+   * If the thread is currently sleeping, trigger the chore to run immediately.
+   * If it is in the middle of its operation, it will begin another operation
+   * immediately after finishing this one.
+   */
+  public void triggerNow() {
+    this.sleeper.skipSleepCycle();
+  }
+
+  /**
+   * Override to run a task before we start looping.
+   * @return true if initial chore was successful
+   */
+  protected boolean initialChore() {
+    // Default does nothing.
+    return true;
+  }
+
+  /**
+   * Look for chores.  If any found, do them else just return.
+   */
+  protected abstract void chore();
+
+  /**
+   * Sleep for period.
+   */
+  protected void sleep() {
+    this.sleeper.sleep();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java b/0.90/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java
new file mode 100644
index 0000000..5c51e4b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ClockOutOfSyncException.java
@@ -0,0 +1,33 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * This exception is thrown by the master when a region server's clock skew is
+ * too high.
+ */
+@SuppressWarnings("serial")
+public class ClockOutOfSyncException extends IOException {
+  public ClockOutOfSyncException(String message) {
+    super(message);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java b/0.90/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
new file mode 100644
index 0000000..789cad4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
@@ -0,0 +1,243 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.master.AssignmentManager.RegionState;
+import org.apache.hadoop.io.VersionedWritable;
+
+/**
+ * Status information on the HBase cluster.
+ * <p>
+ * <tt>ClusterStatus</tt> provides clients with information such as:
+ * <ul>
+ * <li>The count and names of region servers in the cluster.</li>
+ * <li>The count and names of dead region servers in the cluster.</li>
+ * <li>The average cluster load.</li>
+ * <li>The number of regions deployed on the cluster.</li>
+ * <li>The number of requests since last report.</li>
+ * <li>Detailed region server loading and resource usage information,
+ *  per server and per region.</li>
+ *  <li>Regions in transition at master</li>
+ * </ul>
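+ *
+ * <p>A brief sketch of read-only use, assuming an <code>HBaseAdmin</code>
+ * instance named <code>admin</code> is available:
+ * <pre>
+ *   ClusterStatus status = admin.getClusterStatus();
+ *   System.out.println("servers=" + status.getServers() +
+ *     ", regions=" + status.getRegionsCount() +
+ *     ", average load=" + status.getAverageLoad());
+ * </pre>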
+ */
+public class ClusterStatus extends VersionedWritable {
+  private static final byte VERSION = 0;
+
+  private String hbaseVersion;
+  private Collection<HServerInfo> liveServerInfo;
+  private Collection<String> deadServers;
+  private Map<String, RegionState> intransition;
+
+  /**
+   * Constructor, for Writable
+   */
+  public ClusterStatus() {
+    super();
+  }
+
+  /**
+   * @return the names of region servers on the dead list
+   */
+  public Collection<String> getDeadServerNames() {
+    return Collections.unmodifiableCollection(deadServers);
+  }
+
+  /**
+   * @return the number of region servers in the cluster
+   */
+  public int getServers() {
+    return liveServerInfo.size();
+  }
+
+  /**
+   * @return the number of dead region servers in the cluster
+   */
+  public int getDeadServers() {
+    return deadServers.size();
+  }
+
+  /**
+   * @return the average cluster load
+   */
+  public double getAverageLoad() {
+    int load = 0;
+    for (HServerInfo server: liveServerInfo) {
+      load += server.getLoad().getLoad();
+    }
+    return (double)load / (double)liveServerInfo.size();
+  }
+
+  /**
+   * @return the number of regions deployed on the cluster
+   */
+  public int getRegionsCount() {
+    int count = 0;
+    for (HServerInfo server: liveServerInfo) {
+      count += server.getLoad().getNumberOfRegions();
+    }
+    return count;
+  }
+
+  /**
+   * @return the number of requests since last report
+   */
+  public int getRequestsCount() {
+    int count = 0;
+    for (HServerInfo server: liveServerInfo) {
+      count += server.getLoad().getNumberOfRequests();
+    }
+    return count;
+  }
+
+  /**
+   * @return the HBase version string as reported by the HMaster
+   */
+  public String getHBaseVersion() {
+    return hbaseVersion;
+  }
+
+  /**
+   * @param version the HBase version string
+   */
+  public void setHBaseVersion(String version) {
+    hbaseVersion = version;
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (!(o instanceof ClusterStatus)) {
+      return false;
+    }
+    return (getVersion() == ((ClusterStatus)o).getVersion()) &&
+      getHBaseVersion().equals(((ClusterStatus)o).getHBaseVersion()) &&
+      liveServerInfo.equals(((ClusterStatus)o).liveServerInfo) &&
+      deadServers.equals(((ClusterStatus)o).deadServers);
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  public int hashCode() {
+    return VERSION + hbaseVersion.hashCode() + liveServerInfo.hashCode() +
+      deadServers.hashCode();
+  }
+
+  /** @return the object version number */
+  public byte getVersion() {
+    return VERSION;
+  }
+
+  //
+  // Getters
+  //
+
+  /**
+   * Returns detailed region server information: A list of
+   * {@link HServerInfo}, containing server load and resource usage
+   * statistics as {@link HServerLoad}, containing per-region
+   * statistics as {@link HServerLoad.RegionLoad}.
+   * @return region server information
+   */
+  public Collection<HServerInfo> getServerInfo() {
+    return Collections.unmodifiableCollection(liveServerInfo);
+  }
+
+  //
+  // Setters
+  //
+
+  public void setServerInfo(Collection<HServerInfo> serverInfo) {
+    this.liveServerInfo = serverInfo;
+  }
+
+  public void setDeadServers(Collection<String> deadServers) {
+    this.deadServers = deadServers;
+  }
+
+  public Map<String, RegionState> getRegionsInTransition() {
+    return this.intransition;
+  }
+
+  public void setRegionsInTransition(final Map<String, RegionState> m) {
+    this.intransition = m;
+  }
+
+  //
+  // Writable
+  //
+
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+    out.writeUTF(hbaseVersion);
+    out.writeInt(liveServerInfo.size());
+    for (HServerInfo server: liveServerInfo) {
+      server.write(out);
+    }
+    out.writeInt(deadServers.size());
+    for (String server: deadServers) {
+      out.writeUTF(server);
+    }
+    out.writeInt(this.intransition.size());
+    for (Map.Entry<String, RegionState> e: this.intransition.entrySet()) {
+      out.writeUTF(e.getKey());
+      e.getValue().write(out);
+    }
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+    hbaseVersion = in.readUTF();
+    int count = in.readInt();
+    liveServerInfo = new ArrayList<HServerInfo>(count);
+    for (int i = 0; i < count; i++) {
+      HServerInfo info = new HServerInfo();
+      info.readFields(in);
+      liveServerInfo.add(info);
+    }
+    count = in.readInt();
+    deadServers = new ArrayList<String>(count);
+    for (int i = 0; i < count; i++) {
+      deadServers.add(in.readUTF());
+    }
+    count = in.readInt();
+    this.intransition = new TreeMap<String, RegionState>();
+    for (int i = 0; i < count; i++) {
+      String key = in.readUTF();
+      RegionState regionState = new RegionState();
+      regionState.readFields(in);
+      this.intransition.put(key, regionState);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java b/0.90/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
new file mode 100644
index 0000000..98c5b9b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java
@@ -0,0 +1,53 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Subclass if exception is not meant to be retried: e.g.
+ * {@link UnknownScannerException}
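+ *
+ * <p>A sketch of a subclass (the class name is hypothetical, for illustration
+ * only):
+ * <pre>
+ *   public class CorruptRequestException extends DoNotRetryIOException {
+ *     public CorruptRequestException(String msg) {
+ *       super(msg);
+ *     }
+ *   }
+ * </pre>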
+ */
+public class DoNotRetryIOException extends IOException {
+
+  private static final long serialVersionUID = 1197446454511704139L;
+
+  /**
+   * default constructor
+   */
+  public DoNotRetryIOException() {
+    super();
+  }
+
+  /**
+   * @param message
+   */
+  public DoNotRetryIOException(String message) {
+    super(message);
+  }
+
+  /**
+   * @param message
+   * @param cause
+   */
+  public DoNotRetryIOException(String message, Throwable cause) {
+    super(message, cause);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java b/0.90/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
new file mode 100644
index 0000000..9b1d021
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+import java.io.IOException;
+
+
+/**
+ * Thrown during flush if it is possible that snapshot content was not properly
+ * persisted into store files.  The response should include replay of hlog
+ * content.
+ */
+public class DroppedSnapshotException extends IOException {
+
+  private static final long serialVersionUID = -5463156580831677374L;
+
+  /**
+   * @param msg
+   */
+  public DroppedSnapshotException(String msg) {
+    super(msg);
+  }
+
+  /**
+   * default constructor
+   */
+  public DroppedSnapshotException() {
+    super();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java b/0.90/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
new file mode 100644
index 0000000..13900c3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.util.Map.Entry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Adds HBase configuration files to a Configuration
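+ *
+ * <p>A brief sketch of typical use via the static factory (the configuration
+ * key shown is just an example):
+ * <pre>
+ *   Configuration conf = HBaseConfiguration.create();
+ *   conf.set("hbase.zookeeper.quorum", "zkhost1,zkhost2");
+ * </pre>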
+ */
+public class HBaseConfiguration extends Configuration {
+
+  private static final Log LOG = LogFactory.getLog(HBaseConfiguration.class);
+
+  /**
+   * Instantiating HBaseConfiguration() is deprecated. Please use
+   * HBaseConfiguration#create() to construct a plain Configuration
+   */
+  @Deprecated
+  public HBaseConfiguration() {
+    //TODO:replace with private constructor, HBaseConfiguration should not extend Configuration
+    super();
+    addHbaseResources(this);
+    LOG.warn("instantiating HBaseConfiguration() is deprecated. Please use" +
+    		" HBaseConfiguration#create() to construct a plain Configuration");
+  }
+
+  /**
+   * Instantiating HBaseConfiguration() is deprecated. Please use
+   * HBaseConfiguration#create(conf) to construct a plain Configuration
+   */
+  @Deprecated
+  public HBaseConfiguration(final Configuration c) {
+    //TODO:replace with private constructor
+    this();
+    for (Entry<String, String>e: c) {
+      set(e.getKey(), e.getValue());
+    }
+  }
+
+  public static Configuration addHbaseResources(Configuration conf) {
+    conf.addResource("hbase-default.xml");
+    conf.addResource("hbase-site.xml");
+    return conf;
+  }
+
+  /**
+   * Creates a Configuration with HBase resources
+   * @return a Configuration with HBase resources
+   */
+  public static Configuration create() {
+    Configuration conf = new Configuration();
+    return addHbaseResources(conf);
+  }
+
+  /**
+   * Creates a clone of passed configuration.
+   * @param that Configuration to clone.
+   * @return a Configuration created with the hbase-*.xml files plus
+   * the given configuration.
+   */
+  public static Configuration create(final Configuration that) {
+    Configuration conf = create();
+    for (Entry<String, String>e: that) {
+      conf.set(e.getKey(), e.getValue());
+    }
+    return conf;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
new file mode 100644
index 0000000..8808c06
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
@@ -0,0 +1,696 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFile.BloomType;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * An HColumnDescriptor contains information about a column family such as the
+ * number of versions, compression settings, etc.
+ *
+ * It is used as input when creating a table or adding a column. Once set, the
+ * parameters that specify a column cannot be changed without deleting the
+ * column and recreating it. If there is data stored in the column, it will be
+ * deleted when the column is deleted.
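+ *
+ * <p>A brief sketch of typical use when defining a table (the table and family
+ * names are examples only):
+ * <pre>
+ *   HTableDescriptor table = new HTableDescriptor("demo_table");
+ *   HColumnDescriptor family = new HColumnDescriptor("entry");
+ *   family.setMaxVersions(10);
+ *   table.addFamily(family);
+ * </pre>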
+ */
+public class HColumnDescriptor implements WritableComparable<HColumnDescriptor> {
+  // For future backward compatibility
+
+  // Version 3 was when column names become byte arrays and when we picked up
+  // Time-to-live feature.  Version 4 was when we moved to byte arrays, HBASE-82.
+  // Version 5 was when bloom filter descriptors were removed.
+  // Version 6 adds metadata as a map where keys and values are byte[].
+  // Version 7 -- add new compression and hfile blocksize to HColumnDescriptor (HBASE-1217)
+  // Version 8 -- reintroduction of bloom filters, changed from boolean to enum
+  private static final byte COLUMN_DESCRIPTOR_VERSION = (byte)8;
+
+  /**
+   * The type of compression.
+   * @see org.apache.hadoop.io.SequenceFile.Writer
+   * @deprecated Compression now means which compression library
+   * rather than 'what' to compress.
+   */
+  @Deprecated
+  public static enum CompressionType {
+    /** Do not compress records. */
+    NONE,
+    /** Compress values only, each separately. */
+    RECORD,
+    /** Compress sequences of records together in blocks. */
+    BLOCK
+  }
+
+  public static final String COMPRESSION = "COMPRESSION";
+  public static final String COMPRESSION_COMPACT = "COMPRESSION_COMPACT";
+  public static final String BLOCKCACHE = "BLOCKCACHE";
+  
+  /**
+   * Size of storefile/hfile 'blocks'.  Default is {@link #DEFAULT_BLOCKSIZE}.
+   * Use smaller block sizes for faster random-access at expense of larger
+   * indices (more memory consumption).
+   */
+  public static final String BLOCKSIZE = "BLOCKSIZE";
+
+  public static final String LENGTH = "LENGTH";
+  public static final String TTL = "TTL";
+  public static final String BLOOMFILTER = "BLOOMFILTER";
+  public static final String FOREVER = "FOREVER";
+  public static final String REPLICATION_SCOPE = "REPLICATION_SCOPE";
+
+  /**
+   * Default compression type.
+   */
+  public static final String DEFAULT_COMPRESSION =
+    Compression.Algorithm.NONE.getName();
+
+  /**
+   * Default number of versions of a record to keep.
+   */
+  public static final int DEFAULT_VERSIONS = 3;
+
+  /*
+   * Cache the blocksize here rather than reparse it from the values map each time.
+   * Question: is it OK to cache, since when we re-enable we create a new HCD?
+   */
+  private volatile Integer blocksize = null;
+
+  /**
+   * Default setting for whether to serve from memory or not.
+   */
+  public static final boolean DEFAULT_IN_MEMORY = false;
+
+  /**
+   * Default setting for whether to use a block cache or not.
+   */
+  public static final boolean DEFAULT_BLOCKCACHE = true;
+
+  /**
+   * Default size of blocks in files stored to the filesystem (hfiles).
+   */
+  public static final int DEFAULT_BLOCKSIZE = HFile.DEFAULT_BLOCKSIZE;
+
+  /**
+   * Default setting for whether or not to use bloomfilters.
+   */
+  public static final String DEFAULT_BLOOMFILTER = StoreFile.BloomType.NONE.toString();
+
+  /**
+   * Default time to live of cell contents.
+   */
+  public static final int DEFAULT_TTL = HConstants.FOREVER;
+
+  /**
+   * Default scope.
+   */
+  public static final int DEFAULT_REPLICATION_SCOPE = HConstants.REPLICATION_SCOPE_LOCAL;
+
+  // Column family name
+  private byte [] name;
+
+  // Column metadata
+  protected Map<ImmutableBytesWritable,ImmutableBytesWritable> values =
+    new HashMap<ImmutableBytesWritable,ImmutableBytesWritable>();
+
+  /*
+   * Cache the max versions rather than calculate it every time.
+   */
+  private int cachedMaxVersions = -1;
+
+  /**
+   * Default constructor. Must be present for Writable.
+   */
+  public HColumnDescriptor() {
+    this.name = null;
+  }
+
+  /**
+   * Construct a column descriptor specifying only the family name
+   * The other attributes are defaulted.
+   *
+   * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and may not contain a <code>:</code>
+   */
+  public HColumnDescriptor(final String familyName) {
+    this(Bytes.toBytes(familyName));
+  }
+
+  /**
+   * Construct a column descriptor specifying only the family name
+   * The other attributes are defaulted.
+   *
+   * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and may not contain a <code>:</code>
+   */
+  public HColumnDescriptor(final byte [] familyName) {
+    this (familyName == null || familyName.length <= 0?
+      HConstants.EMPTY_BYTE_ARRAY: familyName, DEFAULT_VERSIONS,
+      DEFAULT_COMPRESSION, DEFAULT_IN_MEMORY, DEFAULT_BLOCKCACHE,
+      DEFAULT_TTL, DEFAULT_BLOOMFILTER);
+  }
+
+  /**
+   * Constructor.
+   * Makes a deep copy of the supplied descriptor.
+   * Can make a modifiable descriptor from an UnmodifyableHColumnDescriptor.
+   * @param desc The descriptor.
+   */
+  public HColumnDescriptor(HColumnDescriptor desc) {
+    super();
+    this.name = desc.name.clone();
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        desc.values.entrySet()) {
+      this.values.put(e.getKey(), e.getValue());
+    }
+    setMaxVersions(desc.getMaxVersions());
+  }
+
+  /**
+   * Constructor
+   * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and may not contain a <code>:</code>
+   * @param maxVersions Maximum number of versions to keep
+   * @param compression Compression type
+   * @param inMemory If true, column data should be kept in an HRegionServer's
+   * cache
+   * @param blockCacheEnabled If true, MapFile blocks should be cached
+   * @param timeToLive Time-to-live of cell contents, in seconds
+   * (use HConstants.FOREVER for unlimited TTL)
+   * @param bloomFilter Bloom filter type for this column
+   *
+   * @throws IllegalArgumentException if passed a family name that is made of
+   * other than 'word' characters: i.e. <code>[a-zA-Z_0-9]</code> or contains
+   * a <code>:</code>
+   * @throws IllegalArgumentException if the number of versions is &lt;= 0
+   */
+  public HColumnDescriptor(final byte [] familyName, final int maxVersions,
+      final String compression, final boolean inMemory,
+      final boolean blockCacheEnabled,
+      final int timeToLive, final String bloomFilter) {
+    this(familyName, maxVersions, compression, inMemory, blockCacheEnabled,
+      DEFAULT_BLOCKSIZE, timeToLive, bloomFilter, DEFAULT_REPLICATION_SCOPE);
+  }
+
+  /**
+   * Constructor
+   * @param familyName Column family name. Must be 'printable' -- digit or
+   * letter -- and may not contain a <code>:</code>
+   * @param maxVersions Maximum number of versions to keep
+   * @param compression Compression type
+   * @param inMemory If true, column data should be kept in an HRegionServer's
+   * cache
+   * @param blockCacheEnabled If true, MapFile blocks should be cached
+   * @param blocksize Block size to use when writing out storefiles.  Use
+   * smaller blocksizes for faster random-access at expense of larger indices
+   * (more memory consumption).  Default is usually 64k.
+   * @param timeToLive Time-to-live of cell contents, in seconds
+   * (use HConstants.FOREVER for unlimited TTL)
+   * @param bloomFilter Bloom filter type for this column
+   * @param scope The scope tag for this column
+   *
+   * @throws IllegalArgumentException if passed a family name that is made of
+   * other than 'word' characters: i.e. <code>[a-zA-Z_0-9]</code> or contains
+   * a <code>:</code>
+   * @throws IllegalArgumentException if the number of versions is &lt;= 0
+   */
+  public HColumnDescriptor(final byte [] familyName, final int maxVersions,
+      final String compression, final boolean inMemory,
+      final boolean blockCacheEnabled, final int blocksize,
+      final int timeToLive, final String bloomFilter, final int scope) {
+    isLegalFamilyName(familyName);
+    this.name = familyName;
+
+    if (maxVersions <= 0) {
+      // TODO: Allow maxVersion of 0 to be the way you say "Keep all versions".
+      // Until there is support, consider 0 or < 0 -- a configuration error.
+      throw new IllegalArgumentException("Maximum versions must be positive");
+    }
+    setMaxVersions(maxVersions);
+    setInMemory(inMemory);
+    setBlockCacheEnabled(blockCacheEnabled);
+    setTimeToLive(timeToLive);
+    setCompressionType(Compression.Algorithm.
+      valueOf(compression.toUpperCase()));
+    setBloomFilterType(StoreFile.BloomType.
+      valueOf(bloomFilter.toUpperCase()));
+    setBlocksize(blocksize);
+    setScope(scope);
+  }
+
+  /**
+   * @param b Family name.
+   * @return <code>b</code>
+   * @throws IllegalArgumentException If not null and not a legitimate family
+   * name: i.e. contains control characters or a ':' (Null passes are allowed
+   * because <code>b</code> can be null when deserializing).  Cannot start with
+   * a '.' either.
+   */
+  public static byte [] isLegalFamilyName(final byte [] b) {
+    if (b == null) {
+      return b;
+    }
+    if (b[0] == '.') {
+      throw new IllegalArgumentException("Family names cannot start with a " +
+        "period: " + Bytes.toString(b));
+    }
+    for (int i = 0; i < b.length; i++) {
+      if (Character.isISOControl(b[i]) || b[i] == ':') {
+        throw new IllegalArgumentException("Illegal character <" + b[i] +
+          ">. Family names cannot contain control characters or colons: " +
+          Bytes.toString(b));
+      }
+    }
+    return b;
+  }
+
+  /**
+   * @return Name of this column family
+   */
+  public byte [] getName() {
+    return name;
+  }
+
+  /**
+   * @return Name of this column family
+   */
+  public String getNameAsString() {
+    return Bytes.toString(this.name);
+  }
+
+  /**
+   * @param key The key.
+   * @return The value.
+   */
+  public byte[] getValue(byte[] key) {
+    ImmutableBytesWritable ibw = values.get(new ImmutableBytesWritable(key));
+    if (ibw == null)
+      return null;
+    return ibw.get();
+  }
+
+  /**
+   * @param key The key.
+   * @return The value as a string.
+   */
+  public String getValue(String key) {
+    byte[] value = getValue(Bytes.toBytes(key));
+    if (value == null)
+      return null;
+    return Bytes.toString(value);
+  }
+
+  /**
+   * @return All values.
+   */
+  public Map<ImmutableBytesWritable,ImmutableBytesWritable> getValues() {
+    return Collections.unmodifiableMap(values);
+  }
+
+  /**
+   * @param key The key.
+   * @param value The value.
+   */
+  public void setValue(byte[] key, byte[] value) {
+    values.put(new ImmutableBytesWritable(key),
+      new ImmutableBytesWritable(value));
+  }
+
+  /**
+   * @param key Key whose key and value we're to remove from HCD parameters.
+   */
+  public void remove(final byte [] key) {
+    values.remove(new ImmutableBytesWritable(key));
+  }
+
+  /**
+   * @param key The key.
+   * @param value The value.
+   */
+  public void setValue(String key, String value) {
+    setValue(Bytes.toBytes(key), Bytes.toBytes(value));
+  }
+
+  /** @return compression type being used for the column family */
+  public Compression.Algorithm getCompression() {
+    String n = getValue(COMPRESSION);
+    if (n == null) {
+      return Compression.Algorithm.NONE;
+    }
+    return Compression.Algorithm.valueOf(n.toUpperCase());
+  }
+
+  /** @return compression type being used for the column family for major 
+      compaction */
+  public Compression.Algorithm getCompactionCompression() {
+    String n = getValue(COMPRESSION_COMPACT);
+    if (n == null) {
+      return getCompression();
+    }
+    return Compression.Algorithm.valueOf(n.toUpperCase());
+  }
+
+  /** @return maximum number of versions */
+  public int getMaxVersions() {
+    return this.cachedMaxVersions;
+  }
+
+  /**
+   * @param maxVersions maximum number of versions
+   */
+  public void setMaxVersions(int maxVersions) {
+    setValue(HConstants.VERSIONS, Integer.toString(maxVersions));
+    cachedMaxVersions = maxVersions;
+  }
+
+  /**
+   * @return The storefile/hfile blocksize for this column family.
+   */
+  public synchronized int getBlocksize() {
+    if (this.blocksize == null) {
+      String value = getValue(BLOCKSIZE);
+      this.blocksize = (value != null)?
+        Integer.decode(value): Integer.valueOf(DEFAULT_BLOCKSIZE);
+    }
+    return this.blocksize.intValue();
+  }
+
+  /**
+   * @param s Blocksize to use when writing out storefiles/hfiles on this
+   * column family.
+   */
+  public void setBlocksize(int s) {
+    setValue(BLOCKSIZE, Integer.toString(s));
+    this.blocksize = null;
+  }
+
+  /**
+   * @return Compression type setting.
+   */
+  public Compression.Algorithm getCompressionType() {
+    return getCompression();
+  }
+
+  /**
+   * Compression types supported in hbase.
+   * LZO is not bundled as part of the hbase distribution.
+   * See <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a>
+   * for how to enable it.
+   * @param type Compression type setting.
+   */
+  public void setCompressionType(Compression.Algorithm type) {
+    String compressionType;
+    switch (type) {
+      case LZO: compressionType = "LZO"; break;
+      case GZ: compressionType = "GZ"; break;
+      default: compressionType = "NONE"; break;
+    }
+    setValue(COMPRESSION, compressionType);
+  }
+
+  /**
+   * @return Compression type setting.
+   */
+  public Compression.Algorithm getCompactionCompressionType() {
+    return getCompactionCompression();
+  }
+
+  /**
+   * Compression types supported in hbase.
+   * LZO is not bundled as part of the hbase distribution.
+   * See <a href="http://wiki.apache.org/hadoop/UsingLzoCompression">LZO Compression</a>
+   * for how to enable it.
+   * @param type Compression type setting.
+   */
+  public void setCompactionCompressionType(Compression.Algorithm type) {
+    String compressionType;
+    switch (type) {
+      case LZO: compressionType = "LZO"; break;
+      case GZ: compressionType = "GZ"; break;
+      default: compressionType = "NONE"; break;
+    }
+    setValue(COMPRESSION_COMPACT, compressionType);
+  }
+
+  /**
+   * @return True if we are to keep all values in the HRegionServer cache.
+   */
+  public boolean isInMemory() {
+    String value = getValue(HConstants.IN_MEMORY);
+    if (value != null)
+      return Boolean.valueOf(value).booleanValue();
+    return DEFAULT_IN_MEMORY;
+  }
+
+  /**
+   * @param inMemory True if we are to keep all values in the HRegionServer
+   * cache
+   */
+  public void setInMemory(boolean inMemory) {
+    setValue(HConstants.IN_MEMORY, Boolean.toString(inMemory));
+  }
+
+  /**
+   * @return Time-to-live of cell contents, in seconds.
+   */
+  public int getTimeToLive() {
+    String value = getValue(TTL);
+    return (value != null)? Integer.valueOf(value).intValue(): DEFAULT_TTL;
+  }
+
+  /**
+   * @param timeToLive Time-to-live of cell contents, in seconds.
+   */
+  public void setTimeToLive(int timeToLive) {
+    setValue(TTL, Integer.toString(timeToLive));
+  }
+
+  /**
+   * @return True if MapFile blocks should be cached.
+   */
+  public boolean isBlockCacheEnabled() {
+    String value = getValue(BLOCKCACHE);
+    if (value != null)
+      return Boolean.valueOf(value).booleanValue();
+    return DEFAULT_BLOCKCACHE;
+  }
+
+  /**
+   * @param blockCacheEnabled True if MapFile blocks should be cached.
+   */
+  public void setBlockCacheEnabled(boolean blockCacheEnabled) {
+    setValue(BLOCKCACHE, Boolean.toString(blockCacheEnabled));
+  }
+
+  /**
+   * @return bloom filter type used for new StoreFiles in ColumnFamily
+   */
+  public StoreFile.BloomType getBloomFilterType() {
+    String n = getValue(BLOOMFILTER);
+    if (n == null) {
+      n = DEFAULT_BLOOMFILTER;
+    }
+    return StoreFile.BloomType.valueOf(n.toUpperCase());
+  }
+
+  /**
+   * @param bt bloom filter type
+   */
+  public void setBloomFilterType(final StoreFile.BloomType bt) {
+    setValue(BLOOMFILTER, bt.toString());
+  }
+
+  /**
+   * @return the scope tag
+   */
+  public int getScope() {
+    String value = getValue(REPLICATION_SCOPE);
+    if (value != null) {
+      return Integer.valueOf(value).intValue();
+    }
+    return DEFAULT_REPLICATION_SCOPE;
+  }
+
+  /**
+   * @param scope the scope tag
+   */
+  public void setScope(int scope) {
+    setValue(REPLICATION_SCOPE, Integer.toString(scope));
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder s = new StringBuilder();
+    s.append('{');
+    s.append(HConstants.NAME);
+    s.append(" => '");
+    s.append(Bytes.toString(name));
+    s.append("'");
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        values.entrySet()) {
+      String key = Bytes.toString(e.getKey().get());
+      String value = Bytes.toString(e.getValue().get());
+      s.append(", ");
+      s.append(key);
+      s.append(" => '");
+      s.append(value);
+      s.append("'");
+    }
+    s.append('}');
+    return s.toString();
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj) {
+      return true;
+    }
+    if (obj == null) {
+      return false;
+    }
+    if (!(obj instanceof HColumnDescriptor)) {
+      return false;
+    }
+    return compareTo((HColumnDescriptor)obj) == 0;
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  @Override
+  public int hashCode() {
+    int result = Bytes.hashCode(this.name);
+    result ^= Byte.valueOf(COLUMN_DESCRIPTOR_VERSION).hashCode();
+    result ^= values.hashCode();
+    return result;
+  }
+
+  // Writable
+
+  public void readFields(DataInput in) throws IOException {
+    int version = in.readByte();
+    if (version < 6) {
+      if (version <= 2) {
+        Text t = new Text();
+        t.readFields(in);
+        this.name = t.getBytes();
+//        if(KeyValue.getFamilyDelimiterIndex(this.name, 0, this.name.length)
+//            > 0) {
+//          this.name = stripColon(this.name);
+//        }
+      } else {
+        this.name = Bytes.readByteArray(in);
+      }
+      this.values.clear();
+      setMaxVersions(in.readInt());
+      int ordinal = in.readInt();
+      setCompressionType(Compression.Algorithm.values()[ordinal]);
+      setInMemory(in.readBoolean());
+      setBloomFilterType(in.readBoolean() ? BloomType.ROW : BloomType.NONE);
+      if (getBloomFilterType() != BloomType.NONE && version < 5) {
+        // Pre-version-5 column descriptors serialized a BloomFilterDescriptor
+        // here when a bloom filter was enabled.  Descriptors at version >= 5
+        // no longer write one, and we no longer know how to skip over the old
+        // form, so such descriptors cannot be read.
+        throw new UnsupportedClassVersionError(this.getClass().getName() +
+            " does not support backward compatibility with versions older " +
+            "than version 5");
+      }
+      if (version > 1) {
+        setBlockCacheEnabled(in.readBoolean());
+      }
+      if (version > 2) {
+       setTimeToLive(in.readInt());
+      }
+    } else {
+      // version 6+
+      this.name = Bytes.readByteArray(in);
+      this.values.clear();
+      int numValues = in.readInt();
+      for (int i = 0; i < numValues; i++) {
+        ImmutableBytesWritable key = new ImmutableBytesWritable();
+        ImmutableBytesWritable value = new ImmutableBytesWritable();
+        key.readFields(in);
+        value.readFields(in);
+
+        // in version 8, the BloomFilter setting changed from bool to enum
+        if (version < 8 && Bytes.toString(key.get()).equals(BLOOMFILTER)) {
+          value.set(Bytes.toBytes(
+              Boolean.parseBoolean(Bytes.toString(value.get()))
+                ? BloomType.ROW.toString()
+                : BloomType.NONE.toString()));
+        }
+
+        values.put(key, value);
+      }
+      if (version == 6) {
+        // Convert old values.
+        setValue(COMPRESSION, Compression.Algorithm.NONE.getName());
+      }
+      String value = getValue(HConstants.VERSIONS);
+      this.cachedMaxVersions = (value != null)?
+          Integer.valueOf(value).intValue(): DEFAULT_VERSIONS;
+    }
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeByte(COLUMN_DESCRIPTOR_VERSION);
+    Bytes.writeByteArray(out, this.name);
+    out.writeInt(values.size());
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        values.entrySet()) {
+      e.getKey().write(out);
+      e.getValue().write(out);
+    }
+  }
+
+  // Comparable
+
+  public int compareTo(HColumnDescriptor o) {
+    int result = Bytes.compareTo(this.name, o.getName());
+    if (result == 0) {
+      // punt on comparison for ordering, just calculate difference
+      result = this.values.hashCode() - o.values.hashCode();
+      if (result < 0)
+        result = -1;
+      else if (result > 0)
+        result = 1;
+    }
+    return result;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HConstants.java b/0.90/src/main/java/org/apache/hadoop/hbase/HConstants.java
new file mode 100644
index 0000000..a44a0b9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -0,0 +1,370 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * HConstants holds a bunch of HBase-related constants
+ */
+public final class HConstants {
+  /**
+   * Status codes used for return values of bulk operations.
+   */
+  public enum OperationStatusCode {
+    NOT_RUN,
+    SUCCESS,
+    BAD_FAMILY,
+    FAILURE;
+  }
+
+  /** long constant for zero */
+  public static final Long ZERO_L = Long.valueOf(0L);
+  public static final String NINES = "99999999999999";
+  public static final String ZEROES = "00000000000000";
+
+  // For migration
+
+  /** name of version file */
+  public static final String VERSION_FILE_NAME = "hbase.version";
+
+  /**
+   * Current version of file system.
+   * Version 4 supports only one kind of bloom filter.
+   * Version 5 changes versions in catalog table regions.
+   * Version 6 enables blockcaching on catalog tables.
+   * Version 7 introduces hfile -- hbase 0.19 to 0.20.
+   */
+  // public static final String FILE_SYSTEM_VERSION = "6";
+  public static final String FILE_SYSTEM_VERSION = "7";
+
+  // Configuration parameters
+
+  //TODO: Is having HBase homed on port 60k OK?
+
+  /** Cluster is in distributed mode or not */
+  public static final String CLUSTER_DISTRIBUTED = "hbase.cluster.distributed";
+
+  /** Cluster is standalone or pseudo-distributed */
+  public static final String CLUSTER_IS_LOCAL = "false";
+
+  /** Cluster is fully-distributed */
+  public static final String CLUSTER_IS_DISTRIBUTED = "true";
+
+  /** default host address */
+  public static final String DEFAULT_HOST = "0.0.0.0";
+
+  /** Parameter name for port master listens on. */
+  public static final String MASTER_PORT = "hbase.master.port";
+
+  /** default port that the master listens on */
+  public static final int DEFAULT_MASTER_PORT = 60000;
+
+  /** default port for master web api */
+  public static final int DEFAULT_MASTER_INFOPORT = 60010;
+
+  /** Parameter name for the master type being backup (waits for primary to go inactive). */
+  public static final String MASTER_TYPE_BACKUP = "hbase.master.backup";
+
+  /** by default every master is a possible primary master unless the conf explicitly overrides it */
+  public static final boolean DEFAULT_MASTER_TYPE_BACKUP = false;
+
+  /** Name of ZooKeeper quorum configuration parameter. */
+  public static final String ZOOKEEPER_QUORUM = "hbase.zookeeper.quorum";
+
+  /** Name of ZooKeeper config file in conf/ directory. */
+  public static final String ZOOKEEPER_CONFIG_NAME = "zoo.cfg";
+
+  /** Default client port that ZooKeeper listens on */
+  public static final int DEFAULT_ZOOKEPER_CLIENT_PORT = 2181;
+
+  /** Parameter name for the root dir in ZK for this cluster */
+  public static final String ZOOKEEPER_ZNODE_PARENT = "zookeeper.znode.parent";
+
+  public static final String DEFAULT_ZOOKEEPER_ZNODE_PARENT = "/hbase";
+
+  /** Parameter name for port region server listens on. */
+  public static final String REGIONSERVER_PORT = "hbase.regionserver.port";
+
+  /** Default port region server listens on. */
+  public static final int DEFAULT_REGIONSERVER_PORT = 60020;
+
+  /** default port for region server web api */
+  public static final int DEFAULT_REGIONSERVER_INFOPORT = 60030;
+
+  /** Parameter name for what region server interface to use. */
+  public static final String REGION_SERVER_CLASS = "hbase.regionserver.class";
+
+  /** Parameter name for what region server implementation to use. */
+  public static final String REGION_SERVER_IMPL= "hbase.regionserver.impl";
+
+  /** Default region server interface class name. */
+  public static final String DEFAULT_REGION_SERVER_CLASS = HRegionInterface.class.getName();
+
+  /** Parameter name for what master implementation to use. */
+  public static final String MASTER_IMPL= "hbase.master.impl";
+
+  /** Parameter name for how often threads should wake up */
+  public static final String THREAD_WAKE_FREQUENCY = "hbase.server.thread.wakefrequency";
+
+  /** Default value for thread wake frequency */
+  public static final int DEFAULT_THREAD_WAKE_FREQUENCY = 10 * 1000;
+  
+  /** Parameter name for how often a region should perform a major compaction */
+  public static final String MAJOR_COMPACTION_PERIOD = "hbase.hregion.majorcompaction";
+
+  /** Parameter name for HBase instance root directory */
+  public static final String HBASE_DIR = "hbase.rootdir";
+
+  /** Used to construct the name of the log directory for a region server.
+   * Use '.' as a special character to separate the log files from table data */
+  public static final String HREGION_LOGDIR_NAME = ".logs";
+
+  /** Like the previous, but for old logs that are about to be deleted */
+  public static final String HREGION_OLDLOGDIR_NAME = ".oldlogs";
+
+  /** Used to construct the name of the compaction directory during compaction */
+  public static final String HREGION_COMPACTIONDIR_NAME = "compaction.dir";
+
+  /** Default maximum file size */
+  public static final long DEFAULT_MAX_FILE_SIZE = 256 * 1024 * 1024;
+
+  /** Default size of a reservation block   */
+  public static final int DEFAULT_SIZE_RESERVATION_BLOCK = 1024 * 1024 * 5;
+
+  /** Maximum value length, enforced on KeyValue construction */
+  public static final int MAXIMUM_VALUE_LENGTH = Integer.MAX_VALUE;
+
+  // Always store the location of the root table's HRegion.
+  // This HRegion is never split.
+
+  // region name = table + startkey + regionid. This is the row key.
+  // each row in the root and meta tables describes exactly 1 region
+  // Do we ever need to know all the information that we are storing?
+
+  // Note that the name of the root table starts with "-" and the name of the
+  // meta table starts with "." Why? it's a trick. It turns out that when we
+  // store region names in memory, we use a SortedMap. Since "-" sorts before
+  // "." (and since no other table name can start with either of these
+  // characters, the root region will always be the first entry in such a Map,
+  // followed by all the meta regions (which will be ordered by their starting
+  // row key as well), followed by all user tables. So when the Master is
+  // choosing regions to assign, it will always choose the root region first,
+  // followed by the meta regions, followed by user regions. Since the root
+  // and meta regions always need to be on-line, this ensures that they will
+  // be the first to be reassigned if the server(s) they are being served by
+  // should go down.
+
+
+  //
+  // New stuff.  Making a slow transition.
+  //
+
+  /** The root table's name.*/
+  public static final byte [] ROOT_TABLE_NAME = Bytes.toBytes("-ROOT-");
+
+  /** The META table's name. */
+  public static final byte [] META_TABLE_NAME = Bytes.toBytes(".META.");
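+
+  // The sort-order trick described above can be checked directly
+  // (illustrative sketch):
+  //
+  //   Bytes.compareTo(ROOT_TABLE_NAME, META_TABLE_NAME) < 0   // '-' < '.'
+  //   Bytes.compareTo(META_TABLE_NAME, Bytes.toBytes("usertable")) < 0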
+
+  /** delimiter used between portions of a region name */
+  public static final int META_ROW_DELIMITER = ',';
+
+  /** The catalog family as a string*/
+  public static final String CATALOG_FAMILY_STR = "info";
+
+  /** The catalog family */
+  public static final byte [] CATALOG_FAMILY = Bytes.toBytes(CATALOG_FAMILY_STR);
+
+  /** The regioninfo column qualifier */
+  public static final byte [] REGIONINFO_QUALIFIER = Bytes.toBytes("regioninfo");
+
+  /** The server column qualifier */
+  public static final byte [] SERVER_QUALIFIER = Bytes.toBytes("server");
+
+  /** The startcode column qualifier */
+  public static final byte [] STARTCODE_QUALIFIER = Bytes.toBytes("serverstartcode");
+
+  /** The lower-half split region column qualifier */
+  public static final byte [] SPLITA_QUALIFIER = Bytes.toBytes("splitA");
+
+  /** The upper-half split region column qualifier */
+  public static final byte [] SPLITB_QUALIFIER = Bytes.toBytes("splitB");
+
+  // Other constants
+
+  /**
+   * An empty instance.
+   */
+  public static final byte [] EMPTY_BYTE_ARRAY = new byte [0];
+
+  /**
+   * Used by scanners, etc when they want to start at the beginning of a region
+   */
+  public static final byte [] EMPTY_START_ROW = EMPTY_BYTE_ARRAY;
+
+  /**
+   * Last row in a table.
+   */
+  public static final byte [] EMPTY_END_ROW = EMPTY_START_ROW;
+
+  /**
+   * Used by scanners and others when they're trying to detect the end of a
+   * table
+   */
+  public static final byte [] LAST_ROW = EMPTY_BYTE_ARRAY;
+
+  /**
+   * Max length a row can have because of the limitation in TFile.
+   */
+  public static final int MAX_ROW_LENGTH = Short.MAX_VALUE;
+
+  /** When we encode strings, we always specify UTF8 encoding */
+  public static final String UTF8_ENCODING = "UTF-8";
+
+  /**
+   * Timestamp to use when we want to refer to the latest cell.
+   * This is the timestamp sent by clients when no timestamp is specified on
+   * commit.
+   */
+  public static final long LATEST_TIMESTAMP = Long.MAX_VALUE;
+
+  /**
+   * Timestamp to use when we want to refer to the oldest cell.
+   */
+  public static final long OLDEST_TIMESTAMP = Long.MIN_VALUE;
+
+  /**
+   * LATEST_TIMESTAMP in bytes form
+   */
+  public static final byte [] LATEST_TIMESTAMP_BYTES = Bytes.toBytes(LATEST_TIMESTAMP);
+
+  /**
+   * Define for 'return-all-versions'.
+   */
+  public static final int ALL_VERSIONS = Integer.MAX_VALUE;
+
+  /**
+   * Unlimited time-to-live.
+   */
+//  public static final int FOREVER = -1;
+  public static final int FOREVER = Integer.MAX_VALUE;
+
+  /**
+   * Seconds in a week
+   */
+  public static final int WEEK_IN_SECONDS = 7 * 24 * 3600;
+
+  //TODO: the following are referenced widely to format strings for the shell,
+  //      but they really aren't a part of the public API. It would be nice if
+  //      we could put them somewhere where they did not need to be public.
+  //      They could have package visibility.
+  public static final String NAME = "NAME";
+  public static final String VERSIONS = "VERSIONS";
+  public static final String IN_MEMORY = "IN_MEMORY";
+
+  /**
+   * This is a retry backoff multiplier table similar to the BSD TCP syn
+   * backoff table, a bit more aggressive than simple exponential backoff.
+   */
+  public static int RETRY_BACKOFF[] = { 1, 1, 1, 2, 2, 4, 4, 8, 16, 32 };
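+
+  // A sketch of how a caller might turn the multipliers into sleep times,
+  // assuming a base pause of 1000 ms (the index is clamped to the table end):
+  //
+  //   long getPauseTime(final long pause, int tries) {
+  //     if (tries >= RETRY_BACKOFF.length) tries = RETRY_BACKOFF.length - 1;
+  //     return pause * RETRY_BACKOFF[tries];
+  //   }
+  //
+  //   // pause = 1000 ms yields 1s, 1s, 1s, 2s, 2s, 4s, 4s, 8s, 16s, 32s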
+
+  public static final String REGION_IMPL = "hbase.hregion.impl";
+
+  /** modifyTable op for replacing the table descriptor */
+  public static enum Modify {
+    CLOSE_REGION,
+    TABLE_COMPACT,
+    TABLE_FLUSH,
+    TABLE_MAJOR_COMPACT,
+    TABLE_SET_HTD,
+    TABLE_SPLIT
+  }
+
+  /**
+   * Scope tag for locally scoped data.
+   * This data will not be replicated.
+   */
+  public static final int REPLICATION_SCOPE_LOCAL = 0;
+
+  /**
+   * Scope tag for globally scoped data.
+   * This data will be replicated to all peers.
+   */
+  public static final int REPLICATION_SCOPE_GLOBAL = 1;
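+
+  // Illustrative use (a sketch): a column family is marked for replication by
+  // tagging it with the global scope.
+  //
+  //   HColumnDescriptor hcd = ...;   // an existing column descriptor
+  //   hcd.setScope(HConstants.REPLICATION_SCOPE_GLOBAL);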
+
+  /**
+   * Default cluster ID, cannot be used to identify a cluster so a key with
+   * this value means it wasn't meant for replication.
+   */
+  public static final byte DEFAULT_CLUSTER_ID = 0;
+
+  /**
+   * Parameter name for maximum number of bytes returned when calling a
+   * scanner's next method.
+   */
+  public static String HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE_KEY = "hbase.client.scanner.max.result.size";
+
+  /**
+   * Maximum number of bytes returned when calling a scanner's next method.
+   * Note that when a single row is larger than this limit the row is still
+   * returned completely.
+   *
+   * The default value is unlimited.
+   */
+  public static long DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE = Long.MAX_VALUE;
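+
+  // For example (a sketch, assuming the usual HBaseConfiguration factory):
+  // capping scanner responses at roughly 2 MB per next() call.
+  //
+  //   Configuration conf = HBaseConfiguration.create();
+  //   conf.setLong(HConstants.HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE_KEY,
+  //       2L * 1024 * 1024);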
+
+
+  /**
+   * Region server lease period in milliseconds. Clients must report in within
+   * this period or else they are considered dead.
+   */
+  public static String HBASE_REGIONSERVER_LEASE_PERIOD_KEY =
+    "hbase.regionserver.lease.period";
+
+
+  /**
+   * Default value of {@link #HBASE_REGIONSERVER_LEASE_PERIOD_KEY}.
+   */
+  public static long DEFAULT_HBASE_REGIONSERVER_LEASE_PERIOD = 60000;
+  
+  /**
+   * Timeout for each RPC, in milliseconds.
+   */
+  public static String HBASE_RPC_TIMEOUT_KEY = "hbase.rpc.timeout";
+  
+  /**
+   * Default value of {@link #HBASE_RPC_TIMEOUT_KEY}
+   */
+  public static int DEFAULT_HBASE_RPC_TIMEOUT = 60000;
+
+  public static final String
+      REPLICATION_ENABLE_KEY = "hbase.replication";
+
+  /** HBCK special code name used as server name when manipulating ZK nodes */
+  public static final String HBCK_CODE_NAME = "HBCKServerName";
+
+  public static final String HBASE_MASTER_LOGCLEANER_PLUGINS =
+      "hbase.master.logcleaner.plugins";
+
+  private HConstants() {
+    // Can't be instantiated with this ctor.
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HMsg.java b/0.90/src/main/java/org/apache/hadoop/hbase/HMsg.java
new file mode 100644
index 0000000..c53460f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HMsg.java
@@ -0,0 +1,256 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * HMsg is used to send messages between master and regionservers.  Messages are
+ * sent as payload on the regionserver-to-master heartbeats.  Region assignment
+ * does not use this mechanism.  It goes via zookeeper.
+ *
+ * <p>Most of the time the messages are simple but some messages are accompanied
+ * by the region affected.  HMsg may also carry an optional message.
+ * 
+ * <p>TODO: Clean out all messages that go from master to regionserver; by
+ * design, these are to go via zk from here on out.
+ */
+public class HMsg implements Writable {
+  public static final HMsg [] STOP_REGIONSERVER_ARRAY =
+    new HMsg [] {new HMsg(Type.STOP_REGIONSERVER)};
+  public static final HMsg [] EMPTY_HMSG_ARRAY = new HMsg[0];
+
+  public static enum Type {
+    /** Master tells region server to stop.
+     */
+    STOP_REGIONSERVER,
+
+    /**
+     * Region server split the region associated with this message.
+     */
+    REGION_SPLIT,
+
+    /**
+     * When RegionServer receives this message, it goes into a sleep that only
+     * an exit will cure.  This message is sent by unit tests simulating
+     * pathological states.
+     */
+    TESTING_BLOCK_REGIONSERVER,
+  }
+
+  private Type type = null;
+  private HRegionInfo info = null;
+  private byte[] message = null;
+  private HRegionInfo daughterA = null;
+  private HRegionInfo daughterB = null;
+
+  /** Default constructor. Used during deserialization */
+  public HMsg() {
+    this(null);
+  }
+
+  /**
+   * Construct a message with the specified message and empty HRegionInfo
+   * @param type Message type
+   */
+  public HMsg(final HMsg.Type type) {
+    this(type, new HRegionInfo(), null);
+  }
+
+  /**
+   * Construct a message with the specified message and HRegionInfo
+   * @param type Message type
+   * @param hri Region to which message <code>type</code> applies
+   */
+  public HMsg(final HMsg.Type type, final HRegionInfo hri) {
+    this(type, hri, null);
+  }
+
+  /**
+   * Construct a message with the specified message and HRegionInfo
+   *
+   * @param type Message type
+   * @param hri Region to which message <code>type</code> applies.  Cannot be
+   * null.  If no region is associated, use the other constructor.
+   * @param msg Optional message (Stringified exception, etc.)
+   */
+  public HMsg(final HMsg.Type type, final HRegionInfo hri, final byte[] msg) {
+    this(type, hri, null, null, msg);
+  }
+
+  /**
+   * Construct a message with the specified message and HRegionInfo
+   *
+   * @param type Message type
+   * @param hri Region to which message <code>type</code> applies.  Cannot be
+   * null.  If no region is associated, use the other constructor.
+   * @param daughterA
+   * @param daughterB
+   * @param msg Optional message (Stringified exception, etc.)
+   */
+  public HMsg(final HMsg.Type type, final HRegionInfo hri,
+      final HRegionInfo daughterA, final HRegionInfo daughterB, final byte[] msg) {
+    this.type = type;
+    if (hri == null) {
+      throw new NullPointerException("Region cannot be null");
+    }
+    this.info = hri;
+    this.message = msg;
+    this.daughterA = daughterA;
+    this.daughterB = daughterB;
+  }
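+
+  // Illustrative construction (a sketch; parent, daughterA and daughterB are
+  // placeholder HRegionInfo instances): a region server reporting a split.
+  //
+  //   HMsg split = new HMsg(HMsg.Type.REGION_SPLIT, parent, daughterA,
+  //       daughterB, Bytes.toBytes("Daughters added to META"));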
+
+  /**
+   * @return Region info or null if none associated with this message type.
+   */
+  public HRegionInfo getRegionInfo() {
+    return this.info;
+  }
+
+  /** @return the type of message */
+  public Type getType() {
+    return this.type;
+  }
+
+  /**
+   * @param other Message type to compare to
+   * @return True if we are of same message type as <code>other</code>
+   */
+  public boolean isType(final HMsg.Type other) {
+    return this.type.equals(other);
+  }
+
+  /** @return the optional message payload */
+  public byte[] getMessage() {
+    return this.message;
+  }
+
+  /**
+   * @return First daughter region if Type is REGION_SPLIT, else null
+   */
+  public HRegionInfo getDaughterA() {
+    return this.daughterA;
+  }
+
+  /**
+   * @return Second daughter region if Type is REGION_SPLIT, else null
+   */
+  public HRegionInfo getDaughterB() {
+    return this.daughterB;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append(this.type.toString());
+    // If null or empty region, don't bother printing it out.
+    if (this.info != null && this.info.getRegionName().length > 0) {
+      sb.append(": ");
+      sb.append(this.info.getRegionNameAsString());
+    }
+    if (this.message != null && this.message.length > 0) {
+      sb.append(": " + Bytes.toString(this.message));
+    }
+    return sb.toString();
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj) {
+      return true;
+    }
+    if (obj == null) {
+      return false;
+    }
+    if (getClass() != obj.getClass()) {
+      return false;
+    }
+    HMsg that = (HMsg)obj;
+    return this.type.equals(that.type) &&
+      ((this.info != null)? this.info.equals(that.info):
+        that.info == null);
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  @Override
+  public int hashCode() {
+    int result = this.type.hashCode();
+    if (this.info != null) {
+      result ^= this.info.hashCode();
+    }
+    return result;
+  }
+
+  // ////////////////////////////////////////////////////////////////////////////
+  // Writable
+  //////////////////////////////////////////////////////////////////////////////
+
+  /**
+   * @see org.apache.hadoop.io.Writable#write(java.io.DataOutput)
+   */
+  public void write(DataOutput out) throws IOException {
+     out.writeInt(this.type.ordinal());
+     this.info.write(out);
+     if (this.message == null || this.message.length == 0) {
+       out.writeBoolean(false);
+     } else {
+       out.writeBoolean(true);
+       Bytes.writeByteArray(out, this.message);
+     }
+     if (this.type.equals(Type.REGION_SPLIT)) {
+       this.daughterA.write(out);
+       this.daughterB.write(out);
+     }
+   }
+
+  /**
+   * @see org.apache.hadoop.io.Writable#readFields(java.io.DataInput)
+   */
+  public void readFields(DataInput in) throws IOException {
+     int ordinal = in.readInt();
+     this.type = HMsg.Type.values()[ordinal];
+     this.info.readFields(in);
+     boolean hasMessage = in.readBoolean();
+     if (hasMessage) {
+       this.message = Bytes.readByteArray(in);
+     }
+     if (this.type.equals(Type.REGION_SPLIT)) {
+       this.daughterA = new HRegionInfo();
+       this.daughterB = new HRegionInfo();
+       this.daughterA.readFields(in);
+       this.daughterB.readFields(in);
+     }
+   }
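+
+  // A round-trip sketch using the standard Writable pattern (JDK streams only):
+  //
+  //   ByteArrayOutputStream bos = new ByteArrayOutputStream();
+  //   msg.write(new DataOutputStream(bos));
+  //   HMsg copy = new HMsg();
+  //   copy.readFields(new DataInputStream(
+  //       new ByteArrayInputStream(bos.toByteArray())));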
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
new file mode 100644
index 0000000..2e601e1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java
@@ -0,0 +1,669 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JenkinsHash;
+import org.apache.hadoop.hbase.util.MD5Hash;
+import org.apache.hadoop.io.VersionedWritable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * HRegion information.
+ * Contains HRegion id, start and end keys, a reference to this
+ * HRegions' table descriptor, etc.
+ */
+public class HRegionInfo extends VersionedWritable implements WritableComparable<HRegionInfo>{
+  private static final byte VERSION = 0;
+  private static final Log LOG = LogFactory.getLog(HRegionInfo.class);
+
+  /**
+   * The new format for a region name contains its encodedName at the end.
+   * The encoded name also serves as the directory name for the region
+   * in the filesystem.
+   *
+   * New region name format:
+   *    &lt;tablename>,,&lt;startkey>,&lt;regionIdTimestamp>.&lt;encodedName>.
+   * where,
+   *    &lt;encodedName> is a hex version of the MD5 hash of
+   *    &lt;tablename>,&lt;startkey>,&lt;regionIdTimestamp>
+   * 
+   * The old region name format:
+   *    &lt;tablename>,&lt;startkey>,&lt;regionIdTimestamp>
+   * For region names in the old format, the encoded name is a 32-bit
+   * JenkinsHash integer value (in its decimal notation, string form). 
+   *<p>
+   * **NOTE**
+   *
+   * ROOT, the first META region, and regions created by an older
+   * version of HBase (0.20 or prior) will continue to use the
+   * old region name format.
+   */
+
+  /** Separator used to demarcate the encodedName in a region name
+   * in the new format. See description on new format above. 
+   */ 
+  private static final int ENC_SEPARATOR = '.';
+  public  static final int MD5_HEX_LENGTH   = 32;
+
+  /**
+   * Does region name contain its encoded name?
+   * @param regionName region name
+   * @return boolean indicating if this is a new format region
+   *         name which contains its encoded name.
+   */
+  private static boolean hasEncodedName(final byte[] regionName) {
+    // check if region name ends in ENC_SEPARATOR
+    if ((regionName.length >= 1)
+        && (regionName[regionName.length - 1] == ENC_SEPARATOR)) {
+      // region name is new format. it contains the encoded name.
+      return true; 
+    }
+    return false;
+  }
+  
+  /**
+   * @param regionName
+   * @return the encodedName
+   */
+  public static String encodeRegionName(final byte [] regionName) {
+    String encodedName;
+    if (hasEncodedName(regionName)) {
+      // region is in new format:
+      // <tableName>,<startKey>,<regionIdTimeStamp>.<encodedName>.
+      encodedName = Bytes.toString(regionName,
+          regionName.length - MD5_HEX_LENGTH - 1,
+          MD5_HEX_LENGTH);
+    } else {
+      // old format region name. ROOT and first META region also
+      // use this format. EncodedName is the JenkinsHash value.
+      int hashVal = Math.abs(JenkinsHash.getInstance().hash(regionName,
+        regionName.length, 0));
+      encodedName = String.valueOf(hashVal);
+    }
+    return encodedName;
+  }
+
+  /**
+   * Prettify an encoded region name for logging.
+   * @param encodedRegionName The encoded regionname.
+   * @return <code>encodedRegionName</code> suffixed with <code>/-ROOT-</code>
+   * if passed <code>70236052</code>, suffixed with <code>/.META.</code> if
+   * passed <code>1028785192</code>, else <code>encodedRegionName</code>
+   * unchanged.
+   */
+  public static String prettyPrint(final String encodedRegionName) {
+    if (encodedRegionName.equals("70236052")) {
+      return encodedRegionName + "/-ROOT-";
+    } else if (encodedRegionName.equals("1028785192")) {
+      return encodedRegionName + "/.META.";
+    }
+    return encodedRegionName;
+  }
+
+  /** delimiter used between portions of a region name */
+  public static final int DELIMITER = ',';
+
+  /** HRegionInfo for root region */
+  public static final HRegionInfo ROOT_REGIONINFO =
+    new HRegionInfo(0L, HTableDescriptor.ROOT_TABLEDESC);
+
+  /** HRegionInfo for first meta region */
+  public static final HRegionInfo FIRST_META_REGIONINFO =
+    new HRegionInfo(1L, HTableDescriptor.META_TABLEDESC);
+
+  private byte [] endKey = HConstants.EMPTY_BYTE_ARRAY;
+  // This flag is in the parent of a split while the parent is still referenced
+  // by daughter regions.  We USED to set this flag when we disabled a table
+  // but now table state is kept up in zookeeper as of 0.90.0 HBase.
+  private boolean offLine = false;
+  private long regionId = -1;
+  private transient byte [] regionName = HConstants.EMPTY_BYTE_ARRAY;
+  private String regionNameStr = "";
+  private boolean split = false;
+  private byte [] startKey = HConstants.EMPTY_BYTE_ARRAY;
+  protected HTableDescriptor tableDesc = null;
+  private int hashCode = -1;
+  //TODO: Move NO_HASH to HStoreFile which is really the only place it is used.
+  public static final String NO_HASH = null;
+  private volatile String encodedName = NO_HASH;
+  private byte [] encodedNameAsBytes = null;
+
+  private void setHashCode() {
+    int result = Arrays.hashCode(this.regionName);
+    result ^= this.regionId;
+    result ^= Arrays.hashCode(this.startKey);
+    result ^= Arrays.hashCode(this.endKey);
+    result ^= Boolean.valueOf(this.offLine).hashCode();
+    result ^= this.tableDesc.hashCode();
+    this.hashCode = result;
+  }
+
+  /**
+   * Private constructor used constructing HRegionInfo for the catalog root and
+   * first meta regions
+   */
+  private HRegionInfo(long regionId, HTableDescriptor tableDesc) {
+    super();
+    this.regionId = regionId;
+    this.tableDesc = tableDesc;
+    
+    // Note: Root & First Meta regions names are still in old format   
+    this.regionName = createRegionName(tableDesc.getName(), null,
+                                       regionId, false);
+    this.regionNameStr = Bytes.toStringBinary(this.regionName);
+    setHashCode();
+  }
+
+  /** Default constructor - creates empty object */
+  public HRegionInfo() {
+    super();
+    this.tableDesc = new HTableDescriptor();
+  }
+
+  /**
+   * Construct HRegionInfo with explicit parameters
+   *
+   * @param tableDesc the table descriptor
+   * @param startKey first key in region
+   * @param endKey end of key range
+   * @throws IllegalArgumentException
+   */
+  public HRegionInfo(final HTableDescriptor tableDesc, final byte [] startKey,
+      final byte [] endKey)
+  throws IllegalArgumentException {
+    this(tableDesc, startKey, endKey, false);
+  }
+
+  /**
+   * Construct HRegionInfo with explicit parameters
+   *
+   * @param tableDesc the table descriptor
+   * @param startKey first key in region
+   * @param endKey end of key range
+   * @param split true if this region has split and we have daughter regions
+   * that may or may not hold references to this region.
+   * @throws IllegalArgumentException
+   */
+  public HRegionInfo(HTableDescriptor tableDesc, final byte [] startKey,
+      final byte [] endKey, final boolean split)
+  throws IllegalArgumentException {
+    this(tableDesc, startKey, endKey, split, System.currentTimeMillis());
+  }
+
+  /**
+   * Construct HRegionInfo with explicit parameters
+   *
+   * @param tableDesc the table descriptor
+   * @param startKey first key in region
+   * @param endKey end of key range
+   * @param split true if this region has split and we have daughter regions
+   * that may or may not hold references to this region.
+   * @param regionid Region id to use.
+   * @throws IllegalArgumentException
+   */
+  public HRegionInfo(HTableDescriptor tableDesc, final byte [] startKey,
+    final byte [] endKey, final boolean split, final long regionid)
+  throws IllegalArgumentException {
+    super();
+    if (tableDesc == null) {
+      throw new IllegalArgumentException("tableDesc cannot be null");
+    }
+    this.offLine = false;
+    this.regionId = regionid;
+    this.regionName = createRegionName(tableDesc.getName(), startKey, regionId, true);
+    this.regionNameStr = Bytes.toStringBinary(this.regionName);
+    this.split = split;
+    this.endKey = endKey == null? HConstants.EMPTY_END_ROW: endKey.clone();
+    this.startKey = startKey == null?
+      HConstants.EMPTY_START_ROW: startKey.clone();
+    this.tableDesc = tableDesc;
+    setHashCode();
+  }
+
+  /**
+   * Construct a copy of another HRegionInfo
+   *
+   * @param other
+   */
+  public HRegionInfo(HRegionInfo other) {
+    super();
+    this.endKey = other.getEndKey();
+    this.offLine = other.isOffline();
+    this.regionId = other.getRegionId();
+    this.regionName = other.getRegionName();
+    this.regionNameStr = Bytes.toStringBinary(this.regionName);
+    this.split = other.isSplit();
+    this.startKey = other.getStartKey();
+    this.tableDesc = other.getTableDesc();
+    this.hashCode = other.hashCode();
+    this.encodedName = other.getEncodedName();
+  }
+
+  /**
+   * Make a region name of passed parameters.
+   * @param tableName
+   * @param startKey Can be null
+   * @param regionid Region id (Usually timestamp from when region was created).
+   * @param newFormat should we create the region name in the new format
+   *                  (such that it contains its encoded name?).
+   * @return Region name made of passed tableName, startKey and id
+   */
+  public static byte [] createRegionName(final byte [] tableName,
+      final byte [] startKey, final long regionid, boolean newFormat) {
+    return createRegionName(tableName, startKey, Long.toString(regionid), newFormat);
+  }
+
+  /**
+   * Make a region name of passed parameters.
+   * @param tableName
+   * @param startKey Can be null
+   * @param id Region id (Usually timestamp from when region was created).
+   * @param newFormat should we create the region name in the new format
+   *                  (such that it contains its encoded name?).
+   * @return Region name made of passed tableName, startKey and id
+   */
+  public static byte [] createRegionName(final byte [] tableName,
+      final byte [] startKey, final String id, boolean newFormat) {
+    return createRegionName(tableName, startKey, Bytes.toBytes(id), newFormat);
+  }
+
+  /**
+   * Make a region name of passed parameters.
+   * @param tableName
+   * @param startKey Can be null
+   * @param id Region id (Usually timestamp from when region was created).
+   * @param newFormat should we create the region name in the new format
+   *                  (such that it contains its encoded name?).
+   * @return Region name made of passed tableName, startKey and id
+   */
+  public static byte [] createRegionName(final byte [] tableName,
+      final byte [] startKey, final byte [] id, boolean newFormat) {
+    byte [] b = new byte [tableName.length + 2 + id.length +
+       (startKey == null? 0: startKey.length) +
+       (newFormat ? (MD5_HEX_LENGTH + 2) : 0)];
+
+    int offset = tableName.length;
+    System.arraycopy(tableName, 0, b, 0, offset);
+    b[offset++] = DELIMITER;
+    if (startKey != null && startKey.length > 0) {
+      System.arraycopy(startKey, 0, b, offset, startKey.length);
+      offset += startKey.length;
+    }
+    b[offset++] = DELIMITER;
+    System.arraycopy(id, 0, b, offset, id.length);
+    offset += id.length;
+
+    if (newFormat) {
+      //
+      // Encoded name should be built into the region name.
+      //
+      // Use the region name thus far (namely, <tablename>,<startKey>,<id>)
+      // to compute a MD5 hash to be used as the encoded name, and append
+      // it to the byte buffer.
+      //
+      String md5Hash = MD5Hash.getMD5AsHex(b, 0, offset);
+      byte [] md5HashBytes = Bytes.toBytes(md5Hash);
+
+      if (md5HashBytes.length != MD5_HEX_LENGTH) {
+        LOG.error("MD5-hash length mismatch: Expected=" + MD5_HEX_LENGTH +
+                  "; Got=" + md5HashBytes.length); 
+      }
+
+      // now append the bytes '.<encodedName>.' to the end
+      b[offset++] = ENC_SEPARATOR;
+      System.arraycopy(md5HashBytes, 0, b, offset, MD5_HEX_LENGTH);
+      offset += MD5_HEX_LENGTH;
+      b[offset++] = ENC_SEPARATOR;
+    }
+    
+    return b;
+  }
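+
+  // Worked example (values are illustrative): for table "t1", start key
+  // "row-100" and region id 1271021593139, the call
+  //
+  //   byte [] name = HRegionInfo.createRegionName(Bytes.toBytes("t1"),
+  //       Bytes.toBytes("row-100"), 1271021593139L, true);
+  //
+  // produces  t1,row-100,1271021593139.<32-char hex MD5 of the prefix>.
+  // while newFormat == false yields the old-style  t1,row-100,1271021593139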
+
+  /**
+   * Gets the table name from the specified region name.
+   * @param regionName
+   * @return Table name.
+   */
+  public static byte [] getTableName(byte [] regionName) {
+    int offset = -1;
+    for (int i = 0; i < regionName.length; i++) {
+      if (regionName[i] == DELIMITER) {
+        offset = i;
+        break;
+      }
+    }
+    byte [] tableName = new byte[offset];
+    System.arraycopy(regionName, 0, tableName, 0, offset);
+    return tableName;
+  }
+
+  /**
+   * Separate elements of a regionName.
+   * @param regionName
+   * @return Array of byte[] containing tableName, startKey and id
+   * @throws IOException
+   */
+  public static byte [][] parseRegionName(final byte [] regionName)
+  throws IOException {
+    int offset = -1;
+    for (int i = 0; i < regionName.length; i++) {
+      if (regionName[i] == DELIMITER) {
+        offset = i;
+        break;
+      }
+    }
+    if(offset == -1) throw new IOException("Invalid regionName format");
+    byte [] tableName = new byte[offset];
+    System.arraycopy(regionName, 0, tableName, 0, offset);
+    offset = -1;
+    for (int i = regionName.length - 1; i > 0; i--) {
+      if(regionName[i] == DELIMITER) {
+        offset = i;
+        break;
+      }
+    }
+    if(offset == -1) throw new IOException("Invalid regionName format");
+    byte [] startKey = HConstants.EMPTY_BYTE_ARRAY;
+    if(offset != tableName.length + 1) {
+      startKey = new byte[offset - tableName.length - 1];
+      System.arraycopy(regionName, tableName.length + 1, startKey, 0,
+          offset - tableName.length - 1);
+    }
+    byte [] id = new byte[regionName.length - offset - 1];
+    System.arraycopy(regionName, offset + 1, id, 0,
+        regionName.length - offset - 1);
+    byte [][] elements = new byte[3][];
+    elements[0] = tableName;
+    elements[1] = startKey;
+    elements[2] = id;
+    return elements;
+  }
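+
+  // For example (a sketch): parsing the old-format name
+  // "t1,row-100,1271021593139" returns { "t1", "row-100", "1271021593139" }
+  // as the three byte [] elements.
+  //
+  //   byte [][] parts = HRegionInfo.parseRegionName(
+  //       Bytes.toBytes("t1,row-100,1271021593139"));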
+
+  /** @return the regionId */
+  public long getRegionId(){
+    return regionId;
+  }
+
+  /**
+   * @return the regionName as an array of bytes.
+   * @see #getRegionNameAsString()
+   */
+  public byte [] getRegionName(){
+    return regionName;
+  }
+
+  /**
+   * @return Region name as a String for use in logging, etc.
+   */
+  public String getRegionNameAsString() {
+    if (hasEncodedName(this.regionName)) {
+      // new format region names already have their encoded name.
+      return this.regionNameStr;
+    }
+
+    // old format. regionNameStr doesn't include the encoded name, so append it.
+    return this.regionNameStr + "." + this.getEncodedName();
+  }
+
+  /** @return the encoded region name */
+  public synchronized String getEncodedName() {
+    if (this.encodedName == NO_HASH) {
+      this.encodedName = encodeRegionName(this.regionName);
+    }
+    return this.encodedName;
+  }
+
+  public synchronized byte [] getEncodedNameAsBytes() {
+    if (this.encodedNameAsBytes == null) {
+      this.encodedNameAsBytes = Bytes.toBytes(getEncodedName());
+    }
+    return this.encodedNameAsBytes;
+  }
+
+  /** @return the startKey */
+  public byte [] getStartKey(){
+    return startKey;
+  }
+  
+  /** @return the endKey */
+  public byte [] getEndKey(){
+    return endKey;
+  }
+
+  /**
+   * Returns true if the given inclusive range of rows is fully contained
+   * by this region. For example, if the region is foo,a,g and this is
+   * passed ["b","c"] or ["a","c"] it will return true, but if this is passed
+   * ["b","z"] it will return false.
+   * @throws IllegalArgumentException if the range passed is invalid (i.e. end &lt; start)
+   */
+  public boolean containsRange(byte[] rangeStartKey, byte[] rangeEndKey) {
+    if (Bytes.compareTo(rangeStartKey, rangeEndKey) > 0) {
+      throw new IllegalArgumentException(
+      "Invalid range: " + Bytes.toStringBinary(rangeStartKey) +
+      " > " + Bytes.toStringBinary(rangeEndKey));
+    }
+
+    boolean firstKeyInRange = Bytes.compareTo(rangeStartKey, startKey) >= 0;
+    boolean lastKeyInRange =
+      Bytes.compareTo(rangeEndKey, endKey) < 0 ||
+      Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY);
+    return firstKeyInRange && lastKeyInRange;
+  }
+  
+  /**
+   * Return true if the given row falls in this region.
+   */
+  public boolean containsRow(byte[] row) {
+    return Bytes.compareTo(row, startKey) >= 0 &&
+      (Bytes.compareTo(row, endKey) < 0 ||
+       Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY));
+  }
+
+  /** @return the tableDesc */
+  public HTableDescriptor getTableDesc(){
+    return tableDesc;
+  }
+
+  /**
+   * @param newDesc new table descriptor to use
+   */
+  public void setTableDesc(HTableDescriptor newDesc) {
+    this.tableDesc = newDesc;
+  }
+
+  /** @return true if this is the root region */
+  public boolean isRootRegion() {
+    return this.tableDesc.isRootRegion();
+  }
+
+  /** @return true if this is the meta table */
+  public boolean isMetaTable() {
+    return this.tableDesc.isMetaTable();
+  }
+
+  /** @return true if this region is a meta region */
+  public boolean isMetaRegion() {
+    return this.tableDesc.isMetaRegion();
+  }
+
+  /**
+   * @return True if has been split and has daughters.
+   */
+  public boolean isSplit() {
+    return this.split;
+  }
+
+  /**
+   * @param split set split status
+   */
+  public void setSplit(boolean split) {
+    this.split = split;
+  }
+
+  /**
+   * @return True if this region is offline.
+   */
+  public boolean isOffline() {
+    return this.offLine;
+  }
+
+  /**
+   * The parent of a region split is offline while split daughters hold
+   * references to the parent. Offlined regions are closed.
+   * @param offLine Set online/offline status.
+   */
+  public void setOffline(boolean offLine) {
+    this.offLine = offLine;
+  }
+
+
+  /**
+   * @return True if this is a split parent region.
+   */
+  public boolean isSplitParent() {
+    if (!isSplit()) return false;
+    if (!isOffline()) {
+      LOG.warn("Region is split but NOT offline: " + getRegionNameAsString());
+    }
+    return true;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return "REGION => {" + HConstants.NAME + " => '" +
+      this.regionNameStr +
+      "', STARTKEY => '" +
+      Bytes.toStringBinary(this.startKey) + "', ENDKEY => '" +
+      Bytes.toStringBinary(this.endKey) +
+      "', ENCODED => " + getEncodedName() + "," +
+      (isOffline()? " OFFLINE => true,": "") +
+      (isSplit()? " SPLIT => true,": "") +
+      " TABLE => {" + this.tableDesc.toString() + "}";
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (o == null) {
+      return false;
+    }
+    if (!(o instanceof HRegionInfo)) {
+      return false;
+    }
+    return this.compareTo((HRegionInfo)o) == 0;
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  @Override
+  public int hashCode() {
+    return this.hashCode;
+  }
+
+  /** @return the object version number */
+  @Override
+  public byte getVersion() {
+    return VERSION;
+  }
+
+  //
+  // Writable
+  //
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+    Bytes.writeByteArray(out, endKey);
+    out.writeBoolean(offLine);
+    out.writeLong(regionId);
+    Bytes.writeByteArray(out, regionName);
+    out.writeBoolean(split);
+    Bytes.writeByteArray(out, startKey);
+    tableDesc.write(out);
+    out.writeInt(hashCode);
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+    this.endKey = Bytes.readByteArray(in);
+    this.offLine = in.readBoolean();
+    this.regionId = in.readLong();
+    this.regionName = Bytes.readByteArray(in);
+    this.regionNameStr = Bytes.toStringBinary(this.regionName);
+    this.split = in.readBoolean();
+    this.startKey = Bytes.readByteArray(in);
+    this.tableDesc.readFields(in);
+    this.hashCode = in.readInt();
+  }
+
+  //
+  // Comparable
+  //
+
+  public int compareTo(HRegionInfo o) {
+    if (o == null) {
+      return 1;
+    }
+
+    // Are regions of same table?
+    int result = Bytes.compareTo(this.tableDesc.getName(), o.tableDesc.getName());
+    if (result != 0) {
+      return result;
+    }
+
+    // Compare start keys.
+    result = Bytes.compareTo(this.startKey, o.startKey);
+    if (result != 0) {
+      return result;
+    }
+
+    // Compare end keys.
+    return Bytes.compareTo(this.endKey, o.endKey);
+  }
+
+  /**
+   * @return Comparator to use comparing {@link KeyValue}s.
+   */
+  public KVComparator getComparator() {
+    return isRootRegion()? KeyValue.ROOT_COMPARATOR: isMetaRegion()?
+      KeyValue.META_COMPARATOR: KeyValue.COMPARATOR;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java b/0.90/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java
new file mode 100644
index 0000000..bd353b8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java
@@ -0,0 +1,99 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Contains the HRegionInfo for the region and the HServerAddress for the
+ * HRegionServer serving the region
+ */
+public class HRegionLocation implements Comparable<HRegionLocation> {
+  // TODO: Is this class necessary?  Why not just have a Pair?
+  private HRegionInfo regionInfo;
+  private HServerAddress serverAddress;
+
+  /**
+   * Constructor
+   *
+   * @param regionInfo the HRegionInfo for the region
+   * @param serverAddress the HServerAddress for the region server
+   */
+  public HRegionLocation(HRegionInfo regionInfo, HServerAddress serverAddress) {
+    this.regionInfo = regionInfo;
+    this.serverAddress = serverAddress;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return "address: " + this.serverAddress.toString() + ", regioninfo: " +
+      this.regionInfo.getRegionNameAsString();
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (o == null) {
+      return false;
+    }
+    if (!(o instanceof HRegionLocation)) {
+      return false;
+    }
+    return this.compareTo((HRegionLocation)o) == 0;
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  @Override
+  public int hashCode() {
+    int result = this.regionInfo.hashCode();
+    result ^= this.serverAddress.hashCode();
+    return result;
+  }
+
+  /** @return HRegionInfo */
+  public HRegionInfo getRegionInfo(){
+    return regionInfo;
+  }
+
+  /** @return HServerAddress */
+  public HServerAddress getServerAddress(){
+    return serverAddress;
+  }
+
+  //
+  // Comparable
+  //
+
+  public int compareTo(HRegionLocation o) {
+    int result = this.regionInfo.compareTo(o.regionInfo);
+    if(result == 0) {
+      result = this.serverAddress.compareTo(o.serverAddress);
+    }
+    return result;
+  }
+}
\ No newline at end of file
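Another small sketch under the same assumption about the HRegionInfo constructor: HRegionLocation equality and ordering both delegate to compareTo(), which compares the region first and the server address second.

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionLocationSketch {
  public static void main(String[] args) {
    HRegionInfo region = new HRegionInfo(new HTableDescriptor("t1"),
        Bytes.toBytes("a"), Bytes.toBytes("z"));
    // The HServerAddress constructor resolves the host, so use a name that
    // resolves on the local machine.
    HServerAddress addr = new HServerAddress("localhost", 60020);
    HRegionLocation l1 = new HRegionLocation(region, addr);
    HRegionLocation l2 = new HRegionLocation(region, addr);
    System.out.println(l1.equals(l2));           // true: same region and server
    System.out.println(l1.compareTo(l2) == 0);   // true
  }
}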
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HServerAddress.java b/0.90/src/main/java/org/apache/hadoop/hbase/HServerAddress.java
new file mode 100644
index 0000000..7f8a472
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HServerAddress.java
@@ -0,0 +1,193 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.io.WritableComparable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.net.InetAddress;
+
+/**
+ * HServerAddress is a "label" for an HBase server made of a hostname and port number.
+ */
+public class HServerAddress implements WritableComparable<HServerAddress> {
+  private InetSocketAddress address;
+  String stringValue;
+
+  public HServerAddress() {
+    this.address = null;
+    this.stringValue = null;
+  }
+
+  /**
+   * Construct an instance from an {@link InetSocketAddress}.
+   * @param address InetSocketAddress of server
+   */
+  public HServerAddress(InetSocketAddress address) {
+    this.address = address;
+    this.stringValue = address.getAddress().getHostName() + ":" +
+      address.getPort();
+    checkBindAddressCanBeResolved();
+  }
+
+  /**
+   * @param hostAndPort Hostname and port formatted as <code>&lt;hostname> ':' &lt;port></code>
+   */
+  public HServerAddress(String hostAndPort) {
+    int colonIndex = hostAndPort.lastIndexOf(':');
+    if (colonIndex < 0) {
+      throw new IllegalArgumentException("Not a host:port pair: " + hostAndPort);
+    }
+    String host = hostAndPort.substring(0, colonIndex);
+    int port = Integer.parseInt(hostAndPort.substring(colonIndex + 1));
+    this.address = new InetSocketAddress(host, port);
+    this.stringValue = address.getHostName() + ":" + port;
+    checkBindAddressCanBeResolved();
+  }
+
+  /**
+   * @param bindAddress Hostname
+   * @param port Port number
+   */
+  public HServerAddress(String bindAddress, int port) {
+    this.address = new InetSocketAddress(bindAddress, port);
+    this.stringValue = address.getHostName() + ":" + port;
+    checkBindAddressCanBeResolved();
+  }
+
+  /**
+   * Copy-constructor.
+   * @param other HServerAddress to copy from
+   */
+  public HServerAddress(HServerAddress other) {
+    String bindAddress = other.getBindAddress();
+    int port = other.getPort();
+    this.address = new InetSocketAddress(bindAddress, port);
+    stringValue = other.stringValue;
+    checkBindAddressCanBeResolved();
+  }
+
+  /** @return Bind address */
+  public String getBindAddress() {
+    final InetAddress addr = address.getAddress();
+    if (addr != null) {
+      return addr.getHostAddress();
+    } else {
+      LogFactory.getLog(HServerAddress.class).error("Could not resolve the"
+          + " DNS name of " + stringValue);
+      return null;
+    }
+  }
+
+  private void checkBindAddressCanBeResolved() {
+    if (getBindAddress() == null) {
+      throw new IllegalArgumentException("Could not resolve the"
+          + " DNS name of " + stringValue);
+    }
+  }
+
+  /** @return Port number */
+  public int getPort() {
+    return address.getPort();
+  }
+
+  /** @return Hostname */
+  public String getHostname() {
+    return address.getHostName();
+  }
+
+  /** @return The InetSocketAddress */
+  public InetSocketAddress getInetSocketAddress() {
+    return address;
+  }
+
+  /**
+   * @return String formatted as <code>&lt;bind address> ':' &lt;port></code>
+   */
+  @Override
+  public String toString() {
+    return stringValue == null ? "" : stringValue;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (o == null) {
+      return false;
+    }
+    if (getClass() != o.getClass()) {
+      return false;
+    }
+    return compareTo((HServerAddress) o) == 0;
+  }
+
+  @Override
+  public int hashCode() {
+    int result = address.hashCode();
+    result ^= stringValue.hashCode();
+    return result;
+  }
+
+  //
+  // Writable
+  //
+
+  public void readFields(DataInput in) throws IOException {
+    String hostname = in.readUTF();
+    int port = in.readInt();
+
+    if (hostname == null || hostname.length() == 0) {
+      address = null;
+      stringValue = null;
+    } else {
+      address = new InetSocketAddress(hostname, port);
+      stringValue = hostname + ":" + port;
+      checkBindAddressCanBeResolved();
+    }
+  }
+
+  public void write(DataOutput out) throws IOException {
+    if (address == null) {
+      out.writeUTF("");
+      out.writeInt(0);
+    } else {
+      out.writeUTF(address.getAddress().getHostName());
+      out.writeInt(address.getPort());
+    }
+  }
+
+  //
+  // Comparable
+  //
+
+  public int compareTo(HServerAddress o) {
+    // The String forms may not compare equal even though both addresses are
+    // for the same server, the only difference being that one has the
+    // hostname resolved whereas the other only has the IP.
+    if (address.equals(o.address)) return 0;
+    return toString().compareTo(o.toString());
+  }
+}
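A sketch of the two constructor forms (it assumes "localhost" resolves on the machine running it): equals() delegates to compareTo(), which checks the resolved InetSocketAddress before falling back to the cached "host:port" string.

import org.apache.hadoop.hbase.HServerAddress;

public class ServerAddressSketch {
  public static void main(String[] args) {
    HServerAddress a = new HServerAddress("localhost:60020");
    HServerAddress b = new HServerAddress("127.0.0.1", 60020);
    // Both forms resolve to the same socket address, so they compare equal.
    System.out.println(a.equals(b));       // true
    System.out.println(a.getPort());       // 60020
    System.out.println(a.getHostname());   // localhost
    // Note: hashCode() also mixes in the cached string form, so it is not
    // guaranteed to agree with equals() across differently spelled addresses.
  }
}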
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HServerInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/HServerInfo.java
new file mode 100644
index 0000000..c742951
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HServerInfo.java
@@ -0,0 +1,282 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Comparator;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+
+/**
+ * HServerInfo is meta info about an {@link HRegionServer}.  It is the token
+ * by which a master distinguishes a particular regionserver from the rest.
+ * It holds hostname, ports, regionserver startcode, and load.  Each server has
+ * a <code>servername</code>, made up of a concatenation of hostname, port,
+ * and regionserver startcode.  This servername is used in various places to
+ * identify this regionserver; it is even used as part of a pathname in the
+ * filesystem.  As part of initialization, the master passes the regionserver
+ * the address that it knows this regionserver by.  In subsequent
+ * communications, the regionserver passes an HServerInfo with the
+ * master-supplied address.
+ */
+public class HServerInfo implements WritableComparable<HServerInfo> {
+  /*
+   * This character is used as the separator between the server hostname, port,
+   * and startcode. The servername is formatted as
+   * <code>&lt;hostname> '{@link #SERVERNAME_SEPARATOR}' &lt;port> '{@link #SERVERNAME_SEPARATOR}' &lt;startcode></code>.
+   */
+  private static final String SERVERNAME_SEPARATOR = ",";
+
+  private HServerAddress serverAddress;
+  private long startCode;
+  private HServerLoad load;
+  private int infoPort;
+  // Servername is made of hostname, port and startcode.
+  private String serverName = null;
+  // Hostname of the regionserver.
+  private String hostname;
+  private String cachedHostnamePort = null;
+
+  public HServerInfo() {
+    this(new HServerAddress(), 0, HConstants.DEFAULT_REGIONSERVER_INFOPORT,
+      "default name");
+  }
+
+  /**
+   * Constructor that creates a HServerInfo with a generated startcode and an
+   * empty load.
+   * @param serverAddress An {@link InetSocketAddress} encased in a {@link Writable}
+   * @param infoPort Port the webui runs on.
+   * @param hostname Server hostname.
+   */
+  public HServerInfo(HServerAddress serverAddress, final int infoPort,
+      final String hostname) {
+    this(serverAddress, System.currentTimeMillis(), infoPort, hostname);
+  }
+
+  public HServerInfo(HServerAddress serverAddress, long startCode,
+      final int infoPort, String hostname) {
+    this.serverAddress = serverAddress;
+    this.startCode = startCode;
+    this.load = new HServerLoad();
+    this.infoPort = infoPort;
+    this.hostname = hostname;
+  }
+
+  /**
+   * Copy-constructor
+   * @param other
+   */
+  public HServerInfo(HServerInfo other) {
+    this.serverAddress = new HServerAddress(other.getServerAddress());
+    this.startCode = other.getStartCode();
+    this.load = other.getLoad();
+    this.infoPort = other.getInfoPort();
+    this.hostname = other.hostname;
+  }
+
+  public HServerLoad getLoad() {
+    return load;
+  }
+
+  public void setLoad(HServerLoad load) {
+    this.load = load;
+  }
+
+  public synchronized HServerAddress getServerAddress() {
+    return new HServerAddress(serverAddress);
+  }
+
+  public synchronized void setServerAddress(HServerAddress serverAddress) {
+    this.serverAddress = serverAddress;
+    this.hostname = serverAddress.getHostname();
+    this.serverName = null;
+  }
+
+  public synchronized long getStartCode() {
+    return startCode;
+  }
+
+  public int getInfoPort() {
+    return this.infoPort;
+  }
+
+  public String getHostname() {
+    return this.hostname;
+  }
+
+  /**
+   * @return The hostname and port concatenated with a ':' as separator.
+   */
+  public synchronized String getHostnamePort() {
+    if (this.cachedHostnamePort == null) {
+      this.cachedHostnamePort = getHostnamePort(this.hostname, this.serverAddress.getPort());
+    }
+    return this.cachedHostnamePort;
+  }
+
+  /**
+   * @param hostname
+   * @param port
+   * @return The hostname and port concatenated with a ':' as separator.
+   */
+  public static String getHostnamePort(final String hostname, final int port) {
+    return hostname + ":" + port;
+  }
+
+  /**
+   * Gets the unique server instance name.  Includes the hostname, port, and
+   * start code.
+   * @return Server name made of the concatenation of hostname, port and
+   * startcode formatted as <code>&lt;hostname> ',' &lt;port> ',' &lt;startcode></code>
+   */
+  public synchronized String getServerName() {
+    if (this.serverName == null) {
+      this.serverName = getServerName(this.hostname,
+        this.serverAddress.getPort(), this.startCode);
+    }
+    return this.serverName;
+  }
+
+  public static synchronized String getServerName(final String hostAndPort,
+      final long startcode) {
+    int index = hostAndPort.indexOf(":");
+    if (index <= 0) throw new IllegalArgumentException("Expected <hostname> ':' <port>");
+    return getServerName(hostAndPort.substring(0, index),
+      Integer.parseInt(hostAndPort.substring(index + 1)), startcode);
+  }
+
+  /**
+   * @param address Server address
+   * @param startCode Server startcode
+   * @return Server name made of the concatenation of hostname, port and
+   * startcode formatted as <code>&lt;hostname> ',' &lt;port> ',' &lt;startcode></code>
+   */
+  public static String getServerName(HServerAddress address, long startCode) {
+    return getServerName(address.getHostname(), address.getPort(), startCode);
+  }
+
+  /*
+   * @param hostName
+   * @param port
+   * @param startCode
+   * @return Server name made of the concatenation of hostname, port and
+   * startcode formatted as <code>&lt;hostname> ',' &lt;port> ',' &lt;startcode></code>
+   */
+  public static String getServerName(String hostName, int port, long startCode) {
+    StringBuilder name = new StringBuilder(hostName);
+    name.append(SERVERNAME_SEPARATOR);
+    name.append(port);
+    name.append(SERVERNAME_SEPARATOR);
+    name.append(startCode);
+    return name.toString();
+  }
+
+  /**
+   * @return ServerName and load concatenated.
+   * @see #getServerName()
+   * @see #getLoad()
+   */
+  @Override
+  public String toString() {
+    return "serverName=" + getServerName() +
+      ", load=(" + this.load.toString() + ")";
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj) {
+      return true;
+    }
+    if (obj == null) {
+      return false;
+    }
+    if (getClass() != obj.getClass()) {
+      return false;
+    }
+    return compareTo((HServerInfo)obj) == 0;
+  }
+
+  @Override
+  public int hashCode() {
+    return this.getServerName().hashCode();
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.serverAddress.readFields(in);
+    this.startCode = in.readLong();
+    this.load.readFields(in);
+    this.infoPort = in.readInt();
+    this.hostname = in.readUTF();
+  }
+
+  public void write(DataOutput out) throws IOException {
+    this.serverAddress.write(out);
+    out.writeLong(this.startCode);
+    this.load.write(out);
+    out.writeInt(this.infoPort);
+    out.writeUTF(hostname);
+  }
+
+  public int compareTo(HServerInfo o) {
+    return this.getServerName().compareTo(o.getServerName());
+  }
+
+  /**
+   * Orders HServerInfos by load then name.  Natural/ascending order.
+   */
+  public static class LoadComparator implements Comparator<HServerInfo> {
+    @Override
+    public int compare(HServerInfo left, HServerInfo right) {
+      int loadCompare = left.getLoad().compareTo(right.getLoad());
+      return loadCompare != 0 ? loadCompare : left.compareTo(right);
+    }
+  }
+
+  /**
+   * Utility method that checks whether a servername or a host-and-port
+   * combination is present in the passed Set.
+   * @param servers Set of server names
+   * @param serverName Name to look for
+   * @param hostAndPortOnly True if <code>serverName</code> is formatted as
+   * <code>hostname ':' port</code>; false if it also carries the startcode,
+   * i.e. <code>hostname ',' port ',' startcode</code>.
+   * @return True if <code>serverName</code> found in <code>servers</code>
+   */
+  public static boolean isServer(final Set<String> servers,
+      final String serverName, final boolean hostAndPortOnly) {
+    if (!hostAndPortOnly) return servers.contains(serverName);
+    String serverNameColonReplaced =
+      serverName.replaceFirst(":", SERVERNAME_SEPARATOR);
+    for (String hostPortStartCode: servers) {
+      int index = hostPortStartCode.lastIndexOf(SERVERNAME_SEPARATOR);
+      String hostPortStrippedOfStartCode = hostPortStartCode.substring(0, index);
+      if (hostPortStrippedOfStartCode.equals(serverNameColonReplaced)) return true;
+    }
+    return false;
+  }
+}
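A sketch of the servername helpers: getServerName() builds the "hostname,port,startcode" string used throughout the master and on the filesystem, and isServer() can match on the "hostname:port" prefix alone.

import java.util.Collections;
import org.apache.hadoop.hbase.HServerInfo;

public class ServerNameSketch {
  public static void main(String[] args) {
    String name = HServerInfo.getServerName("rs1.example.com", 60020, 1293840000000L);
    System.out.println(name);   // rs1.example.com,60020,1293840000000
    // Matching on "hostname:port" only: the startcode suffix is ignored.
    boolean found = HServerInfo.isServer(
        Collections.singleton(name), "rs1.example.com:60020", true);
    System.out.println(found);  // true
  }
}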
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HServerLoad.java b/0.90/src/main/java/org/apache/hadoop/hbase/HServerLoad.java
new file mode 100644
index 0000000..efa7e0e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HServerLoad.java
@@ -0,0 +1,493 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Strings;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * This class encapsulates metrics for determining the load on a HRegionServer
+ */
+public class HServerLoad implements WritableComparable<HServerLoad> {
+  /** number of regions */
+  // could just use regionLoad.size() but master.RegionManager likes to play
+  // around with this value while passing HServerLoad objects around during
+  // balancer calculations
+  private int numberOfRegions;
+  /** number of requests since last report */
+  private int numberOfRequests;
+  /** the amount of used heap, in MB */
+  private int usedHeapMB;
+  /** the maximum allowable size of the heap, in MB */
+  private int maxHeapMB;
+  /** per-region load metrics */
+  private ArrayList<RegionLoad> regionLoad = new ArrayList<RegionLoad>();
+
+  /**
+   * Encapsulates per-region loading metrics.
+   */
+  public static class RegionLoad implements Writable {
+    /** the region name */
+    private byte[] name;
+    /** the number of stores for the region */
+    private int stores;
+    /** the number of storefiles for the region */
+    private int storefiles;
+    /** the current total size of the store files for the region, in MB */
+    private int storefileSizeMB;
+    /** the current size of the memstore for the region, in MB */
+    private int memstoreSizeMB;
+    /** the current total size of storefile indexes for the region, in MB */
+    private int storefileIndexSizeMB;
+
+    /**
+     * Constructor, for Writable
+     */
+    public RegionLoad() {
+      super();
+    }
+
+    /**
+     * @param name
+     * @param stores
+     * @param storefiles
+     * @param storefileSizeMB
+     * @param memstoreSizeMB
+     * @param storefileIndexSizeMB
+     */
+    public RegionLoad(final byte[] name, final int stores,
+        final int storefiles, final int storefileSizeMB,
+        final int memstoreSizeMB, final int storefileIndexSizeMB) {
+      this.name = name;
+      this.stores = stores;
+      this.storefiles = storefiles;
+      this.storefileSizeMB = storefileSizeMB;
+      this.memstoreSizeMB = memstoreSizeMB;
+      this.storefileIndexSizeMB = storefileIndexSizeMB;
+    }
+
+    // Getters
+
+    /**
+     * @return the region name
+     */
+    public byte[] getName() {
+      return name;
+    }
+
+    /**
+     * @return the region name as a string
+     */
+    public String getNameAsString() {
+      return Bytes.toString(name);
+    }
+
+    /**
+     * @return the number of stores
+     */
+    public int getStores() {
+      return stores;
+    }
+
+    /**
+     * @return the number of storefiles
+     */
+    public int getStorefiles() {
+      return storefiles;
+    }
+
+    /**
+     * @return the total size of the storefiles, in MB
+     */
+    public int getStorefileSizeMB() {
+      return storefileSizeMB;
+    }
+
+    /**
+     * @return the memstore size, in MB
+     */
+    public int getMemStoreSizeMB() {
+      return memstoreSizeMB;
+    }
+
+    /**
+     * @return the approximate size of storefile indexes on the heap, in MB
+     */
+    public int getStorefileIndexSizeMB() {
+      return storefileIndexSizeMB;
+    }
+
+    // Setters
+
+    /**
+     * @param name the region name
+     */
+    public void setName(byte[] name) {
+      this.name = name;
+    }
+
+    /**
+     * @param stores the number of stores
+     */
+    public void setStores(int stores) {
+      this.stores = stores;
+    }
+
+    /**
+     * @param storefiles the number of storefiles
+     */
+    public void setStorefiles(int storefiles) {
+      this.storefiles = storefiles;
+    }
+
+    /**
+     * @param memstoreSizeMB the memstore size, in MB
+     */
+    public void setMemStoreSizeMB(int memstoreSizeMB) {
+      this.memstoreSizeMB = memstoreSizeMB;
+    }
+
+    /**
+     * @param storefileIndexSizeMB the approximate size of storefile indexes
+     *  on the heap, in MB
+     */
+    public void setStorefileIndexSizeMB(int storefileIndexSizeMB) {
+      this.storefileIndexSizeMB = storefileIndexSizeMB;
+    }
+
+    // Writable
+    public void readFields(DataInput in) throws IOException {
+      int namelen = in.readInt();
+      this.name = new byte[namelen];
+      in.readFully(this.name);
+      this.stores = in.readInt();
+      this.storefiles = in.readInt();
+      this.storefileSizeMB = in.readInt();
+      this.memstoreSizeMB = in.readInt();
+      this.storefileIndexSizeMB = in.readInt();
+    }
+
+    public void write(DataOutput out) throws IOException {
+      out.writeInt(name.length);
+      out.write(name);
+      out.writeInt(stores);
+      out.writeInt(storefiles);
+      out.writeInt(storefileSizeMB);
+      out.writeInt(memstoreSizeMB);
+      out.writeInt(storefileIndexSizeMB);
+    }
+
+    /**
+     * @see java.lang.Object#toString()
+     */
+    @Override
+    public String toString() {
+      StringBuilder sb = Strings.appendKeyValue(new StringBuilder(), "stores",
+        Integer.valueOf(this.stores));
+      sb = Strings.appendKeyValue(sb, "storefiles",
+        Integer.valueOf(this.storefiles));
+      sb = Strings.appendKeyValue(sb, "storefileSizeMB",
+          Integer.valueOf(this.storefileSizeMB));
+      sb = Strings.appendKeyValue(sb, "memstoreSizeMB",
+        Integer.valueOf(this.memstoreSizeMB));
+      sb = Strings.appendKeyValue(sb, "storefileIndexSizeMB",
+        Integer.valueOf(this.storefileIndexSizeMB));
+      return sb.toString();
+    }
+  }
+
+  /*
+   * TODO: Other metrics that might be considered when the master is actually
+   * doing load balancing instead of merely trying to decide where to assign
+   * a region:
+   * <ul>
+   *   <li># of CPUs, heap size (to determine the "class" of machine). For
+   *       now, we consider them to be homogeneous.</li>
+   *   <li>#requests per region (Map<{String|HRegionInfo}, Integer>)</li>
+   *   <li>#compactions and/or #splits (churn)</li>
+   *   <li>server death rate (maybe there is something wrong with this server)</li>
+   * </ul>
+   */
+
+  /** default constructor (used by Writable) */
+  public HServerLoad() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param numberOfRequests
+   * @param usedHeapMB
+   * @param maxHeapMB
+   */
+  public HServerLoad(final int numberOfRequests, final int usedHeapMB,
+      final int maxHeapMB) {
+    this.numberOfRequests = numberOfRequests;
+    this.usedHeapMB = usedHeapMB;
+    this.maxHeapMB = maxHeapMB;
+  }
+
+  /**
+   * Constructor
+   * @param hsl the template HServerLoad
+   */
+  public HServerLoad(final HServerLoad hsl) {
+    this(hsl.numberOfRequests, hsl.usedHeapMB, hsl.maxHeapMB);
+    this.regionLoad.addAll(hsl.regionLoad);
+  }
+
+  /**
+   * Originally, this method factored in the effect of requests going to the
+   * server as well. However, this does not interact very well with the current
+   * region rebalancing code, which only factors number of regions. For the
+   * interim, until we can figure out how to make rebalancing use all the info
+   * available, we're just going to make load purely the number of regions.
+   *
+   * @return load factor for this server
+   */
+  public int getLoad() {
+    // int load = numberOfRequests == 0 ? 1 : numberOfRequests;
+    // load *= numberOfRegions == 0 ? 1 : numberOfRegions;
+    // return load;
+    return numberOfRegions;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return toString(1);
+  }
+
+  /**
+   * Returns toString() with the number of requests divided by the message
+   * interval in seconds
+   * @param msgInterval
+   * @return The load as a String
+   */
+  public String toString(int msgInterval) {
+    StringBuilder sb = new StringBuilder();
+    sb = Strings.appendKeyValue(sb, "requests",
+      Integer.valueOf(numberOfRequests/msgInterval));
+    sb = Strings.appendKeyValue(sb, "regions",
+      Integer.valueOf(numberOfRegions));
+    sb = Strings.appendKeyValue(sb, "usedHeap",
+      Integer.valueOf(this.usedHeapMB));
+    sb = Strings.appendKeyValue(sb, "maxHeap", Integer.valueOf(maxHeapMB));
+    return sb.toString();
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+    if (o == null) {
+      return false;
+    }
+    if (getClass() != o.getClass()) {
+      return false;
+    }
+    return compareTo((HServerLoad)o) == 0;
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  @Override
+  public int hashCode() {
+    int result = Integer.valueOf(numberOfRequests).hashCode();
+    result ^= Integer.valueOf(numberOfRegions).hashCode();
+    return result;
+  }
+
+  // Getters
+
+  /**
+   * @return the numberOfRegions
+   */
+  public int getNumberOfRegions() {
+    return numberOfRegions;
+  }
+
+  /**
+   * @return the numberOfRequests
+   */
+  public int getNumberOfRequests() {
+    return numberOfRequests;
+  }
+
+  /**
+   * @return the amount of heap in use, in MB
+   */
+  public int getUsedHeapMB() {
+    return usedHeapMB;
+  }
+
+  /**
+   * @return the maximum allowable heap size, in MB
+   */
+  public int getMaxHeapMB() {
+    return maxHeapMB;
+  }
+
+  /**
+   * @return region load metrics
+   */
+  public Collection<RegionLoad> getRegionsLoad() {
+    return Collections.unmodifiableCollection(regionLoad);
+  }
+
+  /**
+   * @return Count of storefiles on this regionserver
+   */
+  public int getStorefiles() {
+    int count = 0;
+    for (RegionLoad info: regionLoad)
+      count += info.getStorefiles();
+    return count;
+  }
+
+  /**
+   * @return Total size of store files in MB
+   */
+  public int getStorefileSizeInMB() {
+    int count = 0;
+    for (RegionLoad info: regionLoad)
+      count += info.getStorefileSizeMB();
+    return count;
+  }
+
+  /**
+   * @return Size of memstores in MB
+   */
+  public int getMemStoreSizeInMB() {
+    int count = 0;
+    for (RegionLoad info: regionLoad)
+      count += info.getMemStoreSizeMB();
+    return count;
+  }
+
+  /**
+   * @return Size of store file indexes in MB
+   */
+  public int getStorefileIndexSizeInMB() {
+    int count = 0;
+    for (RegionLoad info: regionLoad)
+      count += info.getStorefileIndexSizeMB();
+    return count;
+  }
+
+  // Setters
+
+  /**
+   * @param numberOfRegions the number of regions
+   */
+  public void setNumberOfRegions(int numberOfRegions) {
+    this.numberOfRegions = numberOfRegions;
+  }
+
+  /**
+   * @param numberOfRequests the number of requests to set
+   */
+  public void setNumberOfRequests(int numberOfRequests) {
+    this.numberOfRequests = numberOfRequests;
+  }
+
+  /**
+   * @param usedHeapMB the amount of heap in use, in MB
+   */
+  public void setUsedHeapMB(int usedHeapMB) {
+    this.usedHeapMB = usedHeapMB;
+  }
+
+  /**
+   * @param maxHeapMB the maximum allowable heap size, in MB
+   */
+  public void setMaxHeapMB(int maxHeapMB) {
+    this.maxHeapMB = maxHeapMB;
+  }
+
+  /**
+   * @param load Instance of HServerLoad
+   */
+  public void addRegionInfo(final HServerLoad.RegionLoad load) {
+    this.numberOfRegions++;
+    this.regionLoad.add(load);
+  }
+
+  /**
+   * @param name
+   * @param stores
+   * @param storefiles
+   * @param memstoreSizeMB
+   * @param storefileIndexSizeMB
+   * @deprecated Use {@link #addRegionInfo(RegionLoad)}
+   */
+  @Deprecated
+  public void addRegionInfo(final byte[] name, final int stores,
+      final int storefiles, final int storefileSizeMB,
+      final int memstoreSizeMB, final int storefileIndexSizeMB) {
+    this.regionLoad.add(new HServerLoad.RegionLoad(name, stores, storefiles,
+      storefileSizeMB, memstoreSizeMB, storefileIndexSizeMB));
+  }
+
+  // Writable
+
+  public void readFields(DataInput in) throws IOException {
+    numberOfRequests = in.readInt();
+    usedHeapMB = in.readInt();
+    maxHeapMB = in.readInt();
+    numberOfRegions = in.readInt();
+    for (int i = 0; i < numberOfRegions; i++) {
+      RegionLoad rl = new RegionLoad();
+      rl.readFields(in);
+      regionLoad.add(rl);
+    }
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(numberOfRequests);
+    out.writeInt(usedHeapMB);
+    out.writeInt(maxHeapMB);
+    out.writeInt(numberOfRegions);
+    for (int i = 0; i < numberOfRegions; i++)
+      regionLoad.get(i).write(out);
+  }
+
+  // Comparable
+
+  public int compareTo(HServerLoad o) {
+    return this.getLoad() - o.getLoad();
+  }
+}
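A sketch of how the load metrics roll up: addRegionInfo(RegionLoad) bumps the region count, the per-region figures aggregate through the getters, and compareTo() orders servers purely by region count, per getLoad().

import org.apache.hadoop.hbase.HServerLoad;
import org.apache.hadoop.hbase.util.Bytes;

public class ServerLoadSketch {
  public static void main(String[] args) {
    // Arguments: requests since last report, used heap MB, max heap MB.
    HServerLoad light = new HServerLoad(10, 100, 1000);
    HServerLoad heavy = new HServerLoad(10, 100, 1000);
    heavy.addRegionInfo(new HServerLoad.RegionLoad(
        Bytes.toBytes("t1,,1"), 1 /* stores */, 2 /* storefiles */,
        64 /* storefileSizeMB */, 16 /* memstoreSizeMB */, 1 /* indexSizeMB */));
    System.out.println(heavy.getNumberOfRegions());    // 1
    System.out.println(heavy.getStorefileSizeInMB());  // 64
    System.out.println(light.compareTo(heavy) < 0);    // true: fewer regions
  }
}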
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
new file mode 100644
index 0000000..2d27a98
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
@@ -0,0 +1,691 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * HTableDescriptor contains the name of an HTable, and its
+ * column families.
+ */
+public class HTableDescriptor implements WritableComparable<HTableDescriptor> {
+
+  // Changes prior to version 3 were not recorded here.
+  // Version 3 adds metadata as a map where keys and values are byte[].
+  // Version 4 adds indexes
+  // Version 5 removed transactional pollution -- e.g. indexes
+  public static final byte TABLE_DESCRIPTOR_VERSION = 5;
+
+  private byte [] name = HConstants.EMPTY_BYTE_ARRAY;
+  private String nameAsString = "";
+
+  // Table metadata
+  protected Map<ImmutableBytesWritable, ImmutableBytesWritable> values =
+    new HashMap<ImmutableBytesWritable, ImmutableBytesWritable>();
+
+  public static final String FAMILIES = "FAMILIES";
+  public static final ImmutableBytesWritable FAMILIES_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(FAMILIES));
+  public static final String MAX_FILESIZE = "MAX_FILESIZE";
+  public static final ImmutableBytesWritable MAX_FILESIZE_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(MAX_FILESIZE));
+  public static final String READONLY = "READONLY";
+  public static final ImmutableBytesWritable READONLY_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(READONLY));
+  public static final String MEMSTORE_FLUSHSIZE = "MEMSTORE_FLUSHSIZE";
+  public static final ImmutableBytesWritable MEMSTORE_FLUSHSIZE_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(MEMSTORE_FLUSHSIZE));
+  public static final String IS_ROOT = "IS_ROOT";
+  public static final ImmutableBytesWritable IS_ROOT_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(IS_ROOT));
+  public static final String IS_META = "IS_META";
+
+  public static final ImmutableBytesWritable IS_META_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(IS_META));
+
+  public static final String DEFERRED_LOG_FLUSH = "DEFERRED_LOG_FLUSH";
+  public static final ImmutableBytesWritable DEFERRED_LOG_FLUSH_KEY =
+    new ImmutableBytesWritable(Bytes.toBytes(DEFERRED_LOG_FLUSH));
+
+
+  // The below are ugly but better than creating them each time till we
+  // replace booleans being saved as Strings with plain booleans.  Need a
+  // migration script to do this.  TODO.
+  private static final ImmutableBytesWritable FALSE =
+    new ImmutableBytesWritable(Bytes.toBytes(Boolean.FALSE.toString()));
+  private static final ImmutableBytesWritable TRUE =
+    new ImmutableBytesWritable(Bytes.toBytes(Boolean.TRUE.toString()));
+
+  public static final boolean DEFAULT_READONLY = false;
+
+  public static final long DEFAULT_MEMSTORE_FLUSH_SIZE = 1024*1024*64L;
+
+  public static final long DEFAULT_MAX_FILESIZE = 1024*1024*256L;
+
+  public static final boolean DEFAULT_DEFERRED_LOG_FLUSH = false;
+
+  private volatile Boolean meta = null;
+  private volatile Boolean root = null;
+  private Boolean isDeferredLog = null;
+
+  // Key is hash of the family name.
+  public final Map<byte [], HColumnDescriptor> families =
+    new TreeMap<byte [], HColumnDescriptor>(Bytes.BYTES_RAWCOMPARATOR);
+
+  /**
+   * Protected constructor used internally to create table descriptors for
+   * catalog tables: e.g. .META. and -ROOT-.
+   */
+  protected HTableDescriptor(final byte [] name, HColumnDescriptor[] families) {
+    this.name = name.clone();
+    this.nameAsString = Bytes.toString(this.name);
+    setMetaFlags(name);
+    for(HColumnDescriptor descriptor : families) {
+      this.families.put(descriptor.getName(), descriptor);
+    }
+  }
+
+  /**
+   * Protected constructor used internally to create table descriptors for
+   * catalog tables: e.g. .META. and -ROOT-.
+   */
+  protected HTableDescriptor(final byte [] name, HColumnDescriptor[] families,
+      Map<ImmutableBytesWritable,ImmutableBytesWritable> values) {
+    this.name = name.clone();
+    this.nameAsString = Bytes.toString(this.name);
+    setMetaFlags(name);
+    for(HColumnDescriptor descriptor : families) {
+      this.families.put(descriptor.getName(), descriptor);
+    }
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> entry:
+        values.entrySet()) {
+      this.values.put(entry.getKey(), entry.getValue());
+    }
+  }
+
+
+  /**
+   * Constructs an empty object.
+   * For deserializing an HTableDescriptor instance only.
+   * @see #HTableDescriptor(byte[])
+   */
+  public HTableDescriptor() {
+    super();
+  }
+
+  /**
+   * Constructor.
+   * @param name Table name.
+   * @throws IllegalArgumentException if passed a table name
+   * that is made of other than 'word' characters, underscore or period: i.e.
+   * <code>[a-zA-Z_0-9.]</code>.
+   * @see <a href="HADOOP-1581">HADOOP-1581 HBASE: Un-openable tablename bug</a>
+   */
+  public HTableDescriptor(final String name) {
+    this(Bytes.toBytes(name));
+  }
+
+  /**
+   * Constructor.
+   * @param name Table name.
+   * @throws IllegalArgumentException if passed a table name
+   * that is made of other than 'word' characters, underscore or period: i.e.
+   * <code>[a-zA-Z_0-9-.]</code>.
+   * @see <a href="HADOOP-1581">HADOOP-1581 HBASE: Un-openable tablename bug</a>
+   */
+  public HTableDescriptor(final byte [] name) {
+    super();
+    setMetaFlags(this.name);
+    this.name = this.isMetaRegion()? name: isLegalTableName(name);
+    this.nameAsString = Bytes.toString(this.name);
+  }
+
+  /**
+   * Constructor.
+   * <p>
+   * Makes a deep copy of the supplied descriptor.
+   * Can make a modifiable descriptor from an UnmodifyableHTableDescriptor.
+   * @param desc The descriptor.
+   */
+  public HTableDescriptor(final HTableDescriptor desc) {
+    super();
+    this.name = desc.name.clone();
+    this.nameAsString = Bytes.toString(this.name);
+    setMetaFlags(this.name);
+    for (HColumnDescriptor c: desc.families.values()) {
+      this.families.put(c.getName(), new HColumnDescriptor(c));
+    }
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        desc.values.entrySet()) {
+      this.values.put(e.getKey(), e.getValue());
+    }
+  }
+
+  /*
+   * Set meta flags on this table.
+   * Called by constructors.
+   * @param name
+   */
+  private void setMetaFlags(final byte [] name) {
+    setRootRegion(Bytes.equals(name, HConstants.ROOT_TABLE_NAME));
+    setMetaRegion(isRootRegion() ||
+      Bytes.equals(name, HConstants.META_TABLE_NAME));
+  }
+
+  /** @return true if this is the root region */
+  public boolean isRootRegion() {
+    if (this.root == null) {
+      this.root = isSomething(IS_ROOT_KEY, false)? Boolean.TRUE: Boolean.FALSE;
+    }
+    return this.root.booleanValue();
+  }
+
+  /** @param isRoot true if this is the root region */
+  protected void setRootRegion(boolean isRoot) {
+    // TODO: Make the value a boolean rather than String of boolean.
+    values.put(IS_ROOT_KEY, isRoot? TRUE: FALSE);
+  }
+
+  /** @return true if this is a meta region (part of the root or meta tables) */
+  public boolean isMetaRegion() {
+    if (this.meta == null) {
+      this.meta = calculateIsMetaRegion();
+    }
+    return this.meta.booleanValue();
+  }
+
+  private synchronized Boolean calculateIsMetaRegion() {
+    byte [] value = getValue(IS_META_KEY);
+    return (value != null)? Boolean.valueOf(Bytes.toString(value)): Boolean.FALSE;
+  }
+
+  private boolean isSomething(final ImmutableBytesWritable key,
+      final boolean valueIfNull) {
+    byte [] value = getValue(key);
+    if (value != null) {
+      // TODO: Make value be a boolean rather than String of boolean.
+      return Boolean.valueOf(Bytes.toString(value)).booleanValue();
+    }
+    return valueIfNull;
+  }
+
+  /**
+   * @param isMeta true if this is a meta region (part of the root or meta
+   * tables)
+   */
+  protected void setMetaRegion(boolean isMeta) {
+    values.put(IS_META_KEY, isMeta? TRUE: FALSE);
+  }
+
+  /** @return true if table is the meta table */
+  public boolean isMetaTable() {
+    return isMetaRegion() && !isRootRegion();
+  }
+
+  /**
+   * @param n Table name.
+   * @return True if a catalog table, -ROOT- or .META.
+   */
+  public static boolean isMetaTable(final byte [] n) {
+    return Bytes.equals(n, HConstants.ROOT_TABLE_NAME) ||
+      Bytes.equals(n, HConstants.META_TABLE_NAME);
+  }
+
+  /**
+   * Check passed buffer is legal user-space table name.
+   * @param b Table name.
+   * @return Returns passed <code>b</code> param
+   * @throws NullPointerException If passed <code>b</code> is null
+   * @throws IllegalArgumentException if passed a table name
+   * that is made of other than 'word' characters or underscores: i.e.
+   * <code>[a-zA-Z_0-9]</code>.
+   */
+  public static byte [] isLegalTableName(final byte [] b) {
+    if (b == null || b.length <= 0) {
+      throw new IllegalArgumentException("Name is null or empty");
+    }
+    if (b[0] == '.' || b[0] == '-') {
+      throw new IllegalArgumentException("Illegal first character <" + b[0] +
+          "> at 0. User-space table names can only start with 'word " +
+          "characters': i.e. [a-zA-Z_0-9]: " + Bytes.toString(b));
+    }
+    for (int i = 0; i < b.length; i++) {
+      if (Character.isLetterOrDigit(b[i]) || b[i] == '_' || b[i] == '-' ||
+          b[i] == '.') {
+        continue;
+      }
+      throw new IllegalArgumentException("Illegal character <" + b[i] +
+        "> at " + i + ". User-space table names can only contain " +
+        "'word characters': i.e. [a-zA-Z_0-9-.]: " + Bytes.toString(b));
+    }
+    return b;
+  }
+
+  /**
+   * @param key The key.
+   * @return The value.
+   */
+  public byte[] getValue(byte[] key) {
+    return getValue(new ImmutableBytesWritable(key));
+  }
+
+  private byte[] getValue(final ImmutableBytesWritable key) {
+    ImmutableBytesWritable ibw = values.get(key);
+    if (ibw == null)
+      return null;
+    return ibw.get();
+  }
+
+  /**
+   * @param key The key.
+   * @return The value as a string.
+   */
+  public String getValue(String key) {
+    byte[] value = getValue(Bytes.toBytes(key));
+    if (value == null)
+      return null;
+    return Bytes.toString(value);
+  }
+
+  /**
+   * @return All values.
+   */
+  public Map<ImmutableBytesWritable,ImmutableBytesWritable> getValues() {
+     return Collections.unmodifiableMap(values);
+  }
+
+  /**
+   * @param key The key.
+   * @param value The value.
+   */
+  public void setValue(byte[] key, byte[] value) {
+    setValue(new ImmutableBytesWritable(key), value);
+  }
+
+  /*
+   * @param key The key.
+   * @param value The value.
+   */
+  private void setValue(final ImmutableBytesWritable key,
+      final byte[] value) {
+    values.put(key, new ImmutableBytesWritable(value));
+  }
+
+  /*
+   * @param key The key.
+   * @param value The value.
+   */
+  private void setValue(final ImmutableBytesWritable key,
+      final ImmutableBytesWritable value) {
+    values.put(key, value);
+  }
+
+  /**
+   * @param key The key.
+   * @param value The value.
+   */
+  public void setValue(String key, String value) {
+    setValue(Bytes.toBytes(key), Bytes.toBytes(value));
+  }
+
+  /**
+   * @param key Key whose key and value we're to remove from HTD parameters.
+   */
+  public void remove(final byte [] key) {
+    values.remove(new ImmutableBytesWritable(key));
+  }
+
+  /**
+   * @return true if all columns in the table should be read only
+   */
+  public boolean isReadOnly() {
+    return isSomething(READONLY_KEY, DEFAULT_READONLY);
+  }
+
+  /**
+   * @param readOnly True if all of the columns in the table should be read
+   * only.
+   */
+  public void setReadOnly(final boolean readOnly) {
+    setValue(READONLY_KEY, readOnly? TRUE: FALSE);
+  }
+
+  /**
+   * @return true if deferred log flush is enabled for this table, i.e. its WAL
+   * is hflushed by other means
+   */
+  public synchronized boolean isDeferredLogFlush() {
+    if(this.isDeferredLog == null) {
+      this.isDeferredLog =
+          isSomething(DEFERRED_LOG_FLUSH_KEY, DEFAULT_DEFERRED_LOG_FLUSH);
+    }
+    return this.isDeferredLog;
+  }
+
+  /**
+   * @param isDeferredLogFlush true to enable deferred log flush for this
+   * table, i.e. its WAL is hflushed by other means.
+   */
+  public void setDeferredLogFlush(final boolean isDeferredLogFlush) {
+    setValue(DEFERRED_LOG_FLUSH_KEY, isDeferredLogFlush? TRUE: FALSE);
+  }
+
+  /** @return name of table */
+  public byte [] getName() {
+    return name;
+  }
+
+  /** @return name of table */
+  public String getNameAsString() {
+    return this.nameAsString;
+  }
+
+  /** @return max hregion size for table */
+  public long getMaxFileSize() {
+    byte [] value = getValue(MAX_FILESIZE_KEY);
+    if (value != null)
+      return Long.valueOf(Bytes.toString(value)).longValue();
+    return HConstants.DEFAULT_MAX_FILE_SIZE;
+  }
+
+  /** @param name name of table */
+  public void setName(byte[] name) {
+    this.name = name;
+  }
+
+  /**
+   * @param maxFileSize The maximum file size that a store file can grow to
+   * before a split is triggered.
+   */
+  public void setMaxFileSize(long maxFileSize) {
+    setValue(MAX_FILESIZE_KEY, Bytes.toBytes(Long.toString(maxFileSize)));
+  }
+
+  /**
+   * @return memory cache flush size for each hregion
+   */
+  public long getMemStoreFlushSize() {
+    byte [] value = getValue(MEMSTORE_FLUSHSIZE_KEY);
+    if (value != null)
+      return Long.valueOf(Bytes.toString(value)).longValue();
+    return DEFAULT_MEMSTORE_FLUSH_SIZE;
+  }
+
+  /**
+   * @param memstoreFlushSize memory cache flush size for each hregion
+   */
+  public void setMemStoreFlushSize(long memstoreFlushSize) {
+    setValue(MEMSTORE_FLUSHSIZE_KEY,
+      Bytes.toBytes(Long.toString(memstoreFlushSize)));
+  }
+
+  /**
+   * Adds a column family.
+   * @param family HColumnDescriptor of the family to add.
+   */
+  public void addFamily(final HColumnDescriptor family) {
+    if (family.getName() == null || family.getName().length <= 0) {
+      throw new NullPointerException("Family name cannot be null or empty");
+    }
+    this.families.put(family.getName(), family);
+  }
+
+  /**
+   * Checks to see if this table contains the given column family
+   * @param c Family name or column name.
+   * @return true if the table contains the specified family name
+   */
+  public boolean hasFamily(final byte [] c) {
+    return families.containsKey(c);
+  }
+
+  /**
+   * @return Name of this table and then a map of all of the column family
+   * descriptors.
+   * @see #getNameAsString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder s = new StringBuilder();
+    s.append('{');
+    s.append(HConstants.NAME);
+    s.append(" => '");
+    s.append(Bytes.toString(name));
+    s.append("'");
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        values.entrySet()) {
+      String key = Bytes.toString(e.getKey().get());
+      String value = Bytes.toString(e.getValue().get());
+      if (key == null) {
+        continue;
+      }
+      String upperCase = key.toUpperCase();
+      if (upperCase.equals(IS_ROOT) || upperCase.equals(IS_META)) {
+        // Skip. Don't bother printing out read-only values if false.
+        if (value.toLowerCase().equals(Boolean.FALSE.toString())) {
+          continue;
+        }
+      }
+      s.append(", ");
+      s.append(Bytes.toString(e.getKey().get()));
+      s.append(" => '");
+      s.append(Bytes.toString(e.getValue().get()));
+      s.append("'");
+    }
+    s.append(", ");
+    s.append(FAMILIES);
+    s.append(" => ");
+    s.append(families.values());
+    s.append('}');
+    return s.toString();
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj) {
+      return true;
+    }
+    if (obj == null) {
+      return false;
+    }
+    if (!(obj instanceof HTableDescriptor)) {
+      return false;
+    }
+    return compareTo((HTableDescriptor)obj) == 0;
+  }
+
+  /**
+   * @see java.lang.Object#hashCode()
+   */
+  @Override
+  public int hashCode() {
+    int result = Bytes.hashCode(this.name);
+    result ^= Byte.valueOf(TABLE_DESCRIPTOR_VERSION).hashCode();
+    if (this.families != null && this.families.size() > 0) {
+      for (HColumnDescriptor e: this.families.values()) {
+        result ^= e.hashCode();
+      }
+    }
+    result ^= values.hashCode();
+    return result;
+  }
+
+  // Writable
+
+  public void readFields(DataInput in) throws IOException {
+    int version = in.readInt();
+    if (version < 3)
+      throw new IOException("versions < 3 are not supported (and never existed!?)");
+    // version 3+
+    name = Bytes.readByteArray(in);
+    nameAsString = Bytes.toString(this.name);
+    setRootRegion(in.readBoolean());
+    setMetaRegion(in.readBoolean());
+    values.clear();
+    int numVals = in.readInt();
+    for (int i = 0; i < numVals; i++) {
+      ImmutableBytesWritable key = new ImmutableBytesWritable();
+      ImmutableBytesWritable value = new ImmutableBytesWritable();
+      key.readFields(in);
+      value.readFields(in);
+      values.put(key, value);
+    }
+    families.clear();
+    int numFamilies = in.readInt();
+    for (int i = 0; i < numFamilies; i++) {
+      HColumnDescriptor c = new HColumnDescriptor();
+      c.readFields(in);
+      families.put(c.getName(), c);
+    }
+    if (version < 4) {
+      return;
+    }
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(TABLE_DESCRIPTOR_VERSION);
+    Bytes.writeByteArray(out, name);
+    out.writeBoolean(isRootRegion());
+    out.writeBoolean(isMetaRegion());
+    out.writeInt(values.size());
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        values.entrySet()) {
+      e.getKey().write(out);
+      e.getValue().write(out);
+    }
+    out.writeInt(families.size());
+    for(Iterator<HColumnDescriptor> it = families.values().iterator();
+        it.hasNext(); ) {
+      HColumnDescriptor family = it.next();
+      family.write(out);
+    }
+  }
+
+  // Comparable
+
+  public int compareTo(final HTableDescriptor other) {
+    int result = Bytes.compareTo(this.name, other.name);
+    if (result == 0) {
+      result = families.size() - other.families.size();
+    }
+    if (result == 0 && families.size() != other.families.size()) {
+      result = Integer.valueOf(families.size()).compareTo(
+          Integer.valueOf(other.families.size()));
+    }
+    if (result == 0) {
+      for (Iterator<HColumnDescriptor> it = families.values().iterator(),
+          it2 = other.families.values().iterator(); it.hasNext(); ) {
+        result = it.next().compareTo(it2.next());
+        if (result != 0) {
+          break;
+        }
+      }
+    }
+    if (result == 0) {
+      // punt on comparison for ordering, just calculate difference
+      result = this.values.hashCode() - other.values.hashCode();
+      if (result < 0)
+        result = -1;
+      else if (result > 0)
+        result = 1;
+    }
+    return result;
+  }
+
+  /**
+   * @return Immutable collection of families, sorted by family name.
+   */
+  public Collection<HColumnDescriptor> getFamilies() {
+    return Collections.unmodifiableCollection(this.families.values());
+  }
+
+  /**
+   * @return Immutable sorted set of the keys of the families.
+   */
+  public Set<byte[]> getFamiliesKeys() {
+    return Collections.unmodifiableSet(this.families.keySet());
+  }
+
+  public HColumnDescriptor[] getColumnFamilies() {
+    return getFamilies().toArray(new HColumnDescriptor[0]);
+  }
+
+  /**
+   * @param column
+   * @return Column descriptor for the passed family name or the family on
+   * passed in column.
+   */
+  public HColumnDescriptor getFamily(final byte [] column) {
+    return this.families.get(column);
+  }
+
+  /**
+   * @param column
+   * @return Column descriptor for the passed family name or the family on
+   * passed in column.
+   */
+  public HColumnDescriptor removeFamily(final byte [] column) {
+    return this.families.remove(column);
+  }
+
+  /**
+   * @param rootdir qualified path of HBase root directory
+   * @param tableName name of table
+   * @return path for table
+   */
+  public static Path getTableDir(Path rootdir, final byte [] tableName) {
+    return new Path(rootdir, Bytes.toString(tableName));
+  }
+
+  /** Table descriptor for <code>-ROOT-</code> catalog table */
+  public static final HTableDescriptor ROOT_TABLEDESC = new HTableDescriptor(
+      HConstants.ROOT_TABLE_NAME,
+      new HColumnDescriptor[] { new HColumnDescriptor(HConstants.CATALOG_FAMILY,
+          10,  // Ten is an arbitrary number.  Keep versions to help debugging.
+          Compression.Algorithm.NONE.getName(), true, true, 8 * 1024,
+          HConstants.FOREVER, StoreFile.BloomType.NONE.toString(),
+          HConstants.REPLICATION_SCOPE_LOCAL) });
+
+  /** Table descriptor for <code>.META.</code> catalog table */
+  public static final HTableDescriptor META_TABLEDESC = new HTableDescriptor(
+      HConstants.META_TABLE_NAME, new HColumnDescriptor[] {
+          new HColumnDescriptor(HConstants.CATALOG_FAMILY,
+            10, // Ten is an arbitrary number.  Keep versions to help debugging.
+            Compression.Algorithm.NONE.getName(), true, true, 8 * 1024,
+            HConstants.FOREVER, StoreFile.BloomType.NONE.toString(),
+            HConstants.REPLICATION_SCOPE_LOCAL)});
+}
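A sketch of typical descriptor construction (it assumes the single-argument HColumnDescriptor(String) constructor from HColumnDescriptor.java): table-level settings such as MAX_FILESIZE and MEMSTORE_FLUSHSIZE live in the values map as stringified numbers behind the typed setters.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.util.Bytes;

public class TableDescriptorSketch {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor("webtable");
    htd.addFamily(new HColumnDescriptor("contents"));
    htd.setMaxFileSize(512 * 1024 * 1024L);        // split threshold, bytes
    htd.setMemStoreFlushSize(128 * 1024 * 1024L);  // flush threshold, bytes
    System.out.println(htd.hasFamily(Bytes.toBytes("contents")));  // true
    System.out.println(htd.getMaxFileSize());                      // 536870912
    System.out.println(htd.isMetaTable());                         // false
  }
}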
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java b/0.90/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java
new file mode 100644
index 0000000..bb2b666
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown when a table schema modification is requested
+ * for an invalid family name.
+ */
+public class InvalidFamilyOperationException extends IOException {
+  private static final long serialVersionUID = 1L << 22 - 1L;
+  /** default constructor */
+  public InvalidFamilyOperationException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public InvalidFamilyOperationException(String s) {
+    super(s);
+  }
+
+  /**
+   * Constructor taking another exception.
+   * @param e Exception to grab data from.
+   */
+  public InvalidFamilyOperationException(Exception e) {
+    super(e);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/KeyValue.java b/0.90/src/main/java/org/apache/hadoop/hbase/KeyValue.java
new file mode 100644
index 0000000..02d4142
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/KeyValue.java
@@ -0,0 +1,1984 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Comparator;
+
+import com.google.common.primitives.Longs;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * An HBase Key/Value.
+ *
+ * <p>If being used client-side, the primary methods to access individual fields
+ * are {@link #getRow()}, {@link #getFamily()}, {@link #getQualifier()},
+ * {@link #getTimestamp()}, and {@link #getValue()}.  These methods allocate new
+ * byte arrays and return copies so they should be avoided server-side.
+ *
+ * <p>Instances of this class are immutable.  They are not
+ * comparable but Comparators are provided.  Comparators change with context,
+ * depending on whether a user table or a catalog table is being compared.  It is
+ * important that you use the appropriate comparator, particularly when
+ * comparing rows.  There are Comparators for KeyValue instances and then for
+ * just the Key portion of a KeyValue, used mostly in {@link HFile}.
+ *
+ * <p>KeyValue wraps a byte array and has offset and length for passed array
+ * at where to start interpreting the content as a KeyValue blob.  The KeyValue
+ * blob format inside the byte array is:
+ * <code>&lt;keylength> &lt;valuelength> &lt;key> &lt;value></code>
+ * Key is decomposed as:
+ * <code>&lt;rowlength> &lt;row> &lt;columnfamilylength> &lt;columnfamily> &lt;columnqualifier> &lt;timestamp> &lt;keytype></code>
+ * Rowlength maximum is Short.MAX_VALUE, column family length maximum is
+ * Byte.MAX_VALUE, and column qualifier + key length must be &lt; Integer.MAX_VALUE.
+ * The column does not contain the family/qualifier delimiter.
+ *
+ * <p>TODO: Group Key-only comparators and operations into a Key class, just
+ * for neatness' sake, if we can figure out what to call it.
+ */
+public class KeyValue implements Writable, HeapSize {
+  static final Log LOG = LogFactory.getLog(KeyValue.class);
+
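+  // A minimal client-side usage sketch of the accessors described in the class
+  // comment above (the row/family/qualifier/value literals are illustrative
+  // assumptions, not part of this class):
+  //
+  //   KeyValue kv = new KeyValue(Bytes.toBytes("myrow"), Bytes.toBytes("cf"),
+  //       Bytes.toBytes("qual"), 1234567890L, KeyValue.Type.Put,
+  //       Bytes.toBytes("myvalue"));
+  //   byte [] row = kv.getRow();             // copy of "myrow"
+  //   byte [] family = kv.getFamily();       // copy of "cf"
+  //   byte [] qualifier = kv.getQualifier(); // copy of "qual"
+  //   long ts = kv.getTimestamp();           // 1234567890L
+  //   byte [] value = kv.getValue();         // copy of "myvalue"
+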
+  /**
+   * Colon character in UTF-8
+   */
+  public static final char COLUMN_FAMILY_DELIMITER = ':';
+
+  public static final byte[] COLUMN_FAMILY_DELIM_ARRAY =
+    new byte[]{COLUMN_FAMILY_DELIMITER};
+
+  /**
+   * Comparator for plain key/values; i.e. non-catalog table key/values.
+   */
+  public static KVComparator COMPARATOR = new KVComparator();
+
+  /**
+   * Comparator for plain key; i.e. non-catalog table key.  Works on Key portion
+   * of KeyValue only.
+   */
+  public static KeyComparator KEY_COMPARATOR = new KeyComparator();
+
+  /**
+   * A {@link KVComparator} for <code>.META.</code> catalog table
+   * {@link KeyValue}s.
+   */
+  public static KVComparator META_COMPARATOR = new MetaComparator();
+
+  /**
+   * A {@link KVComparator} for <code>.META.</code> catalog table
+   * {@link KeyValue} keys.
+   */
+  public static KeyComparator META_KEY_COMPARATOR = new MetaKeyComparator();
+
+  /**
+   * A {@link KVComparator} for <code>-ROOT-</code> catalog table
+   * {@link KeyValue}s.
+   */
+  public static KVComparator ROOT_COMPARATOR = new RootComparator();
+
+  /**
+   * A {@link KVComparator} for <code>-ROOT-</code> catalog table
+   * {@link KeyValue} keys.
+   */
+  public static KeyComparator ROOT_KEY_COMPARATOR = new RootKeyComparator();
+
+  /**
+   * Get the appropriate row comparator for the specified table.
+   *
+   * Hopefully we can get rid of this; it was added here because it replaces
+   * something in HSK.  We should move completely off of that.
+   *
+   * @param tableName  The table name.
+   * @return The comparator.
+   */
+  public static KeyComparator getRowComparator(byte [] tableName) {
+    if(Bytes.equals(HTableDescriptor.ROOT_TABLEDESC.getName(),tableName)) {
+      return ROOT_COMPARATOR.getRawComparator();
+    }
+    if(Bytes.equals(HTableDescriptor.META_TABLEDESC.getName(), tableName)) {
+      return META_COMPARATOR.getRawComparator();
+    }
+    return COMPARATOR.getRawComparator();
+  }
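+
+  // For example (a sketch; "mytable", rowA and rowB are illustrative assumptions):
+  //   KeyComparator kc = KeyValue.getRowComparator(Bytes.toBytes("mytable"));
+  //   int cmp = kc.compareRows(rowA, 0, rowA.length, rowB, 0, rowB.length);
+  // Passing a catalog table name selects the ROOT or META variant instead.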
+
+  // Size of the timestamp and type byte on end of a key -- a long + a byte.
+  public static final int TIMESTAMP_TYPE_SIZE =
+    Bytes.SIZEOF_LONG /* timestamp */ +
+    Bytes.SIZEOF_BYTE /*keytype*/;
+
+  // Size of the length shorts and bytes in key.
+  public static final int KEY_INFRASTRUCTURE_SIZE =
+    Bytes.SIZEOF_SHORT /*rowlength*/ +
+    Bytes.SIZEOF_BYTE /*columnfamilylength*/ +
+    TIMESTAMP_TYPE_SIZE;
+
+  // How far into the key the row starts at. First thing to read is the short
+  // that says how long the row is.
+  public static final int ROW_OFFSET =
+    Bytes.SIZEOF_INT /*keylength*/ +
+    Bytes.SIZEOF_INT /*valuelength*/;
+
+  // Size of the length ints in a KeyValue datastructure.
+  public static final int KEYVALUE_INFRASTRUCTURE_SIZE = ROW_OFFSET;
+
+  /**
+   * Key type.
+   * Has space for other key types to be added later.  Cannot rely on
+   * enum ordinals; they change if an item is removed or moved, so we keep our own codes.
+   */
+  public static enum Type {
+    Minimum((byte)0),
+    Put((byte)4),
+
+    Delete((byte)8),
+    DeleteColumn((byte)12),
+    DeleteFamily((byte)14),
+
+    // Maximum is used when searching; you look from maximum on down.
+    Maximum((byte)255);
+
+    private final byte code;
+
+    Type(final byte c) {
+      this.code = c;
+    }
+
+    public byte getCode() {
+      return this.code;
+    }
+
+    /**
+     * Cannot rely on enum ordinals; they change if an item is removed or moved,
+     * so we keep our own codes.
+     * @param b
+     * @return Type associated with passed code.
+     */
+    public static Type codeToType(final byte b) {
+      for (Type t : Type.values()) {
+        if (t.getCode() == b) {
+          return t;
+        }
+      }
+      throw new RuntimeException("Unknown code " + b);
+    }
+  }
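+
+  // For example: Type.codeToType((byte)4) returns Type.Put and
+  // Type.Put.getCode() returns (byte)4; an unknown code throws a RuntimeException.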
+
+  /**
+   * Lowest possible key.
+   * Makes a Key with highest possible Timestamp, empty row and column.  No
+   * key can be equal to or lower than this one in memstore or in store file.
+   */
+  public static final KeyValue LOWESTKEY =
+    new KeyValue(HConstants.EMPTY_BYTE_ARRAY, HConstants.LATEST_TIMESTAMP);
+
+  private byte [] bytes = null;
+  private int offset = 0;
+  private int length = 0;
+
+  // the row cached
+  private byte [] rowCache = null;
+
+
+  /** Here be dragons **/
+
+  // used to achieve atomic operations in the memstore.
+  public long getMemstoreTS() {
+    return memstoreTS;
+  }
+
+  public void setMemstoreTS(long memstoreTS) {
+    this.memstoreTS = memstoreTS;
+  }
+
+  // default value is 0, aka DNC
+  private long memstoreTS = 0;
+
+  /** Dragon time over, return to normal business */
+
+
+  /** Writable Constructor -- DO NOT USE */
+  public KeyValue() {}
+
+  /**
+   * Creates a KeyValue from the start of the specified byte array.
+   * Presumes <code>bytes</code> content is formatted as a KeyValue blob.
+   * @param bytes byte array
+   */
+  public KeyValue(final byte [] bytes) {
+    this(bytes, 0);
+  }
+
+  /**
+   * Creates a KeyValue from the specified byte array and offset.
+   * Presumes <code>bytes</code> content starting at <code>offset</code> is
+   * formatted as a KeyValue blob.
+   * @param bytes byte array
+   * @param offset offset to start of KeyValue
+   */
+  public KeyValue(final byte [] bytes, final int offset) {
+    this(bytes, offset, getLength(bytes, offset));
+  }
+
+  /**
+   * Creates a KeyValue from the specified byte array, starting at offset, and
+   * for length <code>length</code>.
+   * @param bytes byte array
+   * @param offset offset to start of the KeyValue
+   * @param length length of the KeyValue
+   */
+  public KeyValue(final byte [] bytes, final int offset, final int length) {
+    this.bytes = bytes;
+    this.offset = offset;
+    this.length = length;
+  }
+
+  /** Constructors that build a new backing byte array from fields */
+
+  /**
+   * Constructs KeyValue structure filled with null value.
+   * Sets type to {@link KeyValue.Type#Maximum}
+   * @param row - row key (arbitrary byte array)
+   * @param timestamp
+   */
+  public KeyValue(final byte [] row, final long timestamp) {
+    this(row, timestamp, Type.Maximum);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with null value.
+   * @param row - row key (arbitrary byte array)
+   * @param timestamp version timestamp
+   * @param type key type
+   */
+  public KeyValue(final byte [] row, final long timestamp, Type type) {
+    this(row, null, null, timestamp, type, null);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with null value.
+   * Sets type to {@link KeyValue.Type#Maximum}
+   * @param row - row key (arbitrary byte array)
+   * @param family family name
+   * @param qualifier column qualifier
+   */
+  public KeyValue(final byte [] row, final byte [] family,
+      final byte [] qualifier) {
+    this(row, family, qualifier, HConstants.LATEST_TIMESTAMP, Type.Maximum);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with the specified value.
+   * Sets type to {@link KeyValue.Type#Put}
+   * @param row - row key (arbitrary byte array)
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param value column value
+   */
+  public KeyValue(final byte [] row, final byte [] family,
+      final byte [] qualifier, final byte [] value) {
+    this(row, family, qualifier, HConstants.LATEST_TIMESTAMP, Type.Put, value);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with specified values.
+   * @param row row key
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param timestamp version timestamp
+   * @param type key type
+   * @throws IllegalArgumentException
+   */
+  public KeyValue(final byte[] row, final byte[] family,
+      final byte[] qualifier, final long timestamp, Type type) {
+    this(row, family, qualifier, timestamp, type, null);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with specified values.
+   * @param row row key
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param timestamp version timestamp
+   * @param value column value
+   * @throws IllegalArgumentException
+   */
+  public KeyValue(final byte[] row, final byte[] family,
+      final byte[] qualifier, final long timestamp, final byte[] value) {
+    this(row, family, qualifier, timestamp, Type.Put, value);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with specified values.
+   * @param row row key
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param timestamp version timestamp
+   * @param type key type
+   * @param value column value
+   * @throws IllegalArgumentException
+   */
+  public KeyValue(final byte[] row, final byte[] family,
+      final byte[] qualifier, final long timestamp, Type type,
+      final byte[] value) {
+    this(row, family, qualifier, 0, qualifier==null ? 0 : qualifier.length,
+        timestamp, type, value, 0, value==null ? 0 : value.length);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with specified values.
+   * @param row row key
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param qoffset qualifier offset
+   * @param qlength qualifier length
+   * @param timestamp version timestamp
+   * @param type key type
+   * @param value column value
+   * @param voffset value offset
+   * @param vlength value length
+   * @throws IllegalArgumentException
+   */
+  public KeyValue(byte [] row, byte [] family,
+      byte [] qualifier, int qoffset, int qlength, long timestamp, Type type,
+      byte [] value, int voffset, int vlength) {
+    this(row, 0, row==null ? 0 : row.length,
+        family, 0, family==null ? 0 : family.length,
+        qualifier, qoffset, qlength, timestamp, type,
+        value, voffset, vlength);
+  }
+
+  /**
+   * Constructs KeyValue structure filled with specified values.
+   * <p>
+   * Column is split into two fields, family and qualifier.
+   * @param row row key
+   * @param roffset row offset
+   * @param rlength row length
+   * @param family family name
+   * @param foffset family offset
+   * @param flength family length
+   * @param qualifier column qualifier
+   * @param qoffset qualifier offset
+   * @param qlength qualifier length
+   * @param timestamp version timestamp
+   * @param type key type
+   * @param value column value
+   * @param voffset value offset
+   * @param vlength value length
+   * @throws IllegalArgumentException
+   */
+  public KeyValue(final byte [] row, final int roffset, final int rlength,
+      final byte [] family, final int foffset, final int flength,
+      final byte [] qualifier, final int qoffset, final int qlength,
+      final long timestamp, final Type type,
+      final byte [] value, final int voffset, final int vlength) {
+    this.bytes = createByteArray(row, roffset, rlength,
+        family, foffset, flength, qualifier, qoffset, qlength,
+        timestamp, type, value, voffset, vlength);
+    this.length = bytes.length;
+    this.offset = 0;
+  }
+
+  /**
+   * Write KeyValue format into a byte array.
+   *
+   * @param row row key
+   * @param roffset row offset
+   * @param rlength row length
+   * @param family family name
+   * @param foffset family offset
+   * @param flength family length
+   * @param qualifier column qualifier
+   * @param qoffset qualifier offset
+   * @param qlength qualifier length
+   * @param timestamp version timestamp
+   * @param type key type
+   * @param value column value
+   * @param voffset value offset
+   * @param vlength value length
+   * @return The newly created byte array.
+   */
+  static byte [] createByteArray(final byte [] row, final int roffset,
+      final int rlength, final byte [] family, final int foffset, int flength,
+      final byte [] qualifier, final int qoffset, int qlength,
+      final long timestamp, final Type type,
+      final byte [] value, final int voffset, int vlength) {
+    if (rlength > Short.MAX_VALUE) {
+      throw new IllegalArgumentException("Row > " + Short.MAX_VALUE);
+    }
+    if (row == null) {
+      throw new IllegalArgumentException("Row is null");
+    }
+    // Family length
+    flength = family == null ? 0 : flength;
+    if (flength > Byte.MAX_VALUE) {
+      throw new IllegalArgumentException("Family > " + Byte.MAX_VALUE);
+    }
+    // Qualifier length
+    qlength = qualifier == null ? 0 : qlength;
+    if (qlength > Integer.MAX_VALUE - rlength - flength) {
+      throw new IllegalArgumentException("Qualifier > " + Integer.MAX_VALUE);
+    }
+    // Key length
+    long longkeylength = KEY_INFRASTRUCTURE_SIZE + rlength + flength + qlength;
+    if (longkeylength > Integer.MAX_VALUE) {
+      throw new IllegalArgumentException("keylength " + longkeylength + " > " +
+        Integer.MAX_VALUE);
+    }
+    int keylength = (int)longkeylength;
+    // Value length
+    vlength = value == null? 0 : vlength;
+    if (vlength > HConstants.MAXIMUM_VALUE_LENGTH) { // FindBugs INT_VACUOUS_COMPARISON
+      throw new IllegalArgumentException("Valuer > " +
+          HConstants.MAXIMUM_VALUE_LENGTH);
+    }
+
+    // Allocate right-sized byte array.
+    byte [] bytes = new byte[KEYVALUE_INFRASTRUCTURE_SIZE + keylength + vlength];
+    // Write key, value and key row length.
+    int pos = 0;
+    pos = Bytes.putInt(bytes, pos, keylength);
+    pos = Bytes.putInt(bytes, pos, vlength);
+    pos = Bytes.putShort(bytes, pos, (short)(rlength & 0x0000ffff));
+    pos = Bytes.putBytes(bytes, pos, row, roffset, rlength);
+    pos = Bytes.putByte(bytes, pos, (byte)(flength & 0x0000ff));
+    if(flength != 0) {
+      pos = Bytes.putBytes(bytes, pos, family, foffset, flength);
+    }
+    if(qlength != 0) {
+      pos = Bytes.putBytes(bytes, pos, qualifier, qoffset, qlength);
+    }
+    pos = Bytes.putLong(bytes, pos, timestamp);
+    pos = Bytes.putByte(bytes, pos, type.getCode());
+    if (value != null && value.length > 0) {
+      pos = Bytes.putBytes(bytes, pos, value, voffset, vlength);
+    }
+    return bytes;
+  }
+
+  /**
+   * Write KeyValue format into a byte array.
+   * <p>
+   * Takes column in the form <code>family:qualifier</code>
+   * @param row - row key (arbitrary byte array)
+   * @param roffset
+   * @param rlength
+   * @param column
+   * @param coffset
+   * @param clength
+   * @param timestamp
+   * @param type
+   * @param value
+   * @param voffset
+   * @param vlength
+   * @return The newly created byte array.
+   */
+  static byte [] createByteArray(final byte [] row, final int roffset,
+        final int rlength,
+      final byte [] column, final int coffset, int clength,
+      final long timestamp, final Type type,
+      final byte [] value, final int voffset, int vlength) {
+    // If column is non-null, figure where the delimiter is at.
+    int delimiteroffset = 0;
+    if (column != null && column.length > 0) {
+      delimiteroffset = getFamilyDelimiterIndex(column, coffset, clength);
+      if (delimiteroffset > Byte.MAX_VALUE) {
+        throw new IllegalArgumentException("Family > " + Byte.MAX_VALUE);
+      }
+    } else {
+      return createByteArray(row,roffset,rlength,null,0,0,null,0,0,timestamp,
+          type,value,voffset,vlength);
+    }
+    int flength = delimiteroffset-coffset;
+    int qlength = clength - flength - 1;
+    return createByteArray(row, roffset, rlength, column, coffset,
+        flength, column, delimiteroffset+1, qlength, timestamp, type,
+        value, voffset, vlength);
+  }
+
+  // Needed for doing 'contains' on a List.  Only compares the key portion,
+  // not the value.
+  public boolean equals(Object other) {
+    if (!(other instanceof KeyValue)) {
+      return false;
+    }
+    KeyValue kv = (KeyValue)other;
+    // Comparing bytes should be fine doing equals test.  Shouldn't have to
+    // worry about special .META. comparators doing straight equals.
+    boolean result = Bytes.BYTES_RAWCOMPARATOR.compare(getBuffer(),
+        getKeyOffset(), getKeyLength(),
+      kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength()) == 0;
+    return result;
+  }
+
+  public int hashCode() {
+    byte[] b = getBuffer();
+    int start = getOffset(), end = getOffset() + getLength();
+    int h = b[start++];
+    for (int i = start; i < end; i++) {
+      h = (h * 13) ^ b[i];
+    }
+    return h;
+  }
+
+  //---------------------------------------------------------------------------
+  //
+  //  KeyValue cloning
+  //
+  //---------------------------------------------------------------------------
+
+  /**
+   * Clones a KeyValue.  This creates a copy, re-allocating the buffer.
+   * @return Fully copied clone of this KeyValue
+   */
+  public KeyValue clone() {
+    byte [] b = new byte[this.length];
+    System.arraycopy(this.bytes, this.offset, b, 0, this.length);
+    KeyValue ret = new KeyValue(b, 0, b.length);
+    // Important to clone the memstoreTS as well - otherwise memstore's
+    // update-in-place methods (eg increment) will end up creating
+    // new entries
+    ret.setMemstoreTS(memstoreTS);
+    return ret;
+  }
+
+  //---------------------------------------------------------------------------
+  //
+  //  String representation
+  //
+  //---------------------------------------------------------------------------
+
+  public String toString() {
+    if (this.bytes == null || this.bytes.length == 0) {
+      return "empty";
+    }
+    return keyToString(this.bytes, this.offset + ROW_OFFSET, getKeyLength()) +
+      "/vlen=" + getValueLength();
+  }
+
+  /**
+   * @param k Key portion of a KeyValue.
+   * @return Key as a String.
+   */
+  public static String keyToString(final byte [] k) {
+    return keyToString(k, 0, k.length);
+  }
+
+  /**
+   * Use for logging.
+   * @param b Key portion of a KeyValue.
+   * @param o Offset to start of key
+   * @param l Length of key.
+   * @return Key as a String.
+   */
+  public static String keyToString(final byte [] b, final int o, final int l) {
+    if (b == null) return "";
+    int rowlength = Bytes.toShort(b, o);
+    String row = Bytes.toStringBinary(b, o + Bytes.SIZEOF_SHORT, rowlength);
+    int columnoffset = o + Bytes.SIZEOF_SHORT + 1 + rowlength;
+    int familylength = b[columnoffset - 1];
+    int columnlength = l - ((columnoffset - o) + TIMESTAMP_TYPE_SIZE);
+    String family = familylength == 0? "":
+      Bytes.toStringBinary(b, columnoffset, familylength);
+    String qualifier = columnlength == 0? "":
+      Bytes.toStringBinary(b, columnoffset + familylength,
+      columnlength - familylength);
+    long timestamp = Bytes.toLong(b, o + (l - TIMESTAMP_TYPE_SIZE));
+    byte type = b[o + l - 1];
+    return row + "/" + family +
+      (family != null && family.length() > 0? ":" :"") +
+      qualifier + "/" + timestamp + "/" + Type.codeToType(type);
+  }
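+
+  // For example (illustrative values): a key with row "r1", family "cf",
+  // qualifier "q", timestamp 5 and type Put renders as "r1/cf:q/5/Put";
+  // toString() on the enclosing KeyValue appends "/vlen=<valuelength>".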
+
+  //---------------------------------------------------------------------------
+  //
+  //  Public Member Accessors
+  //
+  //---------------------------------------------------------------------------
+
+  /**
+   * @return The byte array backing this KeyValue.
+   */
+  public byte [] getBuffer() {
+    return this.bytes;
+  }
+
+  /**
+   * @return Offset into {@link #getBuffer()} at which this KeyValue starts.
+   */
+  public int getOffset() {
+    return this.offset;
+  }
+
+  /**
+   * @return Length of bytes this KeyValue occupies in {@link #getBuffer()}.
+   */
+  public int getLength() {
+    return length;
+  }
+
+  //---------------------------------------------------------------------------
+  //
+  //  Length and Offset Calculators
+  //
+  //---------------------------------------------------------------------------
+
+  /**
+   * Determines the total length of the KeyValue stored in the specified
+   * byte array and offset.  Includes all headers.
+   * @param bytes byte array
+   * @param offset offset to start of the KeyValue
+   * @return length of entire KeyValue, in bytes
+   */
+  private static int getLength(byte [] bytes, int offset) {
+    return (2 * Bytes.SIZEOF_INT) +
+        Bytes.toInt(bytes, offset) +
+        Bytes.toInt(bytes, offset + Bytes.SIZEOF_INT);
+  }
+
+  /**
+   * @return Key offset in backing buffer.
+   */
+  public int getKeyOffset() {
+    return this.offset + ROW_OFFSET;
+  }
+
+  public String getKeyString() {
+    return Bytes.toStringBinary(getBuffer(), getKeyOffset(), getKeyLength());
+  }
+
+  private int keyLength = 0;
+
+  /**
+   * @return Length of key portion.
+   */
+  public int getKeyLength() {
+    if (keyLength == 0) {
+      keyLength = Bytes.toInt(this.bytes, this.offset);
+    }
+    return keyLength;
+  }
+
+  /**
+   * @return Value offset
+   */
+  public int getValueOffset() {
+    return getKeyOffset() + getKeyLength();
+  }
+
+  /**
+   * @return Value length
+   */
+  public int getValueLength() {
+    return Bytes.toInt(this.bytes, this.offset + Bytes.SIZEOF_INT);
+  }
+
+  /**
+   * @return Row offset
+   */
+  public int getRowOffset() {
+    return getKeyOffset() + Bytes.SIZEOF_SHORT;
+  }
+
+  /**
+   * @return Row length
+   */
+  public short getRowLength() {
+    return Bytes.toShort(this.bytes, getKeyOffset());
+  }
+
+  /**
+   * @return Family offset
+   */
+  public int getFamilyOffset() {
+    return getFamilyOffset(getRowLength());
+  }
+
+  /**
+   * @return Family offset
+   */
+  public int getFamilyOffset(int rlength) {
+    return this.offset + ROW_OFFSET + Bytes.SIZEOF_SHORT + rlength + Bytes.SIZEOF_BYTE;
+  }
+
+  /**
+   * @return Family length
+   */
+  public byte getFamilyLength() {
+    return getFamilyLength(getFamilyOffset());
+  }
+
+  /**
+   * @return Family length
+   */
+  public byte getFamilyLength(int foffset) {
+    return this.bytes[foffset-1];
+  }
+
+  /**
+   * @return Qualifier offset
+   */
+  public int getQualifierOffset() {
+    return getQualifierOffset(getFamilyOffset());
+  }
+
+  /**
+   * @return Qualifier offset
+   */
+  public int getQualifierOffset(int foffset) {
+    return foffset + getFamilyLength(foffset);
+  }
+
+  /**
+   * @return Qualifier length
+   */
+  public int getQualifierLength() {
+    return getQualifierLength(getRowLength(),getFamilyLength());
+  }
+
+  /**
+   * @return Qualifier length
+   */
+  public int getQualifierLength(int rlength, int flength) {
+    return getKeyLength() -
+      (KEY_INFRASTRUCTURE_SIZE + rlength + flength);
+  }
+
+  /**
+   * @return Column (family + qualifier) length
+   */
+  public int getTotalColumnLength() {
+    int rlength = getRowLength();
+    int foffset = getFamilyOffset(rlength);
+    return getTotalColumnLength(rlength,foffset);
+  }
+
+  /**
+   * @return Column (family + qualifier) length
+   */
+  public int getTotalColumnLength(int rlength, int foffset) {
+    int flength = getFamilyLength(foffset);
+    int qlength = getQualifierLength(rlength,flength);
+    return flength + qlength;
+  }
+
+  /**
+   * @return Timestamp offset
+   */
+  public int getTimestampOffset() {
+    return getTimestampOffset(getKeyLength());
+  }
+
+  /**
+   * @param keylength Pass if you have it to save on an int creation.
+   * @return Timestamp offset
+   */
+  public int getTimestampOffset(final int keylength) {
+    return getKeyOffset() + keylength - TIMESTAMP_TYPE_SIZE;
+  }
+
+  /**
+   * @return True if this KeyValue has a LATEST_TIMESTAMP timestamp.
+   */
+  public boolean isLatestTimestamp() {
+    return  Bytes.compareTo(getBuffer(), getTimestampOffset(), Bytes.SIZEOF_LONG,
+      HConstants.LATEST_TIMESTAMP_BYTES, 0, Bytes.SIZEOF_LONG) == 0;
+  }
+
+  /**
+   * @param now Time to set into <code>this</code> IFF timestamp ==
+   * {@link HConstants#LATEST_TIMESTAMP} (else, it's a noop).
+   * @return True if we modified this.
+   */
+  public boolean updateLatestStamp(final byte [] now) {
+    if (this.isLatestTimestamp()) {
+      int tsOffset = getTimestampOffset();
+      System.arraycopy(now, 0, this.bytes, tsOffset, Bytes.SIZEOF_LONG);
+      return true;
+    }
+    return false;
+  }
+
+  //---------------------------------------------------------------------------
+  //
+  //  Methods that return copies of fields
+  //
+  //---------------------------------------------------------------------------
+
+  /**
+   * Do not use unless you have to.  Used internally for compacting and testing.
+   *
+   * Use {@link #getRow()}, {@link #getFamily()}, {@link #getQualifier()}, and
+   * {@link #getValue()} if accessing a KeyValue client-side.
+   * @return Copy of the key portion only.
+   */
+  public byte [] getKey() {
+    int keylength = getKeyLength();
+    byte [] key = new byte[keylength];
+    System.arraycopy(getBuffer(), getKeyOffset(), key, 0, keylength);
+    return key;
+  }
+
+  /**
+   * Returns value in a new byte array.
+   * Primarily for use client-side. If server-side, use
+   * {@link #getBuffer()} with appropriate offsets and lengths instead to
+   * save on allocations.
+   * @return Value in a new byte array.
+   */
+  public byte [] getValue() {
+    int o = getValueOffset();
+    int l = getValueLength();
+    byte [] result = new byte[l];
+    System.arraycopy(getBuffer(), o, result, 0, l);
+    return result;
+  }
+
+  /**
+   * Primarily for use client-side.  Returns the row of this KeyValue in a new
+   * byte array.<p>
+   *
+   * If server-side, use {@link #getBuffer()} with appropriate offsets and
+   * lengths instead.
+   * @return Row in a new byte array.
+   */
+  public byte [] getRow() {
+    if (rowCache == null) {
+      int o = getRowOffset();
+      short l = getRowLength();
+      rowCache = new byte[l];
+      System.arraycopy(getBuffer(), o, rowCache, 0, l);
+    }
+    return rowCache;
+  }
+
+  /**
+   *
+   * @return Timestamp
+   */
+  private long timestampCache = -1;
+  public long getTimestamp() {
+    if (timestampCache == -1) {
+      timestampCache = getTimestamp(getKeyLength());
+    }
+    return timestampCache;
+  }
+
+  /**
+   * @param keylength Pass if you have it to save on an int creation.
+   * @return Timestamp
+   */
+  long getTimestamp(final int keylength) {
+    int tsOffset = getTimestampOffset(keylength);
+    return Bytes.toLong(this.bytes, tsOffset);
+  }
+
+  /**
+   * @return Type of this KeyValue.
+   */
+  public byte getType() {
+    return getType(getKeyLength());
+  }
+
+  /**
+   * @param keylength Pass if you have it to save on an int creation.
+   * @return Type of this KeyValue.
+   */
+  byte getType(final int keylength) {
+    return this.bytes[this.offset + keylength - 1 + ROW_OFFSET];
+  }
+
+  /**
+   * @return True if a delete type, a {@link KeyValue.Type#Delete} or
+   * a {@link KeyValue.Type#DeleteFamily} or a {@link KeyValue.Type#DeleteColumn}
+   * KeyValue type.
+   */
+  public boolean isDelete() {
+    int t = getType();
+    return Type.Delete.getCode() <= t && t <= Type.DeleteFamily.getCode();
+  }
+
+  /**
+   * @return True if this KV is a {@link KeyValue.Type#Delete} type.
+   */
+  public boolean isDeleteType() {
+    return getType() == Type.Delete.getCode();
+  }
+
+  /**
+   * @return True if this KV is a delete family type.
+   */
+  public boolean isDeleteFamily() {
+    return getType() == Type.DeleteFamily.getCode();
+  }
+
+  /**
+   *
+   * @return True if this KV is a delete family or column type.
+   */
+  public boolean isDeleteColumnOrFamily() {
+    int t = getType();
+    return t == Type.DeleteColumn.getCode() || t == Type.DeleteFamily.getCode();
+  }
+
+  /**
+   * Primarily for use client-side.  Returns the family of this KeyValue in a
+   * new byte array.<p>
+   *
+   * If server-side, use {@link #getBuffer()} with appropriate offsets and
+   * lengths instead.
+   * @return Returns family. Makes a copy.
+   */
+  public byte [] getFamily() {
+    int o = getFamilyOffset();
+    int l = getFamilyLength(o);
+    byte [] result = new byte[l];
+    System.arraycopy(this.bytes, o, result, 0, l);
+    return result;
+  }
+
+  /**
+   * Primarily for use client-side.  Returns the column qualifier of this
+   * KeyValue in a new byte array.<p>
+   *
+   * If server-side, use {@link #getBuffer()} with appropriate offsets and
+   * lengths instead.
+   * @return Returns qualifier. Makes a copy.
+   */
+  public byte [] getQualifier() {
+    int o = getQualifierOffset();
+    int l = getQualifierLength();
+    byte [] result = new byte[l];
+    System.arraycopy(this.bytes, o, result, 0, l);
+    return result;
+  }
+
+  //---------------------------------------------------------------------------
+  //
+  //  KeyValue splitter
+  //
+  //---------------------------------------------------------------------------
+
+  /**
+   * Utility class that splits a KeyValue buffer into separate byte arrays.
+   * <p>
+   * Should get rid of this if we can, but is very useful for debugging.
+   */
+  public static class SplitKeyValue {
+    private byte [][] split;
+    SplitKeyValue() {
+      this.split = new byte[6][];
+    }
+    public void setRow(byte [] value) { this.split[0] = value; }
+    public void setFamily(byte [] value) { this.split[1] = value; }
+    public void setQualifier(byte [] value) { this.split[2] = value; }
+    public void setTimestamp(byte [] value) { this.split[3] = value; }
+    public void setType(byte [] value) { this.split[4] = value; }
+    public void setValue(byte [] value) { this.split[5] = value; }
+    public byte [] getRow() { return this.split[0]; }
+    public byte [] getFamily() { return this.split[1]; }
+    public byte [] getQualifier() { return this.split[2]; }
+    public byte [] getTimestamp() { return this.split[3]; }
+    public byte [] getType() { return this.split[4]; }
+    public byte [] getValue() { return this.split[5]; }
+  }
+
+  public SplitKeyValue split() {
+    SplitKeyValue split = new SplitKeyValue();
+    int splitOffset = this.offset;
+    int keyLen = Bytes.toInt(bytes, splitOffset);
+    splitOffset += Bytes.SIZEOF_INT;
+    int valLen = Bytes.toInt(bytes, splitOffset);
+    splitOffset += Bytes.SIZEOF_INT;
+    short rowLen = Bytes.toShort(bytes, splitOffset);
+    splitOffset += Bytes.SIZEOF_SHORT;
+    byte [] row = new byte[rowLen];
+    System.arraycopy(bytes, splitOffset, row, 0, rowLen);
+    splitOffset += rowLen;
+    split.setRow(row);
+    byte famLen = bytes[splitOffset];
+    splitOffset += Bytes.SIZEOF_BYTE;
+    byte [] family = new byte[famLen];
+    System.arraycopy(bytes, splitOffset, family, 0, famLen);
+    splitOffset += famLen;
+    split.setFamily(family);
+    int colLen = keyLen -
+      (rowLen + famLen + Bytes.SIZEOF_SHORT + Bytes.SIZEOF_BYTE +
+      Bytes.SIZEOF_LONG + Bytes.SIZEOF_BYTE);
+    byte [] qualifier = new byte[colLen];
+    System.arraycopy(bytes, splitOffset, qualifier, 0, colLen);
+    splitOffset += colLen;
+    split.setQualifier(qualifier);
+    byte [] timestamp = new byte[Bytes.SIZEOF_LONG];
+    System.arraycopy(bytes, splitOffset, timestamp, 0, Bytes.SIZEOF_LONG);
+    splitOffset += Bytes.SIZEOF_LONG;
+    split.setTimestamp(timestamp);
+    byte [] type = new byte[1];
+    type[0] = bytes[splitOffset];
+    splitOffset += Bytes.SIZEOF_BYTE;
+    split.setType(type);
+    byte [] value = new byte[valLen];
+    System.arraycopy(bytes, splitOffset, value, 0, valLen);
+    split.setValue(value);
+    return split;
+  }
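+
+  // Debugging sketch (kv is an assumed, already-built KeyValue):
+  //   SplitKeyValue parts = kv.split();
+  //   String row = Bytes.toStringBinary(parts.getRow());
+  //   String family = Bytes.toStringBinary(parts.getFamily());
+  //   long timestamp = Bytes.toLong(parts.getTimestamp());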
+
+  //---------------------------------------------------------------------------
+  //
+  //  Compare specified fields against those contained in this KeyValue
+  //
+  //---------------------------------------------------------------------------
+
+  /**
+   * @param family
+   * @return True if matching families.
+   */
+  public boolean matchingFamily(final byte [] family) {
+    return matchingFamily(family, 0, family.length);
+  }
+
+  public boolean matchingFamily(final byte[] family, int offset, int length) {
+    if (this.length == 0 || this.bytes.length == 0) {
+      return false;
+    }
+    return Bytes.compareTo(family, offset, length,
+        this.bytes, getFamilyOffset(), getFamilyLength()) == 0;
+  }
+
+  public boolean matchingFamily(final KeyValue other) {
+    return matchingFamily(other.getBuffer(), other.getFamilyOffset(),
+        other.getFamilyLength());
+  }
+
+  /**
+   * @param qualifier
+   * @return True if matching qualifiers.
+   */
+  public boolean matchingQualifier(final byte [] qualifier) {
+    return matchingQualifier(qualifier, 0, qualifier.length);
+  }
+
+  public boolean matchingQualifier(final byte [] qualifier, int offset, int length) {
+    return Bytes.compareTo(qualifier, offset, length,
+        this.bytes, getQualifierOffset(), getQualifierLength()) == 0;
+  }
+
+  public boolean matchingQualifier(final KeyValue other) {
+    return matchingQualifier(other.getBuffer(), other.getQualifierOffset(),
+        other.getQualifierLength());
+  }
+
+  public boolean matchingRow(final byte [] row) {
+    return matchingRow(row, 0, row.length);
+  }
+
+  public boolean matchingRow(final byte[] row, int offset, int length) {
+    return Bytes.compareTo(row, offset, length,
+        this.bytes, getRowOffset(), getRowLength()) == 0;
+  }
+
+  public boolean matchingRow(KeyValue other) {
+    return matchingRow(other.getBuffer(), other.getRowOffset(),
+        other.getRowLength());
+  }
+
+  /**
+   * @param column Column minus its delimiter
+   * @return True if column matches.
+   */
+  public boolean matchingColumnNoDelimiter(final byte [] column) {
+    int rl = getRowLength();
+    int o = getFamilyOffset(rl);
+    int fl = getFamilyLength(o);
+    int l = fl + getQualifierLength(rl,fl);
+    return Bytes.compareTo(column, 0, column.length, this.bytes, o, l) == 0;
+  }
+
+  /**
+   *
+   * @param family column family
+   * @param qualifier column qualifier
+   * @return True if column matches
+   */
+  public boolean matchingColumn(final byte[] family, final byte[] qualifier) {
+    int rl = getRowLength();
+    int o = getFamilyOffset(rl);
+    int fl = getFamilyLength(o);
+    int ql = getQualifierLength(rl,fl);
+    if (Bytes.compareTo(family, 0, family.length, this.bytes, o, family.length)
+        != 0) {
+      return false;
+    }
+    if (qualifier == null || qualifier.length == 0) {
+      if (ql == 0) {
+        return true;
+      }
+      return false;
+    }
+    return Bytes.compareTo(qualifier, 0, qualifier.length,
+        this.bytes, o + fl, ql) == 0;
+  }
+
+  /**
+   * @param left
+   * @param loffset
+   * @param llength
+   * @param lfamilylength Offset of family delimiter in left column.
+   * @param right
+   * @param roffset
+   * @param rlength
+   * @param rfamilylength Offset of family delimiter in right column.
+   * @return The result of the comparison.
+   */
+  static int compareColumns(final byte [] left, final int loffset,
+      final int llength, final int lfamilylength,
+      final byte [] right, final int roffset, final int rlength,
+      final int rfamilylength) {
+    // Compare family portion first.
+    int diff = Bytes.compareTo(left, loffset, lfamilylength,
+      right, roffset, rfamilylength);
+    if (diff != 0) {
+      return diff;
+    }
+    // Compare qualifier portion
+    return Bytes.compareTo(left, loffset + lfamilylength,
+      llength - lfamilylength,
+      right, roffset + rfamilylength, rlength - rfamilylength);
+  }
+
+  /**
+   * @return True if non-null row and column.
+   */
+  public boolean nonNullRowAndColumn() {
+    return getRowLength() > 0 && !isEmptyColumn();
+  }
+
+  /**
+   * @return True if column is empty.
+   */
+  public boolean isEmptyColumn() {
+    return getQualifierLength() == 0;
+  }
+
+  /**
+   * Converts this KeyValue to only contain the key portion (the value is
+   * changed to be null).  This method does a full copy of the backing byte
+   * array and does not modify the original byte array of this KeyValue.
+   * <p>
+   * This method is used by <code>KeyOnlyFilter</code> and is an advanced feature of
+   * KeyValue; proceed with caution.
+   * @param lenAsVal replace value with the actual value length (false=empty)
+   */
+  public void convertToKeyOnly(boolean lenAsVal) {
+    // KV format:  <keylen:4><valuelen:4><key:keylen><value:valuelen>
+    // Rebuild as: <keylen:4><0:4><key:keylen>
+    int dataLen = lenAsVal? Bytes.SIZEOF_INT : 0;
+    byte [] newBuffer = new byte[getKeyLength() + (2 * Bytes.SIZEOF_INT) + dataLen];
+    System.arraycopy(this.bytes, this.offset, newBuffer, 0, 
+        Math.min(newBuffer.length,this.length));
+    Bytes.putInt(newBuffer, Bytes.SIZEOF_INT, dataLen);
+    if (lenAsVal) {
+      Bytes.putInt(newBuffer, newBuffer.length - dataLen, this.getValueLength());
+    }
+    this.bytes = newBuffer;
+    this.offset = 0;
+    this.length = newBuffer.length;
+  }
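+
+  // Sketch of the effect: after kv.convertToKeyOnly(false), getValueLength()
+  // returns 0 and getLength() shrinks to getKeyLength() + (2 * Bytes.SIZEOF_INT);
+  // with lenAsVal=true the new value is a 4-byte int holding the old value length.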
+
+  /**
+   * Splits a column in family:qualifier form into separate byte arrays.
+   * <p>
+   * Not recommended for use as this is old-style API.
+   * @param c  The column.
+   * @return The parsed column.
+   */
+  public static byte [][] parseColumn(byte [] c) {
+    final int index = getDelimiter(c, 0, c.length, COLUMN_FAMILY_DELIMITER);
+    if (index == -1) {
+      // If no delimiter, return array of size 1
+      return new byte [][] { c };
+    } else if(index == c.length - 1) {
+      // Only a family, return array size 1
+      byte [] family = new byte[c.length-1];
+      System.arraycopy(c, 0, family, 0, family.length);
+      return new byte [][] { family };
+    }
+    // Family and column, return array size 2
+    final byte [][] result = new byte [2][];
+    result[0] = new byte [index];
+    System.arraycopy(c, 0, result[0], 0, index);
+    final int len = c.length - (index + 1);
+    result[1] = new byte[len];
+    System.arraycopy(c, index + 1 /*Skip delimiter*/, result[1], 0,
+      len);
+    return result;
+  }
+
+  /**
+   * Makes a column in family:qualifier form from separate byte arrays.
+   * <p>
+   * Not recommended for use as this is old-style API.
+   * @param family
+   * @param qualifier
+   * @return family:qualifier
+   */
+  public static byte [] makeColumn(byte [] family, byte [] qualifier) {
+    return Bytes.add(family, COLUMN_FAMILY_DELIM_ARRAY, qualifier);
+  }
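+
+  // For example: parseColumn(Bytes.toBytes("cf:qual")) returns a two-element
+  // array holding "cf" and "qual" as byte arrays, while
+  // parseColumn(Bytes.toBytes("cf")) returns a single-element array.
+  // makeColumn(family, qualifier) is the inverse, rebuilding "family:qualifier".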
+
+  /**
+   * @param b
+   * @return Index of the family-qualifier colon delimiter character in passed
+   * buffer.
+   */
+  public static int getFamilyDelimiterIndex(final byte [] b, final int offset,
+      final int length) {
+    return getRequiredDelimiter(b, offset, length, COLUMN_FAMILY_DELIMITER);
+  }
+
+  private static int getRequiredDelimiter(final byte [] b,
+      final int offset, final int length, final int delimiter) {
+    int index = getDelimiter(b, offset, length, delimiter);
+    if (index < 0) {
+      throw new IllegalArgumentException("No " + (char)delimiter + " in <" +
+        Bytes.toString(b) + ">" + ", length=" + length + ", offset=" + offset);
+    }
+    return index;
+  }
+
+  static int getRequiredDelimiterInReverse(final byte [] b,
+      final int offset, final int length, final int delimiter) {
+    int index = getDelimiterInReverse(b, offset, length, delimiter);
+    if (index < 0) {
+      throw new IllegalArgumentException("No " + delimiter + " in <" +
+        Bytes.toString(b) + ">" + ", length=" + length + ", offset=" + offset);
+    }
+    return index;
+  }
+
+  /**
+   * @param b
+   * @param delimiter
+   * @return Index of the delimiter, scanning from the start of <code>b</code>
+   * moving rightward, or -1 if not found.
+   */
+  public static int getDelimiter(final byte [] b, int offset, final int length,
+      final int delimiter) {
+    if (b == null) {
+      throw new NullPointerException();
+    }
+    int result = -1;
+    for (int i = offset; i < length + offset; i++) {
+      if (b[i] == delimiter) {
+        result = i;
+        break;
+      }
+    }
+    return result;
+  }
+
+  /**
+   * Find index of passed delimiter walking from end of buffer backwards.
+   * @param b
+   * @param delimiter
+   * @return Index of delimiter, or -1 if not found.
+   */
+  public static int getDelimiterInReverse(final byte [] b, final int offset,
+      final int length, final int delimiter) {
+    if (b == null) {
+      throw new NullPointerException();
+    }
+    int result = -1;
+    for (int i = (offset + length) - 1; i >= offset; i--) {
+      if (b[i] == delimiter) {
+        result = i;
+        break;
+      }
+    }
+    return result;
+  }
+
+  /**
+   * A {@link KVComparator} for <code>-ROOT-</code> catalog table
+   * {@link KeyValue}s.
+   */
+  public static class RootComparator extends MetaComparator {
+    private final KeyComparator rawcomparator = new RootKeyComparator();
+
+    public KeyComparator getRawComparator() {
+      return this.rawcomparator;
+    }
+
+    @Override
+    protected Object clone() throws CloneNotSupportedException {
+      return new RootComparator();
+    }
+  }
+
+  /**
+   * A {@link KVComparator} for <code>.META.</code> catalog table
+   * {@link KeyValue}s.
+   */
+  public static class MetaComparator extends KVComparator {
+    private final KeyComparator rawcomparator = new MetaKeyComparator();
+
+    public KeyComparator getRawComparator() {
+      return this.rawcomparator;
+    }
+
+    @Override
+    protected Object clone() throws CloneNotSupportedException {
+      return new MetaComparator();
+    }
+  }
+
+  /**
+   * Compare KeyValues.  When we compare KeyValues, we only compare the Key
+   * portion.  This means two KeyValues with the same Key but different Values are
+   * considered the same as far as this Comparator is concerned.
+   * Hosts a {@link KeyComparator}.
+   */
+  public static class KVComparator implements java.util.Comparator<KeyValue> {
+    private final KeyComparator rawcomparator = new KeyComparator();
+
+    /**
+     * @return RawComparator that can compare the Key portion of a KeyValue.
+     * Used in hfile where indices are the Key portion of a KeyValue.
+     */
+    public KeyComparator getRawComparator() {
+      return this.rawcomparator;
+    }
+
+    public int compare(final KeyValue left, final KeyValue right) {
+      int ret = getRawComparator().compare(left.getBuffer(),
+          left.getOffset() + ROW_OFFSET, left.getKeyLength(),
+          right.getBuffer(), right.getOffset() + ROW_OFFSET,
+          right.getKeyLength());
+      if (ret != 0) return ret;
+      // Negate this comparison so later edits show up first
+      return -Longs.compare(left.getMemstoreTS(), right.getMemstoreTS());
+    }
+
+    public int compareTimestamps(final KeyValue left, final KeyValue right) {
+      return compareTimestamps(left, left.getKeyLength(), right,
+        right.getKeyLength());
+    }
+
+    int compareTimestamps(final KeyValue left, final int lkeylength,
+        final KeyValue right, final int rkeylength) {
+      // Compare timestamps
+      long ltimestamp = left.getTimestamp(lkeylength);
+      long rtimestamp = right.getTimestamp(rkeylength);
+      return getRawComparator().compareTimestamps(ltimestamp, rtimestamp);
+    }
+
+    /**
+     * @param left
+     * @param right
+     * @return Result comparing rows.
+     */
+    public int compareRows(final KeyValue left, final KeyValue right) {
+      return compareRows(left, left.getRowLength(), right,
+          right.getRowLength());
+    }
+
+    /**
+     * @param left
+     * @param lrowlength Length of left row.
+     * @param right
+     * @param rrowlength Length of right row.
+     * @return Result comparing rows.
+     */
+    public int compareRows(final KeyValue left, final short lrowlength,
+        final KeyValue right, final short rrowlength) {
+      return getRawComparator().compareRows(left.getBuffer(),
+          left.getRowOffset(), lrowlength,
+        right.getBuffer(), right.getRowOffset(), rrowlength);
+    }
+
+    /**
+     * @param left
+     * @param row - row key (arbitrary byte array)
+     * @return Result of comparing the row of <code>left</code> with <code>row</code>.
+     */
+    public int compareRows(final KeyValue left, final byte [] row) {
+      return getRawComparator().compareRows(left.getBuffer(),
+          left.getRowOffset(), left.getRowLength(), row, 0, row.length);
+    }
+
+    public int compareRows(byte [] left, int loffset, int llength,
+        byte [] right, int roffset, int rlength) {
+      return getRawComparator().compareRows(left, loffset, llength,
+        right, roffset, rlength);
+    }
+
+    public int compareColumns(final KeyValue left, final byte [] right,
+        final int roffset, final int rlength, final int rfamilyoffset) {
+      int offset = left.getFamilyOffset();
+      int length = left.getFamilyLength() + left.getQualifierLength();
+      return getRawComparator().compareColumns(left.getBuffer(), offset, length,
+        left.getFamilyLength(offset),
+        right, roffset, rlength, rfamilyoffset);
+    }
+
+    int compareColumns(final KeyValue left, final short lrowlength,
+        final KeyValue right, final short rrowlength) {
+      int lfoffset = left.getFamilyOffset(lrowlength);
+      int rfoffset = right.getFamilyOffset(rrowlength);
+      int lclength = left.getTotalColumnLength(lrowlength,lfoffset);
+      int rclength = right.getTotalColumnLength(rrowlength, rfoffset);
+      int lfamilylength = left.getFamilyLength(lfoffset);
+      int rfamilylength = right.getFamilyLength(rfoffset);
+      return getRawComparator().compareColumns(left.getBuffer(), lfoffset,
+          lclength, lfamilylength,
+        right.getBuffer(), rfoffset, rclength, rfamilylength);
+    }
+
+    /**
+     * Compares the row and column of two keyvalues for equality
+     * @param left
+     * @param right
+     * @return True if same row and column.
+     */
+    public boolean matchingRowColumn(final KeyValue left,
+        final KeyValue right) {
+      short lrowlength = left.getRowLength();
+      short rrowlength = right.getRowLength();
+      // TsOffset = end of column data. just comparing Row+CF length of each
+      return left.getTimestampOffset() == right.getTimestampOffset() &&
+        matchingRows(left, lrowlength, right, rrowlength) &&
+        compareColumns(left, lrowlength, right, rrowlength) == 0;
+    }
+
+    /**
+     * @param left
+     * @param right
+     * @return True if rows match.
+     */
+    public boolean matchingRows(final KeyValue left, final byte [] right) {
+      return compareRows(left, right) == 0;
+    }
+
+    /**
+     * Compares the row of two keyvalues for equality
+     * @param left
+     * @param right
+     * @return True if rows match.
+     */
+    public boolean matchingRows(final KeyValue left, final KeyValue right) {
+      short lrowlength = left.getRowLength();
+      short rrowlength = right.getRowLength();
+      return matchingRows(left, lrowlength, right, rrowlength);
+    }
+
+    /**
+     * @param left
+     * @param lrowlength
+     * @param right
+     * @param rrowlength
+     * @return True if rows match.
+     */
+    public boolean matchingRows(final KeyValue left, final short lrowlength,
+        final KeyValue right, final short rrowlength) {
+      return lrowlength == rrowlength &&
+        compareRows(left, lrowlength, right, rrowlength) == 0;
+    }
+
+    public boolean matchingRows(final byte [] left, final int loffset,
+        final int llength,
+        final byte [] right, final int roffset, final int rlength) {
+      int compare = compareRows(left, loffset, llength,
+          right, roffset, rlength);
+      if (compare != 0) {
+        return false;
+      }
+      return true;
+    }
+
+    /**
+     * Compares the row and timestamp of two keys.
+     * Was called matchesWithoutColumn in HStoreKey.
+     * @param left Key to compare.
+     * @param right Key to compare against.
+     * @return True if rows match and the timestamp of <code>left</code> is
+     * greater than or equal to the timestamp in <code>right</code>
+     */
+    public boolean matchingRowsGreaterTimestamp(final KeyValue left,
+        final KeyValue right) {
+      short lrowlength = left.getRowLength();
+      short rrowlength = right.getRowLength();
+      if (!matchingRows(left, lrowlength, right, rrowlength)) {
+        return false;
+      }
+      return left.getTimestamp() >= right.getTimestamp();
+    }
+
+    @Override
+    protected Object clone() throws CloneNotSupportedException {
+      return new KVComparator();
+    }
+
+    /**
+     * @return Comparator that ignores timestamps; useful when counting versions.
+     */
+    public KVComparator getComparatorIgnoringTimestamps() {
+      KVComparator c = null;
+      try {
+        c = (KVComparator)this.clone();
+        c.getRawComparator().ignoreTimestamp = true;
+      } catch (CloneNotSupportedException e) {
+        LOG.error("Not supported", e);
+      }
+      return c;
+    }
+
+    /**
+     * @return Comparator that ignores key type; useful when checking deletes.
+     */
+    public KVComparator getComparatorIgnoringType() {
+      KVComparator c = null;
+      try {
+        c = (KVComparator)this.clone();
+        c.getRawComparator().ignoreType = true;
+      } catch (CloneNotSupportedException e) {
+        LOG.error("Not supported", e);
+      }
+      return c;
+    }
+  }
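+
+  // Sorting sketch (kvs is an assumed java.util.List<KeyValue>):
+  //   java.util.Collections.sort(kvs, KeyValue.COMPARATOR);       // user tables
+  //   java.util.Collections.sort(kvs, KeyValue.META_COMPARATOR);  // .META. rows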
+
+  /**
+   * Creates a KeyValue that is last on the specified row id. That is,
+   * every other possible KeyValue for the given row would compare
+   * less than (sort before) the result of this call.
+   * @param row row key
+   * @return Last possible KeyValue on passed <code>row</code>
+   */
+  public static KeyValue createLastOnRow(final byte[] row) {
+    return new KeyValue(row, null, null, HConstants.LATEST_TIMESTAMP, Type.Minimum);
+  }
+
+  /**
+   * Create a KeyValue that is smaller than all other possible KeyValues
+   * for the given row. That is any (valid) KeyValue on 'row' would sort
+   * _after_ the result.
+   *
+   * @param row - row key (arbitrary byte array)
+   * @return First possible KeyValue on passed <code>row</code>
+   */
+  public static KeyValue createFirstOnRow(final byte [] row) {
+    return createFirstOnRow(row, HConstants.LATEST_TIMESTAMP);
+  }
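+
+  // Seek sketch (the row value is an illustrative assumption):
+  //   KeyValue seekKv = KeyValue.createFirstOnRow(Bytes.toBytes("row-0042"));
+  //   // seekKv sorts before every KeyValue actually stored under "row-0042",
+  //   // so it can be used as a seek key.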
+
+  /**
+   * Creates a KeyValue that is smaller than all other KeyValues that
+   * are older than the passed timestamp.
+   * @param row - row key (arbitrary byte array)
+   * @param ts - timestamp
+   * @return First possible key on passed <code>row</code> and timestamp.
+   */
+  public static KeyValue createFirstOnRow(final byte [] row,
+      final long ts) {
+    return new KeyValue(row, null, null, ts, Type.Maximum);
+  }
+
+  /**
+   * @param row - row key (arbitrary byte array)
+   * @param c column - {@link #parseColumn(byte[])} is called to split
+   * the column.
+   * @param ts - timestamp
+   * @return First possible key on passed <code>row</code>, column and timestamp
+   * @deprecated
+   */
+  public static KeyValue createFirstOnRow(final byte [] row, final byte [] c,
+      final long ts) {
+    byte [][] split = parseColumn(c);
+    return new KeyValue(row, split[0], split[1], ts, Type.Maximum);
+  }
+
+  /**
+   * Create a KeyValue for the specified row, family and qualifier that would be
+   * smaller than all other possible KeyValues that have the same row,family,qualifier.
+   * Used for seeking.
+   * @param row - row key (arbitrary byte array)
+   * @param family - family name
+   * @param qualifier - column qualifier
+   * @return First possible key on passed <code>row</code>, and column.
+   */
+  public static KeyValue createFirstOnRow(final byte [] row, final byte [] family,
+      final byte [] qualifier) {
+    return new KeyValue(row, family, qualifier, HConstants.LATEST_TIMESTAMP, Type.Maximum);
+  }
+
+  /**
+   * @param row - row key (arbitrary byte array)
+   * @param f - family name
+   * @param q - column qualifier
+   * @param ts - timestamp
+   * @return First possible key on passed <code>row</code>, column and timestamp
+   */
+  public static KeyValue createFirstOnRow(final byte [] row, final byte [] f,
+      final byte [] q, final long ts) {
+    return new KeyValue(row, f, q, ts, Type.Maximum);
+  }
+
+  /**
+   * Create a KeyValue for the specified row, family and qualifier that would be
+   * smaller than all other possible KeyValues that have the same row,
+   * family, qualifier.
+   * Used for seeking.
+   * @param row row key
+   * @param roffset row offset
+   * @param rlength row length
+   * @param family family name
+   * @param foffset family offset
+   * @param flength family length
+   * @param qualifier column qualifier
+   * @param qoffset qualifier offset
+   * @param qlength qualifier length
+   * @return First possible key on passed Row, Family, Qualifier.
+   */
+  public static KeyValue createFirstOnRow(final byte [] row,
+      final int roffset, final int rlength, final byte [] family,
+      final int foffset, final int flength, final byte [] qualifier,
+      final int qoffset, final int qlength) {
+    return new KeyValue(row, roffset, rlength, family,
+        foffset, flength, qualifier, qoffset, qlength,
+        HConstants.LATEST_TIMESTAMP, Type.Maximum, null, 0, 0);
+  }
+
+  /**
+   * Create a KeyValue for the specified row, family and qualifier that would be
+   * larger than or equal to all other possible KeyValues that have the same
+   * row, family, qualifier.
+   * Used for reseeking.
+   * @param row row key
+   * @param roffset row offset
+   * @param rlength row length
+   * @param family family name
+   * @param foffset family offset
+   * @param flength family length
+   * @param qualifier column qualifier
+   * @param qoffset qualifier offset
+   * @param qlength qualifier length
+   * @return Last possible key on passed row, family, qualifier.
+   */
+  public static KeyValue createLastOnRow(final byte [] row,
+      final int roffset, final int rlength, final byte [] family,
+      final int foffset, final int flength, final byte [] qualifier,
+      final int qoffset, final int qlength) {
+    return new KeyValue(row, roffset, rlength, family,
+        foffset, flength, qualifier, qoffset, qlength,
+        HConstants.OLDEST_TIMESTAMP, Type.Minimum, null, 0, 0);
+  }
+
+  /**
+   * @param b
+   * @return A KeyValue made of a byte array that holds the key-only part.
+   * Needed to convert hfile index members to KeyValues.
+   */
+  public static KeyValue createKeyValueFromKey(final byte [] b) {
+    return createKeyValueFromKey(b, 0, b.length);
+  }
+
+  /**
+   * @param bb
+   * @return A KeyValue made of a byte buffer that holds the key-only part.
+   * Needed to convert hfile index members to KeyValues.
+   */
+  public static KeyValue createKeyValueFromKey(final ByteBuffer bb) {
+    return createKeyValueFromKey(bb.array(), bb.arrayOffset(), bb.limit());
+  }
+
+  /**
+   * @param b
+   * @param o
+   * @param l
+   * @return A KeyValue made of a byte array that holds the key-only part.
+   * Needed to convert hfile index members to KeyValues.
+   */
+  public static KeyValue createKeyValueFromKey(final byte [] b, final int o,
+      final int l) {
+    // Size and record the key using the passed length 'l', not b.length, so
+    // sub-ranges of 'b' (offset 'o', length 'l') are handled correctly.
+    byte [] newb = new byte[l + ROW_OFFSET];
+    System.arraycopy(b, o, newb, ROW_OFFSET, l);
+    Bytes.putInt(newb, 0, l);
+    Bytes.putInt(newb, Bytes.SIZEOF_INT, 0);
+    return new KeyValue(newb);
+  }
+
+  /**
+   * Compare key portion of a {@link KeyValue} for keys in <code>-ROOT-</code>
+   * table.
+   */
+  public static class RootKeyComparator extends MetaKeyComparator {
+    public int compareRows(byte [] left, int loffset, int llength,
+        byte [] right, int roffset, int rlength) {
+      // Rows look like this: .META.,ROW_FROM_META,RID
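+      // e.g. (illustrative) a -ROOT- row key is a .META. region name such as
+      // ".META.,,1"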
+      //        LOG.info("ROOT " + Bytes.toString(left, loffset, llength) +
+      //          "---" + Bytes.toString(right, roffset, rlength));
+      final int metalength = 7; // '.META.' length
+      int lmetaOffsetPlusDelimiter = loffset + metalength;
+      int leftFarDelimiter = getDelimiterInReverse(left,
+          lmetaOffsetPlusDelimiter,
+          llength - metalength, HRegionInfo.DELIMITER);
+      int rmetaOffsetPlusDelimiter = roffset + metalength;
+      int rightFarDelimiter = getDelimiterInReverse(right,
+          rmetaOffsetPlusDelimiter, rlength - metalength,
+          HRegionInfo.DELIMITER);
+      if (leftFarDelimiter < 0 && rightFarDelimiter >= 0) {
+        // Nothing between .META. and the region id.  It's the first key.
+        return -1;
+      } else if (rightFarDelimiter < 0 && leftFarDelimiter >= 0) {
+        return 1;
+      } else if (leftFarDelimiter < 0 && rightFarDelimiter < 0) {
+        return 0;
+      }
+      int result = super.compareRows(left, lmetaOffsetPlusDelimiter,
+          leftFarDelimiter - lmetaOffsetPlusDelimiter,
+          right, rmetaOffsetPlusDelimiter,
+          rightFarDelimiter - rmetaOffsetPlusDelimiter);
+      if (result != 0) {
+        return result;
+      }
+      // Compare last part of row, the rowid.
+      leftFarDelimiter++;
+      rightFarDelimiter++;
+      result = compareRowid(left, leftFarDelimiter,
+          llength - (leftFarDelimiter - loffset),
+          right, rightFarDelimiter, rlength - (rightFarDelimiter - roffset));
+      return result;
+    }
+  }
+
+  /**
+   * Comparator that compares row component only of a KeyValue.
+   */
+  public static class RowComparator implements Comparator<KeyValue> {
+    final KVComparator comparator;
+
+    public RowComparator(final KVComparator c) {
+      this.comparator = c;
+    }
+
+    public int compare(KeyValue left, KeyValue right) {
+      return comparator.compareRows(left, right);
+    }
+  }
+
+  /**
+   * Compare key portion of a {@link KeyValue} for keys in <code>.META.</code>
+   * table.
+   */
+  public static class MetaKeyComparator extends KeyComparator {
+    public int compareRows(byte [] left, int loffset, int llength,
+        byte [] right, int roffset, int rlength) {
+      //        LOG.info("META " + Bytes.toString(left, loffset, llength) +
+      //          "---" + Bytes.toString(right, roffset, rlength));
+      int leftDelimiter = getDelimiter(left, loffset, llength,
+          HRegionInfo.DELIMITER);
+      int rightDelimiter = getDelimiter(right, roffset, rlength,
+          HRegionInfo.DELIMITER);
+      if (leftDelimiter < 0 && rightDelimiter >= 0) {
+        // No comma delimiter in the row (it is just a table name).  It's the
+        // first key.
+        return -1;
+      } else if (rightDelimiter < 0 && leftDelimiter >= 0) {
+        return 1;
+      } else if (leftDelimiter < 0 && rightDelimiter < 0) {
+        return 0;
+      }
+      // Compare up to the delimiter
+      int result = Bytes.compareTo(left, loffset, leftDelimiter - loffset,
+          right, roffset, rightDelimiter - roffset);
+      if (result != 0) {
+        return result;
+      }
+      // Compare middle bit of the row.
+      // Move past delimiter
+      leftDelimiter++;
+      rightDelimiter++;
+      int leftFarDelimiter = getRequiredDelimiterInReverse(left, leftDelimiter,
+          llength - (leftDelimiter - loffset), HRegionInfo.DELIMITER);
+      int rightFarDelimiter = getRequiredDelimiterInReverse(right,
+          rightDelimiter, rlength - (rightDelimiter - roffset),
+          HRegionInfo.DELIMITER);
+      // Now compare the middle section of the row.
+      result = super.compareRows(left, leftDelimiter,
+          leftFarDelimiter - leftDelimiter, right, rightDelimiter,
+          rightFarDelimiter - rightDelimiter);
+      if (result != 0) {
+        return result;
+      }
+      // Compare last part of row, the rowid.
+      leftFarDelimiter++;
+      rightFarDelimiter++;
+      result = compareRowid(left, leftFarDelimiter,
+          llength - (leftFarDelimiter - loffset),
+          right, rightFarDelimiter, rlength - (rightFarDelimiter - roffset));
+      return result;
+    }
+
+    protected int compareRowid(byte[] left, int loffset, int llength,
+        byte[] right, int roffset, int rlength) {
+      return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+    }
+  }
+
+  /**
+   * Compare key portion of a {@link KeyValue}.
+   */
+  public static class KeyComparator implements RawComparator<byte []> {
+    volatile boolean ignoreTimestamp = false;
+    volatile boolean ignoreType = false;
+
+    public int compare(byte[] left, int loffset, int llength, byte[] right,
+        int roffset, int rlength) {
+      // Compare row
+      short lrowlength = Bytes.toShort(left, loffset);
+      short rrowlength = Bytes.toShort(right, roffset);
+      int compare = compareRows(left, loffset + Bytes.SIZEOF_SHORT,
+          lrowlength,
+          right, roffset + Bytes.SIZEOF_SHORT, rrowlength);
+      if (compare != 0) {
+        return compare;
+      }
+
+      // Compare column family.  Start compare past row and family length.
+      int lcolumnoffset = Bytes.SIZEOF_SHORT + lrowlength + 1 + loffset;
+      int rcolumnoffset = Bytes.SIZEOF_SHORT + rrowlength + 1 + roffset;
+      int lcolumnlength = llength - TIMESTAMP_TYPE_SIZE -
+        (lcolumnoffset - loffset);
+      int rcolumnlength = rlength - TIMESTAMP_TYPE_SIZE -
+        (rcolumnoffset - roffset);
+
+      // If the rows match, the 'left' has no column, and its type is
+      // 'minimum', then report left as larger than right.
+
+      // This supports 'last key on a row': if an operand has no column and its
+      // type byte is Type.Minimum (code 0), we say it is the bigger key.  This
+      // lets us seek to the last key in a row.
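+      // For example (illustrative): left = ("r1", empty column, Type.Minimum)
+      // vs right = ("r1", "f:q", a Put) -> returns 1, so the empty-column
+      // Minimum key sorts after every real cell in row "r1".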
+
+      byte ltype = left[loffset + (llength - 1)];
+      byte rtype = right[roffset + (rlength - 1)];
+
+      if (lcolumnlength == 0 && ltype == Type.Minimum.getCode()) {
+        return 1; // left is bigger.
+      }
+      if (rcolumnlength == 0 && rtype == Type.Minimum.getCode()) {
+        return -1;
+      }
+
+      // TODO the family and qualifier should be compared separately
+      compare = Bytes.compareTo(left, lcolumnoffset, lcolumnlength, right,
+          rcolumnoffset, rcolumnlength);
+      if (compare != 0) {
+        return compare;
+      }
+
+      if (!this.ignoreTimestamp) {
+        // Get timestamps.
+        long ltimestamp = Bytes.toLong(left,
+            loffset + (llength - TIMESTAMP_TYPE_SIZE));
+        long rtimestamp = Bytes.toLong(right,
+            roffset + (rlength - TIMESTAMP_TYPE_SIZE));
+        compare = compareTimestamps(ltimestamp, rtimestamp);
+        if (compare != 0) {
+          return compare;
+        }
+      }
+
+      if (!this.ignoreType) {
+        // Compare types. Let the delete types sort ahead of puts; i.e. types
+        // of higher numbers sort before those of lesser numbers
+        return (0xff & rtype) - (0xff & ltype);
+      }
+      return 0;
+    }
+
+    public int compare(byte[] left, byte[] right) {
+      return compare(left, 0, left.length, right, 0, right.length);
+    }
+
+    public int compareRows(byte [] left, int loffset, int llength,
+        byte [] right, int roffset, int rlength) {
+      return Bytes.compareTo(left, loffset, llength, right, roffset, rlength);
+    }
+
+    protected int compareColumns(
+        byte [] left, int loffset, int llength, final int lfamilylength,
+        byte [] right, int roffset, int rlength, final int rfamilylength) {
+      return KeyValue.compareColumns(left, loffset, llength, lfamilylength,
+        right, roffset, rlength, rfamilylength);
+    }
+
+    int compareTimestamps(final long ltimestamp, final long rtimestamp) {
+      // Sorting newer timestamps ahead of older ones looks backwards compared
+      // to a plain numeric sort, but it is intentional.  This way, newer
+      // timestamps are found first when we iterate over a memstore, and newer
+      // versions are the first we trip over when reading from a store file.
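+      // For example (illustrative values): compareTimestamps(200L, 100L)
+      // returns -1, so the newer timestamp 200 sorts ahead of 100.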
+      if (ltimestamp < rtimestamp) {
+        return 1;
+      } else if (ltimestamp > rtimestamp) {
+        return -1;
+      }
+      return 0;
+    }
+  }
+
+  // HeapSize
+  public long heapSize() {
+    return ClassSize.align(ClassSize.OBJECT + (2 * ClassSize.REFERENCE) +
+        ClassSize.align(ClassSize.ARRAY) + ClassSize.align(length) +
+        (3 * Bytes.SIZEOF_INT) +
+        ClassSize.align(ClassSize.ARRAY) +
+        (2 * Bytes.SIZEOF_LONG));
+  }
+
+  // this overload assumes that the length bytes have already been read,
+  // and it expects the length of the KeyValue to be explicitly passed
+  // to it.
+  public void readFields(int length, final DataInput in) throws IOException {
+    this.length = length;
+    this.offset = 0;
+    this.bytes = new byte[this.length];
+    in.readFully(this.bytes, 0, this.length);
+  }
+
+  // Writable
+  public void readFields(final DataInput in) throws IOException {
+    int length = in.readInt();
+    readFields(length, in);
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeInt(this.length);
+    out.write(this.bytes, this.offset, this.length);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java b/0.90/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
new file mode 100644
index 0000000..0d696ab
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java
@@ -0,0 +1,454 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;
+
+import java.util.concurrent.CopyOnWriteArrayList;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+
+/**
+ * This class creates a single process HBase cluster. One thread is created for
+ * a master and one per region server.
+ *
+ * Call {@link #startup()} to start the cluster running and {@link #shutdown()}
+ * to close it all down. {@link #join} the cluster if you want to wait on
+ * shutdown completion.
+ *
+ * <p>Runs master on port 60000 by default.  Because we can't just kill the
+ * process -- not till HADOOP-1700 gets fixed and even then.... -- we need to
+ * be able to find the master with a remote client to run shutdown.  To use a
+ * port other than 60000, set the hbase.master to a value of 'local:PORT':
+ * that is 'local', not 'localhost', and the port number the master should use
+ * instead of 60000.
+ *
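+ * <p>A minimal usage sketch (what you run against the cluster in between is
+ * up to you):
+ * <pre>
+ *   Configuration conf = HBaseConfiguration.create();
+ *   LocalHBaseCluster cluster = new LocalHBaseCluster(conf, 1);
+ *   cluster.startup();
+ *   // ... use the cluster, e.g. via HBaseAdmin ...
+ *   cluster.shutdown();
+ *   cluster.join();
+ * </pre>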
+ */
+public class LocalHBaseCluster {
+  static final Log LOG = LogFactory.getLog(LocalHBaseCluster.class);
+  private final List<JVMClusterUtil.MasterThread> masterThreads =
+    new CopyOnWriteArrayList<JVMClusterUtil.MasterThread>();
+  private final List<JVMClusterUtil.RegionServerThread> regionThreads =
+    new CopyOnWriteArrayList<JVMClusterUtil.RegionServerThread>();
+  private final static int DEFAULT_NO = 1;
+  /** local mode */
+  public static final String LOCAL = "local";
+  /** 'local:' */
+  public static final String LOCAL_COLON = LOCAL + ":";
+  private final Configuration conf;
+  private final Class<? extends HMaster> masterClass;
+  private final Class<? extends HRegionServer> regionServerClass;
+
+  /**
+   * Constructor.
+   * @param conf
+   * @throws IOException
+   */
+  public LocalHBaseCluster(final Configuration conf)
+  throws IOException {
+    this(conf, DEFAULT_NO);
+  }
+
+  /**
+   * Constructor.
+   * @param conf Configuration to use.  Post construction has the master's
+   * address.
+   * @param noRegionServers Count of regionservers to start.
+   * @throws IOException
+   */
+  public LocalHBaseCluster(final Configuration conf, final int noRegionServers)
+  throws IOException {
+    this(conf, 1, noRegionServers, getMasterImplementation(conf),
+        getRegionServerImplementation(conf));
+  }
+
+  /**
+   * Constructor.
+   * @param conf Configuration to use.  Post construction has the active master
+   * address.
+   * @param noMasters Count of masters to start.
+   * @param noRegionServers Count of regionservers to start.
+   * @throws IOException
+   */
+  public LocalHBaseCluster(final Configuration conf, final int noMasters,
+      final int noRegionServers)
+  throws IOException {
+    this(conf, noMasters, noRegionServers, getMasterImplementation(conf),
+        getRegionServerImplementation(conf));
+  }
+
+  @SuppressWarnings("unchecked")
+  private static Class<? extends HRegionServer> getRegionServerImplementation(final Configuration conf) {
+    return (Class<? extends HRegionServer>)conf.getClass(HConstants.REGION_SERVER_IMPL,
+       HRegionServer.class);
+  }
+
+  @SuppressWarnings("unchecked")
+  private static Class<? extends HMaster> getMasterImplementation(final Configuration conf) {
+    return (Class<? extends HMaster>)conf.getClass(HConstants.MASTER_IMPL,
+       HMaster.class);
+  }
+
+  /**
+   * Constructor.
+   * @param conf Configuration to use.  Post construction has the master's
+   * address.
+   * @param noMasters Count of masters to start.
+   * @param noRegionServers Count of regionservers to start.
+   * @param masterClass
+   * @param regionServerClass
+   * @throws IOException
+   */
+  @SuppressWarnings("unchecked")
+  public LocalHBaseCluster(final Configuration conf, final int noMasters,
+    final int noRegionServers, final Class<? extends HMaster> masterClass,
+    final Class<? extends HRegionServer> regionServerClass)
+  throws IOException {
+    this.conf = conf;
+    // Always have masters and regionservers come up on port '0' so we don't
+    // clash over default ports.
+    conf.set(HConstants.MASTER_PORT, "0");
+    conf.set(HConstants.REGIONSERVER_PORT, "0");
+    // Start the HMasters.
+    this.masterClass =
+      (Class<? extends HMaster>)conf.getClass(HConstants.MASTER_IMPL,
+          masterClass);
+    for (int i = 0; i < noMasters; i++) {
+      addMaster(new Configuration(conf), i);
+    }
+    // Start the HRegionServers.
+    this.regionServerClass =
+      (Class<? extends HRegionServer>)conf.getClass(HConstants.REGION_SERVER_IMPL,
+       regionServerClass);
+
+    for (int i = 0; i < noRegionServers; i++) {
+      addRegionServer(new Configuration(conf), i);
+    }
+  }
+
+  public JVMClusterUtil.RegionServerThread addRegionServer()
+      throws IOException {
+    return addRegionServer(new Configuration(conf), this.regionThreads.size());
+  }
+
+  public JVMClusterUtil.RegionServerThread addRegionServer(
+      Configuration config, final int index)
+  throws IOException {
+    // Create each regionserver with its own Configuration instance so each has
+    // its own HConnection instance rather than sharing one (see HBASE_INSTANCES
+    // down in the guts of HConnectionManager).
+    JVMClusterUtil.RegionServerThread rst =
+      JVMClusterUtil.createRegionServerThread(config,
+          this.regionServerClass, index);
+    this.regionThreads.add(rst);
+    return rst;
+  }
+
+  public JVMClusterUtil.RegionServerThread addRegionServer(
+      final Configuration config, final int index, User user)
+  throws IOException, InterruptedException {
+    return user.runAs(
+        new PrivilegedExceptionAction<JVMClusterUtil.RegionServerThread>() {
+          public JVMClusterUtil.RegionServerThread run() throws Exception {
+            return addRegionServer(config, index);
+          }
+        });
+  }
+
+  public JVMClusterUtil.MasterThread addMaster() throws IOException {
+    return addMaster(new Configuration(conf), this.masterThreads.size());
+  }
+
+  public JVMClusterUtil.MasterThread addMaster(Configuration c, final int index)
+  throws IOException {
+    // Create each master with its own Configuration instance so each has its
+    // own HConnection instance rather than sharing one (see HBASE_INSTANCES
+    // down in the guts of HConnectionManager).
+    JVMClusterUtil.MasterThread mt =
+      JVMClusterUtil.createMasterThread(c,
+        this.masterClass, index);
+    this.masterThreads.add(mt);
+    return mt;
+  }
+
+  public JVMClusterUtil.MasterThread addMaster(
+      final Configuration c, final int index, User user)
+  throws IOException, InterruptedException {
+    return user.runAs(
+        new PrivilegedExceptionAction<JVMClusterUtil.MasterThread>() {
+          public JVMClusterUtil.MasterThread run() throws Exception {
+            return addMaster(c, index);
+          }
+        });
+  }
+
+  /**
+   * @param serverNumber
+   * @return region server
+   */
+  public HRegionServer getRegionServer(int serverNumber) {
+    return regionThreads.get(serverNumber).getRegionServer();
+  }
+
+  /**
+   * @return Read-only list of region server threads.
+   */
+  public List<JVMClusterUtil.RegionServerThread> getRegionServers() {
+    return Collections.unmodifiableList(this.regionThreads);
+  }
+
+  /**
+   * @return List of running servers (Some servers may have been killed or
+   * aborted during lifetime of cluster; these servers are not included in this
+   * list).
+   */
+  public List<JVMClusterUtil.RegionServerThread> getLiveRegionServers() {
+    List<JVMClusterUtil.RegionServerThread> liveServers =
+      new ArrayList<JVMClusterUtil.RegionServerThread>();
+    List<RegionServerThread> list = getRegionServers();
+    for (JVMClusterUtil.RegionServerThread rst: list) {
+      if (rst.isAlive()) liveServers.add(rst);
+    }
+    return liveServers;
+  }
+
+  /**
+   * Wait for the specified region server to stop
+   * Removes this thread from list of running threads.
+   * @param serverNumber
+   * @return Name of region server that just went down.
+   */
+  public String waitOnRegionServer(int serverNumber) {
+    JVMClusterUtil.RegionServerThread regionServerThread =
+      this.regionThreads.remove(serverNumber);
+    while (regionServerThread.isAlive()) {
+      try {
+        LOG.info("Waiting on " +
+          regionServerThread.getRegionServer().getHServerInfo().toString());
+        regionServerThread.join();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      } catch (IOException e) {
+        e.printStackTrace();
+      }
+    }
+    return regionServerThread.getName();
+  }
+
+  /**
+   * Wait for the specified region server to stop
+   * Removes this thread from list of running threads.
+   * @param rst
+   * @return Name of region server that just went down.
+   */
+  public String waitOnRegionServer(JVMClusterUtil.RegionServerThread rst) {
+    while (rst.isAlive()) {
+      try {
+        LOG.info("Waiting on " +
+          rst.getRegionServer().getHServerInfo().toString());
+        rst.join();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      } catch (IOException e) {
+        e.printStackTrace();
+      }
+    }
+    for (int i = 0; i < regionThreads.size(); i++) {
+      if (regionThreads.get(i) == rst) {
+        regionThreads.remove(i);
+        break;
+      }
+    }
+    return rst.getName();
+  }
+
+  /**
+   * @param serverNumber
+   * @return the HMaster running in the specified master thread
+   */
+  public HMaster getMaster(int serverNumber) {
+    return masterThreads.get(serverNumber).getMaster();
+  }
+
+  /**
+   * Gets the current active master, if available.  If no active master, returns
+   * null.
+   * @return the HMaster for the active master
+   */
+  public HMaster getActiveMaster() {
+    for (JVMClusterUtil.MasterThread mt : masterThreads) {
+      if (mt.getMaster().isActiveMaster()) {
+        return mt.getMaster();
+      }
+    }
+    return null;
+  }
+
+  /**
+   * @return Read-only list of master threads.
+   */
+  public List<JVMClusterUtil.MasterThread> getMasters() {
+    return Collections.unmodifiableList(this.masterThreads);
+  }
+
+  /**
+   * @return List of running master servers (Some servers may have been killed
+   * or aborted during lifetime of cluster; these servers are not included in
+   * this list).
+   */
+  public List<JVMClusterUtil.MasterThread> getLiveMasters() {
+    List<JVMClusterUtil.MasterThread> liveServers =
+      new ArrayList<JVMClusterUtil.MasterThread>();
+    List<JVMClusterUtil.MasterThread> list = getMasters();
+    for (JVMClusterUtil.MasterThread mt: list) {
+      if (mt.isAlive()) {
+        liveServers.add(mt);
+      }
+    }
+    return liveServers;
+  }
+
+  /**
+   * Wait for the specified master to stop
+   * Removes this thread from list of running threads.
+   * @param serverNumber
+   * @return Name of master that just went down.
+   */
+  public String waitOnMaster(int serverNumber) {
+    JVMClusterUtil.MasterThread masterThread =
+      this.masterThreads.remove(serverNumber);
+    while (masterThread.isAlive()) {
+      try {
+        LOG.info("Waiting on " +
+          masterThread.getMaster().getServerName().toString());
+        masterThread.join();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+    }
+    return masterThread.getName();
+  }
+
+  /**
+   * Wait for the specified master to stop
+   * Removes this thread from list of running threads.
+   * @param masterThread
+   * @return Name of master that just went down.
+   */
+  public String waitOnMaster(JVMClusterUtil.MasterThread masterThread) {
+    while (masterThread.isAlive()) {
+      try {
+        LOG.info("Waiting on " +
+          masterThread.getMaster().getServerName().toString());
+        masterThread.join();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+    }
+    for (int i = 0; i < masterThreads.size(); i++) {
+      if (masterThreads.get(i) == masterThread) {
+        masterThreads.remove(i);
+        break;
+      }
+    }
+    return masterThread.getName();
+  }
+
+  /**
+   * Wait for Mini HBase Cluster to shut down.
+   * Presumes you've already called {@link #shutdown()}.
+   */
+  public void join() {
+    if (this.regionThreads != null) {
+      for (Thread t : this.regionThreads) {
+        if (t.isAlive()) {
+          try {
+            t.join();
+          } catch (InterruptedException e) {
+            // continue
+          }
+        }
+      }
+    }
+    if (this.masterThreads != null) {
+      for (Thread t : this.masterThreads) {
+        if (t.isAlive()) {
+          try {
+            t.join();
+          } catch (InterruptedException e) {
+            // continue
+          }
+        }
+      }
+    }
+  }
+
+  /**
+   * Start the cluster.
+   */
+  public void startup() {
+    JVMClusterUtil.startup(this.masterThreads, this.regionThreads);
+  }
+
+  /**
+   * Shut down the mini HBase cluster
+   */
+  public void shutdown() {
+    JVMClusterUtil.shutdown(this.masterThreads, this.regionThreads);
+  }
+
+  /**
+   * @param c Configuration to check.
+   * @return True if the configuration specifies local (non-distributed) mode.
+   */
+  public static boolean isLocal(final Configuration c) {
+    final String mode = c.get(HConstants.CLUSTER_DISTRIBUTED);
+    return mode == null || mode.equals(HConstants.CLUSTER_IS_LOCAL);
+  }
+
+  /**
+   * Test things basically work.
+   * @param args
+   * @throws IOException
+   */
+  public static void main(String[] args) throws IOException {
+    Configuration conf = HBaseConfiguration.create();
+    LocalHBaseCluster cluster = new LocalHBaseCluster(conf);
+    cluster.startup();
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    HTableDescriptor htd =
+      new HTableDescriptor(Bytes.toBytes(cluster.getClass().getName()));
+    admin.createTable(htd);
+    cluster.shutdown();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/MasterAddressTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/MasterAddressTracker.java
new file mode 100644
index 0000000..1da9742
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/MasterAddressTracker.java
@@ -0,0 +1,88 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+
+/**
+ * Manages the location of the current active Master for this RegionServer.
+ * <p>
+ * Listens for ZooKeeper events related to the master address. The node
+ * <code>/master</code> will contain the address of the current master.
+ * This listener is interested in
+ * <code>NodeDeleted</code> and <code>NodeCreated</code> events on
+ * <code>/master</code>.
+ * <p>
+ * Utilizes {@link ZooKeeperNodeTracker} for zk interactions.
+ * <p>
+ * You can get the current master via {@link #getMasterAddress()}
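+ * <p>A usage sketch (the <code>watcher</code> and <code>abortable</code>
+ * references are assumed to come from the hosting server):
+ * <pre>
+ *   MasterAddressTracker tracker = new MasterAddressTracker(watcher, abortable);
+ *   tracker.start();
+ *   HServerAddress master = tracker.waitForMaster(30 * 1000); // null on timeout
+ * </pre>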
+ */
+public class MasterAddressTracker extends ZooKeeperNodeTracker {
+  /**
+   * Construct a master address listener with the specified
+   * <code>zookeeper</code> reference.
+   * <p>
+   * This constructor does not trigger any actions; you must call methods
+   * explicitly.  Normally you will just want to call {@link #start()} to
+   * begin tracking the master address.
+   *
+   * @param watcher zk reference and watcher
+   * @param abortable abortable in case of fatal error
+   */
+  public MasterAddressTracker(ZooKeeperWatcher watcher, Abortable abortable) {
+    super(watcher, watcher.masterAddressZNode, abortable);
+  }
+
+  /**
+   * Get the address of the current master if one is available.  Returns null
+   * if no current master.
+   *
+   * @return server address of current active master, or null if none available
+   */
+  public HServerAddress getMasterAddress() {
+    byte [] data = super.getData();
+    return data == null ? null : new HServerAddress(Bytes.toString(data));
+  }
+
+  /**
+   * Check if there is a master available.
+   * @return true if there is a master set, false if not.
+   */
+  public boolean hasMaster() {
+    return super.getData() != null;
+  }
+
+  /**
+   * Get the address of the current master.  If no master is available, method
+   * will block until one is available, the thread is interrupted, or timeout
+   * has passed.
+   *
+   * @param timeout maximum time to wait for master in millis, 0 for forever
+   * @return server address of current active master, null if timed out
+   * @throws InterruptedException if the thread is interrupted while waiting
+   */
+  public synchronized HServerAddress waitForMaster(long timeout)
+  throws InterruptedException {
+    byte [] data = super.blockUntilAvailable();
+    return data == null ? null : new HServerAddress(Bytes.toString(data));
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java b/0.90/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java
new file mode 100644
index 0000000..6cf564c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java
@@ -0,0 +1,49 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown if the master is not running
+ */
+public class MasterNotRunningException extends IOException {
+  private static final long serialVersionUID = 1L << 23 - 1L;
+  /** default constructor */
+  public MasterNotRunningException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public MasterNotRunningException(String s) {
+    super(s);
+  }
+
+  /**
+   * Constructor taking another exception.
+   * @param e Exception to grab data from.
+   */
+  public MasterNotRunningException(Exception e) {
+    super(e);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java b/0.90/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java
new file mode 100644
index 0000000..2c275e3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/NotAllMetaRegionsOnlineException.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown when an operation requires the root and all meta regions to be online
+ */
+public class NotAllMetaRegionsOnlineException extends DoNotRetryIOException {
+  private static final long serialVersionUID = 6439786157874827523L;
+  /**
+   * default constructor
+   */
+  public NotAllMetaRegionsOnlineException() {
+    super();
+  }
+
+  /**
+   * @param message
+   */
+  public NotAllMetaRegionsOnlineException(String message) {
+    super(message);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java
new file mode 100644
index 0000000..32da8cb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java
@@ -0,0 +1,53 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Thrown by a region server if it is sent a request for a region it is not
+ * serving.
+ */
+public class NotServingRegionException extends IOException {
+  private static final long serialVersionUID = 1L << 17 - 1L;
+
+  /** default constructor */
+  public NotServingRegionException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public NotServingRegionException(String s) {
+    super(s);
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public NotServingRegionException(final byte [] s) {
+    super(Bytes.toString(s));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java b/0.90/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java
new file mode 100644
index 0000000..e3a9315
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java
@@ -0,0 +1,34 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * This exception is thrown by the master when a region server was shut down
+ * and restarted so fast that the master still hasn't processed the server
+ * shutdown of the first instance.
+ */
+@SuppressWarnings("serial")
+public class PleaseHoldException extends IOException {
+  public PleaseHoldException(String message) {
+    super(message);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/RegionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/RegionException.java
new file mode 100644
index 0000000..63063a5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/RegionException.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+/**
+ * Thrown when something happens related to region handling.
+ * Subclasses have to be more specific.
+ */
+public class RegionException extends IOException {
+  private static final long serialVersionUID = 1473510258071111371L;
+
+  /** default constructor */
+  public RegionException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public RegionException(String s) {
+    super(s);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java
new file mode 100644
index 0000000..485c254
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java
@@ -0,0 +1,120 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * An immutable class which contains static methods for handling
+ * org.apache.hadoop.ipc.RemoteException exceptions.
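+ * <p>A typical use (a sketch): unwrap before rethrowing so callers see the
+ * underlying exception type where possible:
+ * <pre>
+ *   try {
+ *     // ... an RPC call that may surface a RemoteException ...
+ *   } catch (IOException e) {
+ *     throw RemoteExceptionHandler.checkIOException(e);
+ *   }
+ * </pre>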
+ */
+public class RemoteExceptionHandler {
+  /* Not instantiable */
+  private RemoteExceptionHandler() {super();}
+
+  /**
+   * Examine passed Throwable.  See if it's carrying a RemoteException. If so,
+   * run {@link #decodeRemoteException(RemoteException)} on it.  Otherwise,
+   * pass back <code>t</code> unaltered.
+   * @param t Throwable to examine.
+   * @return Decoded RemoteException carried by <code>t</code> or
+   * <code>t</code> unaltered.
+   */
+  public static Throwable checkThrowable(final Throwable t) {
+    Throwable result = t;
+    if (t instanceof RemoteException) {
+      try {
+        result =
+          RemoteExceptionHandler.decodeRemoteException((RemoteException)t);
+      } catch (Throwable tt) {
+        result = tt;
+      }
+    }
+    return result;
+  }
+
+  /**
+   * Examine passed IOException.  See if it's carrying a RemoteException. If so,
+   * run {@link #decodeRemoteException(RemoteException)} on it.  Otherwise,
+   * pass back <code>e</code> unaltered.
+   * @param e Exception to examine.
+   * @return Decoded RemoteException carried by <code>e</code> or
+   * <code>e</code> unaltered.
+   */
+  public static IOException checkIOException(final IOException e) {
+    Throwable t = checkThrowable(e);
+    return t instanceof IOException? (IOException)t: new IOException(t);
+  }
+
+  /**
+   * Converts org.apache.hadoop.ipc.RemoteException into original exception,
+   * if possible. If the original exception is an Error or a RuntimeException,
+   * throws the original exception.
+   *
+   * @param re original exception
+   * @return decoded RemoteException if it is an instance of or a subclass of
+   *         IOException, or the original RemoteException if it cannot be decoded.
+   *
+   * @throws IOException indicating a server error occurred if the decoded
+   *         exception is not an IOException. The decoded exception is set as
+   *         the cause.
+   * @deprecated Use {@link RemoteException#unwrapRemoteException()} instead.
+   * In fact we should look into deprecating this whole class - St.Ack 2010929
+   */
+  public static IOException decodeRemoteException(final RemoteException re)
+  throws IOException {
+    IOException i = re;
+
+    try {
+      Class<?> c = Class.forName(re.getClassName());
+
+      Class<?>[] parameterTypes = { String.class };
+      Constructor<?> ctor = c.getConstructor(parameterTypes);
+
+      Object[] arguments = { re.getMessage() };
+      Throwable t = (Throwable) ctor.newInstance(arguments);
+
+      if (t instanceof IOException) {
+        i = (IOException) t;
+
+      } else {
+        i = new IOException("server error");
+        i.initCause(t);
+        throw i;
+      }
+
+    } catch (ClassNotFoundException x) {
+      // continue
+    } catch (NoSuchMethodException x) {
+      // continue
+    } catch (IllegalAccessException x) {
+      // continue
+    } catch (InvocationTargetException x) {
+      // continue
+    } catch (InstantiationException x) {
+      // continue
+    }
+    return i;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/Server.java b/0.90/src/main/java/org/apache/hadoop/hbase/Server.java
new file mode 100644
index 0000000..df396fa
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/Server.java
@@ -0,0 +1,54 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+
+/**
+ * Defines the set of shared functions implemented by HBase servers (Masters
+ * and RegionServers).
+ */
+public interface Server extends Abortable, Stoppable {
+  /**
+   * Gets the configuration object for this server.
+   */
+  public Configuration getConfiguration();
+
+  /**
+   * Gets the ZooKeeper instance for this server.
+   */
+  public ZooKeeperWatcher getZooKeeper();
+
+  /**
+   * @return Master's instance of {@link CatalogTracker}
+   */
+  public CatalogTracker getCatalogTracker();
+
+  /**
+   * Gets the unique server name for this server.
+   * If a RegionServer, it returns a concatenation of hostname, port and
+   * startcode formatted as <code>&lt;hostname> ',' &lt;port> ',' &lt;startcode></code>.
+   * If the master, it returns <code>&lt;hostname> ':' &lt;port></code>.
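+   * <p>For example (illustrative values), a regionserver name might look like
+   * <code>regionserver1.example.org,60020,1285270434578</code> and a master
+   * name like <code>master.example.org:60000</code>.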
+   * @return unique server name
+   */
+  public String getServerName();
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/Stoppable.java b/0.90/src/main/java/org/apache/hadoop/hbase/Stoppable.java
new file mode 100644
index 0000000..74d4f4a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/Stoppable.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/**
+ * Implementers are Stoppable.
+ */
+public interface Stoppable {
+  /**
+   * Stop this service.
+   * @param why Why we're stopping.
+   */
+  public void stop(String why);
+
+  /**
+   * @return True if {@link #stop(String)} has been called.
+   */
+  public boolean isStopped();
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/TableExistsException.java b/0.90/src/main/java/org/apache/hadoop/hbase/TableExistsException.java
new file mode 100644
index 0000000..5fde219
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/TableExistsException.java
@@ -0,0 +1,38 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown when a table exists but should not
+ */
+public class TableExistsException extends IOException {
+  private static final long serialVersionUID = 1L << 7 - 1L;
+  /** default constructor */
+  public TableExistsException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   *
+   * @param s message
+   */
+  public TableExistsException(String s) {
+    super(s);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java b/0.90/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java
new file mode 100644
index 0000000..4287800
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Thrown if a table should be offline but is not
+ */
+public class TableNotDisabledException extends IOException {
+  private static final long serialVersionUID = 1L << 19 - 1L;
+  /** default constructor */
+  public TableNotDisabledException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public TableNotDisabledException(String s) {
+    super(s);
+  }
+
+  /**
+   * @param tableName Name of table that is not disabled
+   */
+  public TableNotDisabledException(byte[] tableName) {
+    this(Bytes.toString(tableName));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java b/0.90/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java
new file mode 100644
index 0000000..dc6da43
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java
@@ -0,0 +1,35 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+/** Thrown when a table can not be located */
+public class TableNotFoundException extends RegionException {
+  private static final long serialVersionUID = 993179627856392526L;
+
+  /** default constructor */
+  public TableNotFoundException() {
+    super();
+  }
+
+  /** @param s message */
+  public TableNotFoundException(String s) {
+    super(s);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java
new file mode 100644
index 0000000..e87f42a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/UnknownRegionException.java
@@ -0,0 +1,33 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown when we are asked to operate on a region we know nothing about.
+ */
+public class UnknownRegionException extends IOException {
+  private static final long serialVersionUID = 1968858760475205392L;
+
+  public UnknownRegionException(String regionName) {
+    super(regionName);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/UnknownRowLockException.java b/0.90/src/main/java/org/apache/hadoop/hbase/UnknownRowLockException.java
new file mode 100644
index 0000000..8ca50a9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/UnknownRowLockException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+/**
+ * Thrown if a region server is passed an unknown row lock id
+ */
+public class UnknownRowLockException extends DoNotRetryIOException {
+  private static final long serialVersionUID = 993179627856392526L;
+
+  /** constructor */
+  public UnknownRowLockException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public UnknownRowLockException(String s) {
+    super(s);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java b/0.90/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java
new file mode 100644
index 0000000..13f2f6c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/UnknownScannerException.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+/**
+ * Thrown if a region server is passed an unknown scanner id.
+ * Usually means the client has taken too long between check-ins so the
+ * scanner lease on the server side has expired, OR the server side is closing
+ * down and has cancelled all leases.
+ */
+public class UnknownScannerException extends DoNotRetryIOException {
+  private static final long serialVersionUID = 993179627856392526L;
+
+  /** constructor */
+  public UnknownScannerException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public UnknownScannerException(String s) {
+    super(s);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java b/0.90/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java
new file mode 100644
index 0000000..ecea580
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.lang.annotation.*;
+
+/**
+ * A package attribute that captures the version of HBase that was compiled.
+ * Copied from Hadoop; identical except for the name of the interface.
+ */
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.PACKAGE)
+public @interface VersionAnnotation {
+
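+  // Illustrative usage (values are hypothetical): as a package-level,
+  // runtime-retained annotation it is typically applied in a generated
+  // package-info.java, e.g.
+  //
+  //   @VersionAnnotation(version="0.90.0", user="builder", date="<build date>",
+  //       url="<repository url>", revision="123456")
+  //   package org.apache.hadoop.hbase;
+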
+  /**
+   * Get the HBase version.
+   * @return the version string, e.g. "0.90.0"
+   */
+  String version();
+
+  /**
+   * Get the username that compiled HBase.
+   */
+  String user();
+
+  /**
+   * Get the date when HBase was compiled.
+   * @return the date in unix 'date' format
+   */
+  String date();
+
+  /**
+   * Get the URL of the Subversion repository.
+   */
+  String url();
+
+  /**
+   * Get the subversion revision.
+   * @return the revision number as a string (eg. "451451")
+   */
+  String revision();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java b/0.90/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java
new file mode 100644
index 0000000..fcd2ccd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java
@@ -0,0 +1,34 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown by the master when a region server reports in but is already being
+ * processed as dead. This can happen when a region server loses its session
+ * but has not yet realized it.
+ */
+@SuppressWarnings("serial")
+public class YouAreDeadException extends IOException {
+  public YouAreDeadException(String message) {
+    super(message);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java
new file mode 100644
index 0000000..b4ce03c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java
@@ -0,0 +1,49 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+/**
+ * Thrown if the client can't connect to zookeeper
+ */
+public class ZooKeeperConnectionException extends IOException {
+  private static final long serialVersionUID = 1L << 23 - 1L;
+  /** default constructor */
+  public ZooKeeperConnectionException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public ZooKeeperConnectionException(String s) {
+    super(s);
+  }
+
+  /**
+   * Constructor taking another exception.
+   * @param e Exception to grab data from.
+   */
+  public ZooKeeperConnectionException(Exception e) {
+    super(e);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/AvroServer.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/AvroServer.java
new file mode 100644
index 0000000..0ea2376
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/AvroServer.java
@@ -0,0 +1,576 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.avro;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.generic.GenericData;
+import org.apache.avro.ipc.HttpServer;
+import org.apache.avro.specific.SpecificResponder;
+import org.apache.avro.util.Utf8;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.avro.generated.AClusterStatus;
+import org.apache.hadoop.hbase.avro.generated.ADelete;
+import org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor;
+import org.apache.hadoop.hbase.avro.generated.AGet;
+import org.apache.hadoop.hbase.avro.generated.AIOError;
+import org.apache.hadoop.hbase.avro.generated.AIllegalArgument;
+import org.apache.hadoop.hbase.avro.generated.AMasterNotRunning;
+import org.apache.hadoop.hbase.avro.generated.APut;
+import org.apache.hadoop.hbase.avro.generated.AResult;
+import org.apache.hadoop.hbase.avro.generated.AScan;
+import org.apache.hadoop.hbase.avro.generated.ATableDescriptor;
+import org.apache.hadoop.hbase.avro.generated.ATableExists;
+import org.apache.hadoop.hbase.avro.generated.HBase;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Standalone server that exposes the HBase client API over Avro RPC.
+ */
+public class AvroServer {
+
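+  // The server is assembled in doMain() below: an HBaseImpl instance is wrapped
+  // in an Avro SpecificResponder for the generated HBase protocol and served
+  // over Avro's HttpServer on the configured port.
+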
+  /**
+   * HBaseImpl is a glue object that connects Avro RPC calls to the HBase
+   * client API, which is primarily defined by the HBaseAdmin and HTable classes.
+   */
+  public static class HBaseImpl implements HBase {
+    //
+    // PROPERTIES
+    //
+    protected Configuration conf = null;
+    protected HBaseAdmin admin = null;
+    protected HTablePool htablePool = null;
+    protected final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+    // nextScannerId and scannerMap are used to manage scanner state
+    protected int nextScannerId = 0;
+    protected HashMap<Integer, ResultScanner> scannerMap = null;
+
+    //
+    // UTILITY METHODS
+    //
+
+    /**
+     * Assigns a unique ID to the scanner and adds the mapping to an internal
+     * hash-map.
+     *
+     * @param scanner
+     * @return integer scanner id
+     */
+    protected synchronized int addScanner(ResultScanner scanner) {
+      int id = nextScannerId++;
+      scannerMap.put(id, scanner);
+      return id;
+    }
+
+    /**
+     * Returns the scanner associated with the specified ID.
+     *
+     * @param id
+     * @return a Scanner, or null if ID was invalid.
+     */
+    protected synchronized ResultScanner getScanner(int id) {
+      return scannerMap.get(id);
+    }
+
+    /**
+     * Removes the scanner associated with the specified ID from the internal
+     * id->scanner hash-map.
+     *
+     * @param id
+     * @return a Scanner, or null if ID was invalid.
+     */
+    protected synchronized ResultScanner removeScanner(int id) {
+      return scannerMap.remove(id);
+    }
+
+    //
+    // CTOR METHODS
+    //
+
+    // TODO(hammer): figure out appropriate setting of maxSize for htablePool
+    /**
+     * Constructs an HBaseImpl object.
+     * @throws IOException 
+     */
+    HBaseImpl() throws IOException {
+      this(HBaseConfiguration.create());
+    }
+
+    HBaseImpl(final Configuration c) throws IOException {
+      conf = c;
+      admin = new HBaseAdmin(conf);
+      htablePool = new HTablePool(conf, 10);
+      scannerMap = new HashMap<Integer, ResultScanner>();
+    }
+
+    //
+    // SERVICE METHODS
+    //
+
+    // TODO(hammer): Investigate use of the Command design pattern
+
+    //
+    // Cluster metadata
+    //
+
+    public Utf8 getHBaseVersion() throws AIOError {
+      try {
+	return new Utf8(admin.getClusterStatus().getHBaseVersion());
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    public AClusterStatus getClusterStatus() throws AIOError {
+      try {
+	return AvroUtil.csToACS(admin.getClusterStatus());
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    public GenericArray<ATableDescriptor> listTables() throws AIOError {
+      try {
+        HTableDescriptor[] tables = admin.listTables();
+	Schema atdSchema = Schema.createArray(ATableDescriptor.SCHEMA$);
+        GenericData.Array<ATableDescriptor> result = null;
+	result = new GenericData.Array<ATableDescriptor>(tables.length, atdSchema);
+        for (HTableDescriptor table : tables) {
+	  result.add(AvroUtil.htdToATD(table));
+	}
+        return result;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    //
+    // Table metadata
+    //
+
+    // TODO(hammer): Handle the case where the table does not exist explicitly?
+    public ATableDescriptor describeTable(ByteBuffer table) throws AIOError {
+      try {
+	return AvroUtil.htdToATD(admin.getTableDescriptor(Bytes.toBytes(table)));
+      } catch (IOException e) {
+        AIOError ioe = new AIOError();
+        ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    public boolean isTableEnabled(ByteBuffer table) throws AIOError {
+      try {
+	return admin.isTableEnabled(Bytes.toBytes(table));
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    public boolean tableExists(ByteBuffer table) throws AIOError {
+      try {
+	return admin.tableExists(Bytes.toBytes(table));
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    //
+    // Family metadata
+    //
+
+    // TODO(hammer): Handle the case where the family does not exist explicitly?
+    public AFamilyDescriptor describeFamily(ByteBuffer table, ByteBuffer family) throws AIOError {
+      try {
+	HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes(table));
+	return AvroUtil.hcdToAFD(htd.getFamily(Bytes.toBytes(family)));
+      } catch (IOException e) {
+        AIOError ioe = new AIOError();
+        ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    //
+    // Table admin
+    //
+
+    public Void createTable(ATableDescriptor table) throws AIOError, 
+                                                           AIllegalArgument,
+                                                           ATableExists,
+                                                           AMasterNotRunning {
+      try {
+        admin.createTable(AvroUtil.atdToHTD(table));
+	return null;
+      } catch (IllegalArgumentException e) {
+	AIllegalArgument iae = new AIllegalArgument();
+	iae.message = new Utf8(e.getMessage());
+        throw iae;
+      } catch (TableExistsException e) {
+	ATableExists tee = new ATableExists();
+	tee.message = new Utf8(e.getMessage());
+        throw tee;
+      } catch (MasterNotRunningException e) {
+	AMasterNotRunning mnre = new AMasterNotRunning();
+	mnre.message = new Utf8(e.getMessage());
+        throw mnre;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    // Note that disable, flush and major compaction of .META. needed in client
+    // TODO(hammer): more selective cache dirtying than flush?
+    public Void deleteTable(ByteBuffer table) throws AIOError {
+      try {
+	admin.deleteTable(Bytes.toBytes(table));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    // NB: Asynchronous operation
+    public Void modifyTable(ByteBuffer tableName, ATableDescriptor tableDescriptor) throws AIOError {
+      try {
+	admin.modifyTable(Bytes.toBytes(tableName),
+                          AvroUtil.atdToHTD(tableDescriptor));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    public Void enableTable(ByteBuffer table) throws AIOError {
+      try {
+	admin.enableTable(Bytes.toBytes(table));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+    
+    public Void disableTable(ByteBuffer table) throws AIOError {
+      try {
+	admin.disableTable(Bytes.toBytes(table));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+    
+    // NB: Asynchronous operation
+    public Void flush(ByteBuffer table) throws AIOError {
+      try {
+	admin.flush(Bytes.toBytes(table));
+	return null;
+      } catch (InterruptedException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    // NB: Asynchronous operation
+    public Void split(ByteBuffer table) throws AIOError {
+      try {
+	admin.split(Bytes.toBytes(table));
+	return null;
+      } catch (InterruptedException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    //
+    // Family admin
+    //
+
+    public Void addFamily(ByteBuffer table, AFamilyDescriptor family) throws AIOError {
+      try {
+	admin.addColumn(Bytes.toBytes(table), 
+                        AvroUtil.afdToHCD(family));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    // NB: Asynchronous operation
+    public Void deleteFamily(ByteBuffer table, ByteBuffer family) throws AIOError {
+      try {
+	admin.deleteColumn(Bytes.toBytes(table), Bytes.toBytes(family));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    // NB: Asynchronous operation
+    public Void modifyFamily(ByteBuffer table, ByteBuffer familyName, AFamilyDescriptor familyDescriptor) throws AIOError {
+      try {
+	admin.modifyColumn(Bytes.toBytes(table), AvroUtil.afdToHCD(familyDescriptor));
+	return null;
+      } catch (IOException e) {
+	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    //
+    // Single-row DML
+    //
+
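+    // Each DML method below checks a table out of htablePool, performs the
+    // operation, and returns the table to the pool in a finally block;
+    // IOExceptions are rethrown to the Avro client as AIOError.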
+    // TODO(hammer): Java with statement for htablepool concision?
+    // TODO(hammer): Can Get have timestamp and timerange simultaneously?
+    // TODO(hammer): Do I need to catch the RuntimeException of getTable?
+    // TODO(hammer): Handle gets with no results
+    // TODO(hammer): Uses exists(Get) to ensure columns exist
+    public AResult get(ByteBuffer table, AGet aget) throws AIOError {
+      HTableInterface htable = htablePool.getTable(Bytes.toBytes(table));
+      try {
+        return AvroUtil.resultToAResult(htable.get(AvroUtil.agetToGet(aget)));
+      } catch (IOException e) {
+    	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } finally {
+        htablePool.putTable(htable);
+      }
+    }
+
+    public boolean exists(ByteBuffer table, AGet aget) throws AIOError {
+      HTableInterface htable = htablePool.getTable(Bytes.toBytes(table));
+      try {
+        return htable.exists(AvroUtil.agetToGet(aget));
+      } catch (IOException e) {
+    	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } finally {
+        htablePool.putTable(htable);
+      }
+    }
+
+    public Void put(ByteBuffer table, APut aput) throws AIOError {
+      HTableInterface htable = htablePool.getTable(Bytes.toBytes(table));
+      try {
+	htable.put(AvroUtil.aputToPut(aput));
+        return null;
+      } catch (IOException e) {
+        AIOError ioe = new AIOError();
+        ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } finally {
+        htablePool.putTable(htable);
+      }
+    }
+
+    public Void delete(ByteBuffer table, ADelete adelete) throws AIOError {
+      HTableInterface htable = htablePool.getTable(Bytes.toBytes(table));
+      try {
+        htable.delete(AvroUtil.adeleteToDelete(adelete));
+        return null;
+      } catch (IOException e) {
+    	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } finally {
+        htablePool.putTable(htable);
+      }
+    }
+
+    public long incrementColumnValue(ByteBuffer table, ByteBuffer row, ByteBuffer family, ByteBuffer qualifier, long amount, boolean writeToWAL) throws AIOError {
+      HTableInterface htable = htablePool.getTable(Bytes.toBytes(table));
+      try {
+	return htable.incrementColumnValue(Bytes.toBytes(row), Bytes.toBytes(family), Bytes.toBytes(qualifier), amount, writeToWAL);
+      } catch (IOException e) {
+        AIOError ioe = new AIOError();
+        ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } finally {
+        htablePool.putTable(htable);
+      }
+    }
+
+    //
+    // Multi-row DML
+    //
+
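+    // Scanner protocol: scannerOpen returns an integer id that the client passes
+    // to scannerGetRows and scannerClose; an id that is unknown (already closed
+    // or never opened) is reported back as AIllegalArgument.
+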
+    public int scannerOpen(ByteBuffer table, AScan ascan) throws AIOError {
+      HTableInterface htable = htablePool.getTable(Bytes.toBytes(table));
+      try {
+        Scan scan = AvroUtil.ascanToScan(ascan);
+        return addScanner(htable.getScanner(scan));
+      } catch (IOException e) {
+    	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      } finally {
+        htablePool.putTable(htable);
+      }
+    }
+
+    public Void scannerClose(int scannerId) throws AIOError, AIllegalArgument {
+      try {
+        ResultScanner scanner = getScanner(scannerId);
+        if (scanner == null) {
+      	  AIllegalArgument aie = new AIllegalArgument();
+	  aie.message = new Utf8("scanner ID is invalid: " + scannerId);
+          throw aie;
+        }
+        scanner.close();
+        removeScanner(scannerId);
+        return null;
+      } catch (IOException e) {
+    	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+
+    public GenericArray<AResult> scannerGetRows(int scannerId, int numberOfRows) throws AIOError, AIllegalArgument {
+      try {
+        ResultScanner scanner = getScanner(scannerId);
+        if (scanner == null) {
+      	  AIllegalArgument aie = new AIllegalArgument();
+	  aie.message = new Utf8("scanner ID is invalid: " + scannerId);
+          throw aie;
+        }
+        return AvroUtil.resultsToAResults(scanner.next(numberOfRows));
+      } catch (IOException e) {
+    	AIOError ioe = new AIOError();
+	ioe.message = new Utf8(e.getMessage());
+        throw ioe;
+      }
+    }
+  }
+
+  //
+  // MAIN PROGRAM
+  //
+
+  private static void printUsageAndExit() {
+    printUsageAndExit(null);
+  }
+  
+  private static void printUsageAndExit(final String message) {
+    if (message != null) {
+      System.err.println(message);
+    }
+    System.out.println("Usage: java org.apache.hadoop.hbase.avro.AvroServer " +
+      "--help | [--port=PORT] start");
+    System.out.println("Arguments:");
+    System.out.println(" start Start Avro server");
+    System.out.println(" stop  Stop Avro server");
+    System.out.println("Options:");
+    System.out.println(" port  Port to listen on. Default: 9090");
+    System.out.println(" help  Print this message and exit");
+    System.exit(0);
+  }
+
+  // TODO(hammer): Figure out a better way to keep the server alive!
+  protected static void doMain(final String[] args) throws Exception {
+    if (args.length < 1) {
+      printUsageAndExit();
+    }
+    int port = 9090;
+    final String portArgKey = "--port=";
+    for (String cmd: args) {
+      if (cmd.startsWith(portArgKey)) {
+        port = Integer.parseInt(cmd.substring(portArgKey.length()));
+        continue;
+      } else if (cmd.equals("--help") || cmd.equals("-h")) {
+        printUsageAndExit();
+      } else if (cmd.equals("start")) {
+        continue;
+      } else if (cmd.equals("stop")) {
+        printUsageAndExit("To shutdown the Avro server run " +
+          "bin/hbase-daemon.sh stop avro or send a kill signal to " +
+          "the Avro server pid");
+      }
+      
+      // Print out usage if we get to here.
+      printUsageAndExit();
+    }
+    Log LOG = LogFactory.getLog("AvroServer");
+    LOG.info("starting HBase Avro server on port " + Integer.toString(port));
+    SpecificResponder r = new SpecificResponder(HBase.class, new HBaseImpl());
+    new HttpServer(r, port);
+    Thread.sleep(1000000);
+  }
+
+  // TODO(hammer): Look at Cassandra's daemonization and integration with JSVC
+  // TODO(hammer): Don't eat it after a single exception
+  // TODO(hammer): Figure out why we do doMain()
+  // TODO(hammer): Figure out if we want String[] or String [] syntax
+  public static void main(String[] args) throws Exception {
+    doMain(args);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/AvroUtil.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/AvroUtil.java
new file mode 100644
index 0000000..df7a752
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/AvroUtil.java
@@ -0,0 +1,413 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.avro;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.avro.generated.AClusterStatus;
+import org.apache.hadoop.hbase.avro.generated.AColumn;
+import org.apache.hadoop.hbase.avro.generated.AColumnValue;
+import org.apache.hadoop.hbase.avro.generated.ACompressionAlgorithm;
+import org.apache.hadoop.hbase.avro.generated.ADelete;
+import org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor;
+import org.apache.hadoop.hbase.avro.generated.AGet;
+import org.apache.hadoop.hbase.avro.generated.AIllegalArgument;
+import org.apache.hadoop.hbase.avro.generated.APut;
+import org.apache.hadoop.hbase.avro.generated.ARegionLoad;
+import org.apache.hadoop.hbase.avro.generated.AResult;
+import org.apache.hadoop.hbase.avro.generated.AResultEntry;
+import org.apache.hadoop.hbase.avro.generated.AScan;
+import org.apache.hadoop.hbase.avro.generated.AServerAddress;
+import org.apache.hadoop.hbase.avro.generated.AServerInfo;
+import org.apache.hadoop.hbase.avro.generated.AServerLoad;
+import org.apache.hadoop.hbase.avro.generated.ATableDescriptor;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.generic.GenericData;
+import org.apache.avro.util.Utf8;
+
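+/**
+ * Static helper methods that convert between the HBase client types
+ * (HTableDescriptor, Get, Put, Delete, Scan, Result, ClusterStatus, ...) and
+ * the Avro-generated record types served by AvroServer.
+ */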
+public class AvroUtil {
+
+  //
+  // Cluster metadata
+  //
+
+  static public AServerAddress hsaToASA(HServerAddress hsa) throws IOException {
+    AServerAddress asa = new AServerAddress();
+    asa.hostname = new Utf8(hsa.getHostname());
+    asa.inetSocketAddress = new Utf8(hsa.getInetSocketAddress().toString());
+    asa.port = hsa.getPort();
+    return asa;
+  }
+
+  static public ARegionLoad hrlToARL(HServerLoad.RegionLoad rl) throws IOException {
+    ARegionLoad arl = new ARegionLoad();
+    arl.memStoreSizeMB = rl.getMemStoreSizeMB();
+    arl.name = ByteBuffer.wrap(rl.getName());
+    arl.storefileIndexSizeMB = rl.getStorefileIndexSizeMB();
+    arl.storefiles = rl.getStorefiles();
+    arl.storefileSizeMB = rl.getStorefileSizeMB();
+    arl.stores = rl.getStores();
+    return arl;
+  }
+
+  static public AServerLoad hslToASL(HServerLoad hsl) throws IOException {
+    AServerLoad asl = new AServerLoad();
+    asl.load = hsl.getLoad();
+    asl.maxHeapMB = hsl.getMaxHeapMB();
+    asl.memStoreSizeInMB = hsl.getMemStoreSizeInMB();
+    asl.numberOfRegions = hsl.getNumberOfRegions();
+    asl.numberOfRequests = hsl.getNumberOfRequests();
+
+    Collection<HServerLoad.RegionLoad> regionLoads = hsl.getRegionsLoad();
+    Schema s = Schema.createArray(ARegionLoad.SCHEMA$);
+    GenericData.Array<ARegionLoad> aregionLoads = null;
+    if (regionLoads != null) {
+      aregionLoads = new GenericData.Array<ARegionLoad>(regionLoads.size(), s);
+      for (HServerLoad.RegionLoad rl : regionLoads) {
+	aregionLoads.add(hrlToARL(rl));
+      }
+    } else {
+      aregionLoads = new GenericData.Array<ARegionLoad>(0, s);
+    }
+    asl.regionsLoad = aregionLoads;
+
+    asl.storefileIndexSizeInMB = hsl.getStorefileIndexSizeInMB();
+    asl.storefiles = hsl.getStorefiles();
+    asl.storefileSizeInMB = hsl.getStorefileSizeInMB();
+    asl.usedHeapMB = hsl.getUsedHeapMB();
+    return asl;
+  }
+
+  static public AServerInfo hsiToASI(HServerInfo hsi) throws IOException {
+    AServerInfo asi = new AServerInfo();
+    asi.infoPort = hsi.getInfoPort();
+    asi.load = hslToASL(hsi.getLoad());
+    asi.serverAddress = hsaToASA(hsi.getServerAddress());
+    asi.serverName = new Utf8(hsi.getServerName());
+    asi.startCode = hsi.getStartCode();
+    return asi;
+  }
+
+  static public AClusterStatus csToACS(ClusterStatus cs) throws IOException {
+    AClusterStatus acs = new AClusterStatus();
+    acs.averageLoad = cs.getAverageLoad();
+    Collection<String> deadServerNames = cs.getDeadServerNames();
+    Schema stringArraySchema = Schema.createArray(Schema.create(Schema.Type.STRING));
+    GenericData.Array<Utf8> adeadServerNames = null;
+    if (deadServerNames != null) {
+      adeadServerNames = new GenericData.Array<Utf8>(deadServerNames.size(), stringArraySchema);
+      for (String deadServerName : deadServerNames) {
+	adeadServerNames.add(new Utf8(deadServerName));
+      }
+    } else {
+      adeadServerNames = new GenericData.Array<Utf8>(0, stringArraySchema);
+    }
+    acs.deadServerNames = adeadServerNames;
+    acs.deadServers = cs.getDeadServers();
+    acs.hbaseVersion = new Utf8(cs.getHBaseVersion());
+    acs.regionsCount = cs.getRegionsCount();
+    acs.requestsCount = cs.getRequestsCount();
+    Collection<HServerInfo> hserverInfos = cs.getServerInfo();
+    Schema s = Schema.createArray(AServerInfo.SCHEMA$);
+    GenericData.Array<AServerInfo> aserverInfos = null;
+    if (hserverInfos != null) {
+      aserverInfos = new GenericData.Array<AServerInfo>(hserverInfos.size(), s);
+      for (HServerInfo hsi : hserverInfos) {
+	aserverInfos.add(hsiToASI(hsi));
+      }
+    } else {
+      aserverInfos = new GenericData.Array<AServerInfo>(0, s);
+    }
+    acs.serverInfos = aserverInfos;
+    acs.servers = cs.getServers();
+    return acs;
+  }
+
+  //
+  // Table metadata
+  //
+
+  static public ATableDescriptor htdToATD(HTableDescriptor table) throws IOException {
+    ATableDescriptor atd = new ATableDescriptor();
+    atd.name = ByteBuffer.wrap(table.getName());
+    Collection<HColumnDescriptor> families = table.getFamilies();
+    Schema afdSchema = Schema.createArray(AFamilyDescriptor.SCHEMA$);
+    GenericData.Array<AFamilyDescriptor> afamilies = null;
+    if (families.size() > 0) {
+      afamilies = new GenericData.Array<AFamilyDescriptor>(families.size(), afdSchema);
+      for (HColumnDescriptor hcd : families) {
+	AFamilyDescriptor afamily = hcdToAFD(hcd);
+        afamilies.add(afamily);
+      }
+    } else {
+      afamilies = new GenericData.Array<AFamilyDescriptor>(0, afdSchema);
+    }
+    atd.families = afamilies;
+    atd.maxFileSize = table.getMaxFileSize();
+    atd.memStoreFlushSize = table.getMemStoreFlushSize();
+    atd.rootRegion = table.isRootRegion();
+    atd.metaRegion = table.isMetaRegion();
+    atd.metaTable = table.isMetaTable();
+    atd.readOnly = table.isReadOnly();
+    atd.deferredLogFlush = table.isDeferredLogFlush();
+    return atd;
+  }
+
+  static public HTableDescriptor atdToHTD(ATableDescriptor atd) throws IOException, AIllegalArgument {
+    HTableDescriptor htd = new HTableDescriptor(Bytes.toBytes(atd.name));
+    if (atd.families != null && atd.families.size() > 0) {
+      for (AFamilyDescriptor afd : atd.families) {
+	htd.addFamily(afdToHCD(afd));
+      }
+    }
+    if (atd.maxFileSize != null) {
+      htd.setMaxFileSize(atd.maxFileSize);
+    }
+    if (atd.memStoreFlushSize != null) {
+      htd.setMemStoreFlushSize(atd.memStoreFlushSize);
+    }
+    if (atd.readOnly != null) {
+      htd.setReadOnly(atd.readOnly);
+    }
+    if (atd.deferredLogFlush != null) {
+      htd.setDeferredLogFlush(atd.deferredLogFlush);
+    }
+    if (atd.rootRegion != null || atd.metaRegion != null || atd.metaTable != null) {
+      AIllegalArgument aie = new AIllegalArgument();
+      aie.message = new Utf8("Can't set root or meta flag on create table.");
+      throw aie;
+    }
+    return htd;
+  }
+
+  //
+  // Family metadata
+  //
+
+  static public AFamilyDescriptor hcdToAFD(HColumnDescriptor hcd) throws IOException {
+    AFamilyDescriptor afamily = new AFamilyDescriptor();
+    afamily.name = ByteBuffer.wrap(hcd.getName());
+    String compressionAlgorithm = hcd.getCompressionType().getName();
+    if ("LZO".equalsIgnoreCase(compressionAlgorithm)) {
+      afamily.compression = ACompressionAlgorithm.LZO;
+    } else if ("GZ".equalsIgnoreCase(compressionAlgorithm)) {
+      afamily.compression = ACompressionAlgorithm.GZ;
+    } else {
+      afamily.compression = ACompressionAlgorithm.NONE;
+    }
+    afamily.maxVersions = hcd.getMaxVersions();
+    afamily.blocksize = hcd.getBlocksize();
+    afamily.inMemory = hcd.isInMemory();
+    afamily.timeToLive = hcd.getTimeToLive();
+    afamily.blockCacheEnabled = hcd.isBlockCacheEnabled();
+    return afamily;
+  }
+
+  static public HColumnDescriptor afdToHCD(AFamilyDescriptor afd) throws IOException {
+    HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toBytes(afd.name));
+
+    ACompressionAlgorithm compressionAlgorithm = afd.compression;
+    if (compressionAlgorithm == ACompressionAlgorithm.LZO) {
+      hcd.setCompressionType(Compression.Algorithm.LZO);
+    } else if (compressionAlgorithm == ACompressionAlgorithm.GZ) {
+      hcd.setCompressionType(Compression.Algorithm.GZ);
+    } else {
+      hcd.setCompressionType(Compression.Algorithm.NONE);
+    }
+
+    if (afd.maxVersions != null) {
+      hcd.setMaxVersions(afd.maxVersions);
+    }
+
+    if (afd.blocksize != null) {
+      hcd.setBlocksize(afd.blocksize);
+    }
+
+    if (afd.inMemory != null) {
+      hcd.setInMemory(afd.inMemory);
+    }
+
+    if (afd.timeToLive != null) {
+      hcd.setTimeToLive(afd.timeToLive);
+    }
+
+    if (afd.blockCacheEnabled != null) {
+      hcd.setBlockCacheEnabled(afd.blockCacheEnabled);
+    }
+    return hcd;
+  }
+
+  //
+  // Single-Row DML (Get)
+  //
+
+  // TODO(hammer): More concise idiom than if not null assign?
+  static public Get agetToGet(AGet aget) throws IOException {
+    Get get = new Get(Bytes.toBytes(aget.row));
+    if (aget.columns != null) {
+      for (AColumn acolumn : aget.columns) {
+	if (acolumn.qualifier != null) {
+	  get.addColumn(Bytes.toBytes(acolumn.family), Bytes.toBytes(acolumn.qualifier));
+	} else {
+	  get.addFamily(Bytes.toBytes(acolumn.family));
+	}
+      }
+    }
+    if (aget.timestamp != null) {
+      get.setTimeStamp(aget.timestamp);
+    }
+    if (aget.timerange != null) {
+      get.setTimeRange(aget.timerange.minStamp, aget.timerange.maxStamp);
+    }
+    if (aget.maxVersions != null) {
+      get.setMaxVersions(aget.maxVersions);
+    }
+    return get;
+  }
+
+  // TODO(hammer): Pick one: Timestamp or TimeStamp
+  static public AResult resultToAResult(Result result) {
+    AResult aresult = new AResult();
+    aresult.row = ByteBuffer.wrap(result.getRow());
+    Schema s = Schema.createArray(AResultEntry.SCHEMA$);
+    GenericData.Array<AResultEntry> entries = null;
+    List<KeyValue> resultKeyValues = result.list();
+    if (resultKeyValues != null && resultKeyValues.size() > 0) {
+      entries = new GenericData.Array<AResultEntry>(resultKeyValues.size(), s);
+      for (KeyValue resultKeyValue : resultKeyValues) {
+	AResultEntry entry = new AResultEntry();
+	entry.family = ByteBuffer.wrap(resultKeyValue.getFamily());
+	entry.qualifier = ByteBuffer.wrap(resultKeyValue.getQualifier());
+	entry.value = ByteBuffer.wrap(resultKeyValue.getValue());
+	entry.timestamp = resultKeyValue.getTimestamp();
+	entries.add(entry);
+      }
+    } else {
+      entries = new GenericData.Array<AResultEntry>(0, s);
+    }
+    aresult.entries = entries;
+    return aresult;
+  }
+
+  //
+  // Single-Row DML (Put)
+  //
+
+  static public Put aputToPut(APut aput) throws IOException {
+    Put put = new Put(Bytes.toBytes(aput.row));
+    for (AColumnValue acv : aput.columnValues) {
+      if (acv.timestamp != null) {
+        put.add(Bytes.toBytes(acv.family),
+                Bytes.toBytes(acv.qualifier),
+                acv.timestamp,
+	        Bytes.toBytes(acv.value));
+      } else {
+        put.add(Bytes.toBytes(acv.family),
+                Bytes.toBytes(acv.qualifier),
+	        Bytes.toBytes(acv.value));
+      }
+    }
+    return put;
+  }
+
+  //
+  // Single-Row DML (Delete)
+  //
+
+  static public Delete adeleteToDelete(ADelete adelete) throws IOException {
+    Delete delete = new Delete(Bytes.toBytes(adelete.row));
+    if (adelete.columns != null) {
+      for (AColumn acolumn : adelete.columns) {
+	if (acolumn.qualifier != null) {
+	  delete.deleteColumns(Bytes.toBytes(acolumn.family), Bytes.toBytes(acolumn.qualifier));
+	} else {
+	  delete.deleteFamily(Bytes.toBytes(acolumn.family));
+	}
+      }
+    }
+    return delete;
+  }
+
+  //
+  // Multi-row DML (Scan)
+  //
+
+  static public Scan ascanToScan(AScan ascan) throws IOException {
+    Scan scan = new Scan();
+    if (ascan.startRow != null) {
+      scan.setStartRow(Bytes.toBytes(ascan.startRow));
+    }
+    if (ascan.stopRow != null) {
+      scan.setStopRow(Bytes.toBytes(ascan.stopRow));
+    }
+    if (ascan.columns != null) {
+      for (AColumn acolumn : ascan.columns) {
+	if (acolumn.qualifier != null) {
+	  scan.addColumn(Bytes.toBytes(acolumn.family), Bytes.toBytes(acolumn.qualifier));
+	} else {
+	  scan.addFamily(Bytes.toBytes(acolumn.family));
+	}
+      }
+    }
+    if (ascan.timestamp != null) {
+      scan.setTimeStamp(ascan.timestamp);
+    }
+    if (ascan.timerange != null) {
+      scan.setTimeRange(ascan.timerange.minStamp, ascan.timerange.maxStamp);
+    }
+    if (ascan.maxVersions != null) {
+      scan.setMaxVersions(ascan.maxVersions);
+    }
+    return scan;
+  }
+
+  // TODO(hammer): Better to return null or empty array?
+  static public GenericArray<AResult> resultsToAResults(Result[] results) {
+    Schema s = Schema.createArray(AResult.SCHEMA$);
+    GenericData.Array<AResult> aresults = null;
+    if (results != null && results.length > 0) {
+      aresults = new GenericData.Array<AResult>(results.length, s);
+      for (Result result : results) {
+	aresults.add(resultToAResult(result));
+      }
+    } else {
+      aresults = new GenericData.Array<AResult>(0, s);
+    }
+    return aresults;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AAlreadyExists.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AAlreadyExists.java
new file mode 100644
index 0000000..7f05f09
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AAlreadyExists.java
@@ -0,0 +1,21 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AAlreadyExists extends org.apache.avro.specific.SpecificExceptionBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"error\",\"name\":\"AAlreadyExists\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
+  public org.apache.avro.util.Utf8 message;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return message;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: message = (org.apache.avro.util.Utf8)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AClusterStatus.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AClusterStatus.java
new file mode 100644
index 0000000..21e38c6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AClusterStatus.java
@@ -0,0 +1,42 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AClusterStatus extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AClusterStatus\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"averageLoad\",\"type\":\"double\"},{\"name\":\"deadServerNames\",\"type\":{\"type\":\"array\",\"items\":\"string\"}},{\"name\":\"deadServers\",\"type\":\"int\"},{\"name\":\"hbaseVersion\",\"type\":\"string\"},{\"name\":\"regionsCount\",\"type\":\"int\"},{\"name\":\"requestsCount\",\"type\":\"int\"},{\"name\":\"serverInfos\",\"type\":{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AServerInfo\",\"fields\":[{\"name\":\"infoPort\",\"type\":\"int\"},{\"name\":\"load\",\"type\":{\"type\":\"record\",\"name\":\"AServerLoad\",\"fields\":[{\"name\":\"load\",\"type\":\"int\"},{\"name\":\"maxHeapMB\",\"type\":\"int\"},{\"name\":\"memStoreSizeInMB\",\"type\":\"int\"},{\"name\":\"numberOfRegions\",\"type\":\"int\"},{\"name\":\"numberOfRequests\",\"type\":\"int\"},{\"name\":\"regionsLoad\",\"type\":{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"ARegionLoad\",\"fields\":[{\"name\":\"memStoreSizeMB\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"storefileIndexSizeMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeMB\",\"type\":\"int\"},{\"name\":\"stores\",\"type\":\"int\"}]}}},{\"name\":\"storefileIndexSizeInMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeInMB\",\"type\":\"int\"},{\"name\":\"usedHeapMB\",\"type\":\"int\"}]}},{\"name\":\"serverAddress\",\"type\":{\"type\":\"record\",\"name\":\"AServerAddress\",\"fields\":[{\"name\":\"hostname\",\"type\":\"string\"},{\"name\":\"inetSocketAddress\",\"type\":\"string\"},{\"name\":\"port\",\"type\":\"int\"}]}},{\"name\":\"serverName\",\"type\":\"string\"},{\"name\":\"startCode\",\"type\":\"long\"}]}}},{\"name\":\"servers\",\"type\":\"int\"}]}");
+  public double averageLoad;
+  public org.apache.avro.generic.GenericArray<org.apache.avro.util.Utf8> deadServerNames;
+  public int deadServers;
+  public org.apache.avro.util.Utf8 hbaseVersion;
+  public int regionsCount;
+  public int requestsCount;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AServerInfo> serverInfos;
+  public int servers;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return averageLoad;
+    case 1: return deadServerNames;
+    case 2: return deadServers;
+    case 3: return hbaseVersion;
+    case 4: return regionsCount;
+    case 5: return requestsCount;
+    case 6: return serverInfos;
+    case 7: return servers;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: averageLoad = (java.lang.Double)value$; break;
+    case 1: deadServerNames = (org.apache.avro.generic.GenericArray<org.apache.avro.util.Utf8>)value$; break;
+    case 2: deadServers = (java.lang.Integer)value$; break;
+    case 3: hbaseVersion = (org.apache.avro.util.Utf8)value$; break;
+    case 4: regionsCount = (java.lang.Integer)value$; break;
+    case 5: requestsCount = (java.lang.Integer)value$; break;
+    case 6: serverInfos = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AServerInfo>)value$; break;
+    case 7: servers = (java.lang.Integer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumn.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumn.java
new file mode 100644
index 0000000..b3509f7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumn.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AColumn extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AColumn\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":[\"bytes\",\"null\"]}]}");
+  public java.nio.ByteBuffer family;
+  public java.nio.ByteBuffer qualifier;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return family;
+    case 1: return qualifier;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: family = (java.nio.ByteBuffer)value$; break;
+    case 1: qualifier = (java.nio.ByteBuffer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumnFamilyDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumnFamilyDescriptor.java
new file mode 100644
index 0000000..e5b14ef
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumnFamilyDescriptor.java
@@ -0,0 +1,42 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AColumnFamilyDescriptor extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AColumnFamilyDescriptor\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"compression\",\"type\":{\"type\":\"enum\",\"name\":\"ACompressionAlgorithm\",\"symbols\":[\"LZO\",\"GZ\",\"NONE\"]}},{\"name\":\"maxVersions\",\"type\":\"int\"},{\"name\":\"blocksize\",\"type\":\"int\"},{\"name\":\"inMemory\",\"type\":\"boolean\"},{\"name\":\"timeToLive\",\"type\":\"int\"},{\"name\":\"blockCacheEnabled\",\"type\":\"boolean\"},{\"name\":\"bloomfilterEnabled\",\"type\":\"boolean\"}]}");
+  public java.nio.ByteBuffer name;
+  public org.apache.hadoop.hbase.avro.generated.ACompressionAlgorithm compression;
+  public int maxVersions;
+  public int blocksize;
+  public boolean inMemory;
+  public int timeToLive;
+  public boolean blockCacheEnabled;
+  public boolean bloomfilterEnabled;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return name;
+    case 1: return compression;
+    case 2: return maxVersions;
+    case 3: return blocksize;
+    case 4: return inMemory;
+    case 5: return timeToLive;
+    case 6: return blockCacheEnabled;
+    case 7: return bloomfilterEnabled;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: name = (java.nio.ByteBuffer)value$; break;
+    case 1: compression = (org.apache.hadoop.hbase.avro.generated.ACompressionAlgorithm)value$; break;
+    case 2: maxVersions = (java.lang.Integer)value$; break;
+    case 3: blocksize = (java.lang.Integer)value$; break;
+    case 4: inMemory = (java.lang.Boolean)value$; break;
+    case 5: timeToLive = (java.lang.Integer)value$; break;
+    case 6: blockCacheEnabled = (java.lang.Boolean)value$; break;
+    case 7: bloomfilterEnabled = (java.lang.Boolean)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumnValue.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumnValue.java
new file mode 100644
index 0000000..2b550cb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AColumnValue.java
@@ -0,0 +1,30 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AColumnValue extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AColumnValue\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]}]}");
+  public java.nio.ByteBuffer family;
+  public java.nio.ByteBuffer qualifier;
+  public java.nio.ByteBuffer value;
+  public java.lang.Long timestamp;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return family;
+    case 1: return qualifier;
+    case 2: return value;
+    case 3: return timestamp;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: family = (java.nio.ByteBuffer)value$; break;
+    case 1: qualifier = (java.nio.ByteBuffer)value$; break;
+    case 2: value = (java.nio.ByteBuffer)value$; break;
+    case 3: timestamp = (java.lang.Long)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ACompressionAlgorithm.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ACompressionAlgorithm.java
new file mode 100644
index 0000000..4f7736c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ACompressionAlgorithm.java
@@ -0,0 +1,6 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public enum ACompressionAlgorithm { 
+  LZO, GZ, NONE
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ADelete.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ADelete.java
new file mode 100644
index 0000000..1d83512
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ADelete.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class ADelete extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"ADelete\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"columns\",\"type\":[{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AColumn\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":[\"bytes\",\"null\"]}]}},\"null\"]}]}");
+  public java.nio.ByteBuffer row;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumn> columns;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return row;
+    case 1: return columns;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: row = (java.nio.ByteBuffer)value$; break;
+    case 1: columns = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumn>)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AFamilyDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AFamilyDescriptor.java
new file mode 100644
index 0000000..5d6b93b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AFamilyDescriptor.java
@@ -0,0 +1,39 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AFamilyDescriptor extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AFamilyDescriptor\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"compression\",\"type\":[{\"type\":\"enum\",\"name\":\"ACompressionAlgorithm\",\"symbols\":[\"LZO\",\"GZ\",\"NONE\"]},\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]},{\"name\":\"blocksize\",\"type\":[\"int\",\"null\"]},{\"name\":\"inMemory\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"timeToLive\",\"type\":[\"int\",\"null\"]},{\"name\":\"blockCacheEnabled\",\"type\":[\"boolean\",\"null\"]}]}");
+  public java.nio.ByteBuffer name;
+  public org.apache.hadoop.hbase.avro.generated.ACompressionAlgorithm compression;
+  public java.lang.Integer maxVersions;
+  public java.lang.Integer blocksize;
+  public java.lang.Boolean inMemory;
+  public java.lang.Integer timeToLive;
+  public java.lang.Boolean blockCacheEnabled;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return name;
+    case 1: return compression;
+    case 2: return maxVersions;
+    case 3: return blocksize;
+    case 4: return inMemory;
+    case 5: return timeToLive;
+    case 6: return blockCacheEnabled;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: name = (java.nio.ByteBuffer)value$; break;
+    case 1: compression = (org.apache.hadoop.hbase.avro.generated.ACompressionAlgorithm)value$; break;
+    case 2: maxVersions = (java.lang.Integer)value$; break;
+    case 3: blocksize = (java.lang.Integer)value$; break;
+    case 4: inMemory = (java.lang.Boolean)value$; break;
+    case 5: timeToLive = (java.lang.Integer)value$; break;
+    case 6: blockCacheEnabled = (java.lang.Boolean)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AGet.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AGet.java
new file mode 100644
index 0000000..445b6c1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AGet.java
@@ -0,0 +1,33 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AGet extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AGet\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"columns\",\"type\":[{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AColumn\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":[\"bytes\",\"null\"]}]}},\"null\"]},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]},{\"name\":\"timerange\",\"type\":[{\"type\":\"record\",\"name\":\"ATimeRange\",\"fields\":[{\"name\":\"minStamp\",\"type\":\"long\"},{\"name\":\"maxStamp\",\"type\":\"long\"}]},\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]}]}");
+  public java.nio.ByteBuffer row;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumn> columns;
+  public java.lang.Long timestamp;
+  public org.apache.hadoop.hbase.avro.generated.ATimeRange timerange;
+  public java.lang.Integer maxVersions;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return row;
+    case 1: return columns;
+    case 2: return timestamp;
+    case 3: return timerange;
+    case 4: return maxVersions;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: row = (java.nio.ByteBuffer)value$; break;
+    case 1: columns = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumn>)value$; break;
+    case 2: timestamp = (java.lang.Long)value$; break;
+    case 3: timerange = (org.apache.hadoop.hbase.avro.generated.ATimeRange)value$; break;
+    case 4: maxVersions = (java.lang.Integer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
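
A brief usage note, outside the patch itself: these generated records expose bare public fields, and every AGet selector other than row is a union with "null". Below is a minimal sketch of building a whole-row point read, assuming the Avro gateway treats null selectors as "no restriction" (default column set and versioning).

import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.avro.generated.AGet;

public class AGetSketch {
  // Builds an AGet for a single row; columns, timestamp, timerange and
  // maxVersions stay null, i.e. the nullable branches of their unions.
  public static AGet wholeRow(byte[] row) {
    AGet get = new AGet();
    get.row = ByteBuffer.wrap(row);
    return get;
  }
}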
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AIOError.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AIOError.java
new file mode 100644
index 0000000..444f3de
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AIOError.java
@@ -0,0 +1,22 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AIOError extends org.apache.avro.specific.SpecificExceptionBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ =
+    org.apache.avro.Schema.parse("{\"type\":\"error\",\"name\":\"AIOError\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
+  public org.apache.avro.util.Utf8 message;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return message;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: message = (org.apache.avro.util.Utf8)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AIllegalArgument.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AIllegalArgument.java
new file mode 100644
index 0000000..65c072d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AIllegalArgument.java
@@ -0,0 +1,21 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AIllegalArgument extends org.apache.avro.specific.SpecificExceptionBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"error\",\"name\":\"AIllegalArgument\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
+  public org.apache.avro.util.Utf8 message;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return message;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: message = (org.apache.avro.util.Utf8)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AMasterNotRunning.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AMasterNotRunning.java
new file mode 100644
index 0000000..5608464
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AMasterNotRunning.java
@@ -0,0 +1,21 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AMasterNotRunning extends org.apache.avro.specific.SpecificExceptionBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"error\",\"name\":\"AMasterNotRunning\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
+  public org.apache.avro.util.Utf8 message;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return message;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: message = (org.apache.avro.util.Utf8)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/APut.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/APut.java
new file mode 100644
index 0000000..badf142
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/APut.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class APut extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"APut\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"columnValues\",\"type\":{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AColumnValue\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]}]}}}]}");
+  public java.nio.ByteBuffer row;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumnValue> columnValues;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return row;
+    case 1: return columnValues;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: row = (java.nio.ByteBuffer)value$; break;
+    case 1: columnValues = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumnValue>)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
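
A companion sketch, also editorial rather than part of the patch: APut.columnValues is a required array, so the caller has to supply a GenericArray itself. This assumes Avro's GenericData.Array and Schema.createArray are acceptable for building one client-side.

import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.hadoop.hbase.avro.generated.AColumnValue;
import org.apache.hadoop.hbase.avro.generated.APut;

public class APutSketch {
  // Builds an APut carrying a single cell; a null timestamp picks the
  // nullable branch of the union, leaving timestamping to the server.
  public static APut singleCell(byte[] row, byte[] family, byte[] qualifier, byte[] value) {
    AColumnValue cv = new AColumnValue();
    cv.family = ByteBuffer.wrap(family);
    cv.qualifier = ByteBuffer.wrap(qualifier);
    cv.value = ByteBuffer.wrap(value);
    cv.timestamp = null;

    GenericData.Array<AColumnValue> columnValues =
        new GenericData.Array<AColumnValue>(1, Schema.createArray(AColumnValue.SCHEMA$));
    columnValues.add(cv);

    APut put = new APut();
    put.row = ByteBuffer.wrap(row);
    put.columnValues = columnValues;
    return put;
  }
}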
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ARegionLoad.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ARegionLoad.java
new file mode 100644
index 0000000..ca14d98
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ARegionLoad.java
@@ -0,0 +1,36 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class ARegionLoad extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"ARegionLoad\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"memStoreSizeMB\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"storefileIndexSizeMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeMB\",\"type\":\"int\"},{\"name\":\"stores\",\"type\":\"int\"}]}");
+  public int memStoreSizeMB;
+  public java.nio.ByteBuffer name;
+  public int storefileIndexSizeMB;
+  public int storefiles;
+  public int storefileSizeMB;
+  public int stores;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return memStoreSizeMB;
+    case 1: return name;
+    case 2: return storefileIndexSizeMB;
+    case 3: return storefiles;
+    case 4: return storefileSizeMB;
+    case 5: return stores;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: memStoreSizeMB = (java.lang.Integer)value$; break;
+    case 1: name = (java.nio.ByteBuffer)value$; break;
+    case 2: storefileIndexSizeMB = (java.lang.Integer)value$; break;
+    case 3: storefiles = (java.lang.Integer)value$; break;
+    case 4: storefileSizeMB = (java.lang.Integer)value$; break;
+    case 5: stores = (java.lang.Integer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AResult.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AResult.java
new file mode 100644
index 0000000..f8df2d3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AResult.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AResult extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AResult\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"entries\",\"type\":{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AResultEntry\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":\"long\"}]}}}]}");
+  public java.nio.ByteBuffer row;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AResultEntry> entries;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return row;
+    case 1: return entries;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: row = (java.nio.ByteBuffer)value$; break;
+    case 1: entries = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AResultEntry>)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AResultEntry.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AResultEntry.java
new file mode 100644
index 0000000..454ba13
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AResultEntry.java
@@ -0,0 +1,30 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AResultEntry extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AResultEntry\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":\"long\"}]}");
+  public java.nio.ByteBuffer family;
+  public java.nio.ByteBuffer qualifier;
+  public java.nio.ByteBuffer value;
+  public long timestamp;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return family;
+    case 1: return qualifier;
+    case 2: return value;
+    case 3: return timestamp;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: family = (java.nio.ByteBuffer)value$; break;
+    case 1: qualifier = (java.nio.ByteBuffer)value$; break;
+    case 2: value = (java.nio.ByteBuffer)value$; break;
+    case 3: timestamp = (java.lang.Long)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AScan.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AScan.java
new file mode 100644
index 0000000..65c26c9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AScan.java
@@ -0,0 +1,36 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AScan extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AScan\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"startRow\",\"type\":[\"bytes\",\"null\"]},{\"name\":\"stopRow\",\"type\":[\"bytes\",\"null\"]},{\"name\":\"columns\",\"type\":[{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AColumn\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":[\"bytes\",\"null\"]}]}},\"null\"]},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]},{\"name\":\"timerange\",\"type\":[{\"type\":\"record\",\"name\":\"ATimeRange\",\"fields\":[{\"name\":\"minStamp\",\"type\":\"long\"},{\"name\":\"maxStamp\",\"type\":\"long\"}]},\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]}]}");
+  public java.nio.ByteBuffer startRow;
+  public java.nio.ByteBuffer stopRow;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumn> columns;
+  public java.lang.Long timestamp;
+  public org.apache.hadoop.hbase.avro.generated.ATimeRange timerange;
+  public java.lang.Integer maxVersions;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return startRow;
+    case 1: return stopRow;
+    case 2: return columns;
+    case 3: return timestamp;
+    case 4: return timerange;
+    case 5: return maxVersions;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: startRow = (java.nio.ByteBuffer)value$; break;
+    case 1: stopRow = (java.nio.ByteBuffer)value$; break;
+    case 2: columns = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AColumn>)value$; break;
+    case 3: timestamp = (java.lang.Long)value$; break;
+    case 4: timerange = (org.apache.hadoop.hbase.avro.generated.ATimeRange)value$; break;
+    case 5: maxVersions = (java.lang.Integer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerAddress.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerAddress.java
new file mode 100644
index 0000000..d1d1423
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerAddress.java
@@ -0,0 +1,27 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AServerAddress extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AServerAddress\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"hostname\",\"type\":\"string\"},{\"name\":\"inetSocketAddress\",\"type\":\"string\"},{\"name\":\"port\",\"type\":\"int\"}]}");
+  public org.apache.avro.util.Utf8 hostname;
+  public org.apache.avro.util.Utf8 inetSocketAddress;
+  public int port;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return hostname;
+    case 1: return inetSocketAddress;
+    case 2: return port;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: hostname = (org.apache.avro.util.Utf8)value$; break;
+    case 1: inetSocketAddress = (org.apache.avro.util.Utf8)value$; break;
+    case 2: port = (java.lang.Integer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerInfo.java
new file mode 100644
index 0000000..297de0c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerInfo.java
@@ -0,0 +1,33 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AServerInfo extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AServerInfo\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"infoPort\",\"type\":\"int\"},{\"name\":\"load\",\"type\":{\"type\":\"record\",\"name\":\"AServerLoad\",\"fields\":[{\"name\":\"load\",\"type\":\"int\"},{\"name\":\"maxHeapMB\",\"type\":\"int\"},{\"name\":\"memStoreSizeInMB\",\"type\":\"int\"},{\"name\":\"numberOfRegions\",\"type\":\"int\"},{\"name\":\"numberOfRequests\",\"type\":\"int\"},{\"name\":\"regionsLoad\",\"type\":{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"ARegionLoad\",\"fields\":[{\"name\":\"memStoreSizeMB\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"storefileIndexSizeMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeMB\",\"type\":\"int\"},{\"name\":\"stores\",\"type\":\"int\"}]}}},{\"name\":\"storefileIndexSizeInMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeInMB\",\"type\":\"int\"},{\"name\":\"usedHeapMB\",\"type\":\"int\"}]}},{\"name\":\"serverAddress\",\"type\":{\"type\":\"record\",\"name\":\"AServerAddress\",\"fields\":[{\"name\":\"hostname\",\"type\":\"string\"},{\"name\":\"inetSocketAddress\",\"type\":\"string\"},{\"name\":\"port\",\"type\":\"int\"}]}},{\"name\":\"serverName\",\"type\":\"string\"},{\"name\":\"startCode\",\"type\":\"long\"}]}");
+  public int infoPort;
+  public org.apache.hadoop.hbase.avro.generated.AServerLoad load;
+  public org.apache.hadoop.hbase.avro.generated.AServerAddress serverAddress;
+  public org.apache.avro.util.Utf8 serverName;
+  public long startCode;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return infoPort;
+    case 1: return load;
+    case 2: return serverAddress;
+    case 3: return serverName;
+    case 4: return startCode;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: infoPort = (java.lang.Integer)value$; break;
+    case 1: load = (org.apache.hadoop.hbase.avro.generated.AServerLoad)value$; break;
+    case 2: serverAddress = (org.apache.hadoop.hbase.avro.generated.AServerAddress)value$; break;
+    case 3: serverName = (org.apache.avro.util.Utf8)value$; break;
+    case 4: startCode = (java.lang.Long)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerLoad.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerLoad.java
new file mode 100644
index 0000000..8d4175d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/AServerLoad.java
@@ -0,0 +1,48 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class AServerLoad extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"AServerLoad\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"load\",\"type\":\"int\"},{\"name\":\"maxHeapMB\",\"type\":\"int\"},{\"name\":\"memStoreSizeInMB\",\"type\":\"int\"},{\"name\":\"numberOfRegions\",\"type\":\"int\"},{\"name\":\"numberOfRequests\",\"type\":\"int\"},{\"name\":\"regionsLoad\",\"type\":{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"ARegionLoad\",\"fields\":[{\"name\":\"memStoreSizeMB\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"storefileIndexSizeMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeMB\",\"type\":\"int\"},{\"name\":\"stores\",\"type\":\"int\"}]}}},{\"name\":\"storefileIndexSizeInMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeInMB\",\"type\":\"int\"},{\"name\":\"usedHeapMB\",\"type\":\"int\"}]}");
+  public int load;
+  public int maxHeapMB;
+  public int memStoreSizeInMB;
+  public int numberOfRegions;
+  public int numberOfRequests;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.ARegionLoad> regionsLoad;
+  public int storefileIndexSizeInMB;
+  public int storefiles;
+  public int storefileSizeInMB;
+  public int usedHeapMB;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return load;
+    case 1: return maxHeapMB;
+    case 2: return memStoreSizeInMB;
+    case 3: return numberOfRegions;
+    case 4: return numberOfRequests;
+    case 5: return regionsLoad;
+    case 6: return storefileIndexSizeInMB;
+    case 7: return storefiles;
+    case 8: return storefileSizeInMB;
+    case 9: return usedHeapMB;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: load = (java.lang.Integer)value$; break;
+    case 1: maxHeapMB = (java.lang.Integer)value$; break;
+    case 2: memStoreSizeInMB = (java.lang.Integer)value$; break;
+    case 3: numberOfRegions = (java.lang.Integer)value$; break;
+    case 4: numberOfRequests = (java.lang.Integer)value$; break;
+    case 5: regionsLoad = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.ARegionLoad>)value$; break;
+    case 6: storefileIndexSizeInMB = (java.lang.Integer)value$; break;
+    case 7: storefiles = (java.lang.Integer)value$; break;
+    case 8: storefileSizeInMB = (java.lang.Integer)value$; break;
+    case 9: usedHeapMB = (java.lang.Integer)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATableDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATableDescriptor.java
new file mode 100644
index 0000000..7d285b2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATableDescriptor.java
@@ -0,0 +1,45 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class ATableDescriptor extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"ATableDescriptor\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"families\",\"type\":[{\"type\":\"array\",\"items\":{\"type\":\"record\",\"name\":\"AFamilyDescriptor\",\"fields\":[{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"compression\",\"type\":[{\"type\":\"enum\",\"name\":\"ACompressionAlgorithm\",\"symbols\":[\"LZO\",\"GZ\",\"NONE\"]},\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]},{\"name\":\"blocksize\",\"type\":[\"int\",\"null\"]},{\"name\":\"inMemory\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"timeToLive\",\"type\":[\"int\",\"null\"]},{\"name\":\"blockCacheEnabled\",\"type\":[\"boolean\",\"null\"]}]}},\"null\"]},{\"name\":\"maxFileSize\",\"type\":[\"long\",\"null\"]},{\"name\":\"memStoreFlushSize\",\"type\":[\"long\",\"null\"]},{\"name\":\"rootRegion\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"metaRegion\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"metaTable\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"readOnly\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"deferredLogFlush\",\"type\":[\"boolean\",\"null\"]}]}");
+  public java.nio.ByteBuffer name;
+  public org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor> families;
+  public java.lang.Long maxFileSize;
+  public java.lang.Long memStoreFlushSize;
+  public java.lang.Boolean rootRegion;
+  public java.lang.Boolean metaRegion;
+  public java.lang.Boolean metaTable;
+  public java.lang.Boolean readOnly;
+  public java.lang.Boolean deferredLogFlush;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return name;
+    case 1: return families;
+    case 2: return maxFileSize;
+    case 3: return memStoreFlushSize;
+    case 4: return rootRegion;
+    case 5: return metaRegion;
+    case 6: return metaTable;
+    case 7: return readOnly;
+    case 8: return deferredLogFlush;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: name = (java.nio.ByteBuffer)value$; break;
+    case 1: families = (org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor>)value$; break;
+    case 2: maxFileSize = (java.lang.Long)value$; break;
+    case 3: memStoreFlushSize = (java.lang.Long)value$; break;
+    case 4: rootRegion = (java.lang.Boolean)value$; break;
+    case 5: metaRegion = (java.lang.Boolean)value$; break;
+    case 6: metaTable = (java.lang.Boolean)value$; break;
+    case 7: readOnly = (java.lang.Boolean)value$; break;
+    case 8: deferredLogFlush = (java.lang.Boolean)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
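
One more editorial sketch: the createTable and addFamily messages of the protocol below take these descriptor records, so an admin-side caller assembles them the same way. GZ is one of the ACompressionAlgorithm symbols declared in the schema; the table and family names here are placeholders.

import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.hadoop.hbase.avro.generated.ACompressionAlgorithm;
import org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor;
import org.apache.hadoop.hbase.avro.generated.ATableDescriptor;

public class ATableDescriptorSketch {
  // Describes a table with one column family; nullable fields left unset
  // fall back to whatever defaults the gateway applies server-side.
  public static ATableDescriptor oneFamilyTable() {
    AFamilyDescriptor family = new AFamilyDescriptor();
    family.name = ByteBuffer.wrap("info".getBytes());
    family.compression = ACompressionAlgorithm.GZ;
    family.maxVersions = 3;

    GenericData.Array<AFamilyDescriptor> families =
        new GenericData.Array<AFamilyDescriptor>(1, Schema.createArray(AFamilyDescriptor.SCHEMA$));
    families.add(family);

    ATableDescriptor table = new ATableDescriptor();
    table.name = ByteBuffer.wrap("example_table".getBytes());
    table.families = families;
    return table;
  }
}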
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATableExists.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATableExists.java
new file mode 100644
index 0000000..2ce4d4f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATableExists.java
@@ -0,0 +1,21 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class ATableExists extends org.apache.avro.specific.SpecificExceptionBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"error\",\"name\":\"ATableExists\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
+  public org.apache.avro.util.Utf8 message;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return message;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: message = (org.apache.avro.util.Utf8)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATimeRange.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATimeRange.java
new file mode 100644
index 0000000..358f2c3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/ATimeRange.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class ATimeRange extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"ATimeRange\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"minStamp\",\"type\":\"long\"},{\"name\":\"maxStamp\",\"type\":\"long\"}]}");
+  public long minStamp;
+  public long maxStamp;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return minStamp;
+    case 1: return maxStamp;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: minStamp = (java.lang.Long)value$; break;
+    case 1: maxStamp = (java.lang.Long)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/HBase.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/HBase.java
new file mode 100644
index 0000000..7079fe8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/HBase.java
@@ -0,0 +1,56 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public interface HBase {
+  public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse("{\"protocol\":\"HBase\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"types\":[{\"type\":\"record\",\"name\":\"AServerAddress\",\"fields\":[{\"name\":\"hostname\",\"type\":\"string\"},{\"name\":\"inetSocketAddress\",\"type\":\"string\"},{\"name\":\"port\",\"type\":\"int\"}]},{\"type\":\"record\",\"name\":\"ARegionLoad\",\"fields\":[{\"name\":\"memStoreSizeMB\",\"type\":\"int\"},{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"storefileIndexSizeMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeMB\",\"type\":\"int\"},{\"name\":\"stores\",\"type\":\"int\"}]},{\"type\":\"record\",\"name\":\"AServerLoad\",\"fields\":[{\"name\":\"load\",\"type\":\"int\"},{\"name\":\"maxHeapMB\",\"type\":\"int\"},{\"name\":\"memStoreSizeInMB\",\"type\":\"int\"},{\"name\":\"numberOfRegions\",\"type\":\"int\"},{\"name\":\"numberOfRequests\",\"type\":\"int\"},{\"name\":\"regionsLoad\",\"type\":{\"type\":\"array\",\"items\":\"ARegionLoad\"}},{\"name\":\"storefileIndexSizeInMB\",\"type\":\"int\"},{\"name\":\"storefiles\",\"type\":\"int\"},{\"name\":\"storefileSizeInMB\",\"type\":\"int\"},{\"name\":\"usedHeapMB\",\"type\":\"int\"}]},{\"type\":\"record\",\"name\":\"AServerInfo\",\"fields\":[{\"name\":\"infoPort\",\"type\":\"int\"},{\"name\":\"load\",\"type\":\"AServerLoad\"},{\"name\":\"serverAddress\",\"type\":\"AServerAddress\"},{\"name\":\"serverName\",\"type\":\"string\"},{\"name\":\"startCode\",\"type\":\"long\"}]},{\"type\":\"record\",\"name\":\"AClusterStatus\",\"fields\":[{\"name\":\"averageLoad\",\"type\":\"double\"},{\"name\":\"deadServerNames\",\"type\":{\"type\":\"array\",\"items\":\"string\"}},{\"name\":\"deadServers\",\"type\":\"int\"},{\"name\":\"hbaseVersion\",\"type\":\"string\"},{\"name\":\"regionsCount\",\"type\":\"int\"},{\"name\":\"requestsCount\",\"type\":\"int\"},{\"name\":\"serverInfos\",\"type\":{\"type\":\"array\",\"items\":\"AServerInfo\"}},{\"name\":\"servers\",\"type\":\"int\"}]},{\"type\":\"enum\",\"name\":\"ACompressionAlgorithm\",\"symbols\":[\"LZO\",\"GZ\",\"NONE\"]},{\"type\":\"record\",\"name\":\"AFamilyDescriptor\",\"fields\":[{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"compression\",\"type\":[\"ACompressionAlgorithm\",\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]},{\"name\":\"blocksize\",\"type\":[\"int\",\"null\"]},{\"name\":\"inMemory\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"timeToLive\",\"type\":[\"int\",\"null\"]},{\"name\":\"blockCacheEnabled\",\"type\":[\"boolean\",\"null\"]}]},{\"type\":\"record\",\"name\":\"ATableDescriptor\",\"fields\":[{\"name\":\"name\",\"type\":\"bytes\"},{\"name\":\"families\",\"type\":[{\"type\":\"array\",\"items\":\"AFamilyDescriptor\"},\"null\"]},{\"name\":\"maxFileSize\",\"type\":[\"long\",\"null\"]},{\"name\":\"memStoreFlushSize\",\"type\":[\"long\",\"null\"]},{\"name\":\"rootRegion\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"metaRegion\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"metaTable\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"readOnly\",\"type\":[\"boolean\",\"null\"]},{\"name\":\"deferredLogFlush\",\"type\":[\"boolean\",\"null\"]}]},{\"type\":\"record\",\"name\":\"AColumn\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":[\"bytes\",\"null\"]}]},{\"type\":\"record\",\"name\":\"ATimeRange\",\"fields\":[{\"name\":\"minStamp\",\"type\":\"long\"},{\"name\":\"maxStamp\",\"type\":\"long\"}]},{\"type\":\"record\",\"name\":\"AGet\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"columns\",\"type\":[{\"type\":\"array\",\"items\":\"AColumn\"},\"null\"]},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]},{\"name\":\"timerange\",\"type\":[\"ATimeRange\",\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]}]},{\"type\":\"record\",\"name\":\"AResultEntry\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":\"long\"}]},{\"type\":\"record\",\"name\":\"AResult\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"entries\",\"type\":{\"type\":\"array\",\"items\":\"AResultEntry\"}}]},{\"type\":\"record\",\"name\":\"AColumnValue\",\"fields\":[{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]}]},{\"type\":\"record\",\"name\":\"APut\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"columnValues\",\"type\":{\"type\":\"array\",\"items\":\"AColumnValue\"}}]},{\"type\":\"record\",\"name\":\"ADelete\",\"fields\":[{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"columns\",\"type\":[{\"type\":\"array\",\"items\":\"AColumn\"},\"null\"]}]},{\"type\":\"record\",\"name\":\"AScan\",\"fields\":[{\"name\":\"startRow\",\"type\":[\"bytes\",\"null\"]},{\"name\":\"stopRow\",\"type\":[\"bytes\",\"null\"]},{\"name\":\"columns\",\"type\":[{\"type\":\"array\",\"items\":\"AColumn\"},\"null\"]},{\"name\":\"timestamp\",\"type\":[\"long\",\"null\"]},{\"name\":\"timerange\",\"type\":[\"ATimeRange\",\"null\"]},{\"name\":\"maxVersions\",\"type\":[\"int\",\"null\"]}]},{\"type\":\"error\",\"name\":\"AIOError\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]},{\"type\":\"error\",\"name\":\"AIllegalArgument\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]},{\"type\":\"error\",\"name\":\"ATableExists\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]},{\"type\":\"error\",\"name\":\"AMasterNotRunning\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}],\"messages\":{\"getHBaseVersion\":{\"request\":[],\"response\":\"string\",\"errors\":[\"AIOError\"]},\"getClusterStatus\":{\"request\":[],\"response\":\"AClusterStatus\",\"errors\":[\"AIOError\"]},\"listTables\":{\"request\":[],\"response\":{\"type\":\"array\",\"items\":\"ATableDescriptor\"},\"errors\":[\"AIOError\"]},\"describeTable\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"ATableDescriptor\",\"errors\":[\"AIOError\"]},\"isTableEnabled\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"boolean\",\"errors\":[\"AIOError\"]},\"tableExists\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"boolean\",\"errors\":[\"AIOError\"]},\"describeFamily\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"family\",\"type\":\"bytes\"}],\"response\":\"AFamilyDescriptor\",\"errors\":[\"AIOError\"]},\"createTable\":{\"request\":[{\"name\":\"table\",\"type\":\"ATableDescriptor\"}],\"response\":\"null\",\"errors\":[\"AIOError\",\"AIllegalArgument\",\"ATableExists\",\"AMasterNotRunning\"]},\"deleteTable\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"modifyTable\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"tableDescriptor\",\"type\":\"ATableDescriptor\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"enableTable\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"disableTable\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"flush\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"split\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"addFamily\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"family\",\"type\":\"AFamilyDescriptor\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"deleteFamily\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"family\",\"type\":\"bytes\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"modifyFamily\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"familyName\",\"type\":\"bytes\"},{\"name\":\"familyDescriptor\",\"type\":\"AFamilyDescriptor\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"get\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"get\",\"type\":\"AGet\"}],\"response\":\"AResult\",\"errors\":[\"AIOError\"]},\"exists\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"get\",\"type\":\"AGet\"}],\"response\":\"boolean\",\"errors\":[\"AIOError\"]},\"put\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"put\",\"type\":\"APut\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"delete\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"delete\",\"type\":\"ADelete\"}],\"response\":\"null\",\"errors\":[\"AIOError\"]},\"incrementColumnValue\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"row\",\"type\":\"bytes\"},{\"name\":\"family\",\"type\":\"bytes\"},{\"name\":\"qualifier\",\"type\":\"bytes\"},{\"name\":\"amount\",\"type\":\"long\"},{\"name\":\"writeToWAL\",\"type\":\"boolean\"}],\"response\":\"long\",\"errors\":[\"AIOError\"]},\"scannerOpen\":{\"request\":[{\"name\":\"table\",\"type\":\"bytes\"},{\"name\":\"scan\",\"type\":\"AScan\"}],\"response\":\"int\",\"errors\":[\"AIOError\"]},\"scannerClose\":{\"request\":[{\"name\":\"scannerId\",\"type\":\"int\"}],\"response\":\"null\",\"errors\":[\"AIOError\",\"AIllegalArgument\"]},\"scannerGetRows\":{\"request\":[{\"name\":\"scannerId\",\"type\":\"int\"},{\"name\":\"numberOfRows\",\"type\":\"int\"}],\"response\":{\"type\":\"array\",\"items\":\"AResult\"},\"errors\":[\"AIOError\",\"AIllegalArgument\"]}}}");
+  org.apache.avro.util.Utf8 getHBaseVersion()
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  org.apache.hadoop.hbase.avro.generated.AClusterStatus getClusterStatus()
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.ATableDescriptor> listTables()
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  org.apache.hadoop.hbase.avro.generated.ATableDescriptor describeTable(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  boolean isTableEnabled(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  boolean tableExists(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor describeFamily(java.nio.ByteBuffer table, java.nio.ByteBuffer family)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void createTable(org.apache.hadoop.hbase.avro.generated.ATableDescriptor table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError, org.apache.hadoop.hbase.avro.generated.AIllegalArgument, org.apache.hadoop.hbase.avro.generated.ATableExists, org.apache.hadoop.hbase.avro.generated.AMasterNotRunning;
+  java.lang.Void deleteTable(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void modifyTable(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.ATableDescriptor tableDescriptor)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void enableTable(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void disableTable(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void flush(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void split(java.nio.ByteBuffer table)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void addFamily(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor family)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void deleteFamily(java.nio.ByteBuffer table, java.nio.ByteBuffer family)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void modifyFamily(java.nio.ByteBuffer table, java.nio.ByteBuffer familyName, org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor familyDescriptor)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  org.apache.hadoop.hbase.avro.generated.AResult get(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.AGet get)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  boolean exists(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.AGet get)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void put(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.APut put)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void delete(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.ADelete delete)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  long incrementColumnValue(java.nio.ByteBuffer table, java.nio.ByteBuffer row, java.nio.ByteBuffer family, java.nio.ByteBuffer qualifier, long amount, boolean writeToWAL)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  int scannerOpen(java.nio.ByteBuffer table, org.apache.hadoop.hbase.avro.generated.AScan scan)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError;
+  java.lang.Void scannerClose(int scannerId)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError, org.apache.hadoop.hbase.avro.generated.AIllegalArgument;
+  org.apache.avro.generic.GenericArray<org.apache.hadoop.hbase.avro.generated.AResult> scannerGetRows(int scannerId, int numberOfRows)
+    throws org.apache.avro.ipc.AvroRemoteException, org.apache.hadoop.hbase.avro.generated.AIOError, org.apache.hadoop.hbase.avro.generated.AIllegalArgument;
+}
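
How a client reaches this interface is not shown in the patch; the following is a sketch under stated assumptions: an Avro gateway answering HTTP RPC on localhost:9090, and Avro's HttpTransceiver plus SpecificRequestor as the transport. Host, port and table name are placeholders.

import java.net.URL;
import java.nio.ByteBuffer;

import org.apache.avro.ipc.HttpTransceiver;
import org.apache.avro.specific.SpecificRequestor;
import org.apache.hadoop.hbase.avro.generated.AGet;
import org.apache.hadoop.hbase.avro.generated.AResult;
import org.apache.hadoop.hbase.avro.generated.AResultEntry;
import org.apache.hadoop.hbase.avro.generated.HBase;

public class HBaseAvroClientSketch {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint; adjust to wherever the Avro gateway actually listens.
    HttpTransceiver transceiver = new HttpTransceiver(new URL("http://localhost:9090"));
    HBase hbase = (HBase) SpecificRequestor.getClient(HBase.class, transceiver);

    // The Utf8 return value prints via toString().
    System.out.println("HBase version: " + hbase.getHBaseVersion());

    // Whole-row read against a placeholder table.
    AGet get = new AGet();
    get.row = ByteBuffer.wrap("row1".getBytes());
    AResult result = hbase.get(ByteBuffer.wrap("example_table".getBytes()), get);
    for (AResultEntry entry : result.entries) {
      System.out.println(entry.timestamp + " " + entry.value);
    }
  }
}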
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/IOError.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/IOError.java
new file mode 100644
index 0000000..55cfa95
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/IOError.java
@@ -0,0 +1,21 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class IOError extends org.apache.avro.specific.SpecificExceptionBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"error\",\"name\":\"IOError\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"message\",\"type\":\"string\"}]}");
+  public org.apache.avro.util.Utf8 message;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return message;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: message = (org.apache.avro.util.Utf8)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/TCell.java b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/TCell.java
new file mode 100644
index 0000000..602873e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/generated/TCell.java
@@ -0,0 +1,24 @@
+package org.apache.hadoop.hbase.avro.generated;
+
+@SuppressWarnings("all")
+public class TCell extends org.apache.avro.specific.SpecificRecordBase implements org.apache.avro.specific.SpecificRecord {
+  public static final org.apache.avro.Schema SCHEMA$ = org.apache.avro.Schema.parse("{\"type\":\"record\",\"name\":\"TCell\",\"namespace\":\"org.apache.hadoop.hbase.avro.generated\",\"fields\":[{\"name\":\"value\",\"type\":\"bytes\"},{\"name\":\"timestamp\",\"type\":\"long\"}]}");
+  public java.nio.ByteBuffer value;
+  public long timestamp;
+  public org.apache.avro.Schema getSchema() { return SCHEMA$; }
+  public java.lang.Object get(int field$) {
+    switch (field$) {
+    case 0: return value;
+    case 1: return timestamp;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+  @SuppressWarnings(value="unchecked")
+  public void put(int field$, java.lang.Object value$) {
+    switch (field$) {
+    case 0: value = (java.nio.ByteBuffer)value$; break;
+    case 1: timestamp = (java.lang.Long)value$; break;
+    default: throw new org.apache.avro.AvroRuntimeException("Bad index");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/hbase.avpr b/0.90/src/main/java/org/apache/hadoop/hbase/avro/hbase.avpr
new file mode 100644
index 0000000..68f3664
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/hbase.avpr
@@ -0,0 +1,609 @@
+{
+  "protocol" : "HBase",
+  "namespace" : "org.apache.hadoop.hbase.avro.generated",
+  "types" : [ {
+    "type" : "record",
+    "name" : "AServerAddress",
+    "fields" : [ {
+      "name" : "hostname",
+      "type" : "string"
+    }, {
+      "name" : "inetSocketAddress",
+      "type" : "string"
+    }, {
+      "name" : "port",
+      "type" : "int"
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "ARegionLoad",
+    "fields" : [ {
+      "name" : "memStoreSizeMB",
+      "type" : "int"
+    }, {
+      "name" : "name",
+      "type" : "bytes"
+    }, {
+      "name" : "storefileIndexSizeMB",
+      "type" : "int"
+    }, {
+      "name" : "storefiles",
+      "type" : "int"
+    }, {
+      "name" : "storefileSizeMB",
+      "type" : "int"
+    }, {
+      "name" : "stores",
+      "type" : "int"
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AServerLoad",
+    "fields" : [ {
+      "name" : "load",
+      "type" : "int"
+    }, {
+      "name" : "maxHeapMB",
+      "type" : "int"
+    }, {
+      "name" : "memStoreSizeInMB",
+      "type" : "int"
+    }, {
+      "name" : "numberOfRegions",
+      "type" : "int"
+    }, {
+      "name" : "numberOfRequests",
+      "type" : "int"
+    }, {
+      "name" : "regionsLoad",
+      "type" : {
+        "type" : "array",
+        "items" : "ARegionLoad"
+      }
+    }, {
+      "name" : "storefileIndexSizeInMB",
+      "type" : "int"
+    }, {
+      "name" : "storefiles",
+      "type" : "int"
+    }, {
+      "name" : "storefileSizeInMB",
+      "type" : "int"
+    }, {
+      "name" : "usedHeapMB",
+      "type" : "int"
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AServerInfo",
+    "fields" : [ {
+      "name" : "infoPort",
+      "type" : "int"
+    }, {
+      "name" : "load",
+      "type" : "AServerLoad"
+    }, {
+      "name" : "serverAddress",
+      "type" : "AServerAddress"
+    }, {
+      "name" : "serverName",
+      "type" : "string"
+    }, {
+      "name" : "startCode",
+      "type" : "long"
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AClusterStatus",
+    "fields" : [ {
+      "name" : "averageLoad",
+      "type" : "double"
+    }, {
+      "name" : "deadServerNames",
+      "type" : {
+        "type" : "array",
+        "items" : "string"
+      }
+    }, {
+      "name" : "deadServers",
+      "type" : "int"
+    }, {
+      "name" : "hbaseVersion",
+      "type" : "string"
+    }, {
+      "name" : "regionsCount",
+      "type" : "int"
+    }, {
+      "name" : "requestsCount",
+      "type" : "int"
+    }, {
+      "name" : "serverInfos",
+      "type" : {
+        "type" : "array",
+        "items" : "AServerInfo"
+      }
+    }, {
+      "name" : "servers",
+      "type" : "int"
+    } ]
+  }, {
+    "type" : "enum",
+    "name" : "ACompressionAlgorithm",
+    "symbols" : [ "LZO", "GZ", "NONE" ]
+  }, {
+    "type" : "record",
+    "name" : "AFamilyDescriptor",
+    "fields" : [ {
+      "name" : "name",
+      "type" : "bytes"
+    }, {
+      "name" : "compression",
+      "type" : [ "ACompressionAlgorithm", "null" ]
+    }, {
+      "name" : "maxVersions",
+      "type" : [ "int", "null" ]
+    }, {
+      "name" : "blocksize",
+      "type" : [ "int", "null" ]
+    }, {
+      "name" : "inMemory",
+      "type" : [ "boolean", "null" ]
+    }, {
+      "name" : "timeToLive",
+      "type" : [ "int", "null" ]
+    }, {
+      "name" : "blockCacheEnabled",
+      "type" : [ "boolean", "null" ]
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "ATableDescriptor",
+    "fields" : [ {
+      "name" : "name",
+      "type" : "bytes"
+    }, {
+      "name" : "families",
+      "type" : [ {
+        "type" : "array",
+        "items" : "AFamilyDescriptor"
+      }, "null" ]
+    }, {
+      "name" : "maxFileSize",
+      "type" : [ "long", "null" ]
+    }, {
+      "name" : "memStoreFlushSize",
+      "type" : [ "long", "null" ]
+    }, {
+      "name" : "rootRegion",
+      "type" : [ "boolean", "null" ]
+    }, {
+      "name" : "metaRegion",
+      "type" : [ "boolean", "null" ]
+    }, {
+      "name" : "metaTable",
+      "type" : [ "boolean", "null" ]
+    }, {
+      "name" : "readOnly",
+      "type" : [ "boolean", "null" ]
+    }, {
+      "name" : "deferredLogFlush",
+      "type" : [ "boolean", "null" ]
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AColumn",
+    "fields" : [ {
+      "name" : "family",
+      "type" : "bytes"
+    }, {
+      "name" : "qualifier",
+      "type" : [ "bytes", "null" ]
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "ATimeRange",
+    "fields" : [ {
+      "name" : "minStamp",
+      "type" : "long"
+    }, {
+      "name" : "maxStamp",
+      "type" : "long"
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AGet",
+    "fields" : [ {
+      "name" : "row",
+      "type" : "bytes"
+    }, {
+      "name" : "columns",
+      "type" : [ {
+        "type" : "array",
+        "items" : "AColumn"
+      }, "null" ]
+    }, {
+      "name" : "timestamp",
+      "type" : [ "long", "null" ]
+    }, {
+      "name" : "timerange",
+      "type" : [ "ATimeRange", "null" ]
+    }, {
+      "name" : "maxVersions",
+      "type" : [ "int", "null" ]
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AResultEntry",
+    "fields" : [ {
+      "name" : "family",
+      "type" : "bytes"
+    }, {
+      "name" : "qualifier",
+      "type" : "bytes"
+    }, {
+      "name" : "value",
+      "type" : "bytes"
+    }, {
+      "name" : "timestamp",
+      "type" : "long"
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AResult",
+    "fields" : [ {
+      "name" : "row",
+      "type" : "bytes"
+    }, {
+      "name" : "entries",
+      "type" : {
+        "type" : "array",
+        "items" : "AResultEntry"
+      }
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AColumnValue",
+    "fields" : [ {
+      "name" : "family",
+      "type" : "bytes"
+    }, {
+      "name" : "qualifier",
+      "type" : "bytes"
+    }, {
+      "name" : "value",
+      "type" : "bytes"
+    }, {
+      "name" : "timestamp",
+      "type" : [ "long", "null" ]
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "APut",
+    "fields" : [ {
+      "name" : "row",
+      "type" : "bytes"
+    }, {
+      "name" : "columnValues",
+      "type" : {
+        "type" : "array",
+        "items" : "AColumnValue"
+      }
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "ADelete",
+    "fields" : [ {
+      "name" : "row",
+      "type" : "bytes"
+    }, {
+      "name" : "columns",
+      "type" : [ {
+        "type" : "array",
+        "items" : "AColumn"
+      }, "null" ]
+    } ]
+  }, {
+    "type" : "record",
+    "name" : "AScan",
+    "fields" : [ {
+      "name" : "startRow",
+      "type" : [ "bytes", "null" ]
+    }, {
+      "name" : "stopRow",
+      "type" : [ "bytes", "null" ]
+    }, {
+      "name" : "columns",
+      "type" : [ {
+        "type" : "array",
+        "items" : "AColumn"
+      }, "null" ]
+    }, {
+      "name" : "timestamp",
+      "type" : [ "long", "null" ]
+    }, {
+      "name" : "timerange",
+      "type" : [ "ATimeRange", "null" ]
+    }, {
+      "name" : "maxVersions",
+      "type" : [ "int", "null" ]
+    } ]
+  }, {
+    "type" : "error",
+    "name" : "AIOError",
+    "fields" : [ {
+      "name" : "message",
+      "type" : "string"
+    } ]
+  }, {
+    "type" : "error",
+    "name" : "AIllegalArgument",
+    "fields" : [ {
+      "name" : "message",
+      "type" : "string"
+    } ]
+  }, {
+    "type" : "error",
+    "name" : "ATableExists",
+    "fields" : [ {
+      "name" : "message",
+      "type" : "string"
+    } ]
+  }, {
+    "type" : "error",
+    "name" : "AMasterNotRunning",
+    "fields" : [ {
+      "name" : "message",
+      "type" : "string"
+    } ]
+  } ],
+  "messages" : {
+    "getHBaseVersion" : {
+      "request" : [ ],
+      "response" : "string",
+      "errors" : [ "AIOError" ]
+    },
+    "getClusterStatus" : {
+      "request" : [ ],
+      "response" : "AClusterStatus",
+      "errors" : [ "AIOError" ]
+    },
+    "listTables" : {
+      "request" : [ ],
+      "response" : {
+        "type" : "array",
+        "items" : "ATableDescriptor"
+      },
+      "errors" : [ "AIOError" ]
+    },
+    "describeTable" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "ATableDescriptor",
+      "errors" : [ "AIOError" ]
+    },
+    "isTableEnabled" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "boolean",
+      "errors" : [ "AIOError" ]
+    },
+    "tableExists" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "boolean",
+      "errors" : [ "AIOError" ]
+    },
+    "describeFamily" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "family",
+        "type" : "bytes"
+      } ],
+      "response" : "AFamilyDescriptor",
+      "errors" : [ "AIOError" ]
+    },
+    "createTable" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "ATableDescriptor"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError", "AIllegalArgument", "ATableExists", "AMasterNotRunning" ]
+    },
+    "deleteTable" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "modifyTable" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "tableDescriptor",
+        "type" : "ATableDescriptor"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "enableTable" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "disableTable" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "flush" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "split" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "addFamily" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "family",
+        "type" : "AFamilyDescriptor"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "deleteFamily" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "family",
+        "type" : "bytes"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "modifyFamily" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "familyName",
+        "type" : "bytes"
+      }, {
+        "name" : "familyDescriptor",
+        "type" : "AFamilyDescriptor"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "get" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "get",
+        "type" : "AGet"
+      } ],
+      "response" : "AResult",
+      "errors" : [ "AIOError" ]
+    },
+    "exists" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "get",
+        "type" : "AGet"
+      } ],
+      "response" : "boolean",
+      "errors" : [ "AIOError" ]
+    },
+    "put" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "put",
+        "type" : "APut"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "delete" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "delete",
+        "type" : "ADelete"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError" ]
+    },
+    "incrementColumnValue" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "row",
+        "type" : "bytes"
+      }, {
+        "name" : "family",
+        "type" : "bytes"
+      }, {
+        "name" : "qualifier",
+        "type" : "bytes"
+      }, {
+        "name" : "amount",
+        "type" : "long"
+      }, {
+        "name" : "writeToWAL",
+        "type" : "boolean"
+      } ],
+      "response" : "long",
+      "errors" : [ "AIOError" ]
+    },
+    "scannerOpen" : {
+      "request" : [ {
+        "name" : "table",
+        "type" : "bytes"
+      }, {
+        "name" : "scan",
+        "type" : "AScan"
+      } ],
+      "response" : "int",
+      "errors" : [ "AIOError" ]
+    },
+    "scannerClose" : {
+      "request" : [ {
+        "name" : "scannerId",
+        "type" : "int"
+      } ],
+      "response" : "null",
+      "errors" : [ "AIOError", "AIllegalArgument" ]
+    },
+    "scannerGetRows" : {
+      "request" : [ {
+        "name" : "scannerId",
+        "type" : "int"
+      }, {
+        "name" : "numberOfRows",
+        "type" : "int"
+      } ],
+      "response" : {
+        "type" : "array",
+        "items" : "AResult"
+      },
+      "errors" : [ "AIOError", "AIllegalArgument" ]
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/hbase.genavro b/0.90/src/main/java/org/apache/hadoop/hbase/avro/hbase.genavro
new file mode 100644
index 0000000..c326072
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/hbase.genavro
@@ -0,0 +1,271 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Avro protocol for a "gateway" service
+ */
+@namespace("org.apache.hadoop.hbase.avro.generated")
+protocol HBase {
+
+  //
+  // TYPES
+  //
+
+  //
+  // Cluster metadata
+  //
+  // TODO(hammer): Best way to represent java.net.InetSocketAddress?
+  record AServerAddress {
+    string hostname;
+    string inetSocketAddress;
+    int port;
+  }
+
+  record ARegionLoad {
+    int memStoreSizeMB;
+    bytes name;
+    int storefileIndexSizeMB;
+    int storefiles;
+    int storefileSizeMB;
+    int stores;
+  }
+
+  record AServerLoad {
+    int load;
+    int maxHeapMB;
+    int memStoreSizeInMB;
+    int numberOfRegions;
+    int numberOfRequests;
+    array<ARegionLoad> regionsLoad;
+    int storefileIndexSizeInMB;
+    int storefiles;
+    int storefileSizeInMB;
+    int usedHeapMB;
+  }
+
+  record AServerInfo {
+    int infoPort;
+    AServerLoad load;
+    AServerAddress serverAddress;
+    string serverName;
+    long startCode;
+  }
+
+  // TODO(hammer): Implement reusable Writable to Avro record converter?
+  record AClusterStatus {
+    double averageLoad;
+    array<string> deadServerNames;
+    int deadServers;
+    string hbaseVersion;
+    int regionsCount;
+    int requestsCount;
+    array<AServerInfo> serverInfos;
+    int servers;
+  }
+
+  //
+  // Family metadata
+  //
+  // TODO(hammer): how to keep in sync with Java Enum?
+  enum ACompressionAlgorithm {
+    LZO, GZ, NONE
+  }
+
+  // TODO(hammer): include COLUMN_DESCRIPTOR_VERSION?
+  // TODO(hammer): add new bloomfilter stuff
+  record AFamilyDescriptor {
+    bytes name;
+    union { ACompressionAlgorithm, null } compression;
+    union { int, null } maxVersions;
+    union { int, null } blocksize;
+    union { boolean, null } inMemory;
+    union { int, null } timeToLive;
+    union { boolean, null } blockCacheEnabled;
+  }
+
+  //
+  // Table metadata
+  //
+  // TODO(hammer): include TABLE_DESCRIPTOR_VERSION?
+  record ATableDescriptor {
+    bytes name;
+    union { array<AFamilyDescriptor>, null } families;
+    union { long, null } maxFileSize;
+    union { long, null } memStoreFlushSize;
+    union { boolean, null } rootRegion;
+    union { boolean, null } metaRegion;
+    union { boolean, null } metaTable;
+    union { boolean, null } readOnly;
+    union { boolean, null } deferredLogFlush;
+  }
+
+  //
+  // Single-Row DML (Get)
+  //
+  record AColumn {
+    bytes family;
+    union { bytes, null } qualifier;
+  }
+
+  record ATimeRange {
+    long minStamp;
+    long maxStamp;
+  }
+
+  // TODO(hammer): Add filter options
+  record AGet {
+    bytes row;
+    union { array<AColumn>, null } columns;
+    union { long, null } timestamp;
+    union { ATimeRange, null } timerange;
+    union { int, null } maxVersions;
+  }
+
+  record AResultEntry {
+    bytes family;
+    bytes qualifier;
+    bytes value;
+    long timestamp;
+  }
+
+  // Avro maps can't use non-string keys, so using an array for now
+  record AResult {
+    bytes row;
+    array<AResultEntry> entries;
+  }
+
+  //
+  // Single-Row DML (Put)
+  //
+  // TODO(hammer): Reuse a single KeyValue-style record for Get and Put?
+  record AColumnValue {
+    bytes family;
+    bytes qualifier;
+    bytes value;
+    union { long, null } timestamp;
+  }
+
+  record APut {
+    bytes row;
+    array<AColumnValue> columnValues;
+  }
+
+  //
+  // Single-Row DML (Delete)
+  //
+  // TODO(hammer): Add fields when API is rationalized (HBASE-2609)
+  record ADelete {
+    bytes row;
+    union { array<AColumn>, null } columns;
+  }
+
+  //
+  // Multi-Row DML (Scan)
+  //
+  record AScan {
+    union { bytes, null } startRow;
+    union { bytes, null } stopRow;
+    union { array<AColumn>, null } columns;
+    union { long, null } timestamp;
+    union { ATimeRange, null } timerange;
+    union { int, null } maxVersions;
+  }
+
+  //
+  // ERRORS
+  //
+
+  /**
+   * An AIOError error signals that an error occurred communicating
+   * with the HBase master or an HBase region server. Also used to return
+   * more general HBase error conditions.
+   */
+  error AIOError {
+    string message;
+  }
+
+  /**
+   * An AIllegalArgument error indicates an illegal or invalid
+   * argument was passed into a procedure.
+   */
+  error AIllegalArgument {
+    string message;
+  }
+
+  /**
+   * An ATableExists error indicates that a table with the specified
+   * name already exists.
+   */
+  error ATableExists {
+    string message;
+  }
+
+  /**
+   * An AMasterNotRunning error means we couldn't reach the Master.
+   */
+  error AMasterNotRunning {
+    string message;
+  }
+
+  //
+  // MESSAGES
+  //
+
+  // TODO(hammer): surgery tools
+  // TODO(hammer): checkAndPut/flushCommits
+  // TODO(hammer): MultiPut/Get/Delete
+
+  // Cluster metadata
+  string getHBaseVersion() throws AIOError;
+  AClusterStatus getClusterStatus() throws AIOError;
+  array<ATableDescriptor> listTables() throws AIOError;
+
+  // Table metadata
+  ATableDescriptor describeTable(bytes table) throws AIOError;
+  boolean isTableEnabled(bytes table) throws AIOError;
+  boolean tableExists(bytes table) throws AIOError;
+
+  // Family metadata
+  AFamilyDescriptor describeFamily(bytes table, bytes family) throws AIOError;
+
+  // Table admin
+  void createTable(ATableDescriptor table) throws AIOError, AIllegalArgument, ATableExists, AMasterNotRunning;
+  void deleteTable(bytes table) throws AIOError;
+  void modifyTable(bytes table, ATableDescriptor tableDescriptor) throws AIOError;
+  void enableTable(bytes table) throws AIOError;
+  void disableTable(bytes table) throws AIOError;
+  void flush(bytes table) throws AIOError;
+  void split(bytes table) throws AIOError;
+
+  // Family admin
+  void addFamily(bytes table, AFamilyDescriptor family) throws AIOError;
+  void deleteFamily(bytes table, bytes family) throws AIOError;
+  void modifyFamily(bytes table, bytes familyName, AFamilyDescriptor familyDescriptor) throws AIOError;
+
+  // Single-row DML
+  AResult get(bytes table, AGet get) throws AIOError;
+  boolean exists(bytes table, AGet get) throws AIOError;
+  void put(bytes table, APut put) throws AIOError;
+  void delete(bytes table, ADelete delete) throws AIOError;
+  long incrementColumnValue(bytes table, bytes row, bytes family, bytes qualifier, long amount, boolean writeToWAL) throws AIOError;
+
+  // Multi-row DML (read-only)
+  int scannerOpen(bytes table, AScan scan) throws AIOError;
+  void scannerClose(int scannerId) throws AIOError, AIllegalArgument;
+  array<AResult> scannerGetRows(int scannerId, int numberOfRows) throws AIOError, AIllegalArgument;
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/avro/package.html b/0.90/src/main/java/org/apache/hadoop/hbase/avro/package.html
new file mode 100644
index 0000000..298b561
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/avro/package.html
@@ -0,0 +1,70 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+Provides an HBase <a href="http://avro.apache.org">Avro</a> service.
+
+This directory contains an Avro interface definition file for an HBase RPC
+service and a Java server implementation.
+
+<h2><a name="whatisavro">What is Avro?</a></h2> 
+
+<p>Avro is a data serialization and RPC system. For more, see the
+<a href="http://avro.apache.org/docs/current/spec.html">current specification</a>.
+</p>
+
+<h2><a name="description">Description</a></h2>
+
+<p>The <a href="generated/HBase.html">HBase API</a> is defined in the
+file hbase.genavro.  A server-side implementation of the API is in
+<code>org.apache.hadoop.hbase.avro.AvroServer</code>.  The generated interfaces,
+types, and RPC utility files are checked into SVN under the
+<code>org.apache.hadoop.hbase.avro.generated</code> directory.
+
+</p>
+
+<p>The files were generated by running the commands:
+<pre>
+  java -jar avro-tools-1.3.2.jar genavro hbase.genavro hbase.avpr
+  java -jar avro-tools-1.3.2.jar compile protocol hbase.avro $HBASE_HOME/src/java
+</pre>
+</p>
+
+<p>The 'avro-tools-x.y.z.jar' jarfile is an Avro utility distributed as
+part of the Avro package.  Language-specific runtime libraries are also
+part of the Avro package.  A version of the Java runtime is listed as a
+dependency in Maven.
+</p>
+
+<p>To start AvroServer, use:
+<pre>
+  ./bin/hbase avro start [--port=PORT]
+</pre>
+The default port is 9090.
+</p>
+
+<p>To stop, use:
+<pre>
+  ./bin/hbase-daemon.sh stop avro
+</pre>
+</p>
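+
+<p>A hypothetical Java client sketch; it assumes the Avro Java runtime's
+HTTP transport and the generated <code>HBase</code> protocol interface from
+<code>org.apache.hadoop.hbase.avro.generated</code>, with a server listening
+on the default port:
+<pre>
+  // Connect to a running AvroServer and print the HBase version.
+  HttpTransceiver transceiver =
+    new HttpTransceiver(new URL("http://localhost:9090"));
+  HBase client = SpecificRequestor.getClient(HBase.class, transceiver);
+  System.out.println(client.getHBaseVersion());
+  transceiver.close();
+</pre>
+</p>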
+</body>
+</html>
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
new file mode 100644
index 0000000..b58cdb5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java
@@ -0,0 +1,488 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.catalog;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.net.ConnectException;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.RetriesExhaustedException;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.MetaNodeTracker;
+import org.apache.hadoop.hbase.zookeeper.RootRegionTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * Tracks the availability of the catalog tables <code>-ROOT-</code> and
+ * <code>.META.</code>.
+ * 
+ * This class is "read-only" in that the locations of the catalog tables cannot
+ * be explicitly set.  Instead, ZooKeeper is used to learn of the availability
+ * and location of <code>-ROOT-</code>.  <code>-ROOT-</code> is used to learn of
+ * the location of <code>.META.</code>.  If not available in <code>-ROOT-</code>,
+ * ZooKeeper is used to monitor for a new location of <code>.META.</code>.
+ *
+ * <p>Call {@link #start()} to start up operation.  Call {@link #stop()} to
+ * interrupt waits and close up shop.
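+ *
+ * <p>A minimal usage sketch (assumes an already-open {@link HConnection}
+ * named <code>connection</code>; exception handling omitted):
+ * <pre>
+ *   CatalogTracker ct = new CatalogTracker(connection);
+ *   ct.start();
+ *   HServerAddress rootLocation = ct.getRootLocation();
+ *   // ... read catalog locations as needed ...
+ *   ct.stop();
+ * </pre>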
+ */
+public class CatalogTracker {
+  private static final Log LOG = LogFactory.getLog(CatalogTracker.class);
+  private final HConnection connection;
+  private final ZooKeeperWatcher zookeeper;
+  private final RootRegionTracker rootRegionTracker;
+  private final MetaNodeTracker metaNodeTracker;
+  private final AtomicBoolean metaAvailable = new AtomicBoolean(false);
+  /**
+   * Do not clear this address once set.  Let it be cleared by
+   * {@link #setMetaLocation(HServerAddress)} only.  Its needed when we do
+   * server shutdown processing -- we need to know who had .META. last.  If you
+   * want to know if the address is good, rely on {@link #metaAvailable} value.
+   */
+  private HServerAddress metaLocation;
+  private final int defaultTimeout;
+  private boolean stopped = false;
+
+  public static final byte [] ROOT_REGION =
+    HRegionInfo.ROOT_REGIONINFO.getRegionName();
+  public static final byte [] META_REGION =
+    HRegionInfo.FIRST_META_REGIONINFO.getRegionName();
+
+  /**
+   * Constructs a catalog tracker.  Find current state of catalog tables and
+   * begin active tracking by executing {@link #start()} post construction.
+   * Does not timeout.
+   * @param connection Server connection; if there is a problem, this connection's
+   * {@link HConnection#abort(String, Throwable)} will be called.
+   * @throws IOException 
+   */
+  public CatalogTracker(final HConnection connection) throws IOException {
+    this(connection.getZooKeeperWatcher(), connection, connection);
+  }
+
+  /**
+   * Constructs the catalog tracker.  Find current state of catalog tables and
+   * begin active tracking by executing {@link #start()} post construction.
+   * Does not timeout.
+   * @param zk
+   * @param connection server connection
+   * @param abortable if fatal exception
+   * @throws IOException 
+   */
+  public CatalogTracker(final ZooKeeperWatcher zk, final HConnection connection,
+      final Abortable abortable)
+  throws IOException {
+    this(zk, connection, abortable, 0);
+  }
+
+  /**
+   * Constructs the catalog tracker.  Find current state of catalog tables and
+   * begin active tracking by executing {@link #start()} post construction.
+   * @param zk
+   * @param connection server connection
+   * @param abortable if fatal exception
+   * @param defaultTimeout Timeout to use.  Pass zero for no timeout
+   * ({@link Object#wait(long)} when passed a <code>0</code> waits forever).
+   * @throws IOException 
+   */
+  public CatalogTracker(final ZooKeeperWatcher zk, final HConnection connection,
+      final Abortable abortable, final int defaultTimeout)
+  throws IOException {
+    this.zookeeper = zk;
+    this.connection = connection;
+    this.rootRegionTracker = new RootRegionTracker(zookeeper, abortable);
+    this.metaNodeTracker = new MetaNodeTracker(zookeeper, this, abortable);
+    this.defaultTimeout = defaultTimeout;
+  }
+
+  /**
+   * Starts the catalog tracker.
+   * Determines current availability of catalog tables and ensures all further
+   * transitions of either region are tracked.
+   * @throws IOException
+   * @throws InterruptedException 
+   */
+  public void start() throws IOException, InterruptedException {
+    this.rootRegionTracker.start();
+    this.metaNodeTracker.start();
+    LOG.debug("Starting catalog tracker " + this);
+  }
+
+  /**
+   * Stop working.
+   * Interrupts any ongoing waits.
+   */
+  public void stop() {
+    LOG.debug("Stopping catalog tracker " + this);
+    this.stopped = true;
+    this.rootRegionTracker.stop();
+    this.metaNodeTracker.stop();
+    // Call this and it will interrupt any ongoing waits on meta.
+    synchronized (this.metaAvailable) {
+      this.metaAvailable.notifyAll();
+    }
+  }
+
+  /**
+   * Gets the current location for <code>-ROOT-</code> or null if location is
+   * not currently available.
+   * @return location of root, null if not available
+   * @throws InterruptedException 
+   */
+  public HServerAddress getRootLocation() throws InterruptedException {
+    return this.rootRegionTracker.getRootRegionLocation();
+  }
+
+  /**
+   * @return Location of meta or null if not yet available.
+   */
+  public HServerAddress getMetaLocation() {
+    return this.metaLocation;
+  }
+
+  /**
+   * Waits indefinitely for availability of <code>-ROOT-</code>.  Used during
+   * cluster startup.
+   * @throws InterruptedException if interrupted while waiting
+   */
+  public void waitForRoot()
+  throws InterruptedException {
+    this.rootRegionTracker.blockUntilAvailable();
+  }
+
+  /**
+   * Gets the current location for <code>-ROOT-</code> if available and waits
+   * for up to the specified timeout if not immediately available.  Returns null
+   * if the timeout elapses before root is available.
+   * @param timeout maximum time to wait for root availability, in milliseconds
+   * @return location of root
+   * @throws InterruptedException if interrupted while waiting
+   * @throws NotAllMetaRegionsOnlineException if root not available before
+   *                                          timeout
+   */
+  HServerAddress waitForRoot(final long timeout)
+  throws InterruptedException, NotAllMetaRegionsOnlineException {
+    HServerAddress address = rootRegionTracker.waitRootRegionLocation(timeout);
+    if (address == null) {
+      throw new NotAllMetaRegionsOnlineException("Timed out; " + timeout + "ms");
+    }
+    return address;
+  }
+
+  /**
+   * Gets a connection to the server hosting root, as reported by ZooKeeper,
+   * waiting up to the specified timeout for availability.
+   * @see #waitForRoot(long) for additional information
+   * @return connection to server hosting root
+   * @throws InterruptedException
+   * @throws NotAllMetaRegionsOnlineException if timed out waiting
+   * @throws IOException
+   */
+  public HRegionInterface waitForRootServerConnection(long timeout)
+  throws InterruptedException, NotAllMetaRegionsOnlineException, IOException {
+    return getCachedConnection(waitForRoot(timeout));
+  }
+
+  /**
+   * Gets a connection to the server hosting root, as reported by ZooKeeper,
+   * waiting for the default timeout specified on instantiation.
+   * @see #waitForRoot(long) for additional information
+   * @return connection to server hosting root
+   * @throws NotAllMetaRegionsOnlineException if timed out waiting
+   * @throws IOException
+   */
+  public HRegionInterface waitForRootServerConnectionDefault()
+  throws NotAllMetaRegionsOnlineException, IOException {
+    try {
+      return getCachedConnection(waitForRoot(defaultTimeout));
+    } catch (InterruptedException e) {
+      throw new NotAllMetaRegionsOnlineException("Interrupted");
+    }
+  }
+
+  /**
+   * Gets a connection to the server hosting root, as reported by ZooKeeper,
+   * if available.  Returns null if no location is immediately available.
+   * @return connection to server hosting root, null if not available
+   * @throws IOException
+   * @throws InterruptedException 
+   */
+  private HRegionInterface getRootServerConnection()
+  throws IOException, InterruptedException {
+    HServerAddress address = this.rootRegionTracker.getRootRegionLocation();
+    if (address == null) {
+      return null;
+    }
+    return getCachedConnection(address);
+  }
+
+  /**
+   * Gets a connection to the server currently hosting <code>.META.</code> or
+   * null if location is not currently available.
+   * <p>
+   * If a location is known, a connection to the cached location is returned.
+   * If refresh is true, the cached connection is verified first before
+   * returning.  If the connection is not valid, it is reset and rechecked.
+   * <p>
+   * If no location for meta is currently known, method checks ROOT for a new
+   * location, verifies META is currently there, and returns a cached connection
+   * to the server hosting META.
+   *
+   * @return connection to server hosting meta, null if location not available
+   * @throws IOException
+   * @throws InterruptedException 
+   */
+  private HRegionInterface getMetaServerConnection(boolean refresh)
+  throws IOException, InterruptedException {
+    synchronized (metaAvailable) {
+      if (metaAvailable.get()) {
+        HRegionInterface current = getCachedConnection(metaLocation);
+        if (!refresh) {
+          return current;
+        }
+        if (verifyRegionLocation(current, this.metaLocation, META_REGION)) {
+          return current;
+        }
+        resetMetaLocation();
+      }
+      HRegionInterface rootConnection = getRootServerConnection();
+      if (rootConnection == null) {
+        return null;
+      }
+      HServerAddress newLocation = MetaReader.readMetaLocation(rootConnection);
+      if (newLocation == null) {
+        return null;
+      }
+      HRegionInterface newConnection = getCachedConnection(newLocation);
+      if (verifyRegionLocation(newConnection, this.metaLocation, META_REGION)) {
+        setMetaLocation(newLocation);
+        return newConnection;
+      }
+      return null;
+    }
+  }
+
+  /**
+   * Waits indefinitely for availability of <code>.META.</code>.  Used during
+   * cluster startup.
+   * @throws InterruptedException if interrupted while waiting
+   */
+  public void waitForMeta() throws InterruptedException {
+    synchronized (metaAvailable) {
+      while (!stopped && !metaAvailable.get()) {
+        metaAvailable.wait();
+      }
+    }
+  }
+
+  /**
+   * Gets the current location for <code>.META.</code> if available and waits
+   * for up to the specified timeout if not immediately available.  Throws an
+   * exception if timed out waiting.  This method differs from {@link #waitForMeta()}
+   * in that it will go ahead and verify the location obtained from ZooKeeper
+   * by trying to use the returned connection.
+   * @param timeout maximum time to wait for meta availability, in milliseconds
+   * @return location of meta
+   * @throws InterruptedException if interrupted while waiting
+   * @throws IOException unexpected exception connecting to meta server
+   * @throws NotAllMetaRegionsOnlineException if meta not available before
+   *                                          timeout
+   */
+  public HServerAddress waitForMeta(long timeout)
+  throws InterruptedException, IOException, NotAllMetaRegionsOnlineException {
+    long stop = System.currentTimeMillis() + timeout;
+    synchronized (metaAvailable) {
+      if (getMetaServerConnection(true) != null) {
+        return metaLocation;
+      }
+      while(!stopped && !metaAvailable.get() &&
+          (timeout == 0 || System.currentTimeMillis() < stop)) {
+        metaAvailable.wait(timeout);
+      }
+      if (getMetaServerConnection(true) == null) {
+        throw new NotAllMetaRegionsOnlineException(
+            "Timed out (" + timeout + "ms)");
+      }
+      return metaLocation;
+    }
+  }
+
+  /**
+   * Gets a connection to the server hosting meta, as reported by ZooKeeper,
+   * waiting up to the specified timeout for availability.
+   * @see #waitForMeta(long) for additional information
+   * @return connection to server hosting meta
+   * @throws InterruptedException
+   * @throws NotAllMetaRegionsOnlineException if timed out waiting
+   * @throws IOException
+   */
+  public HRegionInterface waitForMetaServerConnection(long timeout)
+  throws InterruptedException, NotAllMetaRegionsOnlineException, IOException {
+    return getCachedConnection(waitForMeta(timeout));
+  }
+
+  /**
+   * Gets a connection to the server hosting meta, as reported by ZooKeeper,
+   * waiting for the default timeout specified on instantiation.
+   * @see #waitForMeta(long) for additional information
+   * @return connection to server hosting meta
+   * @throws NotAllMetaRegionsOnlineException if timed out or interrupted
+   * @throws IOException
+   */
+  public HRegionInterface waitForMetaServerConnectionDefault()
+  throws NotAllMetaRegionsOnlineException, IOException {
+    try {
+      return getCachedConnection(waitForMeta(defaultTimeout));
+    } catch (InterruptedException e) {
+      throw new NotAllMetaRegionsOnlineException("Interrupted");
+    }
+  }
+
+  private void resetMetaLocation() {
+    LOG.info("Current cached META location is not valid, resetting");
+    this.metaAvailable.set(false);
+  }
+
+  private void setMetaLocation(HServerAddress metaLocation) {
+    metaAvailable.set(true);
+    this.metaLocation = metaLocation;
+    // no synchronization because these are private and already under lock
+    metaAvailable.notifyAll();
+  }
+
+  private HRegionInterface getCachedConnection(HServerAddress address)
+  throws IOException {
+    HRegionInterface protocol = null;
+    try {
+      protocol = connection.getHRegionConnection(address, false);
+    } catch (RetriesExhaustedException e) {
+      if (e.getCause() != null && e.getCause() instanceof ConnectException) {
+        // Catch this; presume it means the cached connection has gone bad.
+      } else {
+        throw e;
+      }
+    } catch (IOException ioe) {
+      Throwable cause = ioe.getCause();
+      if (cause != null && cause instanceof EOFException) {
+        // Catch. Other end disconnected us.
+      } else if (cause != null && cause.getMessage() != null &&
+        cause.getMessage().toLowerCase().contains("connection reset")) {
+        // Catch. Connection reset.
+      } else {
+        throw ioe;
+      }
+      
+    }
+    return protocol;
+  }
+
+  private boolean verifyRegionLocation(HRegionInterface metaServer,
+      final HServerAddress address,
+      byte [] regionName)
+  throws IOException {
+    if (metaServer == null) {
+      LOG.info("Passed metaserver is null");
+      return false;
+    }
+    Throwable t = null;
+    try {
+      return metaServer.getRegionInfo(regionName) != null;
+    } catch (ConnectException e) {
+      t = e;
+    } catch (RemoteException e) {
+      IOException ioe = e.unwrapRemoteException();
+      if (ioe instanceof NotServingRegionException) {
+        t = ioe;
+      } else {
+        throw e;
+      }
+    } catch (IOException e) {
+      Throwable cause = e.getCause();
+      if (cause != null && cause instanceof EOFException) {
+        t = cause;
+      } else if (cause != null && cause.getMessage() != null
+          && cause.getMessage().contains("Connection reset")) {
+        t = cause;
+      } else {
+        throw e;
+      }
+    }
+    LOG.info("Failed verification of " + Bytes.toString(regionName) +
+      " at address=" + address + "; " + t);
+    return false;
+  }
+
+  /**
+   * Verify <code>-ROOT-</code> is deployed and accessible.
+   * @param timeout How long to wait on zk for root address (passed through to
+   * the internal call to {@link #waitForRootServerConnection(long)}).
+   * @return True if the <code>-ROOT-</code> location is healthy.
+   * @throws IOException
+   * @throws InterruptedException 
+   */
+  public boolean verifyRootRegionLocation(final long timeout)
+  throws InterruptedException, IOException {
+    HRegionInterface connection = null;
+    try {
+      connection = waitForRootServerConnection(timeout);
+    } catch (NotAllMetaRegionsOnlineException e) {
+      // Pass
+    } catch (org.apache.hadoop.hbase.ipc.ServerNotRunningException e) {
+      // Pass -- remote server is not up so can't be carrying root
+    } catch (IOException e) {
+      // Unexpected exception
+      throw e;
+    }
+    return (connection == null)? false:
+      verifyRegionLocation(connection, this.rootRegionTracker.getRootRegionLocation(),
+        HRegionInfo.ROOT_REGIONINFO.getRegionName());
+  }
+
+  /**
+   * Verify <code>.META.</code> is deployed and accessible.
+   * @param timeout How long to wait on zk for <code>.META.</code> address
+   * (passed through to the internal call to {@link #waitForMetaServerConnection(long)}).
+   * @return True if the <code>.META.</code> location is healthy.
+   * @throws IOException Some unexpected IOE.
+   * @throws InterruptedException
+   */
+  public boolean verifyMetaRegionLocation(final long timeout)
+  throws InterruptedException, IOException {
+    return getMetaServerConnection(true) != null;
+  }
+
+  MetaNodeTracker getMetaNodeTracker() {
+    return this.metaNodeTracker;
+  }
+
+  public HConnection getConnection() {
+    return this.connection;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java
new file mode 100644
index 0000000..263d9b4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java
@@ -0,0 +1,242 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.catalog;
+
+import java.io.IOException;
+import java.net.ConnectException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Writes region and assignment information to <code>.META.</code>.
+ * <p>
+ * Uses the {@link CatalogTracker} to obtain locations and connections to
+ * catalogs.
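+ *
+ * <p>A minimal usage sketch (assumes a started {@link CatalogTracker}
+ * <code>ct</code> and an {@link HRegionInfo} <code>hri</code> for a newly
+ * created region; exception handling omitted):
+ * <pre>
+ *   // Add the new region's row to .META.
+ *   MetaEditor.addRegionToMeta(ct, hri);
+ * </pre>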
+ */
+public class MetaEditor {
+  private static final Log LOG = LogFactory.getLog(MetaEditor.class);
+
+  /**
+   * Adds a META row for the specified new region.
+   * @param regionInfo region information
+   * @throws IOException if problem connecting or updating meta
+   */
+  public static void addRegionToMeta(CatalogTracker catalogTracker,
+      HRegionInfo regionInfo)
+  throws IOException {
+    Put put = new Put(regionInfo.getRegionName());
+    put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(regionInfo));
+    catalogTracker.waitForMetaServerConnectionDefault().put(
+        CatalogTracker.META_REGION, put);
+    LOG.info("Added region " + regionInfo.getRegionNameAsString() + " to META");
+  }
+
+  /**
+   * Offline parent in meta.
+   * Used when splitting.
+   * @param catalogTracker
+   * @param parent
+   * @param a Split daughter region A
+   * @param b Split daughter region B
+   * @throws NotAllMetaRegionsOnlineException
+   * @throws IOException
+   */
+  public static void offlineParentInMeta(CatalogTracker catalogTracker,
+      HRegionInfo parent, final HRegionInfo a, final HRegionInfo b)
+  throws NotAllMetaRegionsOnlineException, IOException {
+    HRegionInfo copyOfParent = new HRegionInfo(parent);
+    copyOfParent.setOffline(true);
+    copyOfParent.setSplit(true);
+    Put put = new Put(copyOfParent.getRegionName());
+    addRegionInfo(put, copyOfParent);
+    put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
+        HConstants.EMPTY_BYTE_ARRAY);
+    put.add(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER,
+        HConstants.EMPTY_BYTE_ARRAY);
+    put.add(HConstants.CATALOG_FAMILY, HConstants.SPLITA_QUALIFIER,
+      Writables.getBytes(a));
+    put.add(HConstants.CATALOG_FAMILY, HConstants.SPLITB_QUALIFIER,
+      Writables.getBytes(b));
+    catalogTracker.waitForMetaServerConnectionDefault().put(CatalogTracker.META_REGION, put);
+    LOG.info("Offlined parent region " + parent.getRegionNameAsString() +
+      " in META");
+  }
+
+  public static void addDaughter(final CatalogTracker catalogTracker,
+      final HRegionInfo regionInfo, final HServerInfo serverInfo)
+  throws NotAllMetaRegionsOnlineException, IOException {
+    HRegionInterface server = catalogTracker.waitForMetaServerConnectionDefault();
+    byte [] catalogRegionName = CatalogTracker.META_REGION;
+    Put put = new Put(regionInfo.getRegionName());
+    addRegionInfo(put, regionInfo);
+    if (serverInfo != null) addLocation(put, serverInfo);
+    server.put(catalogRegionName, put);
+    LOG.info("Added daughter " + regionInfo.getRegionNameAsString() +
+      " in region " + Bytes.toString(catalogRegionName) +
+      (serverInfo == null?
+        ", serverInfo=null": ", serverInfo=" + serverInfo.getServerName()));
+  }
+
+  /**
+   * Updates the location of the specified META region in ROOT to be the
+   * specified server hostname and startcode.
+   * <p>
+   * Uses passed catalog tracker to get a connection to the server hosting
+   * ROOT and makes edits to that region.
+   *
+   * @param catalogTracker catalog tracker
+   * @param regionInfo region to update location of
+   * @param serverInfo server the region is located on
+   * @throws IOException
+   * @throws ConnectException Usually because the regionserver carrying .META.
+   * is down.
+   * @throws NullPointerException if there is no -ROOT- server connection
+   */
+  public static void updateMetaLocation(CatalogTracker catalogTracker,
+      HRegionInfo regionInfo, HServerInfo serverInfo)
+  throws IOException, ConnectException {
+    HRegionInterface server = catalogTracker.waitForRootServerConnectionDefault();
+    if (server == null) throw new IOException("No server for -ROOT-");
+    updateLocation(server, CatalogTracker.ROOT_REGION, regionInfo, serverInfo);
+  }
+
+  /**
+   * Updates the location of the specified region in META to be the specified
+   * server hostname and startcode.
+   * <p>
+   * Uses passed catalog tracker to get a connection to the server hosting
+   * META and makes edits to that region.
+   *
+   * @param catalogTracker catalog tracker
+   * @param regionInfo region to update location of
+   * @param serverInfo server the region is located on
+   * @throws IOException
+   */
+  public static void updateRegionLocation(CatalogTracker catalogTracker,
+      HRegionInfo regionInfo, HServerInfo serverInfo)
+  throws IOException {
+    updateLocation(catalogTracker.waitForMetaServerConnectionDefault(),
+        CatalogTracker.META_REGION, regionInfo, serverInfo);
+  }
+
+  /**
+   * Updates the location of the specified region to be the specified server.
+   * <p>
+   * Connects to the specified server which should be hosting the specified
+   * catalog region name to perform the edit.
+   *
+   * @param server connection to server hosting catalog region
+   * @param catalogRegionName name of catalog region being updated
+   * @param regionInfo region to update location of
+   * @param serverInfo server the region is located on
+   * @throws IOException In particular could throw {@link java.net.ConnectException}
+   * if the server is down on other end.
+   */
+  private static void updateLocation(HRegionInterface server,
+      byte [] catalogRegionName, HRegionInfo regionInfo, HServerInfo serverInfo)
+  throws IOException {
+    Put put = new Put(regionInfo.getRegionName());
+    addLocation(put, serverInfo);
+    server.put(catalogRegionName, put);
+    LOG.info("Updated row " + regionInfo.getRegionNameAsString() +
+      " in region " + Bytes.toString(catalogRegionName) + " with " +
+      "server=" + serverInfo.getHostnamePort() + ", " +
+      "startcode=" + serverInfo.getStartCode());
+  }
+
+  /**
+   * Deletes the specified region from META.
+   * @param catalogTracker
+   * @param regionInfo region to be deleted from META
+   * @throws IOException
+   */
+  public static void deleteRegion(CatalogTracker catalogTracker,
+      HRegionInfo regionInfo)
+  throws IOException {
+    Delete delete = new Delete(regionInfo.getRegionName());
+    catalogTracker.waitForMetaServerConnectionDefault().
+      delete(CatalogTracker.META_REGION, delete);
+    LOG.info("Deleted region " + regionInfo.getRegionNameAsString() + " from META");
+  }
+
+  /**
+   * Deletes daughter reference in offlined split parent.
+   * @param catalogTracker
+   * @param parent Parent row we're to remove daughter reference from
+   * @param qualifier SplitA or SplitB daughter to remove
+   * @param daughter
+   * @throws NotAllMetaRegionsOnlineException
+   * @throws IOException
+   */
+  public static void deleteDaughterReferenceInParent(CatalogTracker catalogTracker,
+      final HRegionInfo parent, final byte [] qualifier,
+      final HRegionInfo daughter)
+  throws NotAllMetaRegionsOnlineException, IOException {
+    Delete delete = new Delete(parent.getRegionName());
+    delete.deleteColumns(HConstants.CATALOG_FAMILY, qualifier);
+    catalogTracker.waitForMetaServerConnectionDefault().
+      delete(CatalogTracker.META_REGION, delete);
+    LOG.info("Deleted daughter reference " + daughter.getRegionNameAsString() +
+      ", qualifier=" + Bytes.toString(qualifier) + ", from parent " +
+      parent.getRegionNameAsString());
+  }
+
+  /**
+   * Updates the region information for the specified region in META.
+   * @param catalogTracker
+   * @param regionInfo region to be updated in META
+   * @throws IOException
+   */
+  public static void updateRegionInfo(CatalogTracker catalogTracker,
+      HRegionInfo regionInfo)
+  throws IOException {
+    Put put = new Put(regionInfo.getRegionName());
+    addRegionInfo(put, regionInfo);
+    catalogTracker.waitForMetaServerConnectionDefault().put(
+        CatalogTracker.META_REGION, put);
+    LOG.info("Updated region " + regionInfo.getRegionNameAsString() + " in META");
+  }
+
+  private static Put addRegionInfo(final Put p, final HRegionInfo hri)
+  throws IOException {
+    p.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(hri));
+    return p;
+  }
+
+  private static Put addLocation(final Put p, final HServerInfo hsi) {
+    p.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
+      Bytes.toBytes(hsi.getHostnamePort()));
+    p.add(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER,
+      Bytes.toBytes(hsi.getStartCode()));
+    return p;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
new file mode 100644
index 0000000..7bf680d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/MetaReader.java
@@ -0,0 +1,602 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.catalog;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * Reads region and assignment information from <code>.META.</code>.
+ * <p>
+ * Uses the {@link CatalogTracker} to obtain locations and connections to
+ * catalogs.
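+ *
+ * <p>A minimal usage sketch (assumes a started {@link CatalogTracker}
+ * <code>ct</code>; exception handling omitted):
+ * <pre>
+ *   // Every region with its currently assigned server, according to .META.
+ *   Map&lt;HRegionInfo, HServerAddress&gt; regions = MetaReader.fullScan(ct);
+ * </pre>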
+ */
+public class MetaReader {
+  public static final byte [] META_REGION_PREFIX;
+  static {
+    // Copy the prefix from FIRST_META_REGIONINFO into META_REGION_PREFIX.
+    // FIRST_META_REGIONINFO == '.META.,,1'.  META_REGION_PREFIX == '.META.,'
+    int len = HRegionInfo.FIRST_META_REGIONINFO.getRegionName().length - 2;
+    META_REGION_PREFIX = new byte [len];
+    System.arraycopy(HRegionInfo.FIRST_META_REGIONINFO.getRegionName(), 0,
+      META_REGION_PREFIX, 0, len);
+  }
+
+  /**
+   * @param ct
+   * @param tableName A user tablename or a .META. table name.
+   * @return Interface on to server hosting the <code>-ROOT-</code> or
+   * <code>.META.</code> regions.
+   * @throws NotAllMetaRegionsOnlineException
+   * @throws IOException
+   */
+  private static HRegionInterface getCatalogRegionInterface(final CatalogTracker ct,
+      final byte [] tableName)
+  throws NotAllMetaRegionsOnlineException, IOException {
+    return Bytes.equals(HConstants.META_TABLE_NAME, tableName)?
+      ct.waitForRootServerConnectionDefault():
+      ct.waitForMetaServerConnectionDefault();
+  }
+
+  /**
+   * @param tableName
+   * @return Returns region name to look in for regions for <code>tableName</code>;
+   * e.g. if we are looking for <code>.META.</code> regions, we need to look
+   * in the <code>-ROOT-</code> region, else if a user table, we need to look
+   * in the <code>.META.</code> region.
+   */
+  private static byte [] getCatalogRegionNameForTable(final byte [] tableName) {
+    return Bytes.equals(HConstants.META_TABLE_NAME, tableName)?
+      HRegionInfo.ROOT_REGIONINFO.getRegionName():
+      HRegionInfo.FIRST_META_REGIONINFO.getRegionName();
+  }
+
+  /**
+   * @param regionName
+   * @return Returns region name to look in for <code>regionName</code>;
+   * e.g. if we are looking for <code>.META.,,1</code> region, we need to look
+   * in <code>-ROOT-</code> region, else if a user region, we need to look
+   * in the <code>.META.,,1</code> region.
+   */
+  private static byte [] getCatalogRegionNameForRegion(final byte [] regionName) {
+    return isMetaRegion(regionName)?
+      HRegionInfo.ROOT_REGIONINFO.getRegionName():
+      HRegionInfo.FIRST_META_REGIONINFO.getRegionName();
+  }
+
+  /**
+   * @param regionName
+   * @return True if <code>regionName</code> is from <code>.META.</code> table.
+   */
+  private static boolean isMetaRegion(final byte [] regionName) {
+    if (regionName.length < META_REGION_PREFIX.length + 2 /* ',' + '1' */) {
+      // Can't be meta table region.
+      return false;
+    }
+    // Compare the prefix of regionName.  If it matches META_REGION_PREFIX prefix,
+    // then this is region from .META. table.
+    return Bytes.compareTo(regionName, 0, META_REGION_PREFIX.length,
+      META_REGION_PREFIX, 0, META_REGION_PREFIX.length) == 0;
+  }
+
+  /**
+   * Performs a full scan of <code>.META.</code>.
+   * <p>
+   * Returns a map of every region to its currently assigned server, according
+   * to META.  If the region does not have an assignment it will have a null
+   * value in the map.
+   *
+   * @return map of regions to their currently assigned server
+   * @throws IOException
+   */
+  public static Map<HRegionInfo,HServerAddress> fullScan(
+      CatalogTracker catalogTracker)
+  throws IOException {
+    return fullScan(catalogTracker, new TreeSet<String>());
+  }
+
+  /**
+   * Performs a full scan of <code>.META.</code>, skipping regions from any
+   * tables in the specified set of disabled tables.
+   * <p>
+   * Returns a map of every region to its currently assigned server, according
+   * to META.  If the region does not have an assignment it will have a null
+   * value in the map.
+   *
+   * @param catalogTracker
+   * @param disabledTables set of disabled tables that will not be returned
+   * @return map of regions to their currently assigned server
+   * @throws IOException
+   */
+  public static Map<HRegionInfo,HServerAddress> fullScan(
+      CatalogTracker catalogTracker, final Set<String> disabledTables)
+  throws IOException {
+    return fullScan(catalogTracker, disabledTables, false);
+  }
+
+  /**
+   * Performs a full scan of <code>.META.</code>, skipping regions from any
+   * tables in the specified set of disabled tables.
+   * <p>
+   * Returns a map of every region to its currently assigned server, according
+   * to META.  If the region does not have an assignment it will have a null
+   * value in the map.
+   *
+   * @param catalogTracker
+   * @param disabledTables set of disabled tables that will not be returned
+   * @param excludeOfflinedSplitParents If true, do not include offlined split
+   * parents in the return.
+   * @return map of regions to their currently assigned server
+   * @throws IOException
+   */
+  public static Map<HRegionInfo,HServerAddress> fullScan(
+      CatalogTracker catalogTracker, final Set<String> disabledTables,
+      final boolean excludeOfflinedSplitParents)
+  throws IOException {
+    final Map<HRegionInfo,HServerAddress> regions =
+      new TreeMap<HRegionInfo,HServerAddress>();
+    Visitor v = new Visitor() {
+      @Override
+      public boolean visit(Result r) throws IOException {
+        if (r ==  null || r.isEmpty()) return true;
+        Pair<HRegionInfo,HServerAddress> region = metaRowToRegionPair(r);
+        if (region == null) return true;
+        HRegionInfo hri = region.getFirst();
+        if (disabledTables.contains(
+            hri.getTableDesc().getNameAsString())) return true;
+        // Are we to include split parents in the list?
+        if (excludeOfflinedSplitParents && hri.isSplitParent()) return true;
+        regions.put(hri, region.getSecond());
+        return true;
+      }
+    };
+    fullScan(catalogTracker, v);
+    return regions;
+  }
+
+  /**
+   * Performs a full scan of <code>.META.</code> and returns the raw
+   * {@link Result} for every non-empty catalog row.
+   * <p>
+   * Unlike the map-returning scans, the raw results preserve the server
+   * startcode information.
+   *
+   * @return list of raw <code>Result</code>s from a full scan of META
+   * @throws IOException
+   */
+  public static List<Result> fullScanOfResults(
+      CatalogTracker catalogTracker)
+  throws IOException {
+    final List<Result> regions = new ArrayList<Result>();
+    Visitor v = new Visitor() {
+      @Override
+      public boolean visit(Result r) throws IOException {
+        if (r ==  null || r.isEmpty()) return true;
+        regions.add(r);
+        return true;
+      }
+    };
+    fullScan(catalogTracker, v);
+    return regions;
+  }
+
+  /**
+   * Performs a full scan of <code>.META.</code>, invoking the given
+   * {@link Visitor} on every non-empty row.
+   * @param catalogTracker
+   * @param visitor
+   * @throws IOException
+   */
+  public static void fullScan(CatalogTracker catalogTracker,
+      final Visitor visitor)
+  throws IOException {
+    HRegionInterface metaServer =
+      catalogTracker.waitForMetaServerConnectionDefault();
+    Scan scan = new Scan();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    long scannerid = metaServer.openScanner(
+        HRegionInfo.FIRST_META_REGIONINFO.getRegionName(), scan);
+    try {
+      Result data;
+      while((data = metaServer.next(scannerid)) != null) {
+        if (!data.isEmpty()) visitor.visit(data);
+      }
+    } finally {
+      metaServer.close(scannerid);
+    }
+    return;
+  }
+
+  /**
+   * Reads the location of META from ROOT.
+   * @param metaServer connection to server hosting ROOT
+   * @return location of META in ROOT, null if not available
+   * @throws IOException
+   */
+  public static HServerAddress readMetaLocation(HRegionInterface metaServer)
+  throws IOException {
+    return readLocation(metaServer, CatalogTracker.ROOT_REGION,
+        CatalogTracker.META_REGION);
+  }
+
+  /**
+   * Reads the location of the specified region from META.
+   * @param catalogTracker
+   * @param regionName region to read location of
+   * @return location of region in META, null if not available
+   * @throws IOException
+   */
+  public static HServerAddress readRegionLocation(CatalogTracker catalogTracker,
+      byte [] regionName)
+  throws IOException {
+    if (isMetaRegion(regionName)) throw new IllegalArgumentException("See readMetaLocation");
+    return readLocation(catalogTracker.waitForMetaServerConnectionDefault(),
+        CatalogTracker.META_REGION, regionName);
+  }
+
+  private static HServerAddress readLocation(HRegionInterface metaServer,
+      byte [] catalogRegionName, byte [] regionName)
+  throws IOException {
+    Result r = null;
+    try {
+      r = metaServer.get(catalogRegionName,
+        new Get(regionName).addColumn(HConstants.CATALOG_FAMILY,
+        HConstants.SERVER_QUALIFIER));
+    } catch (java.net.SocketTimeoutException e) {
+      // Treat this exception + message as unavailable catalog table. Catch it
+      // and fall through to return a null
+    } catch (java.net.ConnectException e) {
+      if (e.getMessage() != null &&
+          e.getMessage().contains("Connection refused")) {
+        // Treat this exception + message as unavailable catalog table. Catch it
+        // and fall through to return a null
+      } else {
+        throw e;
+      }
+    } catch (RemoteException re) {
+      IOException ioe = re.unwrapRemoteException();
+      if (ioe instanceof NotServingRegionException) {
+        // Treat this NSRE as unavailable table.  Catch and fall through to
+        // return null below
+      } else if (ioe.getMessage() != null &&
+          ioe.getMessage().contains("Server not running")) {
+        // Treat as unavailable table.
+      } else {
+        throw re;
+      }
+    } catch (IOException e) {
+      if (e.getCause() != null && e.getCause() instanceof IOException &&
+          e.getCause().getMessage() != null &&
+          e.getCause().getMessage().contains("Connection reset by peer")) {
+        // Treat this exception + message as unavailable catalog table. Catch it
+        // and fall through to return a null
+      } else {
+        throw e;
+      }
+    }
+    if (r == null || r.isEmpty()) {
+      return null;
+    }
+    byte [] value = r.getValue(HConstants.CATALOG_FAMILY,
+      HConstants.SERVER_QUALIFIER);
+    return new HServerAddress(Bytes.toString(value));
+  }
+
+  /**
+   * Gets the region info and assignment for the specified region from META.
+   * @param catalogTracker
+   * @param regionName
+   * @return region info and assignment from META, null if not available
+   * @throws IOException
+   */
+  public static Pair<HRegionInfo, HServerAddress> getRegion(
+      CatalogTracker catalogTracker, byte [] regionName)
+  throws IOException {
+    Get get = new Get(regionName);
+    get.addFamily(HConstants.CATALOG_FAMILY);
+    byte [] meta = getCatalogRegionNameForRegion(regionName);
+    Result r = catalogTracker.waitForMetaServerConnectionDefault().get(meta, get);
+    if(r == null || r.isEmpty()) {
+      return null;
+    }
+    return metaRowToRegionPair(r);
+  }
+
+  /**
+   * @param data A .META. table row.
+   * @return A pair of the regioninfo and the server address from <code>data</code>;
+   * the server address is null if no address is set in .META., and the result
+   * itself is null if no HRegionInfo is found in the row.
+   * @throws IOException
+   */
+  public static Pair<HRegionInfo, HServerAddress> metaRowToRegionPair(
+      Result data) throws IOException {
+    byte [] bytes =
+      data.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    if (bytes == null) return null;
+    HRegionInfo info = Writables.getHRegionInfo(bytes);
+    final byte[] value = data.getValue(HConstants.CATALOG_FAMILY,
+      HConstants.SERVER_QUALIFIER);
+    if (value != null && value.length > 0) {
+      HServerAddress server = new HServerAddress(Bytes.toString(value));
+      return new Pair<HRegionInfo,HServerAddress>(info, server);
+    } else {
+      return new Pair<HRegionInfo, HServerAddress>(info, null);
+    }
+  }
+
+  /**
+   * @param data A .META. table row.
+   * @return A pair of the regioninfo and the server info from <code>data</code>;
+   * the server info is null if no address is set in .META., and the result
+   * itself is null if no HRegionInfo is found in the row.
+   * @throws IOException
+   */
+  public static Pair<HRegionInfo, HServerInfo> metaRowToRegionPairWithInfo(
+      Result data) throws IOException {
+    byte [] bytes = data.getValue(HConstants.CATALOG_FAMILY,
+      HConstants.REGIONINFO_QUALIFIER);
+    if (bytes == null) return null;
+    HRegionInfo info = Writables.getHRegionInfo(bytes);
+    final byte[] value = data.getValue(HConstants.CATALOG_FAMILY,
+      HConstants.SERVER_QUALIFIER);
+    if (value != null && value.length > 0) {
+      final long startCode = Bytes.toLong(data.getValue(HConstants.CATALOG_FAMILY,
+          HConstants.STARTCODE_QUALIFIER));
+      HServerAddress server = new HServerAddress(Bytes.toString(value));
+      HServerInfo hsi = new HServerInfo(server, startCode, 0,
+          server.getHostname());
+      return new Pair<HRegionInfo,HServerInfo>(info, hsi);
+    } else {
+      return new Pair<HRegionInfo, HServerInfo>(info, null);
+    }
+  }
+
+  /**
+   * Checks if the specified table exists.  Looks at the META table hosted on
+   * the server located via the given catalog tracker.
+   * @param catalogTracker
+   * @param tableName table to check
+   * @return true if the table exists in meta, false if not
+   * @throws IOException
+   */
+  public static boolean tableExists(CatalogTracker catalogTracker,
+      String tableName)
+  throws IOException {
+    if (tableName.equals(HTableDescriptor.ROOT_TABLEDESC.getNameAsString()) ||
+        tableName.equals(HTableDescriptor.META_TABLEDESC.getNameAsString())) {
+      // Catalog tables always exist.
+      return true;
+    }
+    HRegionInterface metaServer =
+      catalogTracker.waitForMetaServerConnectionDefault();
+    byte[] firstRowInTable = Bytes.toBytes(tableName + ",,");
+    Scan scan = new Scan(firstRowInTable);
+    scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    long scannerid = metaServer.openScanner(
+        HRegionInfo.FIRST_META_REGIONINFO.getRegionName(), scan);
+    try {
+      Result data = metaServer.next(scannerid);
+      if (data != null && data.size() > 0) {
+        HRegionInfo info = Writables.getHRegionInfo(
+          data.getValue(HConstants.CATALOG_FAMILY,
+              HConstants.REGIONINFO_QUALIFIER));
+        if (info.getTableDesc().getNameAsString().equals(tableName)) {
+          // A region for this table already exists. Ergo table exists.
+          return true;
+        }
+      }
+      return false;
+    } finally {
+      metaServer.close(scannerid);
+    }
+  }
+
+  /**
+   * Gets all of the regions of the specified table.
+   * @param catalogTracker
+   * @param tableName
+   * @return Ordered list of {@link HRegionInfo}.
+   * @throws IOException
+   */
+  public static List<HRegionInfo> getTableRegions(CatalogTracker catalogTracker,
+      byte [] tableName)
+  throws IOException {
+    return getTableRegions(catalogTracker, tableName, false);
+  }
+
+  /**
+   * Gets all of the regions of the specified table.
+   * @param catalogTracker
+   * @param tableName
+   * @param excludeOfflinedSplitParents If true, do not include offlined split
+   * parents in the return.
+   * @return Ordered list of {@link HRegionInfo}.
+   * @throws IOException
+   */
+  public static List<HRegionInfo> getTableRegions(CatalogTracker catalogTracker,
+      byte [] tableName, final boolean excludeOfflinedSplitParents)
+  throws IOException {
+    if (Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME)) {
+      // If root, do a bit of special handling.
+      List<HRegionInfo> list = new ArrayList<HRegionInfo>();
+      list.add(HRegionInfo.ROOT_REGIONINFO);
+      return list;
+    } else if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
+      // Same for .META. table
+      List<HRegionInfo> list = new ArrayList<HRegionInfo>();
+      list.add(HRegionInfo.FIRST_META_REGIONINFO);
+      return list;
+    }
+
+    // Its a user table.
+    HRegionInterface metaServer =
+      getCatalogRegionInterface(catalogTracker, tableName);
+    List<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+    String tableString = Bytes.toString(tableName);
+    byte[] firstRowInTable = Bytes.toBytes(tableString + ",,");
+    Scan scan = new Scan(firstRowInTable);
+    scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    long scannerid =
+      metaServer.openScanner(getCatalogRegionNameForTable(tableName), scan);
+    try {
+      Result data;
+      while((data = metaServer.next(scannerid)) != null) {
+        if (data != null && data.size() > 0) {
+          HRegionInfo info = Writables.getHRegionInfo(
+              data.getValue(HConstants.CATALOG_FAMILY,
+                  HConstants.REGIONINFO_QUALIFIER));
+          if (info.getTableDesc().getNameAsString().equals(tableString)) {
+            // Are we to include split parents in the list?
+            if (excludeOfflinedSplitParents && info.isSplitParent()) continue;
+            regions.add(info);
+          } else {
+            break;
+          }
+        }
+      }
+      return regions;
+    } finally {
+      metaServer.close(scannerid);
+    }
+  }
+
+  /**
+   * @param catalogTracker
+   * @param tableName
+   * @return List of regioninfos paired with their server addresses.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  public static List<Pair<HRegionInfo, HServerAddress>>
+  getTableRegionsAndLocations(CatalogTracker catalogTracker, String tableName)
+  throws IOException, InterruptedException {
+    byte [] tableNameBytes = Bytes.toBytes(tableName);
+    if (Bytes.equals(tableNameBytes, HConstants.ROOT_TABLE_NAME)) {
+      // If root, do a bit of special handling.
+      HServerAddress hsa = catalogTracker.getRootLocation();
+      List<Pair<HRegionInfo, HServerAddress>> list =
+        new ArrayList<Pair<HRegionInfo, HServerAddress>>();
+      list.add(new Pair<HRegionInfo, HServerAddress>(HRegionInfo.ROOT_REGIONINFO, hsa));
+      return list;
+    }
+    HRegionInterface metaServer =
+      getCatalogRegionInterface(catalogTracker, tableNameBytes);
+    List<Pair<HRegionInfo, HServerAddress>> regions =
+      new ArrayList<Pair<HRegionInfo, HServerAddress>>();
+    byte[] firstRowInTable = Bytes.toBytes(tableName + ",,");
+    Scan scan = new Scan(firstRowInTable);
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    long scannerid =
+      metaServer.openScanner(getCatalogRegionNameForTable(tableNameBytes), scan);
+    try {
+      Result data;
+      while((data = metaServer.next(scannerid)) != null) {
+        if (data != null && data.size() > 0) {
+          Pair<HRegionInfo, HServerAddress> region = metaRowToRegionPair(data);
+          if (region == null) continue;
+          if (region.getFirst().getTableDesc().getNameAsString().equals(
+              tableName)) {
+            regions.add(region);
+          } else {
+            break;
+          }
+        }
+      }
+      return regions;
+    } finally {
+      metaServer.close(scannerid);
+    }
+  }
+
+  /**
+   * @param catalogTracker
+   * @param hsi Server specification
+   * @return Map of user regions carried by the given server to their META row
+   * {@link Result} (does not include catalog regions).
+   * @throws IOException
+   */
+  public static NavigableMap<HRegionInfo, Result>
+  getServerUserRegions(CatalogTracker catalogTracker, final HServerInfo hsi)
+  throws IOException {
+    HRegionInterface metaServer =
+      catalogTracker.waitForMetaServerConnectionDefault();
+    NavigableMap<HRegionInfo, Result> hris = new TreeMap<HRegionInfo, Result>();
+    Scan scan = new Scan();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    long scannerid = metaServer.openScanner(
+        HRegionInfo.FIRST_META_REGIONINFO.getRegionName(), scan);
+    try {
+      Result result;
+      while((result = metaServer.next(scannerid)) != null) {
+        if (result != null && result.size() > 0) {
+          Pair<HRegionInfo, HServerInfo> pair =
+            metaRowToRegionPairWithInfo(result);
+          if (pair == null) continue;
+          if (pair.getSecond() == null || !pair.getSecond().equals(hsi)) {
+            continue;
+          }
+          hris.put(pair.getFirst(), result);
+        }
+      }
+      return hris;
+    } finally {
+      metaServer.close(scannerid);
+    }
+  }
+
+  /**
+   * Implementations 'visit' a catalog table row.
+   */
+  public interface Visitor {
+    /**
+     * Visit the catalog table row.
+     * @param r A row from catalog table
+     * @return True if we are to proceed scanning the table, else false if
+     * we are to stop now.
+     */
+    public boolean visit(final Result r) throws IOException;
+  }
+}
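As a usage sketch of the Visitor-based full scan above (illustrative only, not part of this patch): a hypothetical helper that counts the catalog rows per table, assuming the caller has already constructed and start()ed a CatalogTracker.

import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.catalog.CatalogTracker;
import org.apache.hadoop.hbase.catalog.MetaReader;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Pair;

public class MetaRegionCounter {
  /** Returns a map of table name to the number of regions found in .META. */
  public static Map<String, Integer> countRegionsPerTable(CatalogTracker ct)
  throws IOException {
    final Map<String, Integer> counts = new TreeMap<String, Integer>();
    MetaReader.fullScan(ct, new MetaReader.Visitor() {
      public boolean visit(Result r) throws IOException {
        if (r == null || r.isEmpty()) return true;
        Pair<HRegionInfo, HServerAddress> p = MetaReader.metaRowToRegionPair(r);
        if (p == null) return true;
        String table = p.getFirst().getTableDesc().getNameAsString();
        Integer previous = counts.get(table);
        counts.put(table, previous == null ? 1 : previous + 1);
        return true; // keep scanning
      }
    });
    return counts;
  }
}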
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/catalog/RootLocationEditor.java b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/RootLocationEditor.java
new file mode 100644
index 0000000..aee64c5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/catalog/RootLocationEditor.java
@@ -0,0 +1,72 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.catalog;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Makes changes to the location of <code>-ROOT-</code> in ZooKeeper.
+ */
+public class RootLocationEditor {
+  private static final Log LOG = LogFactory.getLog(RootLocationEditor.class);
+
+  /**
+   * Deletes the location of <code>-ROOT-</code> in ZooKeeper.
+   * @param zookeeper zookeeper reference
+   * @throws KeeperException unexpected zookeeper exception
+   */
+  public static void deleteRootLocation(ZooKeeperWatcher zookeeper)
+  throws KeeperException {
+    LOG.info("Unsetting ROOT region location in ZooKeeper");
+    try {
+      // Just delete the node.  No watches needed; we are the only ones who create it.
+      ZKUtil.deleteNode(zookeeper, zookeeper.rootServerZNode);
+    } catch(KeeperException.NoNodeException nne) {
+      // Has already been deleted
+    }
+  }
+
+  /**
+   * Sets the location of <code>-ROOT-</code> in ZooKeeper to the
+   * specified server address.
+   * @param zookeeper zookeeper reference
+   * @param location server address hosting root
+   * @throws KeeperException unexpected zookeeper exception
+   */
+  public static void setRootLocation(ZooKeeperWatcher zookeeper,
+      HServerAddress location)
+  throws KeeperException {
+    LOG.info("Setting ROOT region location in ZooKeeper as " + location);
+    try {
+      ZKUtil.createAndWatch(zookeeper, zookeeper.rootServerZNode,
+        Bytes.toBytes(location.toString()));
+    } catch(KeeperException.NodeExistsException nee) {
+      LOG.debug("ROOT region location already existed, updated location");
+      ZKUtil.setData(zookeeper, zookeeper.rootServerZNode,
+          Bytes.toBytes(location.toString()));
+    }
+  }
+}
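A minimal sketch of how the editor above might be driven (illustrative only; the RootLocationExample class and the host:port string are placeholders), assuming an already-connected ZooKeeperWatcher supplied by the caller:

import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.catalog.RootLocationEditor;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

public class RootLocationExample {
  /** Publishes the given server as the -ROOT- host, then clears the znode. */
  public static void cycleRootLocation(ZooKeeperWatcher zkw, String hostAndPort)
  throws KeeperException {
    // HServerAddress parses "host:port" strings, mirroring the format that
    // the catalog code reads back out of ZooKeeper and .META.
    HServerAddress location = new HServerAddress(hostAndPort);
    RootLocationEditor.setRootLocation(zkw, location);
    // Later, for example when the hosting server dies:
    RootLocationEditor.deleteRootLocation(zkw);
  }
}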
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Action.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Action.java
new file mode 100644
index 0000000..556ea81
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Action.java
@@ -0,0 +1,100 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * A Get, Put or Delete associated with its region.  Used internally by
+ * {@link HTable#batch} to associate the action with its region and to maintain
+ * the index from the original request.
+ */
+public class Action implements Writable, Comparable {
+
+  private byte[] regionName;
+  private Row action;
+  private int originalIndex;
+  private Result result;
+
+  public Action() {
+    super();
+  }
+
+  public Action(byte[] regionName, Row action, int originalIndex) {
+    super();
+    this.regionName = regionName;
+    this.action = action;
+    this.originalIndex = originalIndex;
+  }
+
+  public byte[] getRegionName() {
+    return regionName;
+  }
+
+  public void setRegionName(byte[] regionName) {
+    this.regionName = regionName;
+  }
+
+  public Result getResult() {
+    return result;
+  }
+
+  public void setResult(Result result) {
+    this.result = result;
+  }
+
+  public Row getAction() {
+    return action;
+  }
+
+  public int getOriginalIndex() {
+    return originalIndex;
+  }
+
+  @Override
+  public int compareTo(Object o) {
+    return action.compareTo(((Action) o).getAction());
+  }
+
+  // ///////////////////////////////////////////////////////////////////////////
+  // Writable
+  // ///////////////////////////////////////////////////////////////////////////
+
+  public void write(final DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, regionName);
+    HbaseObjectWritable.writeObject(out, action, Row.class, null);
+    out.writeInt(originalIndex);
+    HbaseObjectWritable.writeObject(out, result, Result.class, null);
+  }
+
+  public void readFields(final DataInput in) throws IOException {
+    this.regionName = Bytes.readByteArray(in);
+    this.action = (Row) HbaseObjectWritable.readObject(in, null);
+    this.originalIndex = in.readInt();
+    this.result = (Result) HbaseObjectWritable.readObject(in, null);
+  }
+
+}
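A small sketch of the ordering contract (illustrative only; the class name, region name and rows are placeholders): actions sort by the row key of the wrapped operation, while originalIndex preserves the caller's ordering so batched results can be handed back in request order.

import java.util.Arrays;

import org.apache.hadoop.hbase.client.Action;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ActionOrderingExample {
  public static void main(String[] args) {
    byte[] region = Bytes.toBytes("someRegionName");   // placeholder region name
    Action[] actions = new Action[] {
      new Action(region, new Put(Bytes.toBytes("row-b")), 0),
      new Action(region, new Put(Bytes.toBytes("row-a")), 1)
    };
    // Arrays.sort uses Action.compareTo, which delegates to the wrapped Row,
    // so the actions end up ordered row-a, row-b regardless of request order.
    Arrays.sort(actions);
    for (Action a : actions) {
      System.out.println(Bytes.toString(a.getAction().getRow())
          + " was originally at index " + a.getOriginalIndex());
    }
  }
}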
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Delete.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Delete.java
new file mode 100644
index 0000000..54f2244
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Delete.java
@@ -0,0 +1,391 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Used to perform Delete operations on a single row.
+ * <p>
+ * To delete an entire row, instantiate a Delete object with the row
+ * to delete.  To further define the scope of what to delete, call the
+ * additional methods outlined below.
+ * <p>
+ * To delete specific families, execute {@link #deleteFamily(byte[]) deleteFamily}
+ * for each family to delete.
+ * <p>
+ * To delete multiple versions of specific columns, execute
+ * {@link #deleteColumns(byte[], byte[]) deleteColumns}
+ * for each column to delete.
+ * <p>
+ * To delete specific versions of specific columns, execute
+ * {@link #deleteColumn(byte[], byte[], long) deleteColumn}
+ * for each column version to delete.
+ * <p>
+ * Specifying timestamps, deleteFamily and deleteColumns will delete all
+ * versions with a timestamp less than or equal to that passed.  If no
+ * timestamp is specified, an entry is added with a timestamp of 'now'
+ * where 'now' is the server's System.currentTimeMillis().
+ * Specifying a timestamp to the deleteColumn method will
+ * delete versions only with a timestamp equal to that specified.
+ * If no timestamp is passed to deleteColumn, internally, it figures the
+ * most recent cell's timestamp and adds a delete at that timestamp; i.e.
+ * it deletes the most recently added cell.
+ * <p>The timestamp passed to the constructor is used ONLY for delete of
+ * rows.  For anything finer-grained, a deleteColumn, deleteColumns or
+ * deleteFamily, you need to use the method overloads that take a
+ * timestamp.  The constructor timestamp is not referenced by those calls.
+ */
+public class Delete implements Writable, Row, Comparable<Row> {
+  private static final byte DELETE_VERSION = (byte)1;
+
+  private byte [] row = null;
+  // This ts is only used when doing a whole-row delete; finer-grained deletes
+  // carry their own timestamps in the familyMap.
+  private long ts;
+  private long lockId = -1L;
+  private final Map<byte [], List<KeyValue>> familyMap =
+    new TreeMap<byte [], List<KeyValue>>(Bytes.BYTES_COMPARATOR);
+
+  /** Constructor for Writable.  DO NOT USE */
+  public Delete() {
+    this((byte [])null);
+  }
+
+  /**
+   * Create a Delete operation for the specified row.
+   * <p>
+   * If no further operations are done, this will delete everything
+   * associated with the specified row (all versions of all columns in all
+   * families).
+   * @param row row key
+   */
+  public Delete(byte [] row) {
+    this(row, HConstants.LATEST_TIMESTAMP, null);
+  }
+
+  /**
+   * Create a Delete operation for the specified row and timestamp, using
+   * an optional row lock.<p>
+   *
+   * If no further operations are done, this will delete all columns in all
+   * families of the specified row with a timestamp less than or equal to the
+   * specified timestamp.<p>
+   *
+   * This timestamp is ONLY used for a delete row operation.  If specifying
+   * families or columns, you must specify each timestamp individually.
+   * @param row row key
+   * @param timestamp maximum version timestamp (only for delete row)
+   * @param rowLock previously acquired row lock, or null
+   */
+  public Delete(byte [] row, long timestamp, RowLock rowLock) {
+    this.row = row;
+    this.ts = timestamp;
+    if (rowLock != null) {
+      this.lockId = rowLock.getLockId();
+    }
+  }
+
+  /**
+   * @param d Delete to clone.
+   */
+  public Delete(final Delete d) {
+    this.row = d.getRow();
+    this.ts = d.getTimeStamp();
+    this.lockId = d.getLockId();
+    this.familyMap.putAll(d.getFamilyMap());
+  }
+
+  public int compareTo(final Row d) {
+    return Bytes.compareTo(this.getRow(), d.getRow());
+  }
+
+  /**
+   * Method to check if the familyMap is empty
+   * @return true if empty, false otherwise
+   */
+  public boolean isEmpty() {
+    return familyMap.isEmpty();
+  }
+
+  /**
+   * Delete all versions of all columns of the specified family.
+   * <p>
+   * Overrides previous calls to deleteColumn and deleteColumns for the
+   * specified family.
+   * @param family family name
+   * @return this for invocation chaining
+   */
+  public Delete deleteFamily(byte [] family) {
+    this.deleteFamily(family, HConstants.LATEST_TIMESTAMP);
+    return this;
+  }
+
+  /**
+   * Delete all columns of the specified family with a timestamp less than
+   * or equal to the specified timestamp.
+   * <p>
+   * Overrides previous calls to deleteColumn and deleteColumns for the
+   * specified family.
+   * @param family family name
+   * @param timestamp maximum version timestamp
+   * @return this for invocation chaining
+   */
+  public Delete deleteFamily(byte [] family, long timestamp) {
+    List<KeyValue> list = familyMap.get(family);
+    if(list == null) {
+      list = new ArrayList<KeyValue>();
+    } else if(!list.isEmpty()) {
+      list.clear();
+    }
+    list.add(new KeyValue(row, family, null, timestamp, KeyValue.Type.DeleteFamily));
+    familyMap.put(family, list);
+    return this;
+  }
+
+  /**
+   * Delete all versions of the specified column.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @return this for invocation chaining
+   */
+  public Delete deleteColumns(byte [] family, byte [] qualifier) {
+    this.deleteColumns(family, qualifier, HConstants.LATEST_TIMESTAMP);
+    return this;
+  }
+
+  /**
+   * Delete all versions of the specified column with a timestamp less than
+   * or equal to the specified timestamp.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param timestamp maximum version timestamp
+   * @return this for invocation chaining
+   */
+  public Delete deleteColumns(byte [] family, byte [] qualifier, long timestamp) {
+    List<KeyValue> list = familyMap.get(family);
+    if (list == null) {
+      list = new ArrayList<KeyValue>();
+    }
+    list.add(new KeyValue(this.row, family, qualifier, timestamp,
+      KeyValue.Type.DeleteColumn));
+    familyMap.put(family, list);
+    return this;
+  }
+
+  /**
+   * Delete the latest version of the specified column.
+   * This is an expensive call in that, on the server side, it first does a
+   * get to find the latest version's timestamp.  Then it adds a delete using
+   * the fetched cell's timestamp.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @return this for invocation chaining
+   */
+  public Delete deleteColumn(byte [] family, byte [] qualifier) {
+    this.deleteColumn(family, qualifier, HConstants.LATEST_TIMESTAMP);
+    return this;
+  }
+
+  /**
+   * Delete the specified version of the specified column.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param timestamp version timestamp
+   * @return this for invocation chaining
+   */
+  public Delete deleteColumn(byte [] family, byte [] qualifier, long timestamp) {
+    List<KeyValue> list = familyMap.get(family);
+    if(list == null) {
+      list = new ArrayList<KeyValue>();
+    }
+    list.add(new KeyValue(
+        this.row, family, qualifier, timestamp, KeyValue.Type.Delete));
+    familyMap.put(family, list);
+    return this;
+  }
+
+  /**
+   * Method for retrieving the delete's familyMap
+   * @return familyMap
+   */
+  public Map<byte [], List<KeyValue>> getFamilyMap() {
+    return this.familyMap;
+  }
+
+  /**
+   *  Method for retrieving the delete's row
+   * @return row
+   */
+  public byte [] getRow() {
+    return this.row;
+  }
+
+  /**
+   * Method for retrieving the delete's RowLock
+   * @return RowLock
+   */
+  public RowLock getRowLock() {
+    return new RowLock(this.row, this.lockId);
+  }
+
+  /**
+   * Method for retrieving the delete's lock ID.
+   *
+   * @return The lock ID.
+   */
+  public long getLockId() {
+    return this.lockId;
+  }
+
+  /**
+   * Method for retrieving the delete's timestamp
+   * @return timestamp
+   */
+  public long getTimeStamp() {
+    return this.ts;
+  }
+
+  /**
+   * Set the timestamp of the delete (used only for whole-row deletes).
+   *
+   * @param timestamp the delete timestamp
+   */
+  public void setTimestamp(long timestamp) {
+    this.ts = timestamp;
+  }
+
+  /**
+   * @return string
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("row=");
+    sb.append(Bytes.toString(this.row));
+    sb.append(", ts=");
+    sb.append(this.ts);
+    sb.append(", families={");
+    boolean moreThanOne = false;
+    for(Map.Entry<byte [], List<KeyValue>> entry : this.familyMap.entrySet()) {
+      if(moreThanOne) {
+        sb.append(", ");
+      } else {
+        moreThanOne = true;
+      }
+      sb.append("(family=");
+      sb.append(Bytes.toString(entry.getKey()));
+      sb.append(", keyvalues=(");
+      boolean moreThanOneB = false;
+      for(KeyValue kv : entry.getValue()) {
+        if(moreThanOneB) {
+          sb.append(", ");
+        } else {
+          moreThanOneB = true;
+        }
+        sb.append(kv.toString());
+      }
+      sb.append(")");
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
+  //Writable
+  public void readFields(final DataInput in) throws IOException {
+    int version = in.readByte();
+    if (version > DELETE_VERSION) {
+      throw new IOException("version not supported");
+    }
+    this.row = Bytes.readByteArray(in);
+    this.ts = in.readLong();
+    this.lockId = in.readLong();
+    this.familyMap.clear();
+    int numFamilies = in.readInt();
+    for(int i=0;i<numFamilies;i++) {
+      byte [] family = Bytes.readByteArray(in);
+      int numColumns = in.readInt();
+      List<KeyValue> list = new ArrayList<KeyValue>(numColumns);
+      for(int j=0;j<numColumns;j++) {
+        KeyValue kv = new KeyValue();
+        kv.readFields(in);
+        list.add(kv);
+      }
+      this.familyMap.put(family, list);
+    }
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeByte(DELETE_VERSION);
+    Bytes.writeByteArray(out, this.row);
+    out.writeLong(this.ts);
+    out.writeLong(this.lockId);
+    out.writeInt(familyMap.size());
+    for(Map.Entry<byte [], List<KeyValue>> entry : familyMap.entrySet()) {
+      Bytes.writeByteArray(out, entry.getKey());
+      List<KeyValue> list = entry.getValue();
+      out.writeInt(list.size());
+      for(KeyValue kv : list) {
+        kv.write(out);
+      }
+    }
+  }
+
+  /**
+   * Delete all versions of the specified column, given in
+   * <code>family:qualifier</code> notation, and with a timestamp less than
+   * or equal to the specified timestamp.
+   * @param column colon-delimited family and qualifier
+   * @param timestamp maximum version timestamp
+   * @deprecated use {@link #deleteColumns(byte[], byte[], long)} instead
+   * @return this for invocation chaining
+   */
+  public Delete deleteColumns(byte [] column, long timestamp) {
+    byte [][] parts = KeyValue.parseColumn(column);
+    this.deleteColumns(parts[0], parts[1], timestamp);
+    return this;
+  }
+
+  /**
+   * Delete the latest version of the specified column, given in
+   * <code>family:qualifier</code> notation.
+   * @param column colon-delimited family and qualifier
+   * @deprecated use {@link #deleteColumn(byte[], byte[])} instead
+   * @return this for invocation chaining
+   */
+  public Delete deleteColumn(byte [] column) {
+    byte [][] parts = KeyValue.parseColumn(column);
+    this.deleteColumn(parts[0], parts[1], HConstants.LATEST_TIMESTAMP);
+    return this;
+  }
+
+
+}
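A short sketch tying the timestamp rules above together (illustrative only; the table, family and qualifier names are placeholders), assuming an HTable pointed at an existing table:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");          // placeholder table
    Delete d = new Delete(Bytes.toBytes("row1"));
    // Everything in family f1 with a timestamp at or below 100:
    d.deleteFamily(Bytes.toBytes("f1"), 100L);
    // All versions of f2:q1:
    d.deleteColumns(Bytes.toBytes("f2"), Bytes.toBytes("q1"));
    // Only the cell f2:q2 at exactly timestamp 42:
    d.deleteColumn(Bytes.toBytes("f2"), Bytes.toBytes("q2"), 42L);
    table.delete(d);
  }
}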
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Get.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Get.java
new file mode 100644
index 0000000..0547933
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Get.java
@@ -0,0 +1,475 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableFactories;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+/**
+ * Used to perform Get operations on a single row.
+ * <p>
+ * To get everything for a row, instantiate a Get object with the row to get.
+ * To further define the scope of what to get, perform additional methods as
+ * outlined below.
+ * <p>
+ * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
+ * for each family to retrieve.
+ * <p>
+ * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
+ * for each column to retrieve.
+ * <p>
+ * To only retrieve columns within a specific range of version timestamps,
+ * execute {@link #setTimeRange(long, long) setTimeRange}.
+ * <p>
+ * To only retrieve columns with a specific timestamp, execute
+ * {@link #setTimeStamp(long) setTimeStamp}.
+ * <p>
+ * To limit the number of versions of each column to be returned, execute
+ * {@link #setMaxVersions(int) setMaxVersions}.
+ * <p>
+ * To add a filter, execute {@link #setFilter(Filter) setFilter}.
+ */
+public class Get implements Writable, Row, Comparable<Row> {
+  private static final byte GET_VERSION = (byte)1;
+
+  private byte [] row = null;
+  private long lockId = -1L;
+  private int maxVersions = 1;
+  private boolean cacheBlocks = true;
+  private Filter filter = null;
+  private TimeRange tr = new TimeRange();
+  private Map<byte [], NavigableSet<byte []>> familyMap =
+    new TreeMap<byte [], NavigableSet<byte []>>(Bytes.BYTES_COMPARATOR);
+
+  /** Constructor for Writable.  DO NOT USE */
+  public Get() {}
+
+  /**
+   * Create a Get operation for the specified row.
+   * <p>
+   * If no further operations are done, this will get the latest version of
+   * all columns in all families of the specified row.
+   * @param row row key
+   */
+  public Get(byte [] row) {
+    this(row, null);
+  }
+
+  /**
+   * Create a Get operation for the specified row, using an existing row lock.
+   * <p>
+   * If no further operations are done, this will get the latest version of
+   * all columns in all families of the specified row.
+   * @param row row key
+   * @param rowLock previously acquired row lock, or null
+   */
+  public Get(byte [] row, RowLock rowLock) {
+    this.row = row;
+    if(rowLock != null) {
+      this.lockId = rowLock.getLockId();
+    }
+  }
+
+  /**
+   * Get all columns from the specified family.
+   * <p>
+   * Overrides previous calls to addColumn for this family.
+   * @param family family name
+   * @return the Get object
+   */
+  public Get addFamily(byte [] family) {
+    familyMap.remove(family);
+    familyMap.put(family, null);
+    return this;
+  }
+
+  /**
+   * Get the column from the specific family with the specified qualifier.
+   * <p>
+   * Overrides previous calls to addFamily for this family.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @return the Get object
+   */
+  public Get addColumn(byte [] family, byte [] qualifier) {
+    NavigableSet<byte []> set = familyMap.get(family);
+    if(set == null) {
+      set = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+    }
+    set.add(qualifier);
+    familyMap.put(family, set);
+    return this;
+  }
+
+  /**
+   * Get versions of columns only within the specified timestamp range,
+   * [minStamp, maxStamp).
+   * @param minStamp minimum timestamp value, inclusive
+   * @param maxStamp maximum timestamp value, exclusive
+   * @throws IOException if invalid time range
+   * @return this for invocation chaining
+   */
+  public Get setTimeRange(long minStamp, long maxStamp)
+  throws IOException {
+    tr = new TimeRange(minStamp, maxStamp);
+    return this;
+  }
+
+  /**
+   * Get versions of columns with the specified timestamp.
+   * @param timestamp version timestamp
+   * @return this for invocation chaining
+   */
+  public Get setTimeStamp(long timestamp) {
+    try {
+      tr = new TimeRange(timestamp, timestamp+1);
+    } catch(IOException e) {
+      // Will never happen
+    }
+    return this;
+  }
+
+  /**
+   * Get all available versions.
+   * @return this for invocation chaining
+   */
+  public Get setMaxVersions() {
+    this.maxVersions = Integer.MAX_VALUE;
+    return this;
+  }
+
+  /**
+   * Get up to the specified number of versions of each column.
+   * @param maxVersions maximum versions for each column
+   * @throws IOException if invalid number of versions
+   * @return this for invocation chaining
+   */
+  public Get setMaxVersions(int maxVersions) throws IOException {
+    if(maxVersions <= 0) {
+      throw new IOException("maxVersions must be positive");
+    }
+    this.maxVersions = maxVersions;
+    return this;
+  }
+
+  /**
+   * Apply the specified server-side filter when performing the Get.
+   * Only {@link Filter#filterKeyValue(KeyValue)} is called AFTER all tests
+   * for ttl, column match, deletes and max versions have been run.
+   * @param filter filter to run on the server
+   * @return this for invocation chaining
+   */
+  public Get setFilter(Filter filter) {
+    this.filter = filter;
+    return this;
+  }
+
+  /* Accessors */
+
+  /**
+   * @return Filter
+   */
+  public Filter getFilter() {
+    return this.filter;
+  }
+
+  /**
+   * Set whether blocks should be cached for this Get.
+   * <p>
+   * This is true by default.  When true, the default settings of the table and
+   * family are used (this setting will never turn block caching on if the block
+   * cache is disabled for that family or for the cluster as a whole).
+   *
+   * @param cacheBlocks if false, default settings are overridden and blocks
+   * will not be cached
+   */
+  public void setCacheBlocks(boolean cacheBlocks) {
+    this.cacheBlocks = cacheBlocks;
+  }
+
+  /**
+   * Get whether blocks should be cached for this Get.
+   * @return true if default caching should be used, false if blocks should not
+   * be cached
+   */
+  public boolean getCacheBlocks() {
+    return cacheBlocks;
+  }
+
+  /**
+   * Method for retrieving the get's row
+   * @return row
+   */
+  public byte [] getRow() {
+    return this.row;
+  }
+
+  /**
+   * Method for retrieving the get's RowLock
+   * @return RowLock
+   */
+  public RowLock getRowLock() {
+    return new RowLock(this.row, this.lockId);
+  }
+
+  /**
+   * Method for retrieving the get's lockId
+   * @return lockId
+   */
+  public long getLockId() {
+    return this.lockId;
+  }
+
+  /**
+   * Method for retrieving the get's maximum number of versions
+   * @return the maximum number of versions to fetch for this get
+   */
+  public int getMaxVersions() {
+    return this.maxVersions;
+  }
+
+  /**
+   * Method for retrieving the get's TimeRange
+   * @return timeRange
+   */
+  public TimeRange getTimeRange() {
+    return this.tr;
+  }
+
+  /**
+   * Method for retrieving the keys in the familyMap
+   * @return keys in the current familyMap
+   */
+  public Set<byte[]> familySet() {
+    return this.familyMap.keySet();
+  }
+
+  /**
+   * Method for retrieving the number of families to get from
+   * @return number of families
+   */
+  public int numFamilies() {
+    return this.familyMap.size();
+  }
+
+  /**
+   * Method for checking if any families have been added to this Get
+   * @return true if familyMap is non-empty, false otherwise
+   */
+  public boolean hasFamilies() {
+    return !this.familyMap.isEmpty();
+  }
+
+  /**
+   * Method for retrieving the get's familyMap
+   * @return familyMap
+   */
+  public Map<byte[],NavigableSet<byte[]>> getFamilyMap() {
+    return this.familyMap;
+  }
+
+  /**
+   * @return String
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("row=");
+    sb.append(Bytes.toString(this.row));
+    sb.append(", maxVersions=");
+    sb.append("").append(this.maxVersions);
+    sb.append(", cacheBlocks=");
+    sb.append(this.cacheBlocks);
+    sb.append(", timeRange=");
+    sb.append("[").append(this.tr.getMin()).append(",");
+    sb.append(this.tr.getMax()).append(")");
+    sb.append(", families=");
+    if(this.familyMap.size() == 0) {
+      sb.append("ALL");
+      return sb.toString();
+    }
+    boolean moreThanOne = false;
+    for(Map.Entry<byte [], NavigableSet<byte[]>> entry :
+      this.familyMap.entrySet()) {
+      if(moreThanOne) {
+        sb.append("), ");
+      } else {
+        moreThanOne = true;
+        sb.append("{");
+      }
+      sb.append("(family=");
+      sb.append(Bytes.toString(entry.getKey()));
+      sb.append(", columns=");
+      if(entry.getValue() == null) {
+        sb.append("ALL");
+      } else {
+        sb.append("{");
+        boolean moreThanOneB = false;
+        for(byte [] column : entry.getValue()) {
+          if(moreThanOneB) {
+            sb.append(", ");
+          } else {
+            moreThanOneB = true;
+          }
+          sb.append(Bytes.toString(column));
+        }
+        sb.append("}");
+      }
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
+  //Row
+  public int compareTo(Row other) {
+    return Bytes.compareTo(this.getRow(), other.getRow());
+  }
+  
+  //Writable
+  public void readFields(final DataInput in)
+  throws IOException {
+    int version = in.readByte();
+    if (version > GET_VERSION) {
+      throw new IOException("unsupported version");
+    }
+    this.row = Bytes.readByteArray(in);
+    this.lockId = in.readLong();
+    this.maxVersions = in.readInt();
+    boolean hasFilter = in.readBoolean();
+    if (hasFilter) {
+      this.filter = (Filter)createForName(Bytes.toString(Bytes.readByteArray(in)));
+      this.filter.readFields(in);
+    }
+    this.cacheBlocks = in.readBoolean();
+    this.tr = new TimeRange();
+    tr.readFields(in);
+    int numFamilies = in.readInt();
+    this.familyMap =
+      new TreeMap<byte [],NavigableSet<byte []>>(Bytes.BYTES_COMPARATOR);
+    for(int i=0; i<numFamilies; i++) {
+      byte [] family = Bytes.readByteArray(in);
+      boolean hasColumns = in.readBoolean();
+      NavigableSet<byte []> set = null;
+      if(hasColumns) {
+        int numColumns = in.readInt();
+        set = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+        for(int j=0; j<numColumns; j++) {
+          byte [] qualifier = Bytes.readByteArray(in);
+          set.add(qualifier);
+        }
+      }
+      this.familyMap.put(family, set);
+    }
+  }
+
+  public void write(final DataOutput out)
+  throws IOException {
+    out.writeByte(GET_VERSION);
+    Bytes.writeByteArray(out, this.row);
+    out.writeLong(this.lockId);
+    out.writeInt(this.maxVersions);
+    if(this.filter == null) {
+      out.writeBoolean(false);
+    } else {
+      out.writeBoolean(true);
+      Bytes.writeByteArray(out, Bytes.toBytes(filter.getClass().getName()));
+      filter.write(out);
+    }
+    out.writeBoolean(this.cacheBlocks);
+    tr.write(out);
+    out.writeInt(familyMap.size());
+    for(Map.Entry<byte [], NavigableSet<byte []>> entry :
+      familyMap.entrySet()) {
+      Bytes.writeByteArray(out, entry.getKey());
+      NavigableSet<byte []> columnSet = entry.getValue();
+      if(columnSet == null) {
+        out.writeBoolean(false);
+      } else {
+        out.writeBoolean(true);
+        out.writeInt(columnSet.size());
+        for(byte [] qualifier : columnSet) {
+          Bytes.writeByteArray(out, qualifier);
+        }
+      }
+    }
+  }
+
+  @SuppressWarnings("unchecked")
+  private Writable createForName(String className) {
+    try {
+      Class<? extends Writable> clazz =
+        (Class<? extends Writable>) Class.forName(className);
+      return WritableFactories.newInstance(clazz, new Configuration());
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Can't find class " + className);
+    }
+  }
+
+  /**
+   * Adds an array of columns specified in the old format, family:qualifier.
+   * <p>
+   * Overrides previous calls to addFamily for any families in the input.
+   * @param columns array of columns, each formatted as <code>family:qualifier</code>
+   * @deprecated issue multiple {@link #addColumn(byte[], byte[])} calls instead
+   * @return this for invocation chaining
+   */
+  @SuppressWarnings({"deprecation"})
+  public Get addColumns(byte [][] columns) {
+    if (columns == null) return this;
+    for (byte[] column : columns) {
+      try {
+        addColumn(column);
+      } catch (Exception ignored) {
+      }
+    }
+    return this;
+  }
+
+  /**
+   *
+   * @param column Old format column.
+   * @return This.
+   * @deprecated use {@link #addColumn(byte[], byte[])} instead
+   */
+  public Get addColumn(final byte [] column) {
+    if (column == null) return this;
+    byte [][] split = KeyValue.parseColumn(column);
+    if (split.length > 1 && split[1] != null && split[1].length > 0) {
+      addColumn(split[0], split[1]);
+    } else {
+      addFamily(split[0]);
+    }
+    return this;
+  }
+}
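A short sketch of composing a Get as the class comment describes (illustrative only; the class, table, family and qualifier names are placeholders):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");             // placeholder table
    Get g = new Get(Bytes.toBytes("row1"));
    g.addFamily(Bytes.toBytes("f1"));                       // whole family
    g.addColumn(Bytes.toBytes("f2"), Bytes.toBytes("q1"));  // single column
    g.setTimeRange(0L, 200L);                               // versions in [0, 200)
    g.setMaxVersions(3);                                    // up to 3 per column
    Result r = table.get(g);
    byte[] value = r.getValue(Bytes.toBytes("f2"), Bytes.toBytes("q1"));
    System.out.println("f2:q1 = " + Bytes.toStringBinary(value));
  }
}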
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
new file mode 100644
index 0000000..2fba18e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
@@ -0,0 +1,1200 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.RegionException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.UnknownRegionException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * Provides an interface to manage HBase database table metadata and general
+ * administrative functions.  Use HBaseAdmin to create, drop, list, enable and
+ * disable tables. Use it also to add and drop table column families.
+ * 
+ * <p>See {@link HTable} to add, update, and delete data from an individual table.
+ * <p>Currently HBaseAdmin instances are not expected to be long-lived.  For
+ * example, an HBaseAdmin instance will not ride over a Master restart.
+ */
+public class HBaseAdmin implements Abortable {
+  private final Log LOG = LogFactory.getLog(this.getClass().getName());
+  final HConnection connection;
+  private volatile Configuration conf;
+  private final long pause;
+  private final int numRetries;
+  // Some operations can take a long time such as disable of big table.
+  // numRetries is for 'normal' stuff... Multiply by this factor when we
+  // want to wait a long time.
+  private final int retryLongerMultiplier;
+
+  /**
+   * Constructor
+   *
+   * @param conf Configuration object
+   * @throws MasterNotRunningException if the master is not running
+   * @throws ZooKeeperConnectionException if unable to connect to zookeeper
+   */
+  public HBaseAdmin(Configuration conf)
+  throws MasterNotRunningException, ZooKeeperConnectionException {
+    this.connection = HConnectionManager.getConnection(conf);
+    this.conf = conf;
+    this.pause = conf.getLong("hbase.client.pause", 1000);
+    this.numRetries = conf.getInt("hbase.client.retries.number", 10);
+    this.retryLongerMultiplier = conf.getInt("hbase.client.retries.longer.multiplier", 10);
+    this.connection.getMaster();
+  }
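+  // Illustrative sketch (comment only, not part of this class): the retry
+  // settings read above can be tuned on the Configuration before construction.
+  // For example:
+  //   Configuration conf = HBaseConfiguration.create();
+  //   conf.setLong("hbase.client.pause", 500);         // ms between retries
+  //   conf.setInt("hbase.client.retries.number", 5);   // number of attempts
+  //   HBaseAdmin admin = new HBaseAdmin(conf);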
+
+  /**
+   * @return A new CatalogTracker instance; call {@link #cleanupCatalogTracker(CatalogTracker)}
+   * to clean up the returned catalog tracker.
+   * @throws ZooKeeperConnectionException
+   * @throws IOException
+   * @see #cleanupCatalogTracker(CatalogTracker)
+   */
+  private synchronized CatalogTracker getCatalogTracker()
+  throws ZooKeeperConnectionException, IOException {
+    CatalogTracker ct = null;
+    try {
+      HConnection connection =
+        HConnectionManager.getConnection(new Configuration(this.conf));
+      ct = new CatalogTracker(connection);
+      ct.start();
+    } catch (InterruptedException e) {
+      // Rethrow as an IOE for now, until the code is reworked to tolerate
+      // InterruptedExceptions directly.
+      Thread.currentThread().interrupt();
+      throw new IOException("Interrupted", e);
+    }
+    return ct;
+  }
+
+  private void cleanupCatalogTracker(final CatalogTracker ct) {
+    ct.stop();
+    HConnectionManager.deleteConnection(ct.getConnection().getConfiguration(), true);
+  }
+
+  @Override
+  public void abort(String why, Throwable e) {
+    // Currently does nothing but throw the passed message and exception
+    throw new RuntimeException(why, e);
+  }
+
+  /** @return HConnection used by this object. */
+  public HConnection getConnection() {
+    return connection;
+  }
+
+  /**
+   * Get a connection to the currently set master.
+   * @return proxy connection to master server for this instance
+   * @throws MasterNotRunningException if the master is not running
+   * @throws ZooKeeperConnectionException if unable to connect to zookeeper
+   */
+  public HMasterInterface getMaster()
+  throws MasterNotRunningException, ZooKeeperConnectionException {
+    return this.connection.getMaster();
+  }
+
+  /** @return true if the master server is running
+   * @throws ZooKeeperConnectionException
+   * @throws MasterNotRunningException */
+  public boolean isMasterRunning()
+  throws MasterNotRunningException, ZooKeeperConnectionException {
+    return this.connection.isMasterRunning();
+  }
+
+  /**
+   * @param tableName Table to check.
+   * @return True if table exists already.
+   * @throws IOException 
+   */
+  public boolean tableExists(final String tableName)
+  throws IOException {
+    boolean b = false;
+    CatalogTracker ct = getCatalogTracker();
+    try {
+      b = MetaReader.tableExists(ct, tableName);
+    } finally {
+      cleanupCatalogTracker(ct);
+    }
+    return b;
+  }
+
+  /**
+   * @param tableName Table to check.
+   * @return True if table exists already.
+   * @throws IOException 
+   */
+  public boolean tableExists(final byte [] tableName)
+  throws IOException {
+    return tableExists(Bytes.toString(tableName));
+  }
+
+  /**
+   * List all the userspace tables.  In other words, scan the META table.
+   *
+   * If we wanted this to be really fast, we could implement a special
+   * catalog table that just contains table names and their descriptors.
+   * Right now, it only exists as part of the META table's region info.
+   *
+   * @return an array of HTableDescriptors
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HTableDescriptor[] listTables() throws IOException {
+    return this.connection.listTables();
+  }
+
+
+  /**
+   * Method for getting the tableDescriptor
+   * @param tableName as a byte []
+   * @return the tableDescriptor
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HTableDescriptor getTableDescriptor(final byte [] tableName)
+  throws IOException {
+    return this.connection.getHTableDescriptor(tableName);
+  }
+
+  private long getPauseTime(int tries) {
+    int triesCount = tries;
+    if (triesCount >= HConstants.RETRY_BACKOFF.length) {
+      triesCount = HConstants.RETRY_BACKOFF.length - 1;
+    }
+    return this.pause * HConstants.RETRY_BACKOFF[triesCount];
+  }
+
+  /**
+   * Creates a new table.
+   * Synchronous operation.
+   *
+   * @param desc table descriptor for table
+   *
+   * @throws IllegalArgumentException if the table name is reserved
+   * @throws MasterNotRunningException if master is not running
+   * @throws TableExistsException if table already exists (If concurrent
+   * threads, the table may have been created between test-for-existence
+   * and attempt-at-creation).
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void createTable(HTableDescriptor desc)
+  throws IOException {
+    createTable(desc, null);
+  }
+
+  /**
+   * Creates a new table with the specified number of regions.  The start key
+   * specified will become the end key of the first region of the table, and
+   * the end key specified will become the start key of the last region of the
+   * table (the first region has a null start key and the last region has a
+   * null end key).
+   *
+   * BigInteger math will be used to divide the key range specified into
+   * enough segments to make the required number of total regions.
+   *
+   * Synchronous operation.
+   *
+   * @param desc table descriptor for table
+   * @param startKey beginning of key range
+   * @param endKey end of key range
+   * @param numRegions the total number of regions to create
+   *
+   * @throws IllegalArgumentException if the table name is reserved
+   * @throws MasterNotRunningException if master is not running
+   * @throws TableExistsException if table already exists (if run concurrently
+   * with other threads, the table may have been created between the test for
+   * existence and the attempt at creation).
+   * @throws IOException
+   */
+  public void createTable(HTableDescriptor desc, byte [] startKey,
+      byte [] endKey, int numRegions)
+  throws IOException {
+    HTableDescriptor.isLegalTableName(desc.getName());
+    if(numRegions < 3) {
+      throw new IllegalArgumentException("Must create at least three regions");
+    } else if(Bytes.compareTo(startKey, endKey) >= 0) {
+      throw new IllegalArgumentException("Start key must be smaller than end key");
+    }
+    byte [][] splitKeys = Bytes.split(startKey, endKey, numRegions - 3);
+    if(splitKeys == null || splitKeys.length != numRegions - 1) {
+      throw new IllegalArgumentException("Unable to split key range into enough regions");
+    }
+    createTable(desc, splitKeys);
+  }
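+
+  /*
+   * Illustrative sketch only: creating a pre-split table with the key-range
+   * form above.  The table name, column family and key boundaries are
+   * hypothetical.
+   *
+   *   HBaseAdmin admin = new HBaseAdmin(conf);
+   *   HTableDescriptor desc = new HTableDescriptor("usertable");
+   *   desc.addFamily(new HColumnDescriptor("f"));
+   *   // Ten regions; the first region gets a null start key and the last
+   *   // region a null end key.
+   *   admin.createTable(desc, Bytes.toBytes("0000000000"),
+   *     Bytes.toBytes("9999999999"), 10);
+   */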
+
+  /**
+   * Creates a new table with an initial set of empty regions defined by the
+   * specified split keys.  The total number of regions created will be the
+   * number of split keys plus one (the first region has a null start key and
+   * the last region has a null end key).
+   * Synchronous operation.
+   *
+   * @param desc table descriptor for table
+   * @param splitKeys array of split keys for the initial regions of the table
+   *
+   * @throws IllegalArgumentException if the table name is reserved
+   * @throws MasterNotRunningException if master is not running
+   * @throws TableExistsException if table already exists (if run concurrently
+   * with other threads, the table may have been created between the test for
+   * existence and the attempt at creation).
+   * @throws IOException
+   */
+  public void createTable(HTableDescriptor desc, byte [][] splitKeys)
+  throws IOException {
+    HTableDescriptor.isLegalTableName(desc.getName());
+    if(splitKeys != null && splitKeys.length > 1) {
+      Arrays.sort(splitKeys, Bytes.BYTES_COMPARATOR);
+      // Verify there are no duplicate split keys
+      byte [] lastKey = null;
+      for(byte [] splitKey : splitKeys) {
+        if(lastKey != null && Bytes.equals(splitKey, lastKey)) {
+          throw new IllegalArgumentException("All split keys must be unique, " +
+            "found duplicate: " + Bytes.toStringBinary(splitKey) +
+            ", " + Bytes.toStringBinary(lastKey));
+        }
+        lastKey = splitKey;
+      }
+    }
+    createTableAsync(desc, splitKeys);
+    for (int tries = 0; tries < numRetries; tries++) {
+      try {
+        // Wait for new table to come on-line
+        connection.locateRegion(desc.getName(), HConstants.EMPTY_START_ROW);
+        break;
+
+      } catch (RegionException e) {
+        if (tries == numRetries - 1) {
+          // Ran out of tries
+          throw e;
+        }
+      }
+      try {
+        Thread.sleep(getPauseTime(tries));
+      } catch (InterruptedException e) {
+        // Just continue; ignore the interruption.
+      }
+    }
+  }
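+
+  /*
+   * Illustrative sketch only: creating a table from explicit split keys with
+   * the method above.  Three split keys yield four regions; names and keys
+   * are hypothetical.
+   *
+   *   byte [][] splits = new byte [][] {
+   *     Bytes.toBytes("g"), Bytes.toBytes("m"), Bytes.toBytes("t") };
+   *   admin.createTable(desc, splits);
+   *   // Regions: (null, "g"), ["g", "m"), ["m", "t"), ["t", null)
+   */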
+
+  /**
+   * Creates a new table but does not block and wait for it to come online.
+   * Asynchronous operation.
+   *
+   * @param desc table descriptor for table
+   *
+   * @throws IllegalArgumentException Bad table name.
+   * @throws MasterNotRunningException if master is not running
+   * @throws TableExistsException if table already exists (if run concurrently
+   * with other threads, the table may have been created between the test for
+   * existence and the attempt at creation).
+   * @throws IOException
+   */
+  public void createTableAsync(HTableDescriptor desc, byte [][] splitKeys)
+  throws IOException {
+    HTableDescriptor.isLegalTableName(desc.getName());
+    try {
+      getMaster().createTable(desc, splitKeys);
+    } catch (RemoteException e) {
+      throw e.unwrapRemoteException();
+    }
+  }
+
+  /**
+   * Deletes a table.
+   * Synchronous operation.
+   *
+   * @param tableName name of table to delete
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void deleteTable(final String tableName) throws IOException {
+    deleteTable(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Deletes a table.
+   * Synchronous operation.
+   *
+   * @param tableName name of table to delete
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void deleteTable(final byte [] tableName) throws IOException {
+    isMasterRunning();
+    HTableDescriptor.isLegalTableName(tableName);
+    HRegionLocation firstMetaServer = getFirstMetaServerForTable(tableName);
+    try {
+      getMaster().deleteTable(tableName);
+    } catch (RemoteException e) {
+      throw RemoteExceptionHandler.decodeRemoteException(e);
+    }
+    final int batchCount = this.conf.getInt("hbase.admin.scanner.caching", 10);
+    // Wait until all regions deleted
+    HRegionInterface server =
+      connection.getHRegionConnection(firstMetaServer.getServerAddress());
+    HRegionInfo info = new HRegionInfo();
+    for (int tries = 0; tries < (this.numRetries * this.retryLongerMultiplier); tries++) {
+      long scannerId = -1L;
+      try {
+        Scan scan = new Scan().addColumn(HConstants.CATALOG_FAMILY,
+          HConstants.REGIONINFO_QUALIFIER);
+        scannerId = server.openScanner(
+          firstMetaServer.getRegionInfo().getRegionName(), scan);
+        // Get a batch at a time.
+        Result [] values = server.next(scannerId, batchCount);
+        if (values == null || values.length == 0) {
+          break;
+        }
+        boolean found = false;
+        for (Result r : values) {
+          NavigableMap<byte[], byte[]> infoValues =
+              r.getFamilyMap(HConstants.CATALOG_FAMILY);
+          for (Map.Entry<byte[], byte[]> e : infoValues.entrySet()) {
+            if (Bytes.equals(e.getKey(), HConstants.REGIONINFO_QUALIFIER)) {
+              info = (HRegionInfo) Writables.getWritable(e.getValue(), info);
+              if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+                found = true;
+              } else {
+                found = false;
+                break;
+              }
+            }
+          }
+        }
+        if (!found) {
+          break;
+        }
+      } catch (IOException ex) {
+        if(tries == numRetries - 1) {           // no more tries left
+          if (ex instanceof RemoteException) {
+            ex = RemoteExceptionHandler.decodeRemoteException((RemoteException) ex);
+          }
+          throw ex;
+        }
+      } finally {
+        if (scannerId != -1L) {
+          try {
+            server.close(scannerId);
+          } catch (Exception ex) {
+            LOG.warn(ex);
+          }
+        }
+      }
+      try {
+        Thread.sleep(getPauseTime(tries));
+      } catch (InterruptedException e) {
+        // continue
+      }
+    }
+    // Delete cached information to prevent clients from using old locations
+    this.connection.clearRegionCache(tableName);
+    LOG.info("Deleted " + Bytes.toString(tableName));
+  }
+
+  public void enableTable(final String tableName)
+  throws IOException {
+    enableTable(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Enable a table.  May timeout.  Use {@link #enableTableAsync(byte[])}
+   * and {@link #isTableEnabled(byte[])} instead.
+   * @param tableName name of the table
+   * @throws IOException if a remote or network exception occurs
+   * @see #isTableEnabled(byte[])
+   * @see #disableTable(byte[])
+   * @see #enableTableAsync(byte[])
+   */
+  public void enableTable(final byte [] tableName)
+  throws IOException {
+    enableTableAsync(tableName);
+ 
+    // Wait until all regions are enabled
+    boolean enabled = false;
+    for (int tries = 0; tries < (this.numRetries * this.retryLongerMultiplier); tries++) {
+      enabled = isTableEnabled(tableName);
+      if (enabled) {
+        break;
+      }
+      long sleep = getPauseTime(tries);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Sleeping= " + sleep + "ms, waiting for all regions to be " +
+          "enabled in " + Bytes.toString(tableName));
+      }
+      try {
+        Thread.sleep(sleep);
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        // Do this conversion rather than let it out because do not want to
+        // change the method signature.
+        throw new IOException("Interrupted", e);
+      }
+    }
+    if (!enabled) {
+      throw new IOException("Unable to enable table " +
+        Bytes.toString(tableName));
+    }
+    LOG.info("Enabled table " + Bytes.toString(tableName));
+  }
+
+  public void enableTableAsync(final String tableName)
+  throws IOException {
+    enableTableAsync(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Brings a table on-line (enables it).  This method returns immediately,
+   * though enabling a table may take some time to complete, especially if the
+   * table is large (all regions are opened as part of the enabling process).
+   * Check {@link #isTableEnabled(byte[])} to learn when the table is fully
+   * online.  If the table is taking too long to come online, check the server logs.
+   * @param tableName
+   * @throws IOException
+   * @since 0.90.0
+   */
+  public void enableTableAsync(final byte [] tableName)
+  throws IOException {
+    isMasterRunning();
+    try {
+      getMaster().enableTable(tableName);
+    } catch (RemoteException e) {
+      throw e.unwrapRemoteException();
+    }
+    LOG.info("Started enable of " + Bytes.toString(tableName));
+  }
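+
+  /*
+   * Illustrative sketch only: the async-enable-then-poll pattern suggested in
+   * the javadoc above, for callers that want to control their own timeout.
+   *
+   *   admin.enableTableAsync(tableName);
+   *   while (!admin.isTableEnabled(tableName)) {
+   *     Thread.sleep(1000);   // poll; real code should bound the wait
+   *   }
+   */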
+
+  public void disableTableAsync(final String tableName) throws IOException {
+    disableTableAsync(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Starts the disable of a table.  If it is being served, the master
+   * will tell the servers to stop serving it.  This method returns immediately.
+   * The disable of a table can take some time if the table is large (all
+   * regions are closed as part of the table disable operation).
+   * Call {@link #isTableDisabled(byte[])} to check when the disable completes.
+   * If the table is taking too long to go offline, check the server logs.
+   * @param tableName name of table
+   * @throws IOException if a remote or network exception occurs
+   * @see #isTableDisabled(byte[])
+   * @see #isTableEnabled(byte[])
+   * @since 0.90.0
+   */
+  public void disableTableAsync(final byte [] tableName) throws IOException {
+    isMasterRunning();
+    try {
+      getMaster().disableTable(tableName);
+    } catch (RemoteException e) {
+      throw e.unwrapRemoteException();
+    }
+    LOG.info("Started disable of " + Bytes.toString(tableName));
+  }
+
+  public void disableTable(final String tableName)
+  throws IOException {
+    disableTable(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Disable table and wait on completion.  May timeout eventually.  Use
+   * {@link #disableTableAsync(byte[])} and {@link #isTableDisabled(String)}
+   * instead.
+   * @param tableName
+   * @throws IOException
+   */
+  public void disableTable(final byte [] tableName)
+  throws IOException {
+    disableTableAsync(tableName);
+    // Wait until table is disabled
+    boolean disabled = false;
+    for (int tries = 0; tries < (this.numRetries * this.retryLongerMultiplier); tries++) {
+      disabled = isTableDisabled(tableName);
+      if (disabled) {
+        break;
+      }
+      long sleep = getPauseTime(tries);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Sleeping= " + sleep + "ms, waiting for all regions to be " +
+          "disabled in " + Bytes.toString(tableName));
+      }
+      try {
+        Thread.sleep(sleep);
+      } catch (InterruptedException e) {
+        // Do this conversion rather than let it out because do not want to
+        // change the method signature.
+        Thread.currentThread().interrupt();
+        throw new IOException("Interrupted", e);
+      }
+    }
+    if (!disabled) {
+      throw new RegionException("Retries exhausted, it took too long to wait"+
+        " for the table " + Bytes.toString(tableName) + " to be disabled.");
+    }
+    LOG.info("Disabled " + Bytes.toString(tableName));
+  }
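+
+  /*
+   * Illustrative sketch only: a common teardown sequence.  A table generally
+   * must be disabled before it can be deleted.
+   *
+   *   admin.disableTable("mytable");   // blocks until all regions are offline
+   *   admin.deleteTable("mytable");
+   */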
+
+  /**
+   * @param tableName name of table to check
+   * @return true if table is on-line
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableEnabled(String tableName) throws IOException {
+    return isTableEnabled(Bytes.toBytes(tableName));
+  }
+  /**
+   * @param tableName name of table to check
+   * @return true if table is on-line
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableEnabled(byte[] tableName) throws IOException {
+    return connection.isTableEnabled(tableName);
+  }
+
+  /**
+   * @param tableName name of table to check
+   * @return true if table is off-line
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableDisabled(final String tableName) throws IOException {
+    return isTableDisabled(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * @param tableName name of table to check
+   * @return true if table is off-line
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableDisabled(byte[] tableName) throws IOException {
+    return connection.isTableDisabled(tableName);
+  }
+
+  /**
+   * @param tableName name of table to check
+   * @return true if all regions of the table are available
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableAvailable(byte[] tableName) throws IOException {
+    return connection.isTableAvailable(tableName);
+  }
+
+  /**
+   * @param tableName name of table to check
+   * @return true if all regions of the table are available
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableAvailable(String tableName) throws IOException {
+    return connection.isTableAvailable(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Add a column to an existing table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of the table to add column to
+   * @param column column descriptor of column to be added
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void addColumn(final String tableName, HColumnDescriptor column)
+  throws IOException {
+    addColumn(Bytes.toBytes(tableName), column);
+  }
+
+  /**
+   * Add a column to an existing table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of the table to add column to
+   * @param column column descriptor of column to be added
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void addColumn(final byte [] tableName, HColumnDescriptor column)
+  throws IOException {
+    HTableDescriptor.isLegalTableName(tableName);
+    try {
+      getMaster().addColumn(tableName, column);
+    } catch (RemoteException e) {
+      throw RemoteExceptionHandler.decodeRemoteException(e);
+    }
+  }
+
+  /**
+   * Delete a column from a table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of table
+   * @param columnName name of column to be deleted
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void deleteColumn(final String tableName, final String columnName)
+  throws IOException {
+    deleteColumn(Bytes.toBytes(tableName), Bytes.toBytes(columnName));
+  }
+
+  /**
+   * Delete a column from a table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of table
+   * @param columnName name of column to be deleted
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void deleteColumn(final byte [] tableName, final byte [] columnName)
+  throws IOException {
+    try {
+      getMaster().deleteColumn(tableName, columnName);
+    } catch (RemoteException e) {
+      throw RemoteExceptionHandler.decodeRemoteException(e);
+    }
+  }
+
+  /**
+   * Modify an existing column family on a table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of table
+   * @param columnName name of column to be modified
+   * @param descriptor new column descriptor to use
+   * @throws IOException if a remote or network exception occurs
+   * @deprecated The <code>columnName</code> is redundant. Use {@link #modifyColumn(String, HColumnDescriptor)}
+   */
+  public void modifyColumn(final String tableName, final String columnName,
+      HColumnDescriptor descriptor)
+  throws IOException {
+    modifyColumn(tableName, descriptor);
+  }
+
+  /**
+   * Modify an existing column family on a table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of table
+   * @param descriptor new column descriptor to use
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void modifyColumn(final String tableName, HColumnDescriptor descriptor)
+  throws IOException {
+    modifyColumn(Bytes.toBytes(tableName), descriptor);
+  }
+
+  /**
+   * Modify an existing column family on a table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of table
+   * @param columnName name of column to be modified
+   * @param descriptor new column descriptor to use
+   * @throws IOException if a remote or network exception occurs
+   * @deprecated The <code>columnName</code> is redundant. Use {@link #modifyColumn(byte[], HColumnDescriptor)}
+   */
+  public void modifyColumn(final byte [] tableName, final byte [] columnName,
+    HColumnDescriptor descriptor)
+  throws IOException {
+    modifyColumn(tableName, descriptor);
+  }
+
+  /**
+   * Modify an existing column family on a table.
+   * Asynchronous operation.
+   *
+   * @param tableName name of table
+   * @param descriptor new column descriptor to use
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void modifyColumn(final byte [] tableName, HColumnDescriptor descriptor)
+  throws IOException {
+    try {
+      getMaster().modifyColumn(tableName, descriptor);
+    } catch (RemoteException re) {
+      // Convert RE exceptions in here; client shouldn't have to deal with them,
+      // at least w/ the type of exceptions that come out of this method:
+      // TableNotFoundException, etc.
+      throw RemoteExceptionHandler.decodeRemoteException(re);
+    }
+  }
+
+  /**
+   * Close a region. For expert-admins.  Runs close on the regionserver.  The
+   * master will not be informed of the close.
+   * @param regionname region name to close
+   * @param hostAndPort If supplied, we'll use this location rather than
+   * the one currently in <code>.META.</code>
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void closeRegion(final String regionname, final String hostAndPort)
+  throws IOException {
+    closeRegion(Bytes.toBytes(regionname), hostAndPort);
+  }
+
+  /**
+   * Close a region.  For expert-admins.  Runs close on the regionserver.  The
+   * master will not be informed of the close.
+   * @param regionname region name to close
+   * @param hostAndPort If supplied, we'll use this location rather than
+   * the one currently in <code>.META.</code>
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void closeRegion(final byte [] regionname, final String hostAndPort)
+  throws IOException {
+    CatalogTracker ct = getCatalogTracker();
+    try {
+      if (hostAndPort != null) {
+        HServerAddress hsa = new HServerAddress(hostAndPort);
+        Pair<HRegionInfo, HServerAddress> pair =
+          MetaReader.getRegion(ct, regionname);
+        if (pair == null || pair.getSecond() == null) {
+          LOG.info("No server in .META. for " +
+            Bytes.toString(regionname) + "; pair=" + pair);
+        } else {
+          closeRegion(hsa, pair.getFirst());
+        }
+      } else {
+        Pair<HRegionInfo, HServerAddress> pair =
+          MetaReader.getRegion(ct, regionname);
+        if (pair == null || pair.getSecond() == null) {
+          LOG.info("No server in .META. for " +
+            Bytes.toString(regionname) + "; pair=" + pair);
+        } else {
+          closeRegion(pair.getSecond(), pair.getFirst());
+        }
+      }
+    } finally {
+      cleanupCatalogTracker(ct);
+    }
+  }
+
+  private void closeRegion(final HServerAddress hsa, final HRegionInfo hri)
+  throws IOException {
+    HRegionInterface rs = this.connection.getHRegionConnection(hsa);
+    // Close the region without updating zk state.
+    rs.closeRegion(hri, false);
+  }
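+
+  /*
+   * Illustrative sketch only: closing a region directly on its hosting
+   * regionserver, bypassing the master.  The full region name below is a
+   * sample, and the "host:port" form for hostAndPort is an assumption.
+   *
+   *   admin.closeRegion(
+   *     "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.",
+   *     "host187.example.com:60020");
+   */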
+
+  /**
+   * Flush a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to flush
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void flush(final String tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    flush(Bytes.toBytes(tableNameOrRegionName));
+  }
+
+  /**
+   * Flush a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to flush
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void flush(final byte [] tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    boolean isRegionName = isRegionName(tableNameOrRegionName);
+    CatalogTracker ct = getCatalogTracker();
+    try {
+      if (isRegionName) {
+        Pair<HRegionInfo, HServerAddress> pair =
+          MetaReader.getRegion(ct, tableNameOrRegionName);
+        if (pair == null || pair.getSecond() == null) {
+          LOG.info("No server in .META. for " +
+            Bytes.toString(tableNameOrRegionName) + "; pair=" + pair);
+        } else {
+          flush(pair.getSecond(), pair.getFirst());
+        }
+      } else {
+        List<Pair<HRegionInfo, HServerAddress>> pairs =
+          MetaReader.getTableRegionsAndLocations(ct,
+              Bytes.toString(tableNameOrRegionName));
+        for (Pair<HRegionInfo, HServerAddress> pair: pairs) {
+          if (pair.getSecond() == null) continue;
+          flush(pair.getSecond(), pair.getFirst());
+        }
+      }
+    } finally {
+      cleanupCatalogTracker(ct);
+    }
+  }
+
+  private void flush(final HServerAddress hsa, final HRegionInfo hri)
+  throws IOException {
+    HRegionInterface rs = this.connection.getHRegionConnection(hsa);
+    rs.flushRegion(hri);
+  }
+
+  /**
+   * Compact a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to compact
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void compact(final String tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    compact(Bytes.toBytes(tableNameOrRegionName));
+  }
+
+  /**
+   * Compact a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to compact
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void compact(final byte [] tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    compact(tableNameOrRegionName, false);
+  }
+
+  /**
+   * Major compact a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to major compact
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void majorCompact(final String tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    majorCompact(Bytes.toBytes(tableNameOrRegionName));
+  }
+
+  /**
+   * Major compact a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to major compact
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void majorCompact(final byte [] tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    compact(tableNameOrRegionName, true);
+  }
+
+  /**
+   * Compact a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to compact
+   * @param major True if we are to do a major compaction.
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  private void compact(final byte [] tableNameOrRegionName, final boolean major)
+  throws IOException, InterruptedException {
+    CatalogTracker ct = getCatalogTracker();
+    try {
+      if (isRegionName(tableNameOrRegionName)) {
+        Pair<HRegionInfo, HServerAddress> pair =
+          MetaReader.getRegion(ct, tableNameOrRegionName);
+        if (pair == null || pair.getSecond() == null) {
+          LOG.info("No server in .META. for " +
+            Bytes.toString(tableNameOrRegionName) + "; pair=" + pair);
+        } else {
+          compact(pair.getSecond(), pair.getFirst(), major);
+        }
+      } else {
+        List<Pair<HRegionInfo, HServerAddress>> pairs =
+          MetaReader.getTableRegionsAndLocations(ct,
+              Bytes.toString(tableNameOrRegionName));
+        for (Pair<HRegionInfo, HServerAddress> pair: pairs) {
+          if (pair.getSecond() == null) continue;
+          compact(pair.getSecond(), pair.getFirst(), major);
+        }
+      }
+    } finally {
+      cleanupCatalogTracker(ct);
+    }
+  }
+
+  private void compact(final HServerAddress hsa, final HRegionInfo hri,
+      final boolean major)
+  throws IOException {
+    HRegionInterface rs = this.connection.getHRegionConnection(hsa);
+    rs.compactRegion(hri, major);
+  }
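+
+  /*
+   * Illustrative sketch only: the flush/compact wrappers above take either a
+   * table name or a full region name.  The table name is hypothetical.
+   *
+   *   admin.flush("mytable");          // flush every region of the table
+   *   admin.compact("mytable");        // request minor compactions
+   *   admin.majorCompact("mytable");   // request major compactions
+   */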
+
+  /**
+   * Move the region <code>r</code> to <code>dest</code>.
+   * @param encodedRegionName The encoded region name; i.e. the hash that makes
+   * up the region name suffix: e.g. if regionname is
+   * <code>TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.</code>,
+   * then the encoded region name is: <code>527db22f95c8a9e0116f0cc13c680396</code>.
+   * @param destServerName The servername of the destination regionserver.  If
+   * passed the empty byte array we'll assign to a random server.  A server name
+   * is made of host, port and startcode.  Here is an example:
+   * <code> host187.example.com,60020,1289493121758</code>.
+   * @throws UnknownRegionException Thrown if we can't find a region named
+   * <code>encodedRegionName</code>
+   * @throws ZooKeeperConnectionException 
+   * @throws MasterNotRunningException 
+   */
+  public void move(final byte [] encodedRegionName, final byte [] destServerName)
+  throws UnknownRegionException, MasterNotRunningException, ZooKeeperConnectionException {
+    getMaster().move(encodedRegionName, destServerName);
+  }
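+
+  /*
+   * Illustrative sketch only, reusing the sample values from the javadoc above:
+   *
+   *   admin.move(Bytes.toBytes("527db22f95c8a9e0116f0cc13c680396"),
+   *     Bytes.toBytes("host187.example.com,60020,1289493121758"));
+   *   // Pass an empty byte array as the destination to pick a random server.
+   */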
+
+  /**
+   * @param regionName Region name to assign.
+   * @param force True to force assign.
+   * @throws MasterNotRunningException
+   * @throws ZooKeeperConnectionException
+   * @throws IOException
+   */
+  public void assign(final byte [] regionName, final boolean force)
+  throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
+    getMaster().assign(regionName, force);
+  }
+
+  /**
+   * Unassign a region from current hosting regionserver.  Region will then be
+   * assigned to a regionserver chosen at random.  Region could be reassigned
+   * back to the same server.  Use {@link #move(byte[], byte[])} if you want
+   * to control the region movement.
+   * @param regionName Region to unassign. Will clear any existing RegionPlan
+   * if one found.
+   * @param force If true, force unassign (Will remove region from
+   * regions-in-transition too if present).
+   * @throws MasterNotRunningException
+   * @throws ZooKeeperConnectionException
+   * @throws IOException
+   */
+  public void unassign(final byte [] regionName, final boolean force)
+  throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
+    getMaster().unassign(regionName, force);
+  }
+
+  /**
+   * Turn the load balancer on or off.
+   * @param b If true, enable balancer. If false, disable balancer.
+   * @return Previous balancer value
+   */
+  public boolean balanceSwitch(final boolean b)
+  throws MasterNotRunningException, ZooKeeperConnectionException {
+    return getMaster().balanceSwitch(b);
+  }
+
+  /**
+   * Invoke the balancer.  Will run the balancer and, if there are regions to
+   * move, will go ahead and do the reassignments.  The balancer may decline to
+   * run for various reasons; check the master logs.
+   * @return True if balancer ran, false otherwise.
+   */
+  public boolean balancer()
+  throws MasterNotRunningException, ZooKeeperConnectionException {
+    return getMaster().balance();
+  }
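+
+  /*
+   * Illustrative sketch only: pausing the balancer around a set of manual
+   * region moves and then restoring its previous state.
+   *
+   *   boolean wasOn = admin.balanceSwitch(false);   // returns previous value
+   *   // ... issue admin.move(...) calls here ...
+   *   admin.balanceSwitch(wasOn);
+   *   admin.balancer();                             // optionally run a pass now
+   */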
+
+  /**
+   * Split a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to split
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void split(final String tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    split(Bytes.toBytes(tableNameOrRegionName));
+  }
+
+  /**
+   * Split a table or an individual region.
+   * Asynchronous operation.
+   *
+   * @param tableNameOrRegionName table or region to split
+   * @throws IOException if a remote or network exception occurs
+   * @throws InterruptedException 
+   */
+  public void split(final byte [] tableNameOrRegionName)
+  throws IOException, InterruptedException {
+    CatalogTracker ct = getCatalogTracker();
+    try {
+      if (isRegionName(tableNameOrRegionName)) {
+        // It's a possible region name.
+        Pair<HRegionInfo, HServerAddress> pair =
+          MetaReader.getRegion(ct, tableNameOrRegionName);
+        if (pair == null || pair.getSecond() == null) {
+          LOG.info("No server in .META. for " +
+            Bytes.toString(tableNameOrRegionName) + "; pair=" + pair);
+        } else {
+          split(pair.getSecond(), pair.getFirst());
+        }
+      } else {
+        List<Pair<HRegionInfo, HServerAddress>> pairs =
+          MetaReader.getTableRegionsAndLocations(ct,
+              Bytes.toString(tableNameOrRegionName));
+        for (Pair<HRegionInfo, HServerAddress> pair: pairs) {
+          // May not be a server for a particular row
+          if (pair.getSecond() == null) continue;
+          split(pair.getSecond(), pair.getFirst());
+        }
+      }
+    } finally {
+      cleanupCatalogTracker(ct);
+    }
+  }
+
+  private void split(final HServerAddress hsa, final HRegionInfo hri)
+  throws IOException {
+    HRegionInterface rs = this.connection.getHRegionConnection(hsa);
+    rs.splitRegion(hri);
+  }
+
+  /**
+   * Modify an existing table; a more IRB (shell) friendly version.
+   * Asynchronous operation.  This means that it may be a while before the
+   * schema change is applied across all regions of the table.
+   *
+   * @param tableName name of table.
+   * @param htd modified description of the table
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void modifyTable(final byte [] tableName, HTableDescriptor htd)
+  throws IOException {
+    try {
+      getMaster().modifyTable(tableName, htd);
+    } catch (RemoteException re) {
+      // Convert RE exceptions in here; client shouldn't have to deal with them,
+      // at least w/ the type of exceptions that come out of this method:
+      // TableNotFoundException, etc.
+      throw RemoteExceptionHandler.decodeRemoteException(re);
+    }
+  }
+
+  /**
+   * @param tableNameOrRegionName Name of a table or name of a region.
+   * @return True if <code>tableNameOrRegionName</code> is <em>possibly</em> a
+   * region name; false if it is a verified table name (we check by calling
+   * {@link #tableExists(byte[])}).  A null argument raises an exception.
+   * @throws IOException 
+   */
+  private boolean isRegionName(final byte [] tableNameOrRegionName)
+  throws IOException {
+    if (tableNameOrRegionName == null) {
+      throw new IllegalArgumentException("Pass a table name or region name");
+    }
+    return !tableExists(tableNameOrRegionName);
+  }
+
+  /**
+   * Shuts down the HBase cluster
+   * @throws IOException if a remote or network exception occurs
+   */
+  public synchronized void shutdown() throws IOException {
+    isMasterRunning();
+    try {
+      getMaster().shutdown();
+    } catch (RemoteException e) {
+      throw RemoteExceptionHandler.decodeRemoteException(e);
+    }
+  }
+
+  /**
+   * Shuts down the current HBase master only.
+   * Does not shutdown the cluster.
+   * @see #shutdown()
+   * @throws IOException if a remote or network exception occurs
+   */
+  public synchronized void stopMaster() throws IOException {
+    isMasterRunning();
+    try {
+      getMaster().stopMaster();
+    } catch (RemoteException e) {
+      throw RemoteExceptionHandler.decodeRemoteException(e);
+    }
+  }
+
+  /**
+   * Stop the designated regionserver.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public synchronized void stopRegionServer(final HServerAddress hsa)
+  throws IOException {
+    HRegionInterface rs = this.connection.getHRegionConnection(hsa);
+    rs.stop("Called by admin client " + this.connection.toString());
+  }
+
+  /**
+   * @return cluster status
+   * @throws IOException if a remote or network exception occurs
+   */
+  public ClusterStatus getClusterStatus() throws IOException {
+    return getMaster().getClusterStatus();
+  }
+
+  private HRegionLocation getFirstMetaServerForTable(final byte [] tableName)
+  throws IOException {
+    return connection.locateRegion(HConstants.META_TABLE_NAME,
+      HRegionInfo.createRegionName(tableName, null, HConstants.NINES, false));
+  }
+
+  /**
+   * @return Configuration used by the instance.
+   */
+  public Configuration getConfiguration() {
+    return this.conf;
+  }
+
+  /**
+   * Check to see if HBase is running. Throw an exception if not.
+   *
+   * @param conf system configuration
+   * @throws MasterNotRunningException if the master is not running
+   * @throws ZooKeeperConnectionException if unable to connect to zookeeper
+   */
+  public static void checkHBaseAvailable(Configuration conf)
+  throws MasterNotRunningException, ZooKeeperConnectionException {
+    Configuration copyOfConf = HBaseConfiguration.create(conf);
+    copyOfConf.setInt("hbase.client.retries.number", 1);
+    new HBaseAdmin(copyOfConf);
+  }
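+
+  /*
+   * Illustrative sketch only: probing cluster availability before doing any
+   * real work.
+   *
+   *   try {
+   *     HBaseAdmin.checkHBaseAvailable(conf);
+   *   } catch (MasterNotRunningException e) {
+   *     // the cluster (or its master) is not up
+   *   } catch (ZooKeeperConnectionException e) {
+   *     // cannot reach the ZooKeeper ensemble
+   *   }
+   */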
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HConnection.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
new file mode 100644
index 0000000..ed2f554
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HConnection.java
@@ -0,0 +1,299 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+
+/**
+ * Cluster connection.  Hosts a connection to the ZooKeeper ensemble and
+ * thereafter into the HBase cluster.  Knows how to locate regions out on the cluster,
+ * keeps a cache of locations and then knows how to recalibrate after they move.
+ * {@link HConnectionManager} manages instances of this class.
+ *
+ * <p>HConnections are used by {@link HTable} mostly but also by
+ * {@link HBaseAdmin}, {@link CatalogTracker},
+ * and {@link ZooKeeperWatcher}.  HConnection instances can be shared.  Sharing
+ * is usually what you want because rather than each HConnection instance
+ * having to do its own discovery of regions out on the cluster, instead, all
+ * clients get to share the one cache of locations.  Sharing makes cleanup of
+ * HConnections awkward.  See {@link HConnectionManager} for cleanup
+ * discussion.
+ *
+ * @see HConnectionManager
+ */
+public interface HConnection extends Abortable {
+  /**
+   * @return Configuration instance being used by this HConnection instance.
+   */
+  public Configuration getConfiguration();
+
+  /**
+   * Retrieve ZooKeeperWatcher used by this connection.
+   * @return ZooKeeperWatcher handle being used by the connection.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public ZooKeeperWatcher getZooKeeperWatcher() throws IOException;
+
+  /**
+   * @return proxy connection to master server for this instance
+   * @throws MasterNotRunningException if the master is not running
+   * @throws ZooKeeperConnectionException if unable to connect to zookeeper
+   */
+  public HMasterInterface getMaster()
+  throws MasterNotRunningException, ZooKeeperConnectionException;
+
+  /** @return - true if the master server is running */
+  public boolean isMasterRunning()
+  throws MasterNotRunningException, ZooKeeperConnectionException;
+
+  /**
+   * A table can have isTableEnabled == false and isTableDisabled == false at
+   * the same time. This happens when a table has a lot of regions that must
+   * still be processed (for example, partway through an enable or disable).
+   * @param tableName table name
+   * @return true if the table is enabled, false otherwise
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableEnabled(byte[] tableName) throws IOException;
+
+  /**
+   * @param tableName table name
+   * @return true if the table is disabled, false otherwise
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableDisabled(byte[] tableName) throws IOException;
+
+  /**
+   * @param tableName table name
+   * @return true if all regions of the table are available, false otherwise
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableAvailable(byte[] tableName) throws IOException;
+
+  /**
+   * List all the userspace tables.  In other words, scan the META table.
+   *
+   * If we wanted this to be really fast, we could implement a special
+   * catalog table that just contains table names and their descriptors.
+   * Right now, it only exists as part of the META table's region info.
+   *
+   * @return an array of HTableDescriptors
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HTableDescriptor[] listTables() throws IOException;
+
+  /**
+   * @param tableName table name
+   * @return table metadata
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HTableDescriptor getHTableDescriptor(byte[] tableName)
+  throws IOException;
+
+  /**
+   * Find the location of the region of <i>tableName</i> that <i>row</i>
+   * lives in.
+   * @param tableName name of the table <i>row</i> is in
+   * @param row row key you're trying to find the region of
+   * @return HRegionLocation that describes where to find the region in
+   * question
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionLocation locateRegion(final byte [] tableName,
+      final byte [] row)
+  throws IOException;
+
+  /**
+   * Allows flushing the region cache.
+   */
+  public void clearRegionCache();
+
+  /**
+   * Allows flushing the region cache of all locations that pertain to
+   * <code>tableName</code>
+   * @param tableName Name of the table whose regions we are to remove from
+   * cache.
+   */
+  public void clearRegionCache(final byte [] tableName);
+
+  /**
+   * Find the location of the region of <i>tableName</i> that <i>row</i>
+   * lives in, ignoring any value that might be in the cache.
+   * @param tableName name of the table <i>row</i> is in
+   * @param row row key you're trying to find the region of
+   * @return HRegionLocation that describes where to find the region in
+   * question
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionLocation relocateRegion(final byte [] tableName,
+      final byte [] row)
+  throws IOException;
+
+  /**
+   * Gets the location of the region of <i>regionName</i>.
+   * @param regionName name of the region to locate
+   * @return HRegionLocation that describes where to find the region in
+   * question
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionLocation locateRegion(final byte [] regionName)
+  throws IOException;
+
+  /**
+   * Gets the locations of all regions in the specified table, <i>tableName</i>.
+   * @param tableName table to get regions of
+   * @return list of region locations for all regions of table
+   * @throws IOException
+   */
+  public List<HRegionLocation> locateRegions(byte[] tableName)
+  throws IOException;
+
+  /**
+   * Establishes a connection to the region server at the specified address.
+   * @param regionServer - the server to connect to
+   * @return proxy for HRegionServer
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionInterface getHRegionConnection(HServerAddress regionServer)
+  throws IOException;
+
+  /**
+   * Establishes a connection to the region server at the specified address.
+   * @param regionServer - the server to connect to
+   * @param getMaster - do we check if master is alive
+   * @return proxy for HRegionServer
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionInterface getHRegionConnection(
+      HServerAddress regionServer, boolean getMaster)
+  throws IOException;
+
+  /**
+   * Find region location hosting passed row
+   * @param tableName table name
+   * @param row Row to find.
+   * @param reload If true, do not use the cache and look the location up
+   * again; otherwise use the cached value if present.
+   * @return Location of row.
+   * @throws IOException if a remote or network exception occurs
+   */
+  HRegionLocation getRegionLocation(byte [] tableName, byte [] row,
+    boolean reload)
+  throws IOException;
+
+  /**
+   * Pass in a ServerCallable with your particular bit of logic defined and
+   * this method will manage the process of doing retries with timed waits
+   * and refinds of missing regions.
+   *
+   * @param <T> the type of the return value
+   * @param callable callable to run
+   * @return an object of type T
+   * @throws IOException if a remote or network exception occurs
+   * @throws RuntimeException other unspecified error
+   */
+  public <T> T getRegionServerWithRetries(ServerCallable<T> callable)
+  throws IOException, RuntimeException;
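+
+  /*
+   * Illustrative sketch only, assuming ServerCallable exposes the resolved
+   * regionserver proxy and region location to its call() body as the shipped
+   * client code does:
+   *
+   *   Result r = connection.getRegionServerWithRetries(
+   *     new ServerCallable<Result>(connection, tableName, row) {
+   *       public Result call() throws IOException {
+   *         return server.get(location.getRegionInfo().getRegionName(),
+   *           new Get(row));
+   *       }
+   *     });
+   */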
+
+  /**
+   * Pass in a ServerCallable with your particular bit of logic defined and
+   * this method will pass it to the defined region server.
+   * @param <T> the type of the return value
+   * @param callable callable to run
+   * @return an object of type T
+   * @throws IOException if a remote or network exception occurs
+   * @throws RuntimeException other unspecified error
+   */
+  public <T> T getRegionServerWithoutRetries(ServerCallable<T> callable)
+  throws IOException, RuntimeException;
+
+  /**
+   * Process a mixed batch of Get, Put and Delete actions. All actions for a
+   * RegionServer are forwarded in one RPC call.
+   *
+   * @param actions The collection of actions.
+   * @param tableName Name of the hbase table
+   * @param pool thread pool for parallel execution
+   * @param results An empty array, same size as the list. If an exception is
+   * thrown, you can test it for partial results and determine which actions
+   * processed successfully.
+   * @throws IOException if there are problems talking to META. Per-item
+   * exceptions are stored in the results array.
+   */
+  public void processBatch(List<Row> actions, final byte[] tableName,
+      ExecutorService pool, Object[] results)
+      throws IOException, InterruptedException;
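+
+  /*
+   * Illustrative sketch only: submitting a mixed batch of mutations.  Row keys,
+   * family/qualifier and pool size are hypothetical.
+   *
+   *   List<Row> actions = new ArrayList<Row>();
+   *   actions.add(new Put(row1).add(family, qualifier, value));
+   *   actions.add(new Delete(row2));
+   *   Object[] results = new Object[actions.size()];
+   *   ExecutorService pool = Executors.newFixedThreadPool(4);
+   *   connection.processBatch(actions, tableName, pool, results);
+   */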
+
+  /**
+   * Process a batch of Puts.
+   *
+   * @param list The collection of actions. The list is mutated: all successful Puts
+   * are removed from the list.
+   * @param tableName Name of the hbase table
+   * @param pool Thread pool for parallel execution
+   * @throws IOException
+   * @deprecated Use HConnectionManager::processBatch instead.
+   */
+  public void processBatchOfPuts(List<Put> list,
+                                 final byte[] tableName, ExecutorService pool)
+      throws IOException;
+
+  /**
+   * Enable or disable region cache prefetch for the table. The setting is
+   * applied to all HTable instances for the given table within this
+   * connection. By default, the cache prefetch is enabled.
+   * @param tableName name of table to configure.
+   * @param enable Set to true to enable region cache prefetch.
+   */
+  public void setRegionCachePrefetch(final byte[] tableName,
+      final boolean enable);
+
+  /**
+   * Check whether region cache prefetch is enabled or not.
+   * @param tableName name of table to check
+   * @return true if the table's region cache prefetch is enabled,
+   * false otherwise.
+   */
+  public boolean getRegionCachePrefetch(final byte[] tableName);
+
+  /**
+   * Load the region map and warm up the global region cache for the table.
+   * @param tableName name of the table whose region cache is to be pre-warmed.
+   * @param regions a region map.
+   */
+  public void prewarmRegionCache(final byte[] tableName,
+      final Map<HRegionInfo, HServerAddress> regions);
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
new file mode 100644
index 0000000..fe13655
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
@@ -0,0 +1,1337 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeSet;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.CopyOnWriteArraySet;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MasterAddressTracker;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.SoftValueSortedMap;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.RootRegionTracker;
+import org.apache.hadoop.hbase.zookeeper.ZKTable;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * A non-instantiable class that manages {@link HConnection}s.
+ * This class has a static Map of {@link HConnection} instances keyed by
+ * {@link Configuration}; all invocations of {@link #getConnection(Configuration)}
+ * that pass the same {@link Configuration} instance will be returned the same
+ * {@link  HConnection} instance (Adding properties to a Configuration
+ * instance does not change its object identity).  Sharing {@link HConnection}
+ * instances is usually what you want; all clients of the {@link HConnection}
+ * instances share the HConnections' cache of Region locations rather than each
+ * having to discover for itself the location of meta, root, etc.  It makes
+ * sense for the likes of the HTable pool class {@link HTablePool}, for
+ * instance.  (If you are concerned that a single {@link HConnection} is
+ * insufficient for sharing amongst clients in, say, a heavily multithreaded
+ * environment, in practice this has not proven to be an issue.  Besides,
+ * {@link HConnection} is implemented atop Hadoop RPC and, as of this writing,
+ * Hadoop RPC keeps one connection per cluster member, exclusively.)
+ *
+ * <p>But sharing connections
+ * makes clean up of {@link HConnection} instances a little awkward.  Currently,
+ * clients cleanup by calling
+ * {@link #deleteConnection(Configuration, boolean)}.  This will shutdown the
+ * zookeeper connection the HConnection was using and clean up all
+ * HConnection resources as well as stopping proxies to servers out on the
+ * cluster. Not running the cleanup will not end the world; it will just slow
+ * shutdown a little and spew some "zookeeper connection failed" messages into
+ * the log.  Running the cleanup on an {@link HConnection} that is subsequently
+ * used by another client will cause breakage, so be careful when running
+ * cleanup.
+ * <p>To create a {@link HConnection} that is not shared by others, you can
+ * create a new {@link Configuration} instance, pass this new instance to
+ * {@link #getConnection(Configuration)}, and then when done, close it up by
+ * doing something like the following:
+ * <pre>
+ * {@code
+ * Configuration newConfig = new Configuration(originalConf);
+ * HConnection connection = HConnectionManager.getConnection(newConfig);
+ * // Use the connection to your hearts' delight and then when done...
+ * HConnectionManager.deleteConnection(newConfig, true);
+ * }
+ * </pre>
+ * <p>Cleanup used to be done inside a shutdown hook.  On startup we would
+ * register a shutdown hook that called {@link #deleteAllConnections(boolean)}
+ * on the way out, but the order in which shutdown hooks run is not defined,
+ * which was problematic for clients of HConnection that wanted to register
+ * their own shutdown hooks.  So we removed ours, though this shifts the onus
+ * for cleanup to the client.
+ */
+@SuppressWarnings("serial")
+public class HConnectionManager {
+  static final int MAX_CACHED_HBASE_INSTANCES = 31;
+
+  // An LRU Map of Configuration hashcode -> TableServers. We cap the number of
+  // instances at 31.  The zk default max connections to the ensemble from one
+  // client is 30, so we should run into zk issues before hitting this value of 31.
+  private static final Map<Configuration, HConnectionImplementation> HBASE_INSTANCES =
+    new LinkedHashMap<Configuration, HConnectionImplementation>
+      ((int) (MAX_CACHED_HBASE_INSTANCES/0.75F)+1, 0.75F, true) {
+      @Override
+      protected boolean removeEldestEntry(Map.Entry<Configuration, HConnectionImplementation> eldest) {
+        return size() > MAX_CACHED_HBASE_INSTANCES;
+      }
+  };
+
+  /*
+   * Non-instantiable.
+   */
+  protected HConnectionManager() {
+    super();
+  }
+
+  /**
+   * Get the connection that goes with the passed <code>conf</code>
+   * configuration instance.
+   * If no current connection exists, method creates a new connection for the
+   * passed <code>conf</code> instance.
+   * @param conf configuration
+   * @return HConnection object for <code>conf</code>
+   * @throws ZooKeeperConnectionException
+   */
+  public static HConnection getConnection(Configuration conf)
+  throws ZooKeeperConnectionException {
+    HConnectionImplementation connection;
+    synchronized (HBASE_INSTANCES) {
+      connection = HBASE_INSTANCES.get(conf);
+      if (connection == null) {
+        connection = new HConnectionImplementation(conf);
+        HBASE_INSTANCES.put(conf, connection);
+      }
+    }
+    return connection;
+  }
+
+  /**
+   * Delete connection information for the instance specified by configuration.
+   * This will close connection to the zookeeper ensemble and let go of all
+   * resources.
+   * @param conf configuration whose identity is used to find {@link HConnection}
+   * instance.
+   * @param stopProxy Shuts down all the proxies put up to cluster members,
+   * including the one to the cluster HMaster.  Calls {@link HBaseRPC#stopProxy(org.apache.hadoop.ipc.VersionedProtocol)}.
+   */
+  public static void deleteConnection(Configuration conf, boolean stopProxy) {
+    synchronized (HBASE_INSTANCES) {
+      HConnectionImplementation t = HBASE_INSTANCES.remove(conf);
+      if (t != null) {
+        t.close(stopProxy);
+      }
+    }
+  }
+
+  /**
+   * Delete information for all connections.
+   * @param stopProxy stop the proxy as well
+   * @throws IOException
+   */
+  public static void deleteAllConnections(boolean stopProxy) {
+    synchronized (HBASE_INSTANCES) {
+      for (HConnectionImplementation t : HBASE_INSTANCES.values()) {
+        if (t != null) {
+          t.close(stopProxy);
+        }
+      }
+    }
+  }
+
+  /**
+   * It is provided for unit test cases which verify the behavior of region
+   * location cache prefetch.
+   * @return Number of cached regions for the table.
+   * @throws ZooKeeperConnectionException
+   */
+  static int getCachedRegionCount(Configuration conf,
+      byte[] tableName)
+  throws ZooKeeperConnectionException {
+    HConnectionImplementation connection = (HConnectionImplementation)getConnection(conf);
+    return connection.getNumberOfCachedRegionLocations(tableName);
+  }
+
+  /**
+   * It's provided for unit test cases which verify the behavior of region
+   * location cache prefetch.
+   * @return true if the region where the table and row reside is cached.
+   * @throws ZooKeeperConnectionException
+   */
+  static boolean isRegionCached(Configuration conf,
+      byte[] tableName, byte[] row) throws ZooKeeperConnectionException {
+    HConnectionImplementation connection = (HConnectionImplementation)getConnection(conf);
+    return connection.isRegionCached(tableName, row);
+  }
+
+  /* Encapsulates connection to zookeeper and regionservers.*/
+  static class HConnectionImplementation implements HConnection {
+    static final Log LOG = LogFactory.getLog(HConnectionImplementation.class);
+    private final Class<? extends HRegionInterface> serverInterfaceClass;
+    private final long pause;
+    private final int numRetries;
+    private final int maxRPCAttempts;
+    private final int rpcTimeout;
+    private final int prefetchRegionLimit;
+
+    private final Object masterLock = new Object();
+    private volatile boolean closed;
+    private volatile HMasterInterface master;
+    private volatile boolean masterChecked;
+    // ZooKeeper reference
+    private ZooKeeperWatcher zooKeeper;
+    // ZooKeeper-based master address tracker
+    private MasterAddressTracker masterAddressTracker;
+    private RootRegionTracker rootRegionTracker;
+    
+    private final Object metaRegionLock = new Object();
+
+    private final Object userRegionLock = new Object();
+
+    private final Configuration conf;
+    // Known region HServerAddress.toString() -> HRegionInterface
+
+    private final Map<String, HRegionInterface> servers =
+      new ConcurrentHashMap<String, HRegionInterface>();
+
+    /**
+     * Map of table to table {@link HRegionLocation}s.  The table key is made
+     * by doing a {@link Bytes#mapKey(byte[])} of the table's name.
+     */
+    private final Map<Integer, SoftValueSortedMap<byte [], HRegionLocation>>
+      cachedRegionLocations =
+        new HashMap<Integer, SoftValueSortedMap<byte [], HRegionLocation>>();
+
+    // region cache prefetch is enabled by default. this set contains all
+    // tables whose region cache prefetch are disabled.
+    private final Set<Integer> regionCachePrefetchDisabledTables =
+      new CopyOnWriteArraySet<Integer>();
+
+    /**
+     * constructor
+     * @param conf Configuration object
+     */
+    @SuppressWarnings("unchecked")
+    public HConnectionImplementation(Configuration conf)
+    throws ZooKeeperConnectionException {
+      this.conf = conf;
+      String serverClassName = conf.get(HConstants.REGION_SERVER_CLASS,
+        HConstants.DEFAULT_REGION_SERVER_CLASS);
+      this.closed = false;
+      try {
+        this.serverInterfaceClass =
+          (Class<? extends HRegionInterface>) Class.forName(serverClassName);
+      } catch (ClassNotFoundException e) {
+        throw new UnsupportedOperationException(
+            "Unable to find region server interface " + serverClassName, e);
+      }
+
+      this.pause = conf.getLong("hbase.client.pause", 1000);
+      this.numRetries = conf.getInt("hbase.client.retries.number", 10);
+      this.maxRPCAttempts = conf.getInt("hbase.client.rpc.maxattempts", 1);
+      this.rpcTimeout = conf.getInt(
+          HConstants.HBASE_RPC_TIMEOUT_KEY,
+          HConstants.DEFAULT_HBASE_RPC_TIMEOUT);
+
+      this.prefetchRegionLimit = conf.getInt("hbase.client.prefetch.limit",
+          10);
+
+      setupZookeeperTrackers();
+
+      this.master = null;
+      this.masterChecked = false;
+    }
+
+    private synchronized void setupZookeeperTrackers()
+        throws ZooKeeperConnectionException{
+      // initialize zookeeper and master address manager
+      this.zooKeeper = getZooKeeperWatcher();
+      masterAddressTracker = new MasterAddressTracker(this.zooKeeper, this);
+      zooKeeper.registerListener(masterAddressTracker);
+      masterAddressTracker.start();
+
+      this.rootRegionTracker = new RootRegionTracker(this.zooKeeper, this);
+      this.rootRegionTracker.start();
+    }
+
+    private synchronized void resetZooKeeperTrackers()
+        throws ZooKeeperConnectionException {
+      LOG.info("Trying to reconnect to zookeeper");
+      masterAddressTracker.stop();
+      masterAddressTracker = null;
+      rootRegionTracker.stop();
+      rootRegionTracker = null;
+      this.zooKeeper = null;
+      setupZookeeperTrackers();
+    }
+
+    public Configuration getConfiguration() {
+      return this.conf;
+    }
+
+    private long getPauseTime(int tries) {
+      int ntries = tries;
+      if (ntries >= HConstants.RETRY_BACKOFF.length) {
+        ntries = HConstants.RETRY_BACKOFF.length - 1;
+      }
+      return this.pause * HConstants.RETRY_BACKOFF[ntries];
+    }
+
+    public HMasterInterface getMaster()
+    throws MasterNotRunningException, ZooKeeperConnectionException {
+
+      // Check if we already have a good master connection
+      if (master != null) {
+        if (master.isMasterRunning()) {
+          return master;
+        }
+      }
+
+      HServerAddress masterLocation = null;
+      synchronized (this.masterLock) {
+        for (int tries = 0;
+          !this.closed &&
+          !this.masterChecked && this.master == null &&
+          tries < numRetries;
+        tries++) {
+
+          try {
+            masterLocation = masterAddressTracker.getMasterAddress();
+            if(masterLocation == null) {
+              LOG.info("ZooKeeper available but no active master location found");
+              throw new MasterNotRunningException();
+            }
+
+            HMasterInterface tryMaster = (HMasterInterface)HBaseRPC.getProxy(
+                HMasterInterface.class, HBaseRPCProtocolVersion.versionID,
+                masterLocation.getInetSocketAddress(), this.conf, this.rpcTimeout);
+
+            if (tryMaster.isMasterRunning()) {
+              this.master = tryMaster;
+              this.masterLock.notifyAll();
+              break;
+            }
+
+          } catch (IOException e) {
+            if (tries == numRetries - 1) {
+              // This was our last chance - don't bother sleeping
+              LOG.info("getMaster attempt " + tries + " of " + this.numRetries +
+                " failed; no more retrying.", e);
+              break;
+            }
+            LOG.info("getMaster attempt " + tries + " of " + this.numRetries +
+              " failed; retrying after sleep of " +
+              getPauseTime(tries), e);
+          }
+
+          // Cannot connect to master or it is not running. Sleep & retry
+          try {
+            this.masterLock.wait(getPauseTime(tries));
+          } catch (InterruptedException e) {
+            Thread.currentThread().interrupt();
+            throw new RuntimeException("Thread was interrupted while trying to connect to master.");
+          }
+        }
+        this.masterChecked = true;
+      }
+      if (this.master == null) {
+        if (masterLocation == null) {
+          throw new MasterNotRunningException();
+        }
+        throw new MasterNotRunningException(masterLocation.toString());
+      }
+      return this.master;
+    }
+
+    public boolean isMasterRunning()
+    throws MasterNotRunningException, ZooKeeperConnectionException {
+      if (this.master == null) {
+        getMaster();
+      }
+      boolean isRunning = master.isMasterRunning();
+      if(isRunning) {
+        return true;
+      }
+      throw new MasterNotRunningException();
+    }
+
+    public HRegionLocation getRegionLocation(final byte [] name,
+        final byte [] row, boolean reload)
+    throws IOException {
+      return reload? relocateRegion(name, row): locateRegion(name, row);
+    }
+
+    public HTableDescriptor[] listTables() throws IOException {
+      final TreeSet<HTableDescriptor> uniqueTables =
+        new TreeSet<HTableDescriptor>();
+      MetaScannerVisitor visitor = new MetaScannerVisitor() {
+        public boolean processRow(Result result) throws IOException {
+          try {
+            byte[] value = result.getValue(HConstants.CATALOG_FAMILY,
+                HConstants.REGIONINFO_QUALIFIER);
+            HRegionInfo info = null;
+            if (value != null) {
+              info = Writables.getHRegionInfo(value);
+            }
+            // Only examine the rows where the startKey is zero length
+            if (info != null && info.getStartKey().length == 0) {
+              uniqueTables.add(info.getTableDesc());
+            }
+            return true;
+          } catch (RuntimeException e) {
+            LOG.error("Result=" + result);
+            throw e;
+          }
+        }
+      };
+      MetaScanner.metaScan(conf, visitor);
+      return uniqueTables.toArray(new HTableDescriptor[uniqueTables.size()]);
+    }
+
+    public boolean isTableEnabled(byte[] tableName) throws IOException {
+      return testTableOnlineState(tableName, true);
+    }
+
+    public boolean isTableDisabled(byte[] tableName) throws IOException {
+      return testTableOnlineState(tableName, false);
+    }
+
+    public boolean isTableAvailable(final byte[] tableName) throws IOException {
+      final AtomicBoolean available = new AtomicBoolean(true);
+      MetaScannerVisitor visitor = new MetaScannerVisitor() {
+        @Override
+        public boolean processRow(Result row) throws IOException {
+          byte[] value = row.getValue(HConstants.CATALOG_FAMILY,
+              HConstants.REGIONINFO_QUALIFIER);
+          HRegionInfo info = Writables.getHRegionInfoOrNull(value);
+          if (info != null) {
+            if (Bytes.equals(tableName, info.getTableDesc().getName())) {
+              value = row.getValue(HConstants.CATALOG_FAMILY,
+                  HConstants.SERVER_QUALIFIER);
+              if (value == null) {
+                available.set(false);
+                return false;
+              }
+            }
+          }
+          return true;
+        }
+      };
+      MetaScanner.metaScan(conf, visitor);
+      return available.get();
+    }
+
+    /*
+     * @param online True to check whether the table is enabled, false to check
+     * whether it is disabled
+     * @return True if the table is in the requested state
+     */
+    private boolean testTableOnlineState(byte [] tableName, boolean online)
+    throws IOException {
+      if (Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME)) {
+        // The root region is always enabled
+        return online;
+      }
+      String tableNameStr = Bytes.toString(tableName);
+      try {
+        if (online) {
+          return ZKTable.isEnabledTable(this.zooKeeper, tableNameStr);
+        }
+        return ZKTable.isDisabledTable(this.zooKeeper, tableNameStr);
+      } catch (KeeperException e) {
+        throw new IOException("Enable/Disable failed", e);
+      }
+    }
+
+    private static class HTableDescriptorFinder
+    implements MetaScanner.MetaScannerVisitor {
+      byte[] tableName;
+      HTableDescriptor result;
+      protected HTableDescriptorFinder(byte[] tableName) {
+        this.tableName = tableName;
+      }
+      public boolean processRow(Result rowResult) throws IOException {
+        HRegionInfo info = Writables.getHRegionInfoOrNull(
+            rowResult.getValue(HConstants.CATALOG_FAMILY,
+                HConstants.REGIONINFO_QUALIFIER));
+        if (info == null) return true;
+        HTableDescriptor desc = info.getTableDesc();
+        if (Bytes.compareTo(desc.getName(), tableName) == 0) {
+          result = desc;
+          return false;
+        }
+        return true;
+      }
+      HTableDescriptor getResult() {
+        return result;
+      }
+    }
+
+    public HTableDescriptor getHTableDescriptor(final byte[] tableName)
+    throws IOException {
+      if (Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME)) {
+        return new UnmodifyableHTableDescriptor(HTableDescriptor.ROOT_TABLEDESC);
+      }
+      if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
+        return HTableDescriptor.META_TABLEDESC;
+      }
+      HTableDescriptorFinder finder = new HTableDescriptorFinder(tableName);
+      MetaScanner.metaScan(conf, finder, tableName);
+      HTableDescriptor result = finder.getResult();
+      if (result == null) {
+        throw new TableNotFoundException(Bytes.toString(tableName));
+      }
+      return result;
+    }
+
+    @Override
+    public HRegionLocation locateRegion(final byte [] regionName)
+    throws IOException {
+      // TODO implement.  use old stuff or new stuff?
+      return null;
+    }
+
+    @Override
+    public List<HRegionLocation> locateRegions(final byte [] tableName)
+    throws IOException {
+      // TODO implement.  use old stuff or new stuff?
+      return null;
+    }
+
+    public HRegionLocation locateRegion(final byte [] tableName,
+        final byte [] row)
+    throws IOException{
+      return locateRegion(tableName, row, true);
+    }
+
+    public HRegionLocation relocateRegion(final byte [] tableName,
+        final byte [] row)
+    throws IOException{
+      return locateRegion(tableName, row, false);
+    }
+
+    private HRegionLocation locateRegion(final byte [] tableName,
+      final byte [] row, boolean useCache)
+    throws IOException {
+      if (this.closed) throw new IOException(toString() + " closed");
+      if (tableName == null || tableName.length == 0) {
+        throw new IllegalArgumentException(
+            "table name cannot be null or zero length");
+      }
+
+      if (Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME)) {
+        try {
+          HServerAddress hsa =
+            this.rootRegionTracker.waitRootRegionLocation(this.rpcTimeout);
+          LOG.debug("Looked up root region location, connection=" + this +
+            "; hsa=" + hsa);
+          if (hsa == null) return null;
+          return new HRegionLocation(HRegionInfo.ROOT_REGIONINFO, hsa);
+        } catch (InterruptedException e) {
+          Thread.currentThread().interrupt();
+          return null;
+        }
+      } else if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
+        return locateRegionInMeta(HConstants.ROOT_TABLE_NAME, tableName, row,
+            useCache, metaRegionLock);
+      } else {
+        // Region not in the cache - have to go to the meta RS
+        return locateRegionInMeta(HConstants.META_TABLE_NAME, tableName, row,
+            useCache, userRegionLock);
+      }
+    }
+
+    /*
+     * Search .META. for the HRegionLocation info that contains the table and
+     * row we're seeking. It will prefetch a certain number of region infos and
+     * save them to the global region cache.
+     */
+    private void prefetchRegionCache(final byte[] tableName,
+        final byte[] row) {
+      // Implement a new visitor for MetaScanner, and use it to walk through
+      // the .META.
+      MetaScannerVisitor visitor = new MetaScannerVisitor() {
+        public boolean processRow(Result result) throws IOException {
+          try {
+            byte[] value = result.getValue(HConstants.CATALOG_FAMILY,
+                HConstants.REGIONINFO_QUALIFIER);
+            HRegionInfo regionInfo = null;
+
+            if (value != null) {
+              // convert the row result into the HRegionLocation we need!
+              regionInfo = Writables.getHRegionInfo(value);
+
+              // possible we got a region of a different table...
+              if (!Bytes.equals(regionInfo.getTableDesc().getName(),
+                  tableName)) {
+                return false; // stop scanning
+              }
+              if (regionInfo.isOffline()) {
+                // don't cache offline regions
+                return true;
+              }
+              value = result.getValue(HConstants.CATALOG_FAMILY,
+                  HConstants.SERVER_QUALIFIER);
+              if (value == null) {
+                return true;  // don't cache it
+              }
+              final String serverAddress = Bytes.toString(value);
+
+              // instantiate the location
+              HRegionLocation loc = new HRegionLocation(regionInfo,
+                new HServerAddress(serverAddress));
+              // cache this meta entry
+              cacheLocation(tableName, loc);
+            }
+            return true;
+          } catch (RuntimeException e) {
+            throw new IOException(e);
+          }
+        }
+      };
+      try {
+        // pre-fetch a certain number of region infos into the region cache.
+        MetaScanner.metaScan(conf, visitor, tableName, row,
+            this.prefetchRegionLimit);
+      } catch (IOException e) {
+        LOG.warn("Encountered problems when prefetching the META table: ", e);
+      }
+    }
+
+    /*
+     * Search one of the meta tables (-ROOT- or .META.) for the HRegionLocation
+     * info that contains the table and row we're seeking.
+     */
+    private HRegionLocation locateRegionInMeta(final byte [] parentTable,
+      final byte [] tableName, final byte [] row, boolean useCache,
+      Object regionLockObject)
+    throws IOException {
+      HRegionLocation location;
+      // If we are supposed to be using the cache, look in the cache to see if
+      // we already have the region.
+      if (useCache) {
+        location = getCachedLocation(tableName, row);
+        if (location != null) {
+          return location;
+        }
+      }
+
+      // build the key of the meta region we should be looking for.
+      // the extra 9's on the end are necessary to allow "exact" matches
+      // without knowing the precise region names.
+      byte [] metaKey = HRegionInfo.createRegionName(tableName, row,
+        HConstants.NINES, false);
+      for (int tries = 0; true; tries++) {
+        if (tries >= numRetries) {
+          throw new NoServerForRegionException("Unable to find region for "
+            + Bytes.toStringBinary(row) + " after " + numRetries + " tries.");
+        }
+
+        HRegionLocation metaLocation = null;
+        try {
+          // locate the root or meta region
+          metaLocation = locateRegion(parentTable, metaKey);
+          // If null still, go around again.
+          if (metaLocation == null) continue;
+          HRegionInterface server =
+            getHRegionConnection(metaLocation.getServerAddress());
+
+          Result regionInfoRow = null;
+          // This block guards against two threads trying to load the meta
+          // region at the same time. The first will load the meta region and
+          // the second will use the value that the first one found.
+          synchronized (regionLockObject) {
+            // If the parent table is META, we may want to pre-fetch some
+            // region info into the global region cache for this table.
+            if (Bytes.equals(parentTable, HConstants.META_TABLE_NAME) &&
+                (getRegionCachePrefetch(tableName)) )  {
+              prefetchRegionCache(tableName, row);
+            }
+
+            // Check the cache again for a hit in case some other thread made the
+            // same query while we were waiting on the lock. If not supposed to
+            // be using the cache, delete any existing cached location so it won't
+            // interfere.
+            if (useCache) {
+              location = getCachedLocation(tableName, row);
+              if (location != null) {
+                return location;
+              }
+            } else {
+              deleteCachedLocation(tableName, row);
+            }
+
+            // Query the root or meta region for the location of the meta region
+            regionInfoRow = server.getClosestRowBefore(
+              metaLocation.getRegionInfo().getRegionName(), metaKey,
+              HConstants.CATALOG_FAMILY);
+          }
+          if (regionInfoRow == null) {
+            throw new TableNotFoundException(Bytes.toString(tableName));
+          }
+          byte[] value = regionInfoRow.getValue(HConstants.CATALOG_FAMILY,
+              HConstants.REGIONINFO_QUALIFIER);
+          if (value == null || value.length == 0) {
+            throw new IOException("HRegionInfo was null or empty in " +
+              Bytes.toString(parentTable) + ", row=" + regionInfoRow);
+          }
+          // convert the row result into the HRegionLocation we need!
+          HRegionInfo regionInfo = (HRegionInfo) Writables.getWritable(
+              value, new HRegionInfo());
+          // possible we got a region of a different table...
+          if (!Bytes.equals(regionInfo.getTableDesc().getName(), tableName)) {
+            throw new TableNotFoundException(
+              "Table '" + Bytes.toString(tableName) + "' was not found.");
+          }
+          if (regionInfo.isOffline()) {
+            throw new RegionOfflineException("region offline: " +
+              regionInfo.getRegionNameAsString());
+          }
+
+          value = regionInfoRow.getValue(HConstants.CATALOG_FAMILY,
+              HConstants.SERVER_QUALIFIER);
+          String serverAddress = "";
+          if(value != null) {
+            serverAddress = Bytes.toString(value);
+          }
+          if (serverAddress.equals("")) {
+            throw new NoServerForRegionException("No server address listed " +
+              "in " + Bytes.toString(parentTable) + " for region " +
+              regionInfo.getRegionNameAsString());
+          }
+
+          // instantiate the location
+          location = new HRegionLocation(regionInfo,
+            new HServerAddress(serverAddress));
+          cacheLocation(tableName, location);
+          return location;
+        } catch (TableNotFoundException e) {
+          // if we got this error, probably means the table just plain doesn't
+          // exist. rethrow the error immediately. this should always be coming
+          // from the HTable constructor.
+          throw e;
+        } catch (IOException e) {
+          if (e instanceof RemoteException) {
+            e = RemoteExceptionHandler.decodeRemoteException(
+                (RemoteException) e);
+          }
+          if (tries < numRetries - 1) {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("locateRegionInMeta parentTable=" +
+                Bytes.toString(parentTable) + ", metaLocation=" +
+                ((metaLocation == null)? "null": metaLocation) + ", attempt=" +
+                tries + " of " +
+                this.numRetries + " failed; retrying after sleep of " +
+                getPauseTime(tries) + " because: " + e.getMessage());
+            }
+          } else {
+            throw e;
+          }
+          // Only relocate the parent region if necessary
+          if(!(e instanceof RegionOfflineException ||
+              e instanceof NoServerForRegionException)) {
+            relocateRegion(parentTable, metaKey);
+          }
+        }
+        try{
+          Thread.sleep(getPauseTime(tries));
+        } catch (InterruptedException e) {
+          Thread.currentThread().interrupt();
+          throw new IOException("Giving up trying to locate region in " +
+            "meta: thread is interrupted.");
+        }
+      }
+    }
+
+    /*
+     * Search the cache for a location that fits our table and row key.
+     * Return null if no suitable region is located. TODO: synchronization note
+     *
+     * <p>TODO: This method during writing consumes 15% of CPU doing lookup
+     * into the Soft Reference SortedMap.  Improve.
+     *
+     * @param tableName
+     * @param row
+     * @return Null or region location found in cache.
+     */
+    HRegionLocation getCachedLocation(final byte [] tableName,
+        final byte [] row) {
+      SoftValueSortedMap<byte [], HRegionLocation> tableLocations =
+        getTableLocations(tableName);
+
+      // start to examine the cache. we can only do cache actions
+      // if there's something in the cache for this table.
+      if (tableLocations.isEmpty()) {
+        return null;
+      }
+
+      HRegionLocation rl = tableLocations.get(row);
+      if (rl != null) {
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Cache hit for row <" +
+            Bytes.toStringBinary(row) +
+            "> in tableName " + Bytes.toString(tableName) +
+            ": location server " + rl.getServerAddress() +
+            ", location region name " +
+            rl.getRegionInfo().getRegionNameAsString());
+        }
+        return rl;
+      }
+
+      // Cut the cache so that we only get the part that could contain
+      // regions that match our key
+      SoftValueSortedMap<byte[], HRegionLocation> matchingRegions =
+        tableLocations.headMap(row);
+
+      // if that portion of the map is empty, then we're done. otherwise,
+      // we need to examine the cached location to verify that it is
+      // a match by end key as well.
+      if (!matchingRegions.isEmpty()) {
+        HRegionLocation possibleRegion =
+          matchingRegions.get(matchingRegions.lastKey());
+
+        // there is a possibility that the reference was garbage collected
+        // in the instant since we checked isEmpty().
+        if (possibleRegion != null) {
+          byte[] endKey = possibleRegion.getRegionInfo().getEndKey();
+
+          // make sure that the end key is greater than the row we're looking
+          // for, otherwise the row actually belongs in the next region, not
+          // this one. the exception case is when the endkey is
+          // HConstants.EMPTY_START_ROW, signifying that the region we're
+          // checking is actually the last region in the table.
+          if (Bytes.equals(endKey, HConstants.EMPTY_END_ROW) ||
+              KeyValue.getRowComparator(tableName).compareRows(endKey, 0, endKey.length,
+                  row, 0, row.length) > 0) {
+            return possibleRegion;
+          }
+        }
+      }
+
+      // Passed all the way through, so we got nothing - complete cache miss
+      return null;
+    }
+
+    /**
+     * Delete a cached location
+     * @param tableName table name
+     * @param row row key whose containing region should be dropped from the cache
+     */
+    void deleteCachedLocation(final byte [] tableName, final byte [] row) {
+      synchronized (this.cachedRegionLocations) {
+        SoftValueSortedMap<byte [], HRegionLocation> tableLocations =
+            getTableLocations(tableName);
+        // start to examine the cache. we can only do cache actions
+        // if there's something in the cache for this table.
+        if (!tableLocations.isEmpty()) {
+          HRegionLocation rl = getCachedLocation(tableName, row);
+          if (rl != null) {
+            tableLocations.remove(rl.getRegionInfo().getStartKey());
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("Removed " +
+                rl.getRegionInfo().getRegionNameAsString() +
+                " for tableName=" + Bytes.toString(tableName) +
+                " from cache " + "because of " + Bytes.toStringBinary(row));
+            }
+          }
+        }
+      }
+    }
+
+    /*
+     * @param tableName
+     * @return Map of cached locations for passed <code>tableName</code>
+     */
+    private SoftValueSortedMap<byte [], HRegionLocation> getTableLocations(
+        final byte [] tableName) {
+      // find the map of cached locations for this table
+      Integer key = Bytes.mapKey(tableName);
+      SoftValueSortedMap<byte [], HRegionLocation> result;
+      synchronized (this.cachedRegionLocations) {
+        result = this.cachedRegionLocations.get(key);
+        // if tableLocations for this table isn't built yet, make one
+        if (result == null) {
+          result = new SoftValueSortedMap<byte [], HRegionLocation>(
+              Bytes.BYTES_COMPARATOR);
+          this.cachedRegionLocations.put(key, result);
+        }
+      }
+      return result;
+    }
+
+    @Override
+    public void clearRegionCache() {
+      synchronized(this.cachedRegionLocations) {
+        this.cachedRegionLocations.clear();
+      }
+    }
+
+    @Override
+    public void clearRegionCache(final byte [] tableName) {
+      synchronized (this.cachedRegionLocations) {
+        this.cachedRegionLocations.remove(Bytes.mapKey(tableName));
+      }
+    }
+
+    /*
+     * Put a newly discovered HRegionLocation into the cache.
+     */
+    private void cacheLocation(final byte [] tableName,
+        final HRegionLocation location) {
+      byte [] startKey = location.getRegionInfo().getStartKey();
+      SoftValueSortedMap<byte [], HRegionLocation> tableLocations =
+        getTableLocations(tableName);
+      if (tableLocations.put(startKey, location) == null) {
+        LOG.debug("Cached location for " +
+            location.getRegionInfo().getRegionNameAsString() +
+            " is " + location.getServerAddress());
+      }
+    }
+
+    public HRegionInterface getHRegionConnection(
+        HServerAddress regionServer, boolean getMaster)
+    throws IOException {
+      if (getMaster) {
+        getMaster();
+      }
+      HRegionInterface server;
+      synchronized (this.servers) {
+        // See if we already have a connection
+        server = this.servers.get(regionServer.toString());
+        if (server == null) { // Get a connection
+          try {
+            server = (HRegionInterface)HBaseRPC.waitForProxy(
+                serverInterfaceClass, HBaseRPCProtocolVersion.versionID,
+                regionServer.getInetSocketAddress(), this.conf,
+                this.maxRPCAttempts, this.rpcTimeout, this.rpcTimeout);
+          } catch (RemoteException e) {
+            LOG.warn("RemoteException connecting to RS", e);
+            // Throw what the RemoteException was carrying.
+            throw RemoteExceptionHandler.decodeRemoteException(e);
+          }
+          this.servers.put(regionServer.toString(), server);
+        }
+      }
+      return server;
+    }
+
+    public HRegionInterface getHRegionConnection(
+        HServerAddress regionServer)
+    throws IOException {
+      return getHRegionConnection(regionServer, false);
+    }
+
+    /**
+     * Get the ZooKeeper instance for this HConnection instance.
+     *
+     * If ZK has not been initialized yet, this will connect to ZK.
+     * @return zookeeper reference
+     * @throws ZooKeeperConnectionException if there's a problem connecting to zk
+     */
+    public synchronized ZooKeeperWatcher getZooKeeperWatcher()
+        throws ZooKeeperConnectionException {
+      if(zooKeeper == null) {
+        try {
+          this.zooKeeper = new ZooKeeperWatcher(conf, "hconnection", this);
+        } catch (IOException e) {
+          throw new ZooKeeperConnectionException(e);
+        }
+      }
+      return zooKeeper;
+    }
+
+    public <T> T getRegionServerWithRetries(ServerCallable<T> callable)
+    throws IOException, RuntimeException {
+      List<Throwable> exceptions = new ArrayList<Throwable>();
+      for(int tries = 0; tries < numRetries; tries++) {
+        try {
+          callable.instantiateServer(tries != 0);
+          return callable.call();
+        } catch (Throwable t) {
+          t = translateException(t);
+          exceptions.add(t);
+          if (tries == numRetries - 1) {
+            throw new RetriesExhaustedException(callable.getServerName(),
+                callable.getRegionName(), callable.getRow(), tries, exceptions);
+          }
+        }
+        try {
+          Thread.sleep(getPauseTime(tries));
+        } catch (InterruptedException e) {
+          Thread.currentThread().interrupt();
+          throw new IOException("Giving up trying to get region server: thread is interrupted.");
+        }
+      }
+      return null;
+    }
+
+    public <T> T getRegionServerWithoutRetries(ServerCallable<T> callable)
+        throws IOException, RuntimeException {
+      try {
+        callable.instantiateServer(false);
+        return callable.call();
+      } catch (Throwable t) {
+        Throwable t2 = translateException(t);
+        if (t2 instanceof IOException) {
+          throw (IOException)t2;
+        } else {
+          throw new RuntimeException(t2);
+        }
+      }
+    }
+
+    void close(boolean stopProxy) {
+      if (master != null) {
+        if (stopProxy) {
+          HBaseRPC.stopProxy(master);
+        }
+        master = null;
+        masterChecked = false;
+      }
+      if (stopProxy) {
+        for (HRegionInterface i: servers.values()) {
+          HBaseRPC.stopProxy(i);
+        }
+      }
+      if (this.zooKeeper != null) {
+        LOG.info("Closed zookeeper sessionid=0x" +
+          Long.toHexString(this.zooKeeper.getZooKeeper().getSessionId()));
+        this.zooKeeper.close();
+        this.zooKeeper = null;
+      }
+      this.closed = true;
+    }
+
+    private Callable<MultiResponse> createCallable(
+        final HServerAddress address,
+        final MultiAction multi,
+        final byte [] tableName) {
+      final HConnection connection = this;
+      return new Callable<MultiResponse>() {
+        public MultiResponse call() throws IOException {
+          return getRegionServerWithoutRetries(
+              new ServerCallable<MultiResponse>(connection, tableName, null) {
+                public MultiResponse call() throws IOException {
+                  return server.multi(multi);
+                }
+                @Override
+                public void instantiateServer(boolean reload) throws IOException {
+                  server = connection.getHRegionConnection(address);
+                }
+              }
+          );
+        }
+      };
+    }
+
+    public void processBatch(List<Row> list,
+        final byte[] tableName,
+        ExecutorService pool,
+        Object[] results) throws IOException, InterruptedException {
+
+      // results must be the same size as list
+      if (results.length != list.size()) {
+        throw new IllegalArgumentException("argument results must be the same size as argument list");
+      }
+
+      if (list.size() == 0) {
+        return;
+      }
+
+      // Keep track of the most recent servers for any given item for better
+      // exceptional reporting.
+      HServerAddress [] lastServers = new HServerAddress[results.length];
+      List<Row> workingList = new ArrayList<Row>(list);
+      boolean retry = true;
+      Throwable singleRowCause = null;
+
+      for (int tries = 0; tries < numRetries && retry; ++tries) {
+
+        // sleep first, if this is a retry
+        if (tries >= 1) {
+          long sleepTime = getPauseTime(tries);
+          LOG.debug("Retry " + tries + ", sleep for " + sleepTime + "ms!");
+          Thread.sleep(sleepTime);
+        }
+
+        // step 1: break up into regionserver-sized chunks and build the data structs
+
+        Map<HServerAddress, MultiAction> actionsByServer = new HashMap<HServerAddress, MultiAction>();
+        for (int i = 0; i < workingList.size(); i++) {
+          Row row = workingList.get(i);
+          if (row != null) {
+            HRegionLocation loc = locateRegion(tableName, row.getRow(), true);
+            HServerAddress address = loc.getServerAddress();
+            byte[] regionName = loc.getRegionInfo().getRegionName();
+
+            MultiAction actions = actionsByServer.get(address);
+            if (actions == null) {
+              actions = new MultiAction();
+              actionsByServer.put(address, actions);
+            }
+
+            Action action = new Action(regionName, row, i);
+            lastServers[i] = address;
+            actions.add(regionName, action);
+          }
+        }
+
+        // step 2: make the requests
+
+        Map<HServerAddress,Future<MultiResponse>> futures =
+            new HashMap<HServerAddress, Future<MultiResponse>>(actionsByServer.size());
+
+        for (Entry<HServerAddress, MultiAction> e : actionsByServer.entrySet()) {
+          futures.put(e.getKey(), pool.submit(createCallable(e.getKey(), e.getValue(), tableName)));
+        }
+
+        // step 3: collect the failures and successes and prepare for retry
+
+        for (Entry<HServerAddress, Future<MultiResponse>> responsePerServer : futures.entrySet()) {
+          HServerAddress address = responsePerServer.getKey();
+
+          try {
+            Future<MultiResponse> future = responsePerServer.getValue();
+            MultiResponse resp = future.get();
+
+            if (resp == null) {
+              // Entire server failed
+              LOG.debug("Failed all for server: " + address + ", removing from cache");
+              continue;
+            }
+
+            for (Entry<byte[], List<Pair<Integer,Object>>> e : resp.getResults().entrySet()) {
+              byte[] regionName = e.getKey();
+              List<Pair<Integer, Object>> regionResults = e.getValue();
+              for (Pair<Integer, Object> regionResult : regionResults) {
+                if (regionResult == null) {
+                  // if the first/only record is 'null' the entire region failed.
+                  LOG.debug("Failures for region: " +
+                      Bytes.toStringBinary(regionName) +
+                      ", removing from cache");
+                } else {
+                  // Result might be an Exception, including DNRIOE
+                  results[regionResult.getFirst()] = regionResult.getSecond();
+                }
+              }
+            }
+          } catch (ExecutionException e) {
+            LOG.debug("Failed all from " + address, e);
+          }
+        }
+
+        // step 4: identify failures and prep for a retry (if applicable).
+
+        // Find failures (i.e. null Result), and add them to the workingList (in
+        // order), so they can be retried.
+        retry = false;
+        workingList.clear();
+        for (int i = 0; i < results.length; i++) {
+          // if null (fail) or instanceof Throwable && not instanceof DNRIOE
+          // then retry that row. else don't.
+          if (results[i] == null ||
+              (results[i] instanceof Throwable &&
+                  !(results[i] instanceof DoNotRetryIOException))) {
+
+            retry = true;
+
+            Row row = list.get(i);
+            workingList.add(row);
+            deleteCachedLocation(tableName, row.getRow());
+          } else {
+            // add null to workingList, so the order remains consistent with the original list argument.
+            workingList.add(null);
+          }
+        }
+      }
+
+      if (retry) {
+        // Simple little check for 1 item failures.
+        if (singleRowCause != null) {
+          throw new IOException(singleRowCause);
+        }
+      }
+
+
+      List<Throwable> exceptions = new ArrayList<Throwable>();
+      List<Row> actions = new ArrayList<Row>();
+      List<HServerAddress> addresses = new ArrayList<HServerAddress>();
+
+      for (int i = 0 ; i < results.length; i++) {
+        if (results[i] == null || results[i] instanceof Throwable) {
+          exceptions.add((Throwable)results[i]);
+          actions.add(list.get(i));
+          addresses.add(lastServers[i]);
+        }
+      }
+
+      if (!exceptions.isEmpty()) {
+        throw new RetriesExhaustedWithDetailsException(exceptions,
+            actions,
+            addresses);
+      }
+    }
+
+    /**
+     * @deprecated Use HConnectionManager::processBatch instead.
+     */
+    public void processBatchOfPuts(List<Put> list,
+        final byte[] tableName,
+        ExecutorService pool) throws IOException {
+      Object[] results = new Object[list.size()];
+      try {
+        processBatch((List) list, tableName, pool, results);
+      } catch (InterruptedException e) {
+        throw new IOException(e);
+      } finally {
+
+        // mutate list so that it is empty for complete success, or contains only failed records
+        // results are returned in the same order as the requests in list
+        // walk the list backwards, so we can remove from list without impacting the indexes of earlier members
+        for (int i = results.length - 1; i>=0; i--) {
+          if (results[i] instanceof Result) {
+            // successful Puts are removed from the list here.
+            list.remove(i);
+          }
+        }
+      }
+    }
+
+    private Throwable translateException(Throwable t) throws IOException {
+      if (t instanceof UndeclaredThrowableException) {
+        t = t.getCause();
+      }
+      if (t instanceof RemoteException) {
+        t = RemoteExceptionHandler.decodeRemoteException((RemoteException)t);
+      }
+      if (t instanceof DoNotRetryIOException) {
+        throw (DoNotRetryIOException)t;
+      }
+      return t;
+    }
+
+    /*
+     * Return the number of cached regions for a table. It will only be called
+     * from a unit test.
+     */
+    int getNumberOfCachedRegionLocations(final byte[] tableName) {
+      Integer key = Bytes.mapKey(tableName);
+      synchronized (this.cachedRegionLocations) {
+        SoftValueSortedMap<byte[], HRegionLocation> tableLocs =
+          this.cachedRegionLocations.get(key);
+
+        if (tableLocs == null) {
+          return 0;
+        }
+        return tableLocs.values().size();
+      }
+    }
+
+    /**
+     * Check the region cache to see whether a region is cached yet or not.
+     * Called by unit tests.
+     * @param tableName tableName
+     * @param row row
+     * @return Region cached or not.
+     */
+    boolean isRegionCached(final byte[] tableName, final byte[] row) {
+      HRegionLocation location = getCachedLocation(tableName, row);
+      return location != null;
+    }
+
+    public void setRegionCachePrefetch(final byte[] tableName,
+        final boolean enable) {
+      if (!enable) {
+        regionCachePrefetchDisabledTables.add(Bytes.mapKey(tableName));
+      }
+      else {
+        regionCachePrefetchDisabledTables.remove(Bytes.mapKey(tableName));
+      }
+    }
+
+    public boolean getRegionCachePrefetch(final byte[] tableName) {
+      return !regionCachePrefetchDisabledTables.contains(Bytes.mapKey(tableName));
+    }
+
+    public void prewarmRegionCache(final byte[] tableName,
+        final Map<HRegionInfo, HServerAddress> regions) {
+      for (Map.Entry<HRegionInfo, HServerAddress> e : regions.entrySet()) {
+        cacheLocation(tableName,
+            new HRegionLocation(e.getKey(), e.getValue()));
+      }
+    }
+
+    @Override
+    public void abort(final String msg, Throwable t) {
+      if (t instanceof KeeperException.SessionExpiredException) {
+        try {
+          LOG.info("This client just lost its session with ZooKeeper, trying" +
+              " to reconnect.");
+          resetZooKeeperTrackers();
+          LOG.info("Reconnected successfully. This disconnect could have been" +
+              " caused by a network partition or a long-running GC pause;" +
+              " either way, it's recommended that you verify your environment.");
+          return;
+        } catch (ZooKeeperConnectionException e) {
+          LOG.error("Could not reconnect to ZooKeeper after session" +
+              " expiration, aborting");
+          t = e;
+        }
+      }
+      if (t != null) LOG.fatal(msg, t);
+      else LOG.fatal(msg);
+      this.closed = true;
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HTable.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTable.java
new file mode 100644
index 0000000..4e0e21a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTable.java
@@ -0,0 +1,1328 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Used to communicate with a single HBase table.
+ *
+ * This class is not thread safe for updates; the underlying write buffer can
+ * be corrupted if multiple threads contend over a single HTable instance.
+ *
+ * <p>Instances of HTable passed the same {@link Configuration} instance will
+ * share connections to servers out on the cluster and to the zookeeper ensemble
+ * as well as caches of region locations.  This is usually a *good* thing.
+ * This happens because they will all share the same underlying
+ * {@link HConnection} instance.  See {@link HConnectionManager} for more on
+ * how this mechanism works.
+ *
+ * <p>{@link HConnection} will read most of the
+ * configuration it needs from the passed {@link Configuration} on initial
+ * construction.  Thereafter, for settings such as
+ * <code>hbase.client.pause</code>, <code>hbase.client.retries.number</code>,
+ * and <code>hbase.client.rpc.maxattempts</code>, updating their values in the
+ * passed {@link Configuration} after {@link HConnection} construction
+ * will go unnoticed.  To run with changed values, create a new
+ * {@link HTable} passing a new {@link Configuration} instance that has the
+ * new configuration.
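+ *
+ * <p>For example, a minimal sketch (the table name is illustrative):
+ * <pre>{@code
+ * Configuration conf = HBaseConfiguration.create();
+ * HTable t1 = new HTable(conf, "myTable");
+ * HTable t2 = new HTable(conf, "myTable"); // shares t1's HConnection and region cache
+ * }</pre>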
+ *
+ * @see HBaseAdmin for create, drop, list, enable and disable of tables.
+ * @see HConnection
+ * @see HConnectionManager
+ */
+public class HTable implements HTableInterface {
+  private static final Log LOG = LogFactory.getLog(HTable.class);
+  private final HConnection connection;
+  private final byte [] tableName;
+  protected final int scannerTimeout;
+  private volatile Configuration configuration;
+  private final ArrayList<Put> writeBuffer = new ArrayList<Put>();
+  private long writeBufferSize;
+  private boolean autoFlush;
+  private long currentWriteBufferSize;
+  protected int scannerCaching;
+  private int maxKeyValueSize;
+  private ExecutorService pool;  // For Multi
+  private long maxScannerResultSize;
+
+  /**
+   * Creates an object to access an HBase table.
+   * Internally it creates a new instance of {@link Configuration} and a new
+   * client to zookeeper as well as other resources.  It also comes up with
+   * a fresh view of the cluster and must do discovery from scratch of region
+   * locations; i.e. it will not make use of already-cached region locations if
+   * available. Use only when being quick and dirty.
+   * @throws IOException if a remote or network exception occurs
+   * @see #HTable(Configuration, String)
+   */
+  public HTable(final String tableName)
+  throws IOException {
+    this(HBaseConfiguration.create(), Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Creates an object to access an HBase table.
+   * Internally it creates a new instance of {@link Configuration} and a new
+   * client to zookeeper as well as other resources.  It also comes up with
+   * a fresh view of the cluster and must do discovery from scratch of region
+   * locations; i.e. it will not make use of already-cached region locations if
+   * available. Use only when being quick and dirty.
+   * @param tableName Name of the table.
+   * @throws IOException if a remote or network exception occurs
+   * @see #HTable(Configuration, String)
+   */
+  public HTable(final byte [] tableName)
+  throws IOException {
+    this(HBaseConfiguration.create(), tableName);
+  }
+
+  /**
+   * Creates an object to access an HBase table.
+   * Shares zookeeper connection and other resources with other HTable instances
+   * created with the same <code>conf</code> instance.  Uses already-populated
+   * region cache if one is available, populated by any other HTable instances
+   * sharing this <code>conf</code> instance.  Recommended.
+   * @param conf Configuration object to use.
+   * @param tableName Name of the table.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HTable(Configuration conf, final String tableName)
+  throws IOException {
+    this(conf, Bytes.toBytes(tableName));
+  }
+
+
+  /**
+   * Creates an object to access an HBase table.
+   * Shares zookeeper connection and other resources with other HTable instances
+   * created with the same <code>conf</code> instance.  Uses already-populated
+   * region cache if one is available, populated by any other HTable instances
+   * sharing this <code>conf</code> instance.  Recommended.
+   * @param conf Configuration object to use.
+   * @param tableName Name of the table.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HTable(Configuration conf, final byte [] tableName)
+  throws IOException {
+    this.tableName = tableName;
+    if (conf == null) {
+      this.scannerTimeout = 0;
+      this.connection = null;
+      return;
+    }
+    this.connection = HConnectionManager.getConnection(conf);
+    this.scannerTimeout =
+      (int) conf.getLong(HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY, HConstants.DEFAULT_HBASE_REGIONSERVER_LEASE_PERIOD);
+    this.configuration = conf;
+    this.connection.locateRegion(tableName, HConstants.EMPTY_START_ROW);
+    this.writeBufferSize = conf.getLong("hbase.client.write.buffer", 2097152);
+    this.autoFlush = true;
+    this.currentWriteBufferSize = 0;
+    this.scannerCaching = conf.getInt("hbase.client.scanner.caching", 1);
+
+    this.maxScannerResultSize = conf.getLong(
+      HConstants.HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE_KEY,
+      HConstants.DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE);
+    this.maxKeyValueSize = conf.getInt("hbase.client.keyvalue.maxsize", -1);
+
+    int nrThreads = conf.getInt("hbase.htable.threads.max", getCurrentNrHRS());
+    if (nrThreads == 0) {
+      nrThreads = 1; // is there a better default?
+    }
+
+    // Unfortunately Executors.newCachedThreadPool does not allow us to
+    // set the maximum size of the pool, so we have to do it ourselves.
+    this.pool = new ThreadPoolExecutor(0, nrThreads,
+        60, TimeUnit.SECONDS,
+        new LinkedBlockingQueue<Runnable>(),
+        new DaemonThreadFactory());
+  }
+
+  public Configuration getConfiguration() {
+    return configuration;
+  }
+
+  /**
+   * @return the number of region servers that are currently running
+   * @throws IOException if a remote or network exception occurs
+   */
+  public int getCurrentNrHRS() throws IOException {
+    try {
+      // We go to zk rather than to master to get count of regions to avoid
+      // HTable having a Master dependency.  See HBase-2828
+      return ZKUtil.getNumberOfChildren(this.connection.getZooKeeperWatcher(),
+          this.connection.getZooKeeperWatcher().rsZNode);
+    } catch (KeeperException ke) {
+      throw new IOException("Unexpected ZooKeeper exception", ke);
+    }
+  }
+
+  /**
+   * Tells whether or not a table is enabled.
+   * @param tableName Name of table to check.
+   * @return {@code true} if table is online.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public static boolean isTableEnabled(String tableName) throws IOException {
+    return isTableEnabled(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Tells whether or not a table is enabled.
+   * @param tableName Name of table to check.
+   * @return {@code true} if table is online.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public static boolean isTableEnabled(byte[] tableName) throws IOException {
+    return isTableEnabled(HBaseConfiguration.create(), tableName);
+  }
+
+  /**
+   * Tells whether or not a table is enabled.
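+   * <p>For example (a sketch; "myTable" is illustrative):
+   * <pre>{@code
+   * Configuration conf = HBaseConfiguration.create();
+   * boolean enabled = HTable.isTableEnabled(conf, "myTable");
+   * }</pre>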
+   * @param conf The Configuration object to use.
+   * @param tableName Name of table to check.
+   * @return {@code true} if table is online.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public static boolean isTableEnabled(Configuration conf, String tableName)
+  throws IOException {
+    return isTableEnabled(conf, Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Tells whether or not a table is enabled.
+   * @param conf The Configuration object to use.
+   * @param tableName Name of table to check.
+   * @return {@code true} if table is online.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public static boolean isTableEnabled(Configuration conf, byte[] tableName)
+  throws IOException {
+    return HConnectionManager.getConnection(conf).isTableEnabled(tableName);
+  }
+
+  /**
+   * Find region location hosting passed row using cached info
+   * @param row Row to find.
+   * @return The location of the given row.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionLocation getRegionLocation(final String row)
+  throws IOException {
+    return connection.getRegionLocation(tableName, Bytes.toBytes(row), false);
+  }
+
+  /**
+   * Finds the region on which the given row is being served.
+   * @param row Row to find.
+   * @return Location of the row.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public HRegionLocation getRegionLocation(final byte [] row)
+  throws IOException {
+    return connection.getRegionLocation(tableName, row, false);
+  }
+
+  @Override
+  public byte [] getTableName() {
+    return this.tableName;
+  }
+
+  /**
+   * <em>INTERNAL</em> Used by unit tests and tools to do low-level
+   * manipulations.
+   * @return An HConnection instance.
+   */
+  // TODO(tsuna): Remove this.  Unit tests shouldn't require public helpers.
+  public HConnection getConnection() {
+    return this.connection;
+  }
+
+  /**
+   * Gets the number of rows that a scanner will fetch at once.
+   * <p>
+   * The default value comes from {@code hbase.client.scanner.caching}.
+   */
+  public int getScannerCaching() {
+    return scannerCaching;
+  }
+
+  /**
+   * Sets the number of rows that a scanner will fetch at once.
+   * <p>
+   * This will override the value specified by
+   * {@code hbase.client.scanner.caching}.
+   * Increasing this value will reduce the amount of work needed each time
+   * {@code next()} is called on a scanner, at the expense of memory use
+   * (since more rows will need to be maintained in memory by the scanners).
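+   * <p>For example, given a {@link Configuration} {@code conf} (a sketch; the
+   * table name and the value 500 are illustrative):
+   * <pre>{@code
+   * HTable table = new HTable(conf, "myTable");
+   * table.setScannerCaching(500);  // fetch up to 500 rows per RPC to the regionserver
+   * ResultScanner scanner = table.getScanner(new Scan());
+   * }</pre>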
+   * @param scannerCaching the number of rows a scanner will fetch at once.
+   */
+  public void setScannerCaching(int scannerCaching) {
+    this.scannerCaching = scannerCaching;
+  }
+
+  @Override
+  public HTableDescriptor getTableDescriptor() throws IOException {
+    return new UnmodifyableHTableDescriptor(
+      this.connection.getHTableDescriptor(this.tableName));
+  }
+
+  /**
+   * Gets the starting row key for every region in the currently open table.
+   * <p>
+   * This is mainly useful for the MapReduce integration.
+   * @return Array of region starting row keys
+   * @throws IOException if a remote or network exception occurs
+   */
+  public byte [][] getStartKeys() throws IOException {
+    return getStartEndKeys().getFirst();
+  }
+
+  /**
+   * Gets the ending row key for every region in the currently open table.
+   * <p>
+   * This is mainly useful for the MapReduce integration.
+   * @return Array of region ending row keys
+   * @throws IOException if a remote or network exception occurs
+   */
+  public byte[][] getEndKeys() throws IOException {
+    return getStartEndKeys().getSecond();
+  }
+
+  /**
+   * Gets the starting and ending row keys for every region in the currently
+   * open table.
+   * <p>
+   * This is mainly useful for the MapReduce integration.
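+   * <p>For example, given an open HTable {@code table} (a sketch):
+   * <pre>{@code
+   * Pair<byte[][], byte[][]> keys = table.getStartEndKeys();
+   * byte[][] startKeys = keys.getFirst();
+   * byte[][] endKeys = keys.getSecond();
+   * }</pre>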
+   * @return Pair of arrays of region starting and ending row keys
+   * @throws IOException if a remote or network exception occurs
+   */
+  @SuppressWarnings("unchecked")
+  public Pair<byte[][],byte[][]> getStartEndKeys() throws IOException {
+    final List<byte[]> startKeyList = new ArrayList<byte[]>();
+    final List<byte[]> endKeyList = new ArrayList<byte[]>();
+    MetaScannerVisitor visitor = new MetaScannerVisitor() {
+      public boolean processRow(Result rowResult) throws IOException {
+        byte [] bytes = rowResult.getValue(HConstants.CATALOG_FAMILY,
+          HConstants.REGIONINFO_QUALIFIER);
+        if (bytes == null) {
+          LOG.warn("Null " + HConstants.REGIONINFO_QUALIFIER + " cell in " +
+            rowResult);
+          return true;
+        }
+        HRegionInfo info = Writables.getHRegionInfo(bytes);
+        if (Bytes.equals(info.getTableDesc().getName(), getTableName())) {
+          if (!(info.isOffline() || info.isSplit())) {
+            startKeyList.add(info.getStartKey());
+            endKeyList.add(info.getEndKey());
+          }
+        }
+        return true;
+      }
+    };
+    MetaScanner.metaScan(configuration, visitor, this.tableName);
+    return new Pair<byte[][], byte[][]>(
+        startKeyList.toArray(new byte[startKeyList.size()][]),
+        endKeyList.toArray(new byte[endKeyList.size()][]));
+  }
+
+  /**
+   * Gets all the regions and their address for this table.
+   * <p>
+   * This is mainly useful for the MapReduce integration.
+   * @return A map of HRegionInfo with its server address
+   * @throws IOException if a remote or network exception occurs
+   */
+  public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException {
+    final Map<HRegionInfo, HServerAddress> regionMap =
+      new TreeMap<HRegionInfo, HServerAddress>();
+
+    MetaScannerVisitor visitor = new MetaScannerVisitor() {
+      public boolean processRow(Result rowResult) throws IOException {
+        HRegionInfo info = Writables.getHRegionInfo(
+            rowResult.getValue(HConstants.CATALOG_FAMILY,
+                HConstants.REGIONINFO_QUALIFIER));
+
+        if (!(Bytes.equals(info.getTableDesc().getName(), getTableName()))) {
+          return false;
+        }
+
+        HServerAddress server = new HServerAddress();
+        byte [] value = rowResult.getValue(HConstants.CATALOG_FAMILY,
+            HConstants.SERVER_QUALIFIER);
+        if (value != null && value.length > 0) {
+          String address = Bytes.toString(value);
+          server = new HServerAddress(address);
+        }
+
+        if (!(info.isOffline() || info.isSplit())) {
+          regionMap.put(new UnmodifyableHRegionInfo(info), server);
+        }
+        return true;
+      }
+
+    };
+    MetaScanner.metaScan(configuration, visitor, tableName);
+    return regionMap;
+  }
+
+  /**
+   * Saves the passed region information into the table's region
+   * cache.
+   * <p>
+   * This is mainly useful for the MapReduce integration. You can call
+   * {@link #deserializeRegionInfo deserializeRegionInfo}
+   * to deserialize regions information from a
+   * {@link DataInput}, then call this method to load them to cache.
+   *
+   * <pre>
+   * {@code
+   * HTable t1 = new HTable("foo");
+   * FileInputStream fis = new FileInputStream("regions.dat");
+   * DataInputStream dis = new DataInputStream(fis);
+   *
+   * Map<HRegionInfo, HServerAddress> hm = t1.deserializeRegionInfo(dis);
+   * t1.prewarmRegionCache(hm);
+   * }
+   * </pre>
+   * @param regionMap Region information to be loaded into the region
+   * cache.
+   */
+  public void prewarmRegionCache(Map<HRegionInfo, HServerAddress> regionMap) {
+    this.connection.prewarmRegionCache(this.getTableName(), regionMap);
+  }
+
+  /**
+   * Serialize this table's region information and output
+   * it to <code>out</code>.
+   * <p>
+   * This is mainly useful for the MapReduce integration. A client could
+   * perform a large scan for all the regions of the table, serialize the
+   * region info to a file, and an MR job can then ship a copy of the table's
+   * meta in the DistributedCache.
+   * <pre>
+   * {@code
+   * FileOutputStream fos = new FileOutputStream("regions.dat");
+   * DataOutputStream dos = new DataOutputStream(fos);
+   * table.serializeRegionInfo(dos);
+   * dos.flush();
+   * dos.close();
+   * }
+   * </pre>
+   * @param out {@link DataOutput} to serialize this object into.
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void serializeRegionInfo(DataOutput out) throws IOException {
+    Map<HRegionInfo, HServerAddress> allRegions = this.getRegionsInfo();
+    // first, write number of regions
+    out.writeInt(allRegions.size());
+    for (Map.Entry<HRegionInfo, HServerAddress> es : allRegions.entrySet()) {
+      es.getKey().write(out);
+      es.getValue().write(out);
+    }
+  }
+
+  /**
+   * Read from <code>in</code> and deserialize the regions information.
+   *
+   * <p>It behaves similarly to {@link #getRegionsInfo getRegionsInfo}, except
+   * that it loads the region map from a {@link DataInput} object.
+   *
+   * <p>It is supposed to be followed immediately by {@link
+   * #prewarmRegionCache prewarmRegionCache}.
+   *
+   * <p>
+   * Please refer to {@link #prewarmRegionCache prewarmRegionCache} for usage.
+   *
+   * @param in {@link DataInput} object.
+   * @return A map of HRegionInfo with its server address.
+   * @throws IOException if an I/O exception occurs.
+   */
+  public Map<HRegionInfo, HServerAddress> deserializeRegionInfo(DataInput in)
+  throws IOException {
+    final Map<HRegionInfo, HServerAddress> allRegions =
+      new TreeMap<HRegionInfo, HServerAddress>();
+
+    // the first integer is expected to be the number of records
+    int regionsCount = in.readInt();
+    for (int i = 0; i < regionsCount; ++i) {
+      HRegionInfo hri = new HRegionInfo();
+      hri.readFields(in);
+      HServerAddress hsa = new HServerAddress();
+      hsa.readFields(in);
+      allRegions.put(hri, hsa);
+    }
+    return allRegions;
+  }
+
+   @Override
+   public Result getRowOrBefore(final byte[] row, final byte[] family)
+   throws IOException {
+     return connection.getRegionServerWithRetries(
+         new ServerCallable<Result>(connection, tableName, row) {
+       public Result call() throws IOException {
+         return server.getClosestRowBefore(location.getRegionInfo().getRegionName(),
+           row, family);
+       }
+     });
+   }
+
+  @Override
+  public ResultScanner getScanner(final Scan scan) throws IOException {
+    ClientScanner s = new ClientScanner(scan);
+    s.initialize();
+    return s;
+  }
+
+  @Override
+  public ResultScanner getScanner(byte [] family) throws IOException {
+    Scan scan = new Scan();
+    scan.addFamily(family);
+    return getScanner(scan);
+  }
+
+  @Override
+  public ResultScanner getScanner(byte [] family, byte [] qualifier)
+  throws IOException {
+    Scan scan = new Scan();
+    scan.addColumn(family, qualifier);
+    return getScanner(scan);
+  }
+
+  public Result get(final Get get) throws IOException {
+    return connection.getRegionServerWithRetries(
+        new ServerCallable<Result>(connection, tableName, get.getRow()) {
+          public Result call() throws IOException {
+            return server.get(location.getRegionInfo().getRegionName(), get);
+          }
+        }
+    );
+  }
+
+   public Result[] get(List<Get> gets) throws IOException {
+     try {
+       Object [] r1 = batch((List)gets);
+
+       // translate.
+       Result [] results = new Result[r1.length];
+       int i=0;
+       for (Object o : r1) {
+         // batch ensures if there is a failure we get an exception instead
+         results[i++] = (Result) o;
+       }
+
+       return results;
+     } catch (InterruptedException e) {
+       throw new IOException(e);
+     }
+   }
+
+  /**
+   * Method that does a batch call on Deletes, Gets and Puts.  The execution
+   * order of the actions is not defined; if you do a Put and a Get in the
+   * same {@link #batch} call, the Get is not guaranteed to return what the
+   * Put had put.
+   *
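+   * <p>A minimal sketch (assumes an open table; row, family and qualifier
+   * names are illustrative):
+   * <pre>
+   * {@code
+   * List<Row> actions = new ArrayList<Row>();
+   * Put put = new Put(Bytes.toBytes("row1"));          // placeholder row key
+   * put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+   * actions.add(put);
+   * actions.add(new Get(Bytes.toBytes("row2")));
+   * Object[] results = new Object[actions.size()];
+   * table.batch(actions, results);
+   * }
+   * </pre>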
+   * @param actions list of Get, Put, Delete objects
+   * @param results Empty Object[], same size as actions. Provides access to
+   * partial results, in case an exception is thrown. If there are any failures,
+   * the corresponding entry will be null or a Throwable, AND an
+   * exception will be thrown.
+   * @throws IOException
+   */
+  @Override
+  public synchronized void batch(final List<Row> actions, final Object[] results)
+      throws InterruptedException, IOException {
+    connection.processBatch(actions, tableName, pool, results);
+  }
+
+  /**
+   * Method that does a batch call on Deletes, Gets and Puts.
+   *
+   * @param actions list of Get, Put, Delete objects
+   * @return the results from the actions. A null in the return array means that
+   * the call for that action failed, even after retries
+   * @throws IOException
+   */
+  @Override
+  public synchronized Object[] batch(final List<Row> actions) throws InterruptedException, IOException {
+    Object[] results = new Object[actions.size()];
+    connection.processBatch(actions, tableName, pool, results);
+    return results;
+  }
+
+  /**
+   * Deletes the specified cells/row.
+   *
+   * @param delete The object that specifies what to delete.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  @Override
+  public void delete(final Delete delete)
+  throws IOException {
+    connection.getRegionServerWithRetries(
+        new ServerCallable<Boolean>(connection, tableName, delete.getRow()) {
+          public Boolean call() throws IOException {
+            server.delete(location.getRegionInfo().getRegionName(), delete);
+            return null; // FindBugs NP_BOOLEAN_RETURN_NULL
+          }
+        }
+    );
+  }
+
+  /**
+   * Deletes the specified cells/rows in bulk.
+   * @param deletes List of things to delete. As a side effect, it will be modified:
+   * successful {@link Delete}s are removed. The ordering of the list will not change.
+   * @throws IOException if a remote or network exception occurs. In that case
+   * the {@code deletes} argument will contain the {@link Delete} instances
+   * that have not been successfully applied.
+   * @since 0.20.1
+   * @see #batch(java.util.List, Object[])
+   */
+  @Override
+  public void delete(final List<Delete> deletes)
+  throws IOException {
+    Object[] results = new Object[deletes.size()];
+    try {
+      connection.processBatch((List) deletes, tableName, pool, results);
+    } catch (InterruptedException e) {
+      throw new IOException(e);
+    } finally {
+      // mutate list so that it is empty for complete success, or contains only failed records
+      // results are returned in the same order as the requests in list
+      // walk the list backwards, so we can remove from list without impacting the indexes of earlier members
+      for (int i = results.length - 1; i>=0; i--) {
+        // if result is not null, it succeeded
+        if (results[i] instanceof Result) {
+          deletes.remove(i);
+        }
+      }
+    }
+  }
+
+  @Override
+  public void put(final Put put) throws IOException {
+    doPut(Arrays.asList(put));
+  }
+
+  @Override
+  public void put(final List<Put> puts) throws IOException {
+    doPut(puts);
+  }
+
+  private void doPut(final List<Put> puts) throws IOException {
+    for (Put put : puts) {
+      validatePut(put);
+      writeBuffer.add(put);
+      currentWriteBufferSize += put.heapSize();
+    }
+    if (autoFlush || currentWriteBufferSize > writeBufferSize) {
+      flushCommits();
+    }
+  }
+
+  @Override
+  public Result increment(final Increment increment) throws IOException {
+    if (!increment.hasFamilies()) {
+      throw new IOException(
+          "Invalid arguments to increment, no columns specified");
+    }
+    return connection.getRegionServerWithRetries(
+        new ServerCallable<Result>(connection, tableName, increment.getRow()) {
+          public Result call() throws IOException {
+            return server.increment(
+                location.getRegionInfo().getRegionName(), increment);
+          }
+        }
+    );
+  }
+
+  @Override
+  public long incrementColumnValue(final byte [] row, final byte [] family,
+      final byte [] qualifier, final long amount)
+  throws IOException {
+    return incrementColumnValue(row, family, qualifier, amount, true);
+  }
+
+  @Override
+  public long incrementColumnValue(final byte [] row, final byte [] family,
+      final byte [] qualifier, final long amount, final boolean writeToWAL)
+  throws IOException {
+    NullPointerException npe = null;
+    if (row == null) {
+      npe = new NullPointerException("row is null");
+    } else if (family == null) {
+      npe = new NullPointerException("column is null");
+    }
+    if (npe != null) {
+      throw new IOException(
+          "Invalid arguments to incrementColumnValue", npe);
+    }
+    return connection.getRegionServerWithRetries(
+        new ServerCallable<Long>(connection, tableName, row) {
+          public Long call() throws IOException {
+            return server.incrementColumnValue(
+                location.getRegionInfo().getRegionName(), row, family,
+                qualifier, amount, writeToWAL);
+          }
+        }
+    );
+  }
+
+  /**
+   * Atomically checks if a row/family/qualifier value matches the expected
+   * value.  If it does, it adds the put.  If value == null, checks for
+   * non-existence of the value.
+   *
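+   * <p>A minimal sketch (assumes an open table; row, family, qualifier and
+   * value names are illustrative):
+   * <pre>
+   * {@code
+   * Put put = new Put(Bytes.toBytes("row1"));          // placeholder names
+   * put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("newVal"));
+   * boolean applied = table.checkAndPut(Bytes.toBytes("row1"),
+   *   Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("oldVal"), put);
+   * }
+   * </pre>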
+   * @param row to check
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param value the expected value
+   * @param put put to execute if value matches.
+   * @throws IOException
+   * @return true if the new put was executed, false otherwise
+   */
+  @Override
+  public boolean checkAndPut(final byte [] row,
+      final byte [] family, final byte [] qualifier, final byte [] value,
+      final Put put)
+  throws IOException {
+    return connection.getRegionServerWithRetries(
+        new ServerCallable<Boolean>(connection, tableName, row) {
+          public Boolean call() throws IOException {
+            return server.checkAndPut(location.getRegionInfo().getRegionName(),
+                row, family, qualifier, value, put) ? Boolean.TRUE : Boolean.FALSE;
+          }
+        }
+    );
+  }
+
+  /**
+   * Atomically checks if a row/family/qualifier value matches the expected
+   * value.  If it does, it adds the delete.  If value == null, checks for
+   * non-existence of the value.
+   *
+   * @param row to check
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param value the expected value
+   * @param delete delete to execute if value matches.
+   * @throws IOException
+   * @return true if the new delete was executed, false otherwise
+   */
+  @Override
+  public boolean checkAndDelete(final byte [] row,
+      final byte [] family, final byte [] qualifier, final byte [] value,
+      final Delete delete)
+  throws IOException {
+    return connection.getRegionServerWithRetries(
+        new ServerCallable<Boolean>(connection, tableName, row) {
+          public Boolean call() throws IOException {
+            return server.checkAndDelete(
+                location.getRegionInfo().getRegionName(),
+                row, family, qualifier, value, delete)
+            ? Boolean.TRUE : Boolean.FALSE;
+          }
+        }
+    );
+  }
+
+  /**
+   * Test for the existence of columns in the table, as specified in the Get.<p>
+   *
+   * This will return true if the Get matches one or more keys, false if not.<p>
+   *
+   * This is a server-side call so it prevents any data from being transferred
+   * to the client.
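+   * <p>A minimal sketch (assumes an open table; names are illustrative):
+   * <pre>
+   * {@code
+   * Get get = new Get(Bytes.toBytes("row1"));          // placeholder names
+   * get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
+   * boolean found = table.exists(get);
+   * }
+   * </pre>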
+   * @param get param to check for
+   * @return true if the specified Get matches one or more keys, false if not
+   * @throws IOException
+   */
+  @Override
+  public boolean exists(final Get get) throws IOException {
+    return connection.getRegionServerWithRetries(
+        new ServerCallable<Boolean>(connection, tableName, get.getRow()) {
+          public Boolean call() throws IOException {
+            return server.
+                exists(location.getRegionInfo().getRegionName(), get);
+          }
+        }
+    );
+  }
+
+  /**
+   * Executes all the buffered {@link Put} operations.
+   * <p>
+   * This method gets called once automatically for every {@link Put} or batch
+   * of {@link Put}s (when {@link #put(List)} is used) when
+   * {@link #isAutoFlush()} is {@code true}.
+   * @throws IOException if a remote or network exception occurs.
+   */
+  @Override
+  public void flushCommits() throws IOException {
+    try {
+      connection.processBatchOfPuts(writeBuffer, tableName, pool);
+    } finally {
+      // the write buffer was adjusted by processBatchOfPuts
+      currentWriteBufferSize = 0;
+      for (Put aPut : writeBuffer) {
+        currentWriteBufferSize += aPut.heapSize();
+      }
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    flushCommits();
+  }
+
+  // validate for well-formedness
+  private void validatePut(final Put put) throws IllegalArgumentException {
+    if (put.isEmpty()) {
+      throw new IllegalArgumentException("No columns to insert");
+    }
+    if (maxKeyValueSize > 0) {
+      for (List<KeyValue> list : put.getFamilyMap().values()) {
+        for (KeyValue kv : list) {
+          if (kv.getLength() > maxKeyValueSize) {
+            throw new IllegalArgumentException("KeyValue size too large");
+          }
+        }
+      }
+    }
+  }
+
+  @Override
+  public RowLock lockRow(final byte [] row)
+  throws IOException {
+    return connection.getRegionServerWithRetries(
+      new ServerCallable<RowLock>(connection, tableName, row) {
+        public RowLock call() throws IOException {
+          long lockId =
+              server.lockRow(location.getRegionInfo().getRegionName(), row);
+          return new RowLock(row,lockId);
+        }
+      }
+    );
+  }
+
+  @Override
+  public void unlockRow(final RowLock rl)
+  throws IOException {
+    connection.getRegionServerWithRetries(
+      new ServerCallable<Boolean>(connection, tableName, rl.getRow()) {
+        public Boolean call() throws IOException {
+          server.unlockRow(location.getRegionInfo().getRegionName(),
+              rl.getLockId());
+          return null; // FindBugs NP_BOOLEAN_RETURN_NULL
+        }
+      }
+    );
+  }
+
+  @Override
+  public boolean isAutoFlush() {
+    return autoFlush;
+  }
+
+  /**
+   * Turns 'auto-flush' on or off.
+   * <p>
+   * When enabled (default), {@link Put} operations don't get buffered/delayed
+   * and are immediately executed.  This is slower but safer.
+   * <p>
+   * Turning this off means that multiple {@link Put}s will be accepted before
+   * any RPC is actually sent to do the write operations.  If the application
+   * dies before pending writes get flushed to HBase, data will be lost.
+   * Other side effects may include the fact that the application thinks a
+   * {@link Put} was executed successfully whereas it was in fact only
+   * buffered and the operation may fail when attempting to flush all pending
+   * writes.  In that case though, the code will retry the failed {@link Put}
+   * upon its next attempt to flush the buffer.
+   *
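+   * <p>A minimal sketch of buffered writes (assumes an open table; names are
+   * illustrative):
+   * <pre>
+   * {@code
+   * table.setAutoFlush(false);
+   * Put put = new Put(Bytes.toBytes("row1"));          // placeholder names
+   * put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+   * table.put(put);       // buffered client-side, no RPC yet
+   * table.flushCommits(); // sends all buffered Puts
+   * }
+   * </pre>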
+   * @param autoFlush Whether or not to enable 'auto-flush'.
+   * @see #flushCommits
+   */
+  public void setAutoFlush(boolean autoFlush) {
+    this.autoFlush = autoFlush;
+  }
+
+  /**
+   * Returns the maximum size in bytes of the write buffer for this HTable.
+   * <p>
+   * The default value comes from the configuration parameter
+   * {@code hbase.client.write.buffer}.
+   * @return The size of the write buffer in bytes.
+   */
+  public long getWriteBufferSize() {
+    return writeBufferSize;
+  }
+
+  /**
+   * Sets the size of the write buffer in bytes.
+   * <p>
+   * If the new size is less than the current amount of data in the
+   * write buffer, the buffer gets flushed.
+   * @param writeBufferSize The new write buffer size, in bytes.
+   * @throws IOException if a remote or network exception occurs.
+   */
+  public void setWriteBufferSize(long writeBufferSize) throws IOException {
+    this.writeBufferSize = writeBufferSize;
+    if(currentWriteBufferSize > writeBufferSize) {
+      flushCommits();
+    }
+  }
+
+  /**
+   * Returns the write buffer.
+   * @return The current write buffer.
+   */
+  public ArrayList<Put> getWriteBuffer() {
+    return writeBuffer;
+  }
+
+  /**
+   * Implements the scanner interface for the HBase client.
+   * If there are multiple regions in a table, this scanner will iterate
+   * through them all.
+   */
+  protected class ClientScanner implements ResultScanner {
+    private final Log CLIENT_LOG = LogFactory.getLog(this.getClass());
+    // HEADSUP: The scan internal start row can change as we move through the table.
+    private Scan scan;
+    private boolean closed = false;
+    // Current region scanner is against.  Gets cleared if current region goes
+    // wonky: e.g. if it splits on us.
+    private HRegionInfo currentRegion = null;
+    private ScannerCallable callable = null;
+    private final LinkedList<Result> cache = new LinkedList<Result>();
+    private final int caching;
+    private long lastNext;
+    // Keep lastResult returned successfully in case we have to reset scanner.
+    private Result lastResult = null;
+
+    protected ClientScanner(final Scan scan) {
+      if (CLIENT_LOG.isDebugEnabled()) {
+        CLIENT_LOG.debug("Creating scanner over "
+            + Bytes.toString(getTableName())
+            + " starting at key '" + Bytes.toStringBinary(scan.getStartRow()) + "'");
+      }
+      this.scan = scan;
+      this.lastNext = System.currentTimeMillis();
+
+      // Use the caching from the Scan.  If not set, use the default cache setting for this table.
+      if (this.scan.getCaching() > 0) {
+        this.caching = this.scan.getCaching();
+      } else {
+        this.caching = HTable.this.scannerCaching;
+      }
+
+      // Removed filter validation.  We have a new format now, only one of all
+      // the current filters has a validate() method.  We can add it back,
+      // need to decide on what we're going to do re: filter redesign.
+      // Need, at the least, to break up family from qualifier as separate
+      // checks, I think it's important server-side filters are optimal in that
+      // respect.
+    }
+
+    public void initialize() throws IOException {
+      nextScanner(this.caching, false);
+    }
+
+    protected Scan getScan() {
+      return scan;
+    }
+
+    protected long getTimestamp() {
+      return lastNext;
+    }
+
+    // Returns true if the passed region endKey is at or beyond the scan's
+    // stop row, i.e. this is the last region the scan needs to visit.
+    private boolean checkScanStopRow(final byte [] endKey) {
+      if (this.scan.getStopRow().length > 0) {
+        // there is a stop row, check to see if we are past it.
+        byte [] stopRow = scan.getStopRow();
+        int cmp = Bytes.compareTo(stopRow, 0, stopRow.length,
+          endKey, 0, endKey.length);
+        if (cmp <= 0) {
+          // stopRow <= endKey (endKey is equal to or larger than stopRow)
+          // This is a stop.
+          return true;
+        }
+      }
+      return false; //unlikely.
+    }
+
+    /*
+     * Gets a scanner for the next region.  If this.currentRegion != null, then
+     * we will move to the endrow of this.currentRegion.  Else we will get
+     * scanner at the scan.getStartRow().  We will go no further, just tidy
+     * up outstanding scanners, if <code>currentRegion != null</code> and
+     * <code>done</code> is true.
+     * @param nbRows
+     * @param done Server-side says we're done scanning.
+     */
+    private boolean nextScanner(int nbRows, final boolean done)
+    throws IOException {
+      // Close the previous scanner if it's open
+      if (this.callable != null) {
+        this.callable.setClose();
+        getConnection().getRegionServerWithRetries(callable);
+        this.callable = null;
+      }
+
+      // Where to start the next scanner
+      byte [] localStartKey;
+
+      // if we're at end of table, close and return false to stop iterating
+      if (this.currentRegion != null) {
+        byte [] endKey = this.currentRegion.getEndKey();
+        if (endKey == null ||
+            Bytes.equals(endKey, HConstants.EMPTY_BYTE_ARRAY) ||
+            checkScanStopRow(endKey) ||
+            done) {
+          close();
+          if (CLIENT_LOG.isDebugEnabled()) {
+            CLIENT_LOG.debug("Finished with scanning at " + this.currentRegion);
+          }
+          return false;
+        }
+        localStartKey = endKey;
+        if (CLIENT_LOG.isDebugEnabled()) {
+          CLIENT_LOG.debug("Finished with region " + this.currentRegion);
+        }
+      } else {
+        localStartKey = this.scan.getStartRow();
+      }
+
+      if (CLIENT_LOG.isDebugEnabled()) {
+        CLIENT_LOG.debug("Advancing internal scanner to startKey at '" +
+          Bytes.toStringBinary(localStartKey) + "'");
+      }
+      try {
+        callable = getScannerCallable(localStartKey, nbRows);
+        // Open a scanner on the region server starting at the
+        // beginning of the region
+        getConnection().getRegionServerWithRetries(callable);
+        this.currentRegion = callable.getHRegionInfo();
+      } catch (IOException e) {
+        close();
+        throw e;
+      }
+      return true;
+    }
+
+    protected ScannerCallable getScannerCallable(byte [] localStartKey,
+        int nbRows) {
+      scan.setStartRow(localStartKey);
+      ScannerCallable s = new ScannerCallable(getConnection(),
+        getTableName(), scan);
+      s.setCaching(nbRows);
+      return s;
+    }
+
+    public Result next() throws IOException {
+      // If the scanner is closed but there are still rows left in the cache,
+      // it will first empty the cache before returning null
+      if (cache.size() == 0 && this.closed) {
+        return null;
+      }
+      if (cache.size() == 0) {
+        Result [] values = null;
+        long remainingResultSize = maxScannerResultSize;
+        int countdown = this.caching;
+        // We need to reset it if it's a new callable that was created
+        // with a countdown in nextScanner
+        callable.setCaching(this.caching);
+        // This flag is set when we want to skip the result returned.  We do
+        // this when we reset scanner because it split under us.
+        boolean skipFirst = false;
+        do {
+          try {
+            // Server returns null if scanning is to stop.  Else, it
+            // returns an empty array if scanning is to go on and we've just
+            // exhausted the current region.
+            values = getConnection().getRegionServerWithRetries(callable);
+            if (skipFirst) {
+              skipFirst = false;
+              // Reget.
+              values = getConnection().getRegionServerWithRetries(callable);
+            }
+          } catch (DoNotRetryIOException e) {
+            if (e instanceof UnknownScannerException) {
+              long timeout = lastNext + scannerTimeout;
+              // If we are over the timeout, throw this exception to the client
+              // Else, it's because the region moved and we used the old id
+              // against the new region server; reset the scanner.
+              if (timeout < System.currentTimeMillis()) {
+                long elapsed = System.currentTimeMillis() - lastNext;
+                ScannerTimeoutException ex = new ScannerTimeoutException(
+                    elapsed + "ms passed since the last invocation, " +
+                        "timeout is currently set to " + scannerTimeout);
+                ex.initCause(e);
+                throw ex;
+              }
+            } else {
+              Throwable cause = e.getCause();
+              if (cause == null || !(cause instanceof NotServingRegionException)) {
+                throw e;
+              }
+            }
+            // Else, it's a signal from the depths of ScannerCallable that we
+            // got an NSRE on a next and that we need to reset the scanner.
+            if (this.lastResult != null) {
+              this.scan.setStartRow(this.lastResult.getRow());
+              // Skip first row returned.  We already let it out on previous
+              // invocation.
+              skipFirst = true;
+            }
+            // Clear region
+            this.currentRegion = null;
+            continue;
+          }
+          lastNext = System.currentTimeMillis();
+          if (values != null && values.length > 0) {
+            for (Result rs : values) {
+              cache.add(rs);
+              for (KeyValue kv : rs.raw()) {
+                  remainingResultSize -= kv.heapSize();
+              }
+              countdown--;
+              this.lastResult = rs;
+            }
+          }
+          // Values == null means server-side filter has determined we must STOP
+        } while (remainingResultSize > 0 && countdown > 0 && nextScanner(countdown, values == null));
+      }
+
+      if (cache.size() > 0) {
+        return cache.poll();
+      }
+      return null;
+    }
+
+    /**
+     * Get up to {@code nbRows} rows.
+     * How many RPCs are made is determined by the {@link Scan#setCaching(int)}
+     * setting (or hbase.client.scanner.caching in hbase-site.xml).
+     * @param nbRows number of rows to return
+     * @return Between zero and {@code nbRows} Results.  The scan is done
+     * if the returned array is of zero length (we never return null).
+     * @throws IOException
+     */
+    public Result [] next(int nbRows) throws IOException {
+      // Collect values to be returned here
+      ArrayList<Result> resultSets = new ArrayList<Result>(nbRows);
+      for(int i = 0; i < nbRows; i++) {
+        Result next = next();
+        if (next != null) {
+          resultSets.add(next);
+        } else {
+          break;
+        }
+      }
+      return resultSets.toArray(new Result[resultSets.size()]);
+    }
+
+    public void close() {
+      if (callable != null) {
+        callable.setClose();
+        try {
+          getConnection().getRegionServerWithRetries(callable);
+        } catch (IOException e) {
+          // We used to catch this error, interpret, and rethrow. However, we
+          // have since decided that it's not nice for a scanner's close to
+          // throw exceptions. Chances are it was just an UnknownScanner
+          // exception due to lease time out.
+        }
+        callable = null;
+      }
+      closed = true;
+    }
+
+    public Iterator<Result> iterator() {
+      return new Iterator<Result>() {
+        // The next Result, possibly pre-read
+        Result next = null;
+
+        // return true if there is another item pending, false if there isn't.
+        // this method is where the actual advancing takes place, but you need
+        // to call next() to consume it. hasNext() will only advance if there
+        // isn't a pending next().
+        public boolean hasNext() {
+          if (next == null) {
+            try {
+              next = ClientScanner.this.next();
+              return next != null;
+            } catch (IOException e) {
+              throw new RuntimeException(e);
+            }
+          }
+          return true;
+        }
+
+        // get the pending next item and advance the iterator. returns null if
+        // there is no next item.
+        public Result next() {
+          // since hasNext() does the real advancing, we call this to determine
+          // if there is a next before proceeding.
+          if (!hasNext()) {
+            return null;
+          }
+
+          // if we get to here, then hasNext() has given us an item to return.
+          // we want to return the item and then null out the next pointer, so
+          // we use a temporary variable.
+          Result temp = next;
+          next = null;
+          return temp;
+        }
+
+        public void remove() {
+          throw new UnsupportedOperationException();
+        }
+      };
+    }
+  }
+
+  static class DaemonThreadFactory implements ThreadFactory {
+    static final AtomicInteger poolNumber = new AtomicInteger(1);
+    final ThreadGroup group;
+    final AtomicInteger threadNumber = new AtomicInteger(1);
+    final String namePrefix;
+
+    DaemonThreadFactory() {
+      SecurityManager s = System.getSecurityManager();
+      group = (s != null) ? s.getThreadGroup() :
+                            Thread.currentThread().getThreadGroup();
+      namePrefix = "pool-" + poolNumber.getAndIncrement() + "-thread-";
+    }
+
+    public Thread newThread(Runnable r) {
+      Thread t = new Thread(group, r,
+          namePrefix + threadNumber.getAndIncrement(), 0);
+      if (!t.isDaemon()) {
+        t.setDaemon(true);
+      }
+      if (t.getPriority() != Thread.NORM_PRIORITY) {
+        t.setPriority(Thread.NORM_PRIORITY);
+      }
+      return t;
+    }
+  }
+
+  /**
+   * Enable or disable region cache prefetch for the table. It will be
+   * applied to all HTable instances of the given table that share the same
+   * connection. By default, the cache prefetch is enabled.
+   * @param tableName name of table to configure.
+   * @param enable Set to true to enable region cache prefetch. Or set to
+   * false to disable it.
+   * @throws ZooKeeperConnectionException
+   */
+  public static void setRegionCachePrefetch(final byte[] tableName,
+      boolean enable) throws ZooKeeperConnectionException {
+    HConnectionManager.getConnection(HBaseConfiguration.create()).
+    setRegionCachePrefetch(tableName, enable);
+  }
+
+  /**
+   * Enable or disable region cache prefetch for the table. It will be
+   * applied to all HTable instances of the given table that share the same
+   * connection. By default, the cache prefetch is enabled.
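+   * <p>For example (a sketch; {@code conf} and the table name are illustrative):
+   * <pre>
+   * {@code
+   * HTable.setRegionCachePrefetch(conf, Bytes.toBytes("myTable"), false);
+   * }
+   * </pre>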
+   * @param conf The Configuration object to use.
+   * @param tableName name of table to configure.
+   * @param enable Set to true to enable region cache prefetch. Or set to
+   * false to disable it.
+   * @throws ZooKeeperConnectionException
+   */
+  public static void setRegionCachePrefetch(final Configuration conf,
+      final byte[] tableName, boolean enable) throws ZooKeeperConnectionException {
+    HConnectionManager.getConnection(conf).setRegionCachePrefetch(
+        tableName, enable);
+  }
+
+  /**
+   * Check whether region cache prefetch is enabled or not for the table.
+   * @param conf The Configuration object to use.
+   * @param tableName name of table to check
+   * @return true if the table's region cache prefetch is enabled,
+   * false otherwise.
+   * @throws ZooKeeperConnectionException
+   */
+  public static boolean getRegionCachePrefetch(final Configuration conf,
+      final byte[] tableName) throws ZooKeeperConnectionException {
+    return HConnectionManager.getConnection(conf).getRegionCachePrefetch(
+        tableName);
+  }
+
+  /**
+   * Check whether region cache prefetch is enabled or not for the table.
+   * @param tableName name of table to check
+   * @return true if the table's region cache prefetch is enabled,
+   * false otherwise.
+   * @throws ZooKeeperConnectionException
+   */
+  public static boolean getRegionCachePrefetch(final byte[] tableName) throws ZooKeeperConnectionException {
+    return HConnectionManager.getConnection(HBaseConfiguration.create()).
+    getRegionCachePrefetch(tableName);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java
new file mode 100644
index 0000000..b755896
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+
+import java.io.IOException;
+
+/**
+ * Factory for creating HTable instances.
+ *
+ * @since 0.21.0
+ */
+public class HTableFactory implements HTableInterfaceFactory {
+  @Override
+  public HTableInterface createHTableInterface(Configuration config,
+      byte[] tableName) {
+    try {
+      return new HTable(config, tableName);
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+
+  @Override
+  public void releaseHTableInterface(HTableInterface table) {
+    try {
+      table.close();
+    } catch (IOException ioe) {
+      throw new RuntimeException(ioe);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
new file mode 100644
index 0000000..b99ed04
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
@@ -0,0 +1,353 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * Used to communicate with a single HBase table.
+ *
+ * @since 0.21.0
+ */
+public interface HTableInterface {
+
+  /**
+   * Gets the name of this table.
+   *
+   * @return the table name.
+   */
+  byte[] getTableName();
+
+  /**
+   * Returns the {@link Configuration} object used by this instance.
+   * <p>
+   * The reference returned is not a copy, so any change made to it will
+   * affect this instance.
+   */
+  Configuration getConfiguration();
+
+  /**
+   * Gets the {@link HTableDescriptor table descriptor} for this table.
+   * @throws IOException if a remote or network exception occurs.
+   */
+  HTableDescriptor getTableDescriptor() throws IOException;
+
+  /**
+   * Test for the existence of columns in the table, as specified in the Get.
+   * <p>
+   *
+   * This will return true if the Get matches one or more keys, false if not.
+   * <p>
+   *
+   * This is a server-side call so it prevents any data from being transferred to
+   * the client.
+   *
+   * @param get the Get
+   * @return true if the specified Get matches one or more keys, false if not
+   * @throws IOException e
+   */
+  boolean exists(Get get) throws IOException;
+
+  /**
+   * Method that does a batch call on Deletes, Gets and Puts.
+   *
+   * @param actions list of Get, Put, Delete objects
+   * @param results Empty Object[], same size as actions. Provides access to partial
+   *                results, in case an exception is thrown. A null in the result array means that
+   *                the call for that action failed, even after retries
+   * @throws IOException
+   * @since 0.90.0
+   */
+  void batch(final List<Row> actions, final Object[] results) throws IOException, InterruptedException;
+
+  /**
+   * Method that does a batch call on Deletes, Gets and Puts.
+   *
+   *
+   * @param actions list of Get, Put, Delete objects
+   * @return the results from the actions. A null in the return array means that
+   *         the call for that action failed, even after retries
+   * @throws IOException
+   * @since 0.90.0
+   */
+  Object[] batch(final List<Row> actions) throws IOException, InterruptedException;
+
+  /**
+   * Extracts certain cells from a given row.
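+   * <p>A minimal sketch (assumes an open table; names are illustrative):
+   * <pre>
+   * {@code
+   * Get get = new Get(Bytes.toBytes("row1"));          // placeholder names
+   * get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
+   * Result result = table.get(get);
+   * byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
+   * }
+   * </pre>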
+   * @param get The object that specifies what data to fetch and from which row.
+   * @return The data coming from the specified row, if it exists.  If the row
+   * specified doesn't exist, the {@link Result} instance returned won't
+   * contain any {@link KeyValue}, as indicated by {@link Result#isEmpty()}.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  Result get(Get get) throws IOException;
+
+  /**
+   * Extracts certain cells from the given rows, in batch.
+   *
+   * @param gets The objects that specify what data to fetch and from which rows.
+   *
+   * @return The data coming from the specified rows, if it exists.  If the row
+   *         specified doesn't exist, the {@link Result} instance returned won't
+   *         contain any {@link KeyValue}, as indicated by {@link Result#isEmpty()}.
+   *         A null in the return array means that the get operation for that
+   *         Get failed, even after retries.
+   * @throws IOException if a remote or network exception occurs.
+   *
+   * @since 0.90.0
+   */
+  Result[] get(List<Get> gets) throws IOException;
+
+  /**
+   * Return the row that matches <i>row</i> exactly,
+   * or the one that immediately precedes it.
+   *
+   * @param row A row key.
+   * @param family Column family to include in the {@link Result}.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  Result getRowOrBefore(byte[] row, byte[] family) throws IOException;
+
+  /**
+   * Returns a scanner on the current table as specified by the {@link Scan}
+   * object.
+   *
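+   * <p>A minimal sketch (assumes an open table; the family name {@code "cf"}
+   * is illustrative):
+   * <pre>
+   * {@code
+   * Scan scan = new Scan();
+   * scan.addFamily(Bytes.toBytes("cf"));   // placeholder family name
+   * ResultScanner scanner = table.getScanner(scan);
+   * try {
+   *   for (Result result : scanner) {
+   *     // process each result
+   *   }
+   * } finally {
+   *   scanner.close();
+   * }
+   * }
+   * </pre>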
+   * @param scan A configured {@link Scan} object.
+   * @return A scanner.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  ResultScanner getScanner(Scan scan) throws IOException;
+
+  /**
+   * Gets a scanner on the current table for the given family.
+   *
+   * @param family The column family to scan.
+   * @return A scanner.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  ResultScanner getScanner(byte[] family) throws IOException;
+
+  /**
+   * Gets a scanner on the current table for the given family and qualifier.
+   *
+   * @param family The column family to scan.
+   * @param qualifier The column qualifier to scan.
+   * @return A scanner.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  ResultScanner getScanner(byte[] family, byte[] qualifier) throws IOException;
+
+
+  /**
+   * Puts some data in the table.
+   * <p>
+   * If {@link #isAutoFlush isAutoFlush} is false, the update is buffered
+   * until the internal buffer is full.
+   * @param put The data to put.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  void put(Put put) throws IOException;
+
+  /**
+   * Puts some data in the table, in batch.
+   * <p>
+   * If {@link #isAutoFlush isAutoFlush} is false, the update is buffered
+   * until the internal buffer is full.
+   * @param puts The list of mutations to apply.  The list gets modified by this
+   * method (in particular it gets re-ordered, so the order in which the elements
+   * are inserted in the list gives no guarantee as to the order in which the
+   * {@link Put}s are executed).
+   * @throws IOException if a remote or network exception occurs. In that case
+   * the {@code puts} argument will contain the {@link Put} instances that
+   * have not been successfully applied.
+   * @since 0.20.0
+   */
+  void put(List<Put> puts) throws IOException;
+
+  /**
+   * Atomically checks if a row/family/qualifier value matches the expected
+   * value. If it does, it adds the put.  If the passed value is null, the check
+   * is for the lack of column (i.e. non-existence).
+   *
+   * @param row to check
+   * @param family column family to check
+   * @param qualifier column qualifier to check
+   * @param value the expected value
+   * @param put data to put if check succeeds
+   * @throws IOException e
+   * @return true if the new put was executed, false otherwise
+   */
+  boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, Put put) throws IOException;
+
+  /**
+   * Deletes the specified cells/row.
+   *
+   * @param delete The object that specifies what to delete.
+   * @throws IOException if a remote or network exception occurs.
+   * @since 0.20.0
+   */
+  void delete(Delete delete) throws IOException;
+
+  /**
+   * Deletes the specified cells/rows in bulk.
+   * @param deletes List of things to delete.  List gets modified by this
+   * method (in particular it gets re-ordered, so the order in which the elements
+   * are inserted in the list gives no guarantee as to the order in which the
+   * {@link Delete}s are executed).
+   * @throws IOException if a remote or network exception occurs. In that case
+   * the {@code deletes} argument will contain the {@link Delete} instances
+   * that have not been successfully applied.
+   * @since 0.20.1
+   */
+  void delete(List<Delete> deletes) throws IOException;
+
+  /**
+   * Atomically checks if a row/family/qualifier value matches the expected
+   * value. If it does, it adds the delete.  If the passed value is null, the
+   * check is for the lack of column (i.e. non-existence).
+   *
+   * @param row to check
+   * @param family column family to check
+   * @param qualifier column qualifier to check
+   * @param value the expected value
+   * @param delete data to delete if check succeeds
+   * @throws IOException e
+   * @return true if the new delete was executed, false otherwise
+   */
+  boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, Delete delete) throws IOException;
+
+  /**
+   * Increments one or more columns within a single row.
+   * <p>
+   * This operation does not appear atomic to readers.  Increments are done
+   * under a single row lock, so write operations to a row are synchronized, but
+   * readers do not take row locks so get and scan operations can see this
+   * operation partially completed.
+   *
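+   * <p>A minimal sketch (assumes an open table; names are illustrative):
+   * <pre>
+   * {@code
+   * Increment increment = new Increment(Bytes.toBytes("row1")); // placeholder
+   * increment.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1);
+   * Result result = table.increment(increment);
+   * }
+   * </pre>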
+   * @param increment object that specifies the columns and amounts to be used
+   *                  for the increment operations
+   * @throws IOException e
+   * @return values of columns after the increment
+   */
+  public Result increment(final Increment increment) throws IOException;
+
+  /**
+   * Atomically increments a column value.
+   * <p>
+   * Equivalent to {@code {@link #incrementColumnValue(byte[], byte[], byte[],
+   * long, boolean) incrementColumnValue}(row, family, qualifier, amount,
+   * <b>true</b>)}
+   * @param row The row that contains the cell to increment.
+   * @param family The column family of the cell to increment.
+   * @param qualifier The column qualifier of the cell to increment.
+   * @param amount The amount to increment the cell with (or decrement, if the
+   * amount is negative).
+   * @return The new value, post increment.
+   * @throws IOException if a remote or network exception occurs.
+   */
+  long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
+      long amount) throws IOException;
+
+  /**
+   * Atomically increments a column value. If the column value already exists
+   * and is not a big-endian long, this could throw an exception. If the column
+   * value does not yet exist it is initialized to <code>amount</code> and
+   * written to the specified column.
+   *
+   * <p>Setting writeToWAL to false means that in a fail scenario, you will lose
+   * any increments that have not been flushed.
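+   * <p>A minimal sketch (assumes an open table; names are illustrative):
+   * <pre>
+   * {@code
+   * long newValue = table.incrementColumnValue(Bytes.toBytes("row1"),
+   *   Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1, true); // placeholder names
+   * }
+   * </pre>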
+   * @param row The row that contains the cell to increment.
+   * @param family The column family of the cell to increment.
+   * @param qualifier The column qualifier of the cell to increment.
+   * @param amount The amount to increment the cell with (or decrement, if the
+   * amount is negative).
+   * @param writeToWAL if {@code true}, the operation will be applied to the
+   * Write Ahead Log (WAL).  This makes the operation slower but safer, as if
+   * the call returns successfully, it is guaranteed that the increment will
+   * be safely persisted.  When set to {@code false}, the call may return
+   * successfully before the increment is safely persisted, so it's possible
+   * that the increment is lost in the event of a failure happening before the
+   * operation gets persisted.
+   * @return The new value, post increment.
+   * @throws IOException if a remote or network exception occurs.
+   */
+  long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
+      long amount, boolean writeToWAL) throws IOException;
+
+  /**
+   * Tells whether or not 'auto-flush' is turned on.
+   *
+   * @return {@code true} if 'auto-flush' is enabled (default), meaning
+   * {@link Put} operations don't get buffered/delayed and are immediately
+   * executed.
+   */
+  boolean isAutoFlush();
+
+  /**
+   * Executes all the buffered {@link Put} operations.
+   * <p>
+   * This method gets called once automatically for every {@link Put} or batch
+   * of {@link Put}s (when {@code put(List<Put>)} is used) when
+   * {@link #isAutoFlush} is {@code true}.
+   * @throws IOException if a remote or network exception occurs.
+   */
+  void flushCommits() throws IOException;
+
+  /**
+   * Releases any resources held or pending changes in internal buffers.
+   *
+   * @throws IOException if a remote or network exception occurs.
+   */
+  void close() throws IOException;
+
+  /**
+   * Obtains a lock on a row.
+   *
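+   * <p>A minimal sketch (assumes an open table; names are illustrative):
+   * <pre>
+   * {@code
+   * RowLock lock = table.lockRow(Bytes.toBytes("row1")); // placeholder row key
+   * try {
+   *   Put put = new Put(Bytes.toBytes("row1"), lock);
+   *   put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+   *   table.put(put);
+   * } finally {
+   *   table.unlockRow(lock);
+   * }
+   * }
+   * </pre>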
+   * @param row The row to lock.
+   * @return A {@link RowLock} containing the row and lock id.
+   * @throws IOException if a remote or network exception occurs.
+   * @see RowLock
+   * @see #unlockRow
+   */
+  RowLock lockRow(byte[] row) throws IOException;
+
+  /**
+   * Releases a row lock.
+   *
+   * @param rl The row lock to release.
+   * @throws IOException if a remote or network exception occurs.
+   * @see RowLock
+   * @see #lockRow
+   */
+  void unlockRow(RowLock rl) throws IOException;
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java
new file mode 100644
index 0000000..5d68291
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java
@@ -0,0 +1,47 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+
+
+/**
+ * Defines methods to create new HTableInterface.
+ *
+ * @since 0.21.0
+ */
+public interface HTableInterfaceFactory {
+
+  /**
+   * Creates a new HTableInterface.
+   *
+   * @param config HBaseConfiguration instance.
+   * @param tableName name of the HBase table.
+   * @return HTableInterface instance.
+   */
+  HTableInterface createHTableInterface(Configuration config, byte[] tableName);
+
+
+  /**
+   * Release the HTable resource represented by the table.
+   * @param table the table instance to release
+   */
+  void releaseHTableInterface(final HTableInterface table);
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java
new file mode 100755
index 0000000..953144b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java
@@ -0,0 +1,168 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.util.LinkedList;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A simple pool of HTable instances.<p>
+ *
+ * Each HTablePool acts as a pool for all tables.  To use, instantiate an
+ * HTablePool and use {@link #getTable(String)} to get an HTable from the pool.
+ * Once you are done with it, return it to the pool with {@link #putTable(HTableInterface)}.
+ * 
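+ * <p>A minimal usage sketch (assumes a Configuration {@code conf}; the table
+ * name {@code "myTable"} is illustrative):
+ * <pre>
+ * {@code
+ * HTablePool pool = new HTablePool(conf, 10);
+ * HTableInterface table = pool.getTable("myTable");   // placeholder name
+ * try {
+ *   Put put = new Put(Bytes.toBytes("row1"));
+ *   put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+ *   table.put(put);
+ * } finally {
+ *   pool.putTable(table);
+ * }
+ * }
+ * </pre>
+ *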
+ * <p>A pool can be created with a <i>maxSize</i> which defines the maximum
+ * number of HTable references that will ever be retained for each table.
+ * Otherwise the default is {@link Integer#MAX_VALUE}.
+ *
+ * <p>The pool will manage its own connection to the cluster. See
+ * {@link HConnectionManager}.
+ */
+public class HTablePool {
+  private final ConcurrentMap<String, LinkedList<HTableInterface>> tables =
+    new ConcurrentHashMap<String, LinkedList<HTableInterface>>();
+  private final Configuration config;
+  private final int maxSize;
+  private final HTableInterfaceFactory tableFactory;
+
+  /**
+   * Default Constructor.  Default HBaseConfiguration and no limit on pool size.
+   */
+  public HTablePool() {
+    this(HBaseConfiguration.create(), Integer.MAX_VALUE);
+  }
+
+  /**
+   * Constructor that sets the maximum pool size and uses the specified configuration.
+   * @param config configuration
+   * @param maxSize maximum number of references to keep for each table
+   */
+  public HTablePool(final Configuration config, final int maxSize) {
+    this(config, maxSize, null);
+  }
+
+  public HTablePool(final Configuration config, final int maxSize,
+      final HTableInterfaceFactory tableFactory) {
+    // Make a new configuration instance so I can safely cleanup when
+    // done with the pool.
+    this.config = config == null? new Configuration(): new Configuration(config);
+    this.maxSize = maxSize;
+    this.tableFactory = tableFactory == null? new HTableFactory(): tableFactory;
+  }
+
+  /**
+   * Get a reference to the specified table from the pool.<p>
+   *
+   * Create a new one if one is not available.
+   * @param tableName table name
+   * @return a reference to the specified table
+   * @throws RuntimeException if there is a problem instantiating the HTable
+   */
+  public HTableInterface getTable(String tableName) {
+    LinkedList<HTableInterface> queue = tables.get(tableName);
+    if(queue == null) {
+      queue = new LinkedList<HTableInterface>();
+      tables.putIfAbsent(tableName, queue);
+      return createHTable(tableName);
+    }
+    HTableInterface table;
+    synchronized(queue) {
+      table = queue.poll();
+    }
+    if(table == null) {
+      return createHTable(tableName);
+    }
+    return table;
+  }
+
+  /**
+   * Get a reference to the specified table from the pool.<p>
+   *
+   * Create a new one if one is not available.
+   * @param tableName table name
+   * @return a reference to the specified table
+   * @throws RuntimeException if there is a problem instantiating the HTable
+   */
+  public HTableInterface getTable(byte [] tableName) {
+    return getTable(Bytes.toString(tableName));
+  }
+
+  /**
+   * Puts the specified HTable back into the pool.<p>
+   *
+   * If the pool already contains <i>maxSize</i> references to the table,
+   * then nothing happens.
+   * @param table table
+   */
+  public void putTable(HTableInterface table) {
+    LinkedList<HTableInterface> queue = tables.get(Bytes.toString(table.getTableName()));
+    synchronized(queue) {
+      if(queue.size() >= maxSize) return;
+      queue.add(table);
+    }
+  }
+
+  protected HTableInterface createHTable(String tableName) {
+    return this.tableFactory.createHTableInterface(config, Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Closes all the HTable instances, belonging to the given table, in the table pool.
+   * <p>
+   * Note: this is a 'shutdown' of the given table pool and different from
+   * {@link #putTable(HTableInterface)}, which is used to return the table
+   * instance to the pool for future re-use.
+   *
+   * @param tableName
+   */
+  public void closeTablePool(final String tableName)  {
+    Queue<HTableInterface> queue = tables.get(tableName);
+    synchronized (queue) {
+      HTableInterface table = queue.poll();
+      while (table != null) {
+        this.tableFactory.releaseHTableInterface(table);
+        table = queue.poll();
+      }
+    }
+    HConnectionManager.deleteConnection(this.config, true);
+  }
+
+  /**
+   * See {@link #closeTablePool(String)}.
+   *
+   * @param tableName
+   */
+  public void closeTablePool(final byte[] tableName)  {
+    closeTablePool(Bytes.toString(tableName));
+  }
+
+  int getCurrentPoolSize(String tableName) {
+    Queue<HTableInterface> queue = tables.get(tableName);
+    synchronized(queue) {
+      return queue.size();
+    }
+  }
+}
\ No newline at end of file
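A minimal usage sketch of the check-out/check-in cycle described in the HTablePool Javadoc above. The pool size, table name, row key, and the HTableInterface#get(Get) call are illustrative assumptions about the surrounding 0.90 client API rather than part of this change:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HTablePoolSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Retain at most 10 HTable references per table.
    HTablePool pool = new HTablePool(conf, 10);
    // Check a table out of the pool (created on demand if none is pooled).
    HTableInterface table = pool.getTable("mytable");   // assumed existing table
    try {
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println(r);
    } finally {
      // Return the reference for re-use instead of closing it.
      pool.putTable(table);
    }
    // When finished with the pool, release all pooled instances for the table.
    pool.closeTablePool("mytable");
  }
}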
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Increment.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Increment.java
new file mode 100644
index 0000000..3aec67f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Increment.java
@@ -0,0 +1,327 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Used to perform Increment operations on a single row.
+ * <p>
+ * This operation does not appear atomic to readers.  Increments are done
+ * under a single row lock, so write operations to a row are synchronized, but
+ * readers do not take row locks so get and scan operations can see this
+ * operation partially completed.
+ * <p>
+ * To increment columns of a row, instantiate an Increment object with the row
+ * to increment.  At least one column to increment must be specified using the
+ * {@link #addColumn(byte[], byte[], long)} method.
+ */
+public class Increment implements Writable {
+  private static final byte INCREMENT_VERSION = (byte)1;
+
+  private byte [] row = null;
+  private long lockId = -1L;
+  private boolean writeToWAL = true;
+  private TimeRange tr = new TimeRange();
+  private Map<byte [], NavigableMap<byte [], Long>> familyMap =
+    new TreeMap<byte [], NavigableMap<byte [], Long>>(Bytes.BYTES_COMPARATOR);
+
+  /** Constructor for Writable.  DO NOT USE */
+  public Increment() {}
+
+  /**
+   * Create an Increment operation for the specified row.
+   * <p>
+   * At least one column must be incremented.
+   * @param row row key
+   */
+  public Increment(byte [] row) {
+    this(row, null);
+  }
+
+  /**
+   * Create an Increment operation for the specified row, using an existing row
+   * lock.
+   * <p>
+   * At least one column must be incremented.
+   * @param row row key
+   * @param rowLock previously acquired row lock, or null
+   */
+  public Increment(byte [] row, RowLock rowLock) {
+    this.row = row;
+    if(rowLock != null) {
+      this.lockId = rowLock.getLockId();
+    }
+  }
+
+  /**
+   * Increment the column in the specified family with the specified qualifier
+   * by the specified amount.
+   * <p>
+   * Overrides previous calls to addColumn for this family and qualifier.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param amount amount to increment by
+   * @return the Increment object
+   */
+  public Increment addColumn(byte [] family, byte [] qualifier, long amount) {
+    NavigableMap<byte [], Long> set = familyMap.get(family);
+    if(set == null) {
+      set = new TreeMap<byte [], Long>(Bytes.BYTES_COMPARATOR);
+    }
+    set.put(qualifier, amount);
+    familyMap.put(family, set);
+    return this;
+  }
+
+  /* Accessors */
+
+  /**
+   * Method for retrieving the increment's row
+   * @return row
+   */
+  public byte [] getRow() {
+    return this.row;
+  }
+
+  /**
+   * Method for retrieving the increment's RowLock
+   * @return RowLock
+   */
+  public RowLock getRowLock() {
+    return new RowLock(this.row, this.lockId);
+  }
+
+  /**
+   * Method for retrieving the increment's lockId
+   * @return lockId
+   */
+  public long getLockId() {
+    return this.lockId;
+  }
+
+  /**
+   * Method for retrieving whether WAL will be written to or not
+   * @return true if WAL should be used, false if not
+   */
+  public boolean getWriteToWAL() {
+    return this.writeToWAL;
+  }
+
+  /**
+   * Sets whether this operation should write to the WAL or not.
+   * @param writeToWAL true if WAL should be used, false if not
+   * @return this increment operation
+   */
+  public Increment setWriteToWAL(boolean writeToWAL) {
+    this.writeToWAL = writeToWAL;
+    return this;
+  }
+
+  /**
+   * Gets the TimeRange used for this increment.
+   * @return TimeRange
+   */
+  public TimeRange getTimeRange() {
+    return this.tr;
+  }
+
+  /**
+   * Sets the TimeRange to be used on the Get for this increment.
+   * <p>
+   * This is useful when you have counters that only last for specific
+   * periods of time (i.e. counters that are partitioned by time).  By setting
+   * the range of valid times for this increment, you can potentially gain
+   * some performance with a more optimal Get operation.
+   * <p>
+   * This range is used as [minStamp, maxStamp).
+   * @param minStamp minimum timestamp value, inclusive
+   * @param maxStamp maximum timestamp value, exclusive
+   * @throws IOException if invalid time range
+   * @return this
+   */
+  public Increment setTimeRange(long minStamp, long maxStamp)
+  throws IOException {
+    tr = new TimeRange(minStamp, maxStamp);
+    return this;
+  }
+
+  /**
+   * Method for retrieving the keys in the familyMap
+   * @return keys in the current familyMap
+   */
+  public Set<byte[]> familySet() {
+    return this.familyMap.keySet();
+  }
+
+  /**
+   * Method for retrieving the number of families to increment from
+   * @return number of families
+   */
+  public int numFamilies() {
+    return this.familyMap.size();
+  }
+
+  /**
+   * Method for retrieving the number of columns to increment
+   * @return number of columns across all families
+   */
+  public int numColumns() {
+    if (!hasFamilies()) return 0;
+    int num = 0;
+    for (NavigableMap<byte [], Long> family : familyMap.values()) {
+      num += family.size();
+    }
+    return num;
+  }
+
+  /**
+   * Method for checking if any families have been inserted into this Increment
+   * @return true if familyMap is non-empty, false otherwise
+   */
+  public boolean hasFamilies() {
+    return !this.familyMap.isEmpty();
+  }
+
+  /**
+   * Method for retrieving the increment's familyMap
+   * @return familyMap
+   */
+  public Map<byte[],NavigableMap<byte[], Long>> getFamilyMap() {
+    return this.familyMap;
+  }
+
+  /**
+   * @return String
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("row=");
+    sb.append(Bytes.toString(this.row));
+    if(this.familyMap.size() == 0) {
+      sb.append(", no columns set to be incremented");
+      return sb.toString();
+    }
+    sb.append(", families=");
+    boolean moreThanOne = false;
+    for(Map.Entry<byte [], NavigableMap<byte[], Long>> entry :
+      this.familyMap.entrySet()) {
+      if(moreThanOne) {
+        sb.append("), ");
+      } else {
+        moreThanOne = true;
+        sb.append("{");
+      }
+      sb.append("(family=");
+      sb.append(Bytes.toString(entry.getKey()));
+      sb.append(", columns=");
+      if(entry.getValue() == null) {
+        sb.append("NONE");
+      } else {
+        sb.append("{");
+        boolean moreThanOneB = false;
+        for(Map.Entry<byte [], Long> column : entry.getValue().entrySet()) {
+          if(moreThanOneB) {
+            sb.append(", ");
+          } else {
+            moreThanOneB = true;
+          }
+          sb.append(Bytes.toString(column.getKey()) + "+=" + column.getValue());
+        }
+        sb.append("}");
+      }
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
+  //Writable
+  public void readFields(final DataInput in)
+  throws IOException {
+    int version = in.readByte();
+    if (version > INCREMENT_VERSION) {
+      throw new IOException("unsupported version");
+    }
+    this.row = Bytes.readByteArray(in);
+    this.tr = new TimeRange();
+    tr.readFields(in);
+    this.lockId = in.readLong();
+    int numFamilies = in.readInt();
+    if (numFamilies == 0) {
+      throw new IOException("At least one column required");
+    }
+    this.familyMap =
+      new TreeMap<byte [],NavigableMap<byte [], Long>>(Bytes.BYTES_COMPARATOR);
+    for(int i=0; i<numFamilies; i++) {
+      byte [] family = Bytes.readByteArray(in);
+      boolean hasColumns = in.readBoolean();
+      NavigableMap<byte [], Long> set = null;
+      if(hasColumns) {
+        int numColumns = in.readInt();
+        set = new TreeMap<byte [], Long>(Bytes.BYTES_COMPARATOR);
+        for(int j=0; j<numColumns; j++) {
+          byte [] qualifier = Bytes.readByteArray(in);
+          set.put(qualifier, in.readLong());
+        }
+      } else {
+        throw new IOException("At least one column required per family");
+      }
+      this.familyMap.put(family, set);
+    }
+  }
+
+  public void write(final DataOutput out)
+  throws IOException {
+    out.writeByte(INCREMENT_VERSION);
+    Bytes.writeByteArray(out, this.row);
+    tr.write(out);
+    out.writeLong(this.lockId);
+    if (familyMap.size() == 0) {
+      throw new IOException("At least one column required");
+    }
+    out.writeInt(familyMap.size());
+    for(Map.Entry<byte [], NavigableMap<byte [], Long>> entry :
+      familyMap.entrySet()) {
+      Bytes.writeByteArray(out, entry.getKey());
+      NavigableMap<byte [], Long> columnSet = entry.getValue();
+      if(columnSet == null) {
+        throw new IOException("At least one column required per family");
+      } else {
+        out.writeBoolean(true);
+        out.writeInt(columnSet.size());
+        for(Map.Entry<byte [], Long> qualifier : columnSet.entrySet()) {
+          Bytes.writeByteArray(out, qualifier.getKey());
+          out.writeLong(qualifier.getValue());
+        }
+      }
+    }
+  }
+}
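A short sketch of the addColumn/setTimeRange flow described in the Increment Javadoc above. The table, family and qualifier names are placeholders, and HTable#increment(Increment) is assumed from the 0.90 client rather than shown in this diff:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "counters");        // assumed existing table
    Increment incr = new Increment(Bytes.toBytes("row1"));
    // At least one column must be specified.
    incr.addColumn(Bytes.toBytes("c"), Bytes.toBytes("hits"), 1L);
    incr.addColumn(Bytes.toBytes("c"), Bytes.toBytes("bytes"), 512L);
    // Optionally restrict the read side of the increment to a time window.
    incr.setTimeRange(0L, Long.MAX_VALUE);
    Result result = table.increment(incr);              // assumed 0.90 HTable method
    System.out.println(result);
  }
}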
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
new file mode 100644
index 0000000..9e3f4d1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java
@@ -0,0 +1,267 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * Scanner class that contains the <code>.META.</code> table scanning logic
+ * and uses a retryable scanner. Provided visitors will be called
+ * for each row.
+ *
+ * Although it has public visibility, this is not a public-facing API and may
+ * evolve in minor releases.
+ */
+public class MetaScanner {
+  private static final Log LOG = LogFactory.getLog(MetaScanner.class);
+  /**
+   * Scans the meta table and calls a visitor on each RowResult, using an empty
+   * start row value as table name.
+   *
+   * @param configuration conf
+   * @param visitor A custom visitor
+   * @throws IOException e
+   */
+  public static void metaScan(Configuration configuration,
+      MetaScannerVisitor visitor)
+  throws IOException {
+    metaScan(configuration, visitor, null);
+  }
+
+  /**
+   * Scans the meta table and calls a visitor on each RowResult. Uses a table
+   * name to locate meta regions.
+   *
+   * @param configuration config
+   * @param visitor visitor object
+   * @param userTableName User table name in meta table to start scan at.  Pass
+   * null if not interested in a particular table.
+   * @throws IOException e
+   */
+  public static void metaScan(Configuration configuration,
+      MetaScannerVisitor visitor, byte [] userTableName)
+  throws IOException {
+    metaScan(configuration, visitor, userTableName, null, Integer.MAX_VALUE);
+  }
+
+  /**
+   * Scans the meta table and calls a visitor on each RowResult. Uses a table
+   * name and a row name to locate meta regions. It scans at most
+   * <code>rowLimit</code> rows.
+   *
+   * @param configuration HBase configuration.
+   * @param visitor Visitor object.
+   * @param userTableName User table name in meta table to start scan at.  Pass
+   * null if not interested in a particular table.
+   * @param row Name of the row in the user table. The scan will start from
+   * the region row where the row resides.
+   * @param rowLimit Maximum number of rows to process. If it is less than 0, it
+   * will be set to the default value <code>Integer.MAX_VALUE</code>.
+   * @throws IOException e
+   */
+  public static void metaScan(Configuration configuration,
+      MetaScannerVisitor visitor, byte [] userTableName, byte[] row,
+      int rowLimit)
+  throws IOException {
+    metaScan(configuration, visitor, userTableName, row, rowLimit,
+      HConstants.META_TABLE_NAME);
+  }
+
+  /**
+   * Scans the meta table and calls a visitor on each RowResult. Uses a table
+   * name and a row name to locate meta regions. It scans at most
+   * <code>rowLimit</code> rows.
+   *
+   * @param configuration HBase configuration.
+   * @param visitor Visitor object.
+   * @param tableName User table name in meta table to start scan at.  Pass
+   * null if not interested in a particular table.
+   * @param row Name of the row in the user table. The scan will start from
+   * the region row where the row resides.
+   * @param rowLimit Maximum number of rows to process. If it is less than 0, it
+   * will be set to the default value <code>Integer.MAX_VALUE</code>.
+   * @param metaTableName Meta table to scan, root or meta.
+   * @throws IOException e
+   */
+  public static void metaScan(Configuration configuration,
+      MetaScannerVisitor visitor, byte [] tableName, byte[] row,
+      int rowLimit, final byte [] metaTableName)
+  throws IOException {
+    int rowUpperLimit = rowLimit > 0 ? rowLimit: Integer.MAX_VALUE;
+
+    HConnection connection = HConnectionManager.getConnection(configuration);
+    // if row is not null, we want to use the startKey of the row's region as
+    // the startRow for the meta scan.
+    byte[] startRow;
+    if (row != null) {
+      // Scan starting at a particular row in a particular table
+      assert tableName != null;
+      byte[] searchRow =
+        HRegionInfo.createRegionName(tableName, row, HConstants.NINES,
+          false);
+
+      HTable metaTable = new HTable(configuration, HConstants.META_TABLE_NAME);
+      Result startRowResult = metaTable.getRowOrBefore(searchRow,
+          HConstants.CATALOG_FAMILY);
+      if (startRowResult == null) {
+        throw new TableNotFoundException("Cannot find row in .META. for table: "
+            + Bytes.toString(tableName) + ", row=" + Bytes.toString(searchRow));
+      }
+      byte[] value = startRowResult.getValue(HConstants.CATALOG_FAMILY,
+          HConstants.REGIONINFO_QUALIFIER);
+      if (value == null || value.length == 0) {
+        throw new IOException("HRegionInfo was null or empty in Meta for " +
+          Bytes.toString(tableName) + ", row=" + Bytes.toString(searchRow));
+      }
+      HRegionInfo regionInfo = Writables.getHRegionInfo(value);
+
+      byte[] rowBefore = regionInfo.getStartKey();
+      startRow = HRegionInfo.createRegionName(tableName, rowBefore,
+          HConstants.ZEROES, false);
+    } else if (tableName == null || tableName.length == 0) {
+      // Full META scan
+      startRow = HConstants.EMPTY_START_ROW;
+    } else {
+      // Scan META for an entire table
+      startRow = HRegionInfo.createRegionName(
+          tableName, HConstants.EMPTY_START_ROW, HConstants.ZEROES, false);
+    }
+
+    // Scan over each meta region
+    ScannerCallable callable;
+    int rows = Math.min(rowLimit,
+        configuration.getInt("hbase.meta.scanner.caching", 100));
+    do {
+      final Scan scan = new Scan(startRow).addFamily(HConstants.CATALOG_FAMILY);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Scanning " + Bytes.toString(metaTableName) +
+          " starting at row=" + Bytes.toString(startRow) + " for max=" +
+          rowUpperLimit + " rows");
+      }
+      callable = new ScannerCallable(connection, metaTableName, scan);
+      // Open scanner
+      connection.getRegionServerWithRetries(callable);
+
+      int processedRows = 0;
+      try {
+        callable.setCaching(rows);
+        done: do {
+          if (processedRows >= rowUpperLimit) {
+            break;
+          }
+          //we have all the rows here
+          Result [] rrs = connection.getRegionServerWithRetries(callable);
+          if (rrs == null || rrs.length == 0 || rrs[0].size() == 0) {
+            break; //exit completely
+          }
+          for (Result rr : rrs) {
+            if (processedRows >= rowUpperLimit) {
+              break done;
+            }
+            if (!visitor.processRow(rr))
+              break done; //exit completely
+            processedRows++;
+          }
+          //here, we didn't break anywhere. Check if we have more rows
+        } while(true);
+        // Advance the startRow to the end key of the current region
+        startRow = callable.getHRegionInfo().getEndKey();
+      } finally {
+        // Close scanner
+        callable.setClose();
+        connection.getRegionServerWithRetries(callable);
+      }
+    } while (Bytes.compareTo(startRow, HConstants.LAST_ROW) != 0);
+  }
+
+  /**
+   * Lists all of the regions currently in META.
+   * @param conf
+   * @return List of all user-space regions.
+   * @throws IOException
+   */
+  public static List<HRegionInfo> listAllRegions(Configuration conf)
+  throws IOException {
+    return listAllRegions(conf, true);
+  }
+
+  /**
+   * Lists all of the regions currently in META.
+   * @param conf
+   * @param offlined True if we are to include offlined regions; false to
+   * leave offlined regions out of the returned list.
+   * @return List of all user-space regions.
+   * @throws IOException
+   */
+  public static List<HRegionInfo> listAllRegions(Configuration conf, final boolean offlined)
+  throws IOException {
+    final List<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+    MetaScannerVisitor visitor =
+      new MetaScannerVisitor() {
+        @Override
+        public boolean processRow(Result result) throws IOException {
+          if (result == null || result.isEmpty()) {
+            return true;
+          }
+          byte [] bytes = result.getValue(HConstants.CATALOG_FAMILY,
+            HConstants.REGIONINFO_QUALIFIER);
+          if (bytes == null) {
+            LOG.warn("Null REGIONINFO_QUALIFIER: " + result);
+            return true;
+          }
+          HRegionInfo regionInfo = Writables.getHRegionInfo(bytes);
+          // If region offline AND we are not to include offlined regions, return.
+          if (regionInfo.isOffline() && !offlined) return true;
+          regions.add(regionInfo);
+          return true;
+        }
+    };
+    metaScan(conf, visitor);
+    return regions;
+  }
+
+  /**
+   * Visitor class called to process each row of the .META. table
+   */
+  public interface MetaScannerVisitor {
+    /**
+     * Visitor method that accepts a RowResult and the meta region location.
+     * Implementations can return false to stop scanning the current region
+     * when further rows are not needed.
+     *
+     * @param rowResult result
+     * @return A boolean to know if it should continue to loop in the region
+     * @throws IOException e
+     */
+    public boolean processRow(Result rowResult) throws IOException;
+  }
+}
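A minimal sketch of wiring a MetaScannerVisitor into the metaScan entry points above; it simply counts the non-empty rows it is shown (everything used here appears in this file or the client package):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.MetaScanner;
import org.apache.hadoop.hbase.client.Result;

public class MetaScanSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    final int[] count = {0};
    MetaScanner.MetaScannerVisitor visitor = new MetaScanner.MetaScannerVisitor() {
      public boolean processRow(Result row) throws IOException {
        if (row != null && !row.isEmpty()) {
          count[0]++;
        }
        return true; // keep scanning; returning false stops the current region
      }
    };
    // Full .META. scan; other overloads narrow by table, start row and row limit.
    MetaScanner.metaScan(conf, visitor);
    System.out.println("rows visited in .META.: " + count[0]);
  }
}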
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java
new file mode 100644
index 0000000..c6ea838
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java
@@ -0,0 +1,122 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.HServerAddress;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.DataInput;
+import java.util.List;
+import java.util.Map;
+import java.util.ArrayList;
+import java.util.Set;
+import java.util.TreeMap;
+
+/**
+ * Container for Actions (i.e. Get, Delete, or Put), which are grouped by
+ * regionName. Intended to be used with HConnectionManager.processBatch()
+ */
+public final class MultiAction implements Writable {
+
+  // map of regions to lists of puts/gets/deletes for that region.
+  public Map<byte[], List<Action>> actions = new TreeMap<byte[], List<Action>>(
+      Bytes.BYTES_COMPARATOR);
+
+  public MultiAction() {
+  }
+
+  /**
+   * Get the total number of Actions
+   *
+   * @return total number of Actions for all groups in this container.
+   */
+  public int size() {
+    int size = 0;
+    for (List l : actions.values()) {
+      size += l.size();
+    }
+    return size;
+  }
+
+  /**
+   * Add an Action to this container based on its regionName. If the regionName
+   * is wrong, the initial execution will fail, but will be automatically
+   * retried after looking up the correct region.
+   *
+   * @param regionName
+   * @param a
+   */
+  public void add(byte[] regionName, Action a) {
+    List<Action> rsActions = actions.get(regionName);
+    if (rsActions == null) {
+      rsActions = new ArrayList<Action>();
+      actions.put(regionName, rsActions);
+    }
+    rsActions.add(a);
+  }
+
+  public Set<byte[]> getRegions() {
+    return actions.keySet();
+  }
+
+  /**
+   * @return All actions from all regions in this container
+   */
+  public List<Action> allActions() {
+    List<Action> res = new ArrayList<Action>();
+    for (List<Action> lst : actions.values()) {
+      res.addAll(lst);
+    }
+    return res;
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(actions.size());
+    for (Map.Entry<byte[], List<Action>> e : actions.entrySet()) {
+      Bytes.writeByteArray(out, e.getKey());
+      List<Action> lst = e.getValue();
+      out.writeInt(lst.size());
+      for (Action a : lst) {
+        HbaseObjectWritable.writeObject(out, a, Action.class, null);
+      }
+    }
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    actions.clear();
+    int mapSize = in.readInt();
+    for (int i = 0; i < mapSize; i++) {
+      byte[] key = Bytes.readByteArray(in);
+      int listSize = in.readInt();
+      List<Action> lst = new ArrayList<Action>(listSize);
+      for (int j = 0; j < listSize; j++) {
+        lst.add((Action) HbaseObjectWritable.readObject(in, null));
+      }
+      actions.put(key, lst);
+    }
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiPut.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiPut.java
new file mode 100644
index 0000000..058fe53
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiPut.java
@@ -0,0 +1,118 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Data type class for putting multiple regions' worth of puts in one RPC.
+ *
+ * @deprecated Use MultiAction instead
+ */
+public class MultiPut implements Writable {
+  public HServerAddress address; // client code ONLY
+
+  // map of regions to lists of puts for that region.
+  public Map<byte[], List<Put> > puts = new TreeMap<byte[], List<Put>>(Bytes.BYTES_COMPARATOR);
+
+  /**
+   * Writable constructor only.
+   */
+  public MultiPut() {}
+
+  /**
+   * MultiPut for putting multiple regions' worth of puts in one RPC.
+   * @param a address
+   */
+  public MultiPut(HServerAddress a) {
+    address = a;
+  }
+
+  public int size() {
+    int size = 0;
+    for( List<Put> l : puts.values()) {
+      size += l.size();
+    }
+    return size;
+  }
+
+  public void add(byte[] regionName, Put aPut) {
+    List<Put> rsput = puts.get(regionName);
+    if (rsput == null) {
+      rsput = new ArrayList<Put>();
+      puts.put(regionName, rsput);
+    }
+    rsput.add(aPut);
+  }
+
+  public Collection<Put> allPuts() {
+    List<Put> res = new ArrayList<Put>();
+    for ( List<Put> pp : puts.values() ) {
+      res.addAll(pp);
+    }
+    return res;
+  }
+
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(puts.size());
+    for( Map.Entry<byte[],List<Put>> e : puts.entrySet()) {
+      Bytes.writeByteArray(out, e.getKey());
+
+      List<Put> ps = e.getValue();
+      out.writeInt(ps.size());
+      for( Put p : ps ) {
+        p.write(out);
+      }
+    }
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    puts.clear();
+
+    int mapSize = in.readInt();
+
+    for (int i = 0 ; i < mapSize; i++) {
+      byte[] key = Bytes.readByteArray(in);
+
+      int listSize = in.readInt();
+      List<Put> ps = new ArrayList<Put>(listSize);
+      for ( int j = 0 ; j < listSize; j++ ) {
+        Put put = new Put();
+        put.readFields(in);
+        ps.add(put);
+      }
+      puts.put(key, ps);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiPutResponse.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiPutResponse.java
new file mode 100644
index 0000000..7e0311a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiPutResponse.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Response class for MultiPut.
+ *
+ * @deprecated Replaced by MultiResponse
+ */
+public class MultiPutResponse implements Writable {
+
+  protected MultiPut request; // used in client code ONLY
+
+  protected Map<byte[], Integer> answers = new TreeMap<byte[], Integer>(Bytes.BYTES_COMPARATOR);
+
+  public MultiPutResponse() {}
+
+  public void addResult(byte[] regionName, int result) {
+    answers.put(regionName, result);
+  }
+
+  public Integer getAnswer(byte[] region) {
+    return answers.get(region);
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(answers.size());
+    for( Map.Entry<byte[],Integer> e : answers.entrySet()) {
+      Bytes.writeByteArray(out, e.getKey());
+      out.writeInt(e.getValue());
+    }
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    answers.clear();
+
+    int mapSize = in.readInt();
+    for( int i = 0 ; i < mapSize ; i++ ) {
+      byte[] key = Bytes.readByteArray(in);
+      int value = in.readInt();
+
+      answers.put(key, value);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiResponse.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiResponse.java
new file mode 100644
index 0000000..e8efc69
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/MultiResponse.java
@@ -0,0 +1,166 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.util.StringUtils;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.DataInput;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.ArrayList;
+import java.util.TreeMap;
+
+/**
+ * A container for Result objects, grouped by regionName.
+ */
+public class MultiResponse implements Writable {
+
+  // map of regionName to list of (Results paired to the original index for that
+  // Result)
+  private Map<byte[], List<Pair<Integer, Object>>> results =
+      new TreeMap<byte[], List<Pair<Integer, Object>>>(Bytes.BYTES_COMPARATOR);
+
+  public MultiResponse() {
+  }
+
+  /**
+   * @return Number of pairs in this container
+   */
+  public int size() {
+    int size = 0;
+    for (Collection<?> c : results.values()) {
+      size += c.size();
+    }
+    return size;
+  }
+
+  /**
+   * Add the pair to the container, grouped by the regionName
+   *
+   * @param regionName
+   * @param r
+   *          First item in the pair is the original index of the Action
+   *          (request). Second item is the Result. Result will be empty for
+   *          successful Put and Delete actions.
+   */
+  public void add(byte[] regionName, Pair<Integer, Object> r) {
+    List<Pair<Integer, Object>> rs = results.get(regionName);
+    if (rs == null) {
+      rs = new ArrayList<Pair<Integer, Object>>();
+      results.put(regionName, rs);
+    }
+    rs.add(r);
+  }
+
+  public void add(byte []regionName, int originalIndex, Object resOrEx) {
+    add(regionName, new Pair<Integer,Object>(originalIndex, resOrEx));
+  }
+
+  public Map<byte[], List<Pair<Integer, Object>>> getResults() {
+    return results;
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(results.size());
+    for (Map.Entry<byte[], List<Pair<Integer, Object>>> e : results.entrySet()) {
+      Bytes.writeByteArray(out, e.getKey());
+      List<Pair<Integer, Object>> lst = e.getValue();
+      out.writeInt(lst.size());
+      for (Pair<Integer, Object> r : lst) {
+        if (r == null) {
+          out.writeInt(-1); // Can't have index -1; on other side we recognize -1 as 'null'
+        } else {
+          out.writeInt(r.getFirst()); // Can this NPE?
+          Object obj = r.getSecond();
+          if (obj instanceof Throwable) {
+            out.writeBoolean(true); // true, Throwable/exception.
+
+            Throwable t = (Throwable) obj;
+            // serialize exception
+            WritableUtils.writeString(out, t.getClass().getName());
+            WritableUtils.writeString(out,
+                StringUtils.stringifyException(t));
+
+          } else {
+            out.writeBoolean(false); // no exception
+
+            if (! (obj instanceof Writable))
+              obj = null; // squash all non-writables to null.
+            HbaseObjectWritable.writeObject(out, obj, Result.class, null);
+          }
+        }
+      }
+    }
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    results.clear();
+    int mapSize = in.readInt();
+    for (int i = 0; i < mapSize; i++) {
+      byte[] key = Bytes.readByteArray(in);
+      int listSize = in.readInt();
+      List<Pair<Integer, Object>> lst = new ArrayList<Pair<Integer, Object>>(
+          listSize);
+      for (int j = 0; j < listSize; j++) {
+        Integer idx = in.readInt();
+        if (idx == -1) {
+          lst.add(null);
+        } else {
+          boolean isException = in.readBoolean();
+          Object o = null;
+          if (isException) {
+            String klass = WritableUtils.readString(in);
+            String desc = WritableUtils.readString(in);
+            try {
+              // A type-unsafe cast, but since we control what klass is, this is acceptable.
+              Class<? extends Throwable> c = (Class<? extends Throwable>) Class.forName(klass);
+              Constructor<? extends Throwable> cn = c.getDeclaredConstructor(String.class);
+              o = cn.newInstance(desc);
+            } catch (ClassNotFoundException ignored) {
+            } catch (NoSuchMethodException ignored) {
+            } catch (InvocationTargetException ignored) {
+            } catch (InstantiationException ignored) {
+            } catch (IllegalAccessException ignored) {
+            }
+          } else {
+            o = HbaseObjectWritable.readObject(in, null);
+          }
+          lst.add(new Pair<Integer, Object>(idx, o));
+        }
+      }
+      results.put(key, lst);
+    }
+  }
+
+}
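The write/readFields encoding above pairs each outcome with the original index of its Action and distinguishes Results from Throwables; a small sketch of populating and walking such a container (the region name and error are placeholders, and the empty Result stands in for a successful mutation):

import java.io.IOException;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.client.MultiResponse;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class MultiResponseSketch {
  public static void main(String[] args) {
    MultiResponse response = new MultiResponse();
    byte[] region = Bytes.toBytes("mytable,,1234567890");   // placeholder region name
    // Action at original index 0 succeeded: an empty Result is recorded.
    response.add(region, 0, new Result());
    // Action at original index 1 failed: the exception travels back instead.
    response.add(region, 1, new IOException("illustrative failure"));
    for (Map.Entry<byte[], List<Pair<Integer, Object>>> e : response.getResults().entrySet()) {
      for (Pair<Integer, Object> pair : e.getValue()) {
        System.out.println("region=" + Bytes.toString(e.getKey())
            + " index=" + pair.getFirst() + " outcome=" + pair.getSecond());
      }
    }
  }
}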
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/NoServerForRegionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/NoServerForRegionException.java
new file mode 100644
index 0000000..4f33914
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/NoServerForRegionException.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.RegionException;
+
+/**
+ * Thrown when no region server can be found for a region
+ */
+public class NoServerForRegionException extends RegionException {
+  private static final long serialVersionUID = 1L << 11 - 1L;
+
+  /** default constructor */
+  public NoServerForRegionException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public NoServerForRegionException(String s) {
+    super(s);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Put.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Put.java
new file mode 100644
index 0000000..2479b80
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Put.java
@@ -0,0 +1,545 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+
+/**
+ * Used to perform Put operations for a single row.
+ * <p>
+ * To perform a Put, instantiate a Put object with the row to insert to and,
+ * for each column to be inserted, call {@link #add(byte[], byte[], byte[]) add} or
+ * {@link #add(byte[], byte[], long, byte[]) add} if setting the timestamp.
+ */
+public class Put implements HeapSize, Writable, Row, Comparable<Row> {
+  private static final byte PUT_VERSION = (byte)1;
+
+  private byte [] row = null;
+  private long timestamp = HConstants.LATEST_TIMESTAMP;
+  private long lockId = -1L;
+  private boolean writeToWAL = true;
+
+  private Map<byte [], List<KeyValue>> familyMap =
+    new TreeMap<byte [], List<KeyValue>>(Bytes.BYTES_COMPARATOR);
+
+  private static final long OVERHEAD = ClassSize.align(
+      ClassSize.OBJECT + ClassSize.REFERENCE +
+      2 * Bytes.SIZEOF_LONG + Bytes.SIZEOF_BOOLEAN +
+      ClassSize.REFERENCE + ClassSize.TREEMAP);
+
+  /** Constructor for Writable. DO NOT USE */
+  public Put() {}
+
+  /**
+   * Create a Put operation for the specified row.
+   * @param row row key
+   */
+  public Put(byte [] row) {
+    this(row, null);
+  }
+
+  /**
+   * Create a Put operation for the specified row, using an existing row lock.
+   * @param row row key
+   * @param rowLock previously acquired row lock, or null
+   */
+  public Put(byte [] row, RowLock rowLock) {
+      this(row, HConstants.LATEST_TIMESTAMP, rowLock);
+  }
+
+  /**
+   * Create a Put operation for the specified row, using a given timestamp.
+   *
+   * @param row row key
+   * @param ts timestamp
+   */
+  public Put(byte[] row, long ts) {
+    this(row, ts, null);
+  }
+
+  /**
+   * Create a Put operation for the specified row, using a given timestamp, and an existing row lock.
+   * @param row row key
+   * @param ts timestamp
+   * @param rowLock previously acquired row lock, or null
+   */
+  public Put(byte [] row, long ts, RowLock rowLock) {
+    if(row == null || row.length > HConstants.MAX_ROW_LENGTH) {
+      throw new IllegalArgumentException("Row key is invalid");
+    }
+    this.row = Arrays.copyOf(row, row.length);
+    this.timestamp = ts;
+    if(rowLock != null) {
+      this.lockId = rowLock.getLockId();
+    }
+  }
+
+  /**
+   * Copy constructor.  Creates a Put operation cloned from the specified Put.
+   * @param putToCopy put to copy
+   */
+  public Put(Put putToCopy) {
+    this(putToCopy.getRow(), putToCopy.timestamp, putToCopy.getRowLock());
+    this.familyMap =
+      new TreeMap<byte [], List<KeyValue>>(Bytes.BYTES_COMPARATOR);
+    for(Map.Entry<byte [], List<KeyValue>> entry :
+      putToCopy.getFamilyMap().entrySet()) {
+      this.familyMap.put(entry.getKey(), entry.getValue());
+    }
+    this.writeToWAL = putToCopy.writeToWAL;
+  }
+
+  /**
+   * Add the specified column and value to this Put operation.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param value column value
+   * @return this
+   */
+  public Put add(byte [] family, byte [] qualifier, byte [] value) {
+    return add(family, qualifier, this.timestamp, value);
+  }
+
+  /**
+   * Add the specified column and value, with the specified timestamp as
+   * its version to this Put operation.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @param ts version timestamp
+   * @param value column value
+   * @return this
+   */
+  public Put add(byte [] family, byte [] qualifier, long ts, byte [] value) {
+    List<KeyValue> list = getKeyValueList(family);
+    KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
+    list.add(kv);
+    familyMap.put(kv.getFamily(), list);
+    return this;
+  }
+
+  /**
+   * Add the specified KeyValue to this Put operation.  Operation assumes that
+   * the passed KeyValue is immutable and its backing array will not be modified
+   * for the duration of this Put.
+   * @param kv individual KeyValue
+   * @return this
+   * @throws java.io.IOException e
+   */
+  public Put add(KeyValue kv) throws IOException{
+    byte [] family = kv.getFamily();
+    List<KeyValue> list = getKeyValueList(family);
+    //Checking that the row of the kv is the same as the put
+    int res = Bytes.compareTo(this.row, 0, row.length,
+        kv.getBuffer(), kv.getRowOffset(), kv.getRowLength());
+    if(res != 0) {
+      throw new IOException("The row in the recently added KeyValue " +
+          Bytes.toStringBinary(kv.getBuffer(), kv.getRowOffset(),
+        kv.getRowLength()) + " doesn't match the original one " +
+        Bytes.toStringBinary(this.row));
+    }
+    list.add(kv);
+    familyMap.put(family, list);
+    return this;
+  }
+
+  /*
+   * Create a KeyValue with this object's row key and the Put identifier.
+   *
+   * @return a KeyValue with this object's row key and the Put identifier.
+   */
+  private KeyValue createPutKeyValue(byte[] family, byte[] qualifier, long ts,
+      byte[] value) {
+    return new KeyValue(this.row, family, qualifier, ts, KeyValue.Type.Put,
+        value);
+  }
+
+  /**
+   * A convenience method to determine if this object's familyMap contains
+   * a value assigned to the given family & qualifier.
+   * Both given arguments must match the KeyValue object to return true.
+   *
+   * @param family column family
+   * @param qualifier column qualifier
+   * @return returns true if the given family and qualifier already have an
+   * existing KeyValue object in the family map.
+   */
+  public boolean has(byte [] family, byte [] qualifier) {
+    return has(family, qualifier, this.timestamp, new byte[0], true, true);
+  }
+
+  /**
+   * A convenience method to determine if this object's familyMap contains
+   * a value assigned to the given family, qualifier and timestamp.
+   * All 3 given arguments must match the KeyValue object to return true.
+   *
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param ts timestamp
+   * @return returns true if the given family, qualifier and timestamp already have an
+   * existing KeyValue object in the family map.
+   */
+  public boolean has(byte [] family, byte [] qualifier, long ts) {
+    return has(family, qualifier, ts, new byte[0], false, true);
+  }
+
+  /**
+   * A convenience method to determine if this object's familyMap contains
+   * a value assigned to the given family, qualifier and timestamp.
+   * All 3 given arguments must match the KeyValue object to return true.
+   *
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param value value to check
+   * @return returns true if the given family, qualifier and value already have an
+   * existing KeyValue object in the family map.
+   */
+  public boolean has(byte [] family, byte [] qualifier, byte [] value) {
+    return has(family, qualifier, this.timestamp, value, true, false);
+  }
+
+  /**
+   * A convenience method to determine if this object's familyMap contains
+   * the given value assigned to the given family, qualifier and timestamp.
+   * All 4 given arguments must match the KeyValue object to return true.
+   *
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param ts timestamp
+   * @param value value to check
+   * @return returns true if the given family, qualifier, timestamp and value
+   * already have an existing KeyValue object in the family map.
+   */
+  public boolean has(byte [] family, byte [] qualifier, long ts, byte [] value) {
+      return has(family, qualifier, ts, value, false, false);
+  }
+
+  /*
+   * Private method to determine if this object's familyMap contains
+   * the given value assigned to the given family, qualifier and timestamp
+   * respecting the 2 boolean arguments
+   *
+   * @param family
+   * @param qualifier
+   * @param ts
+   * @param value
+   * @param ignoreTS
+   * @param ignoreValue
+   * @return returns true if the given family, qualifier, timestamp and value
+   * already have an existing KeyValue object in the family map.
+   */
+  private boolean has(byte [] family, byte [] qualifier, long ts, byte [] value,
+      boolean ignoreTS, boolean ignoreValue) {
+    List<KeyValue> list = getKeyValueList(family);
+    if (list.size() == 0) {
+      return false;
+    }
+    // Boolean analysis of ignoreTS/ignoreValue.
+    // T T => 2
+    // T F => 3 (first is always true)
+    // F T => 2
+    // F F => 1
+    if (!ignoreTS && !ignoreValue) {
+      KeyValue kv = createPutKeyValue(family, qualifier, ts, value);
+      return (list.contains(kv));
+    } else if (ignoreValue) {
+      for (KeyValue kv: list) {
+        if (Arrays.equals(kv.getFamily(), family) && Arrays.equals(kv.getQualifier(), qualifier)
+            && kv.getTimestamp() == ts) {
+          return true;
+        }
+      }
+    } else {
+      // ignoreTS is always true
+      for (KeyValue kv: list) {
+        if (Arrays.equals(kv.getFamily(), family) && Arrays.equals(kv.getQualifier(), qualifier)
+            && Arrays.equals(kv.getValue(), value)) {
+          return true;
+        }
+      }
+    }
+    return false;
+  }
+
+  /**
+   * Returns a list of all KeyValue objects with matching column family and qualifier.
+   *
+   * @param family column family
+   * @param qualifier column qualifier
+   * @return a list of KeyValue objects with the matching family and qualifier,
+   * returns an empty list if one doesn't exist for the given family.
+   */
+  public List<KeyValue> get(byte[] family, byte[] qualifier) {
+    List<KeyValue> filteredList = new ArrayList<KeyValue>();
+    for (KeyValue kv: getKeyValueList(family)) {
+      if (Arrays.equals(kv.getQualifier(), qualifier)) {
+        filteredList.add(kv);
+      }
+    }
+    return filteredList;
+  }
+
+  /**
+   * Creates an empty list if one doesn't exist for the given column family
+   * or else it returns the associated list of KeyValue objects.
+   *
+   * @param family column family
+   * @return a list of KeyValue objects, returns an empty list if one doesn't exist.
+   */
+  private List<KeyValue> getKeyValueList(byte[] family) {
+    List<KeyValue> list = familyMap.get(family);
+    if(list == null) {
+      list = new ArrayList<KeyValue>(0);
+    }
+    return list;
+  }
+
+  /**
+   * Method for retrieving the put's familyMap
+   * @return familyMap
+   */
+  public Map<byte [], List<KeyValue>> getFamilyMap() {
+    return this.familyMap;
+  }
+
+  /**
+   * Method for retrieving the put's row
+   * @return row
+   */
+  public byte [] getRow() {
+    return this.row;
+  }
+
+  /**
+   * Method for retrieving the put's RowLock
+   * @return RowLock
+   */
+  public RowLock getRowLock() {
+    return new RowLock(this.row, this.lockId);
+  }
+
+  /**
+   * Method for retrieving the put's lockId
+   * @return lockId
+   */
+  public long getLockId() {
+    return this.lockId;
+  }
+
+  /**
+   * Method to check if the familyMap is empty
+   * @return true if empty, false otherwise
+   */
+  public boolean isEmpty() {
+    return familyMap.isEmpty();
+  }
+
+  /**
+   * @return Timestamp
+   */
+  public long getTimeStamp() {
+    return this.timestamp;
+  }
+
+  /**
+   * @return the number of different families included in this put
+   */
+  public int numFamilies() {
+    return familyMap.size();
+  }
+
+  /**
+   * @return the total number of KeyValues that will be added with this put
+   */
+  public int size() {
+    int size = 0;
+    for(List<KeyValue> kvList : this.familyMap.values()) {
+      size += kvList.size();
+    }
+    return size;
+  }
+
+  /**
+   * @return true if edits should be applied to WAL, false if not
+   */
+  public boolean getWriteToWAL() {
+    return this.writeToWAL;
+  }
+
+  /**
+   * Set whether this Put should be written to the WAL or not.
+   * Not writing the WAL means you may lose edits on server crash.
+   * @param write true if edits should be written to WAL, false if not
+   */
+  public void setWriteToWAL(boolean write) {
+    this.writeToWAL = write;
+  }
+
+  /**
+   * @return String
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("row=");
+    sb.append(Bytes.toString(this.row));
+    sb.append(", families={");
+    boolean moreThanOne = false;
+    for(Map.Entry<byte [], List<KeyValue>> entry : this.familyMap.entrySet()) {
+      if(moreThanOne) {
+        sb.append(", ");
+      } else {
+        moreThanOne = true;
+      }
+      sb.append("(family=");
+      sb.append(Bytes.toString(entry.getKey()));
+      sb.append(", keyvalues=(");
+      boolean moreThanOneB = false;
+      for(KeyValue kv : entry.getValue()) {
+        if(moreThanOneB) {
+          sb.append(", ");
+        } else {
+          moreThanOneB = true;
+        }
+        sb.append(kv.toString());
+      }
+      sb.append(")");
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
+  public int compareTo(Row p) {
+    return Bytes.compareTo(this.getRow(), p.getRow());
+  }
+
+  //HeapSize
+  public long heapSize() {
+    long heapsize = OVERHEAD;
+    //Adding row
+    heapsize += ClassSize.align(ClassSize.ARRAY + this.row.length);
+
+    //Adding map overhead
+    heapsize +=
+      ClassSize.align(this.familyMap.size() * ClassSize.MAP_ENTRY);
+    for(Map.Entry<byte [], List<KeyValue>> entry : this.familyMap.entrySet()) {
+      //Adding key overhead
+      heapsize +=
+        ClassSize.align(ClassSize.ARRAY + entry.getKey().length);
+
+      //This part is kind of tricky since the JVM can reuse references if you
+      //store the same value, but it has a good match with SizeOf at the moment
+      //Adding value overhead
+      heapsize += ClassSize.align(ClassSize.ARRAYLIST);
+      int size = entry.getValue().size();
+      heapsize += ClassSize.align(ClassSize.ARRAY +
+          size * ClassSize.REFERENCE);
+
+      for(KeyValue kv : entry.getValue()) {
+        heapsize += kv.heapSize();
+      }
+    }
+    return ClassSize.align((int)heapsize);
+  }
+
+  //Writable
+  public void readFields(final DataInput in)
+  throws IOException {
+    int version = in.readByte();
+    if (version > PUT_VERSION) {
+      throw new IOException("version not supported");
+    }
+    this.row = Bytes.readByteArray(in);
+    this.timestamp = in.readLong();
+    this.lockId = in.readLong();
+    this.writeToWAL = in.readBoolean();
+    int numFamilies = in.readInt();
+    if (!this.familyMap.isEmpty()) this.familyMap.clear();
+    for(int i=0;i<numFamilies;i++) {
+      byte [] family = Bytes.readByteArray(in);
+      int numKeys = in.readInt();
+      List<KeyValue> keys = new ArrayList<KeyValue>(numKeys);
+      int totalLen = in.readInt();
+      byte [] buf = new byte[totalLen];
+      int offset = 0;
+      for (int j = 0; j < numKeys; j++) {
+        int keyLength = in.readInt();
+        in.readFully(buf, offset, keyLength);
+        keys.add(new KeyValue(buf, offset, keyLength));
+        offset += keyLength;
+      }
+      this.familyMap.put(family, keys);
+    }
+  }
+
+  public void write(final DataOutput out)
+  throws IOException {
+    out.writeByte(PUT_VERSION);
+    Bytes.writeByteArray(out, this.row);
+    out.writeLong(this.timestamp);
+    out.writeLong(this.lockId);
+    out.writeBoolean(this.writeToWAL);
+    out.writeInt(familyMap.size());
+    for (Map.Entry<byte [], List<KeyValue>> entry : familyMap.entrySet()) {
+      Bytes.writeByteArray(out, entry.getKey());
+      List<KeyValue> keys = entry.getValue();
+      out.writeInt(keys.size());
+      int totalLen = 0;
+      for(KeyValue kv : keys) {
+        totalLen += kv.getLength();
+      }
+      out.writeInt(totalLen);
+      for(KeyValue kv : keys) {
+        out.writeInt(kv.getLength());
+        out.write(kv.getBuffer(), kv.getOffset(), kv.getLength());
+      }
+    }
+  }
+
+  /**
+   * Add the specified column and value, with the specified timestamp as
+   * its version to this Put operation.
+   * @param column Old style column name with family and qualifier put together
+   * with a colon.
+   * @param ts version timestamp
+   * @param value column value
+   * @deprecated use {@link #add(byte[], byte[], long, byte[])} instead
+   * @return this
+   */
+  public Put add(byte [] column, long ts, byte [] value) {
+    byte [][] parts = KeyValue.parseColumn(column);
+    return add(parts[0], parts[1], ts, value);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java
new file mode 100644
index 0000000..d223860
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.RegionException;
+
+/** Thrown when a region is offline */
+public class RegionOfflineException extends RegionException {
+  private static final long serialVersionUID = 466008402L;
+  /** default constructor */
+  public RegionOfflineException() {
+    super();
+  }
+
+  /** @param s message */
+  public RegionOfflineException(String s) {
+    super(s);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Result.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Result.java
new file mode 100644
index 0000000..6bdc892
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Result.java
@@ -0,0 +1,683 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import com.google.common.collect.Ordering;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.SplitKeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.WritableWithSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+/**
+ * Single row result of a {@link Get} or {@link Scan} query.<p>
+ *
+ * This class is NOT THREAD SAFE.<p>
+ *
+ * Convenience methods are available that return various {@link Map}
+ * structures and values directly.<p>
+ *
+ * To get a complete mapping of all cells in the Result, which can include
+ * multiple families and multiple versions, use {@link #getMap()}.<p>
+ *
+ * To get a mapping of each family to its columns (qualifiers and values),
+ * including only the latest version of each, use {@link #getNoVersionMap()}.
+ *
+ * To get a mapping of qualifiers to latest values for an individual family use
+ * {@link #getFamilyMap(byte[])}.<p>
+ *
+ * To get the latest value for a specific family and qualifier use {@link #getValue(byte[], byte[])}.
+ *
+ * A Result is backed by an array of {@link KeyValue} objects, each representing
+ * an HBase cell defined by the row, family, qualifier, timestamp, and value.<p>
+ *
+ * The underlying {@link KeyValue} objects can be accessed through the methods
+ * {@link #sorted()} and {@link #list()}.  Each KeyValue can then be accessed
+ * through {@link KeyValue#getRow()}, {@link KeyValue#getFamily()}, {@link KeyValue#getQualifier()},
+ * {@link KeyValue#getTimestamp()}, and {@link KeyValue#getValue()}.
+ */
+ */
+public class Result implements Writable, WritableWithSize {
+  private static final byte RESULT_VERSION = (byte)1;
+
+  private KeyValue [] kvs = null;
+  private NavigableMap<byte[],
+     NavigableMap<byte[], NavigableMap<Long, byte[]>>> familyMap = null;
+  // We're not using java serialization.  Transient here is just a marker to say
+  // that this is where we cache row if we're ever asked for it.
+  private transient byte [] row = null;
+  private ImmutableBytesWritable bytes = null;
+
+  /**
+   * Constructor used for Writable.
+   */
+  public Result() {}
+
+  /**
+   * Instantiate a Result with the specified array of KeyValues.
+   * @param kvs array of KeyValues
+   */
+  public Result(KeyValue [] kvs) {
+    if(kvs != null && kvs.length > 0) {
+      this.kvs = kvs;
+    }
+  }
+
+  /**
+   * Instantiate a Result with the specified List of KeyValues.
+   * @param kvs List of KeyValues
+   */
+  public Result(List<KeyValue> kvs) {
+    this(kvs.toArray(new KeyValue[0]));
+  }
+
+  /**
+   * Instantiate a Result from the specified raw binary format.
+   * @param bytes raw binary format of Result
+   */
+  public Result(ImmutableBytesWritable bytes) {
+    this.bytes = bytes;
+  }
+
+  /**
+   * Method for retrieving the row that this result is for
+   * @return row
+   */
+  public byte [] getRow() {
+    if (this.row == null) {
+      if(this.kvs == null) {
+        readFields();
+      }
+      this.row = this.kvs.length == 0? null: this.kvs[0].getRow();
+    }
+    return this.row;
+  }
+
+  /**
+   * Return the array of KeyValues backing this Result instance.
+   *
+   * The array is sorted from smallest -> largest using the
+   * {@link KeyValue#COMPARATOR}.
+   *
+   * The array only contains what your Get or Scan specifies and no more.
+   * For example, if you request column "A" with 1 version you will have at most 1
+   * KeyValue in the array. If you request column "A" with 2 versions you will
+   * have at most 2 KeyValues, the first with the newer timestamp and
+   * the second with the older timestamp (this is the sort order defined by
+   * {@link KeyValue#COMPARATOR}).  If columns don't exist, they won't be
+   * present in the result. Therefore, if you ask for 1 version of all columns,
+   * it is safe to iterate over this array and expect to see 1 KeyValue for
+   * each column and no more.
+   *
+   * This API is faster than using getFamilyMap() and getMap().
+   *
+   * @return array of KeyValues
+   */
+  public KeyValue[] raw() {
+    if(this.kvs == null) {
+      readFields();
+    }
+    return kvs;
+  }
+
+  /**
+   * Create a sorted list of the KeyValues in this result.
+   *
+   * Since HBase 0.20.5 this is equivalent to {@link #raw()}.
+   *
+   * @return The sorted list of KeyValues, or null if the result is empty.
+   */
+  public List<KeyValue> list() {
+    if(this.kvs == null) {
+      readFields();
+    }
+    return isEmpty()? null: Arrays.asList(raw());
+  }
+
+  /**
+   * Returns a sorted array of KeyValues in this Result.
+   * <p>
+   * Since HBase 0.20.5 this is equivalent to {@link #raw}. Use
+   * {@link #raw} instead.
+   *
+   * @return sorted array of KeyValues
+   * @deprecated use {@link #raw()} instead
+   */
+  public KeyValue[] sorted() {
+    return raw(); // side effect of loading this.kvs
+  }
+
+  /**
+   * Return the KeyValues for the specific column.  The KeyValues are sorted in
+   * the {@link KeyValue#COMPARATOR} order.  That implies the first entry in
+   * the list is the most recent version of the column.  If the query (Scan or Get) only
+   * requested 1 version the list will contain at most 1 entry.  If the column
+   * did not exist in the result set (either the column does not exist
+   * or the column was not selected in the query) the list will be empty.
+   *
+   * Also see {@link #getColumnLatest(byte[], byte[])} which returns just a KeyValue.
+   *
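+   * A minimal sketch; the family and qualifier names are illustrative:
+   * <pre>
+   *   // all returned versions of one column, newest first
+   *   List&lt;KeyValue&gt; versions =
+   *     result.getColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
+   * </pre>
+   *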
+   * @param family the family
+   * @param qualifier the column qualifier
+   * @return a list of KeyValues for this column or empty list if the column
+   * did not exist in the result set
+   */
+  public List<KeyValue> getColumn(byte [] family, byte [] qualifier) {
+    List<KeyValue> result = new ArrayList<KeyValue>();
+
+    KeyValue [] kvs = raw();
+
+    if (kvs == null || kvs.length == 0) {
+      return result;
+    }
+    int pos = binarySearch(kvs, family, qualifier);
+    if (pos == -1) {
+      return result; // can't find it
+    }
+
+    for (int i = pos ; i < kvs.length ; i++ ) {
+      KeyValue kv = kvs[i];
+      if (kv.matchingColumn(family,qualifier)) {
+        result.add(kv);
+      } else {
+        break;
+      }
+    }
+
+    return result;
+  }
+
+  protected int binarySearch(final KeyValue [] kvs,
+                             final byte [] family,
+                             final byte [] qualifier) {
+    KeyValue searchTerm =
+        KeyValue.createFirstOnRow(kvs[0].getRow(),
+            family, qualifier);
+
+    // pos === ( -(insertion point) - 1)
+    int pos = Arrays.binarySearch(kvs, searchTerm, KeyValue.COMPARATOR);
+    // will never be an exact match
+    if (pos < 0) {
+      pos = (pos+1) * -1;
+      // pos is now insertion point
+    }
+    if (pos == kvs.length) {
+      return -1; // doesn't exist
+    }
+    return pos;
+  }
+
+  /**
+   * The KeyValue for the most recent version of a given column. If the column does
+   * not exist in the result set - whether it wasn't selected in the query (Get/Scan)
+   * or just does not exist in the row - the return value is null.
+   *
+   * @param family the column family
+   * @param qualifier the column qualifier
+   * @return KeyValue for the column or null
+   */
+  public KeyValue getColumnLatest(byte [] family, byte [] qualifier) {
+    KeyValue [] kvs = raw(); // side effect possibly.
+    if (kvs == null || kvs.length == 0) {
+      return null;
+    }
+    int pos = binarySearch(kvs, family, qualifier);
+    if (pos == -1) {
+      return null;
+    }
+    KeyValue kv = kvs[pos];
+    if (kv.matchingColumn(family, qualifier)) {
+      return kv;
+    }
+    return null;
+  }
+
+  /**
+   * Get the latest version of the specified column.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @return value of latest version of column, null if none found
+   */
+  public byte[] getValue(byte [] family, byte [] qualifier) {
+    KeyValue kv = getColumnLatest(family, qualifier);
+    if (kv == null) {
+      return null;
+    }
+    return kv.getValue();
+  }
+
+  /**
+   * Checks for existence of the specified column.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @return true if at least one value exists in the result, false if not
+   */
+  public boolean containsColumn(byte [] family, byte [] qualifier) {
+    KeyValue kv = getColumnLatest(family, qualifier);
+    return kv != null;
+  }
+
+  /**
+   * Map of families to all versions of its qualifiers and values.
+   * <p>
+   * Returns a three level Map of the form:
+   * <code>Map&lt;family, Map&lt;qualifier, Map&lt;timestamp, value&gt;&gt;&gt;</code>
+   * <p>
+   * Note: All other map returning methods make use of this map internally.
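+   * <p>
+   * A minimal sketch of walking the returned structure; the family and
+   * qualifier names are illustrative:
+   * <pre>
+   *   NavigableMap&lt;Long, byte[]&gt; versions =
+   *     result.getMap().get(Bytes.toBytes("cf")).get(Bytes.toBytes("qual"));
+   *   byte [] newest = versions.get(versions.firstKey()); // newest timestamp first
+   * </pre>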
+   * @return map from families to qualifiers to versions
+   */
+  public NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> getMap() {
+    if(this.familyMap != null) {
+      return this.familyMap;
+    }
+    if(isEmpty()) {
+      return null;
+    }
+    this.familyMap =
+      new TreeMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+      (Bytes.BYTES_COMPARATOR);
+    for(KeyValue kv : this.kvs) {
+      SplitKeyValue splitKV = kv.split();
+      byte [] family = splitKV.getFamily();
+      NavigableMap<byte[], NavigableMap<Long, byte[]>> columnMap =
+        familyMap.get(family);
+      if(columnMap == null) {
+        columnMap = new TreeMap<byte[], NavigableMap<Long, byte[]>>
+          (Bytes.BYTES_COMPARATOR);
+        familyMap.put(family, columnMap);
+      }
+      byte [] qualifier = splitKV.getQualifier();
+      NavigableMap<Long, byte[]> versionMap = columnMap.get(qualifier);
+      if(versionMap == null) {
+        versionMap = new TreeMap<Long, byte[]>(new Comparator<Long>() {
+          public int compare(Long l1, Long l2) {
+            return l2.compareTo(l1);
+          }
+        });
+        columnMap.put(qualifier, versionMap);
+      }
+      Long timestamp = Bytes.toLong(splitKV.getTimestamp());
+      byte [] value = splitKV.getValue();
+      versionMap.put(timestamp, value);
+    }
+    return this.familyMap;
+  }
+
+  /**
+   * Map of families to their most recent qualifiers and values.
+   * <p>
+   * Returns a two level Map of the form: <code>Map&lt;family, Map&lt;qualifier, value&gt;&gt;</code>
+   * <p>
+   * The most recent version of each qualifier will be used.
+   * @return map from families to qualifiers and value
+   */
+  public NavigableMap<byte[], NavigableMap<byte[], byte[]>> getNoVersionMap() {
+    if(this.familyMap == null) {
+      getMap();
+    }
+    if(isEmpty()) {
+      return null;
+    }
+    NavigableMap<byte[], NavigableMap<byte[], byte[]>> returnMap =
+      new TreeMap<byte[], NavigableMap<byte[], byte[]>>(Bytes.BYTES_COMPARATOR);
+    for(Map.Entry<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+      familyEntry : familyMap.entrySet()) {
+      NavigableMap<byte[], byte[]> qualifierMap =
+        new TreeMap<byte[], byte[]>(Bytes.BYTES_COMPARATOR);
+      for(Map.Entry<byte[], NavigableMap<Long, byte[]>> qualifierEntry :
+        familyEntry.getValue().entrySet()) {
+        byte [] value =
+          qualifierEntry.getValue().get(qualifierEntry.getValue().firstKey());
+        qualifierMap.put(qualifierEntry.getKey(), value);
+      }
+      returnMap.put(familyEntry.getKey(), qualifierMap);
+    }
+    return returnMap;
+  }
+
+  /**
+   * Map of qualifiers to values.
+   * <p>
+   * Returns a Map of the form: <code>Map&lt;qualifier, value&gt;</code>
+   * @param family column family to get
+   * @return map of qualifiers to values
+   */
+  public NavigableMap<byte[], byte[]> getFamilyMap(byte [] family) {
+    if(this.familyMap == null) {
+      getMap();
+    }
+    if(isEmpty()) {
+      return null;
+    }
+    NavigableMap<byte[], byte[]> returnMap =
+      new TreeMap<byte[], byte[]>(Bytes.BYTES_COMPARATOR);
+    NavigableMap<byte[], NavigableMap<Long, byte[]>> qualifierMap =
+      familyMap.get(family);
+    if(qualifierMap == null) {
+      return returnMap;
+    }
+    for(Map.Entry<byte[], NavigableMap<Long, byte[]>> entry :
+      qualifierMap.entrySet()) {
+      byte [] value =
+        entry.getValue().get(entry.getValue().firstKey());
+      returnMap.put(entry.getKey(), value);
+    }
+    return returnMap;
+  }
+
+  private Map.Entry<Long,byte[]> getKeyValue(byte[] family, byte[] qualifier) {
+    if(this.familyMap == null) {
+      getMap();
+    }
+    if(isEmpty()) {
+      return null;
+    }
+    NavigableMap<byte [], NavigableMap<Long, byte[]>> qualifierMap =
+      familyMap.get(family);
+    if(qualifierMap == null) {
+      return null;
+    }
+    NavigableMap<Long, byte[]> versionMap =
+      getVersionMap(qualifierMap, qualifier);
+    if(versionMap == null) {
+      return null;
+    }
+    return versionMap.firstEntry();
+  }
+
+  private NavigableMap<Long, byte[]> getVersionMap(
+      NavigableMap<byte [], NavigableMap<Long, byte[]>> qualifierMap, byte [] qualifier) {
+    return qualifier != null?
+      qualifierMap.get(qualifier): qualifierMap.get(new byte[0]);
+  }
+
+  /**
+   * Returns the value of the first column in the Result.
+   * @return value of the first column
+   */
+  public byte [] value() {
+    if (isEmpty()) {
+      return null;
+    }
+    return kvs[0].getValue();
+  }
+
+  /**
+   * Returns the raw binary encoding of this Result.<p>
+   *
+   * Please note, there may be an offset into the underlying byte array of the
+   * returned ImmutableBytesWritable.  Be sure to use both
+   * {@link ImmutableBytesWritable#get()} and {@link ImmutableBytesWritable#getOffset()}
+   * @return pointer to raw binary of Result
+   */
+  public ImmutableBytesWritable getBytes() {
+    return this.bytes;
+  }
+
+  /**
+   * Check if the underlying KeyValue [] is empty or not
+   * @return true if empty
+   */
+  public boolean isEmpty() {
+    if(this.kvs == null) {
+      readFields();
+    }
+    return this.kvs == null || this.kvs.length == 0;
+  }
+
+  /**
+   * @return the size of the underlying KeyValue []
+   */
+  public int size() {
+    if(this.kvs == null) {
+      readFields();
+    }
+    return this.kvs == null? 0: this.kvs.length;
+  }
+
+  /**
+   * @return String
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("keyvalues=");
+    if(isEmpty()) {
+      sb.append("NONE");
+      return sb.toString();
+    }
+    sb.append("{");
+    boolean moreThanOne = false;
+    for(KeyValue kv : this.kvs) {
+      if(moreThanOne) {
+        sb.append(", ");
+      } else {
+        moreThanOne = true;
+      }
+      sb.append(kv.toString());
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
+  //Writable
+  public void readFields(final DataInput in)
+  throws IOException {
+    familyMap = null;
+    row = null;
+    kvs = null;
+    int totalBuffer = in.readInt();
+    if(totalBuffer == 0) {
+      bytes = null;
+      return;
+    }
+    byte [] raw = new byte[totalBuffer];
+    in.readFully(raw, 0, totalBuffer);
+    bytes = new ImmutableBytesWritable(raw, 0, totalBuffer);
+  }
+
+  //Create KeyValue[] when needed
+  private void readFields() {
+    if (bytes == null) {
+      this.kvs = new KeyValue[0];
+      return;
+    }
+    byte [] buf = bytes.get();
+    int offset = bytes.getOffset();
+    int finalOffset = bytes.getSize() + offset;
+    List<KeyValue> kvs = new ArrayList<KeyValue>();
+    while(offset < finalOffset) {
+      int keyLength = Bytes.toInt(buf, offset);
+      offset += Bytes.SIZEOF_INT;
+      kvs.add(new KeyValue(buf, offset, keyLength));
+      offset += keyLength;
+    }
+    this.kvs = kvs.toArray(new KeyValue[kvs.size()]);
+  }
+
+  public long getWritableSize() {
+    if (isEmpty())
+      return Bytes.SIZEOF_INT; // int size = 0
+
+    long size = Bytes.SIZEOF_INT; // totalLen
+
+    for (KeyValue kv : kvs) {
+      size += kv.getLength();
+      size += Bytes.SIZEOF_INT; // kv.getLength
+    }
+
+    return size;
+  }
+
+  public void write(final DataOutput out)
+  throws IOException {
+    if(isEmpty()) {
+      out.writeInt(0);
+    } else {
+      int totalLen = 0;
+      for(KeyValue kv : kvs) {
+        totalLen += kv.getLength() + Bytes.SIZEOF_INT;
+      }
+      out.writeInt(totalLen);
+      for(KeyValue kv : kvs) {
+        out.writeInt(kv.getLength());
+        out.write(kv.getBuffer(), kv.getOffset(), kv.getLength());
+      }
+    }
+  }
+
+  public static long getWriteArraySize(Result [] results) {
+    long size = Bytes.SIZEOF_BYTE; // RESULT_VERSION
+    if (results == null || results.length == 0) {
+      size += Bytes.SIZEOF_INT;
+      return size;
+    }
+
+    size += Bytes.SIZEOF_INT; // results.length
+    size += Bytes.SIZEOF_INT; // bufLen
+    for (Result result : results) {
+      size += Bytes.SIZEOF_INT; // either 0 or result.size()
+      if (result == null || result.isEmpty())
+        continue;
+
+      for (KeyValue kv : result.raw()) {
+        size += Bytes.SIZEOF_INT; // kv.getLength();
+        size += kv.getLength();
+      }
+    }
+
+    return size;
+  }
+
+  public static void writeArray(final DataOutput out, Result [] results)
+  throws IOException {
+    // Write version when writing array form.
+    // This assumes that results are sent to the client as Result[], so we
+    // have an opportunity to handle version differences without affecting
+    // efficiency.
+    out.writeByte(RESULT_VERSION);
+    if(results == null || results.length == 0) {
+      out.writeInt(0);
+      return;
+    }
+    out.writeInt(results.length);
+    int bufLen = 0;
+    for(Result result : results) {
+      bufLen += Bytes.SIZEOF_INT;
+      if(result == null || result.isEmpty()) {
+        continue;
+      }
+      for(KeyValue key : result.raw()) {
+        bufLen += key.getLength() + Bytes.SIZEOF_INT;
+      }
+    }
+    out.writeInt(bufLen);
+    for(Result result : results) {
+      if(result == null || result.isEmpty()) {
+        out.writeInt(0);
+        continue;
+      }
+      out.writeInt(result.size());
+      for(KeyValue kv : result.raw()) {
+        out.writeInt(kv.getLength());
+        out.write(kv.getBuffer(), kv.getOffset(), kv.getLength());
+      }
+    }
+  }
+
+  public static Result [] readArray(final DataInput in)
+  throws IOException {
+    // Read version for array form.
+    // This assumes that results are sent to the client as Result[], so we
+    // have an opportunity to handle version differences without affecting
+    // efficiency.
+    int version = in.readByte();
+    if (version > RESULT_VERSION) {
+      throw new IOException("version not supported");
+    }
+    int numResults = in.readInt();
+    if(numResults == 0) {
+      return new Result[0];
+    }
+    Result [] results = new Result[numResults];
+    int bufSize = in.readInt();
+    byte [] buf = new byte[bufSize];
+    int offset = 0;
+    for(int i=0;i<numResults;i++) {
+      int numKeys = in.readInt();
+      offset += Bytes.SIZEOF_INT;
+      if(numKeys == 0) {
+        results[i] = new Result((ImmutableBytesWritable)null);
+        continue;
+      }
+      int initialOffset = offset;
+      for(int j=0;j<numKeys;j++) {
+        int keyLen = in.readInt();
+        Bytes.putInt(buf, offset, keyLen);
+        offset += Bytes.SIZEOF_INT;
+        in.readFully(buf, offset, keyLen);
+        offset += keyLen;
+      }
+      int totalLength = offset - initialOffset;
+      results[i] = new Result(new ImmutableBytesWritable(buf, initialOffset,
+          totalLength));
+    }
+    return results;
+  }
+
+  /**
+   * Does a deep comparison of two Results, down to the byte arrays.
+   * @param res1 first result to compare
+   * @param res2 second result to compare
+   * @throws Exception thrown on the first difference found between the two results
+   */
+  public static void compareResults(Result res1, Result res2)
+      throws Exception {
+    if (res2 == null) {
+      throw new Exception("There wasn't enough rows, we stopped at "
+          + Bytes.toString(res1.getRow()));
+    }
+    if (res1.size() != res2.size()) {
+      throw new Exception("This row doesn't have the same number of KVs: "
+          + res1.toString() + " compared to " + res2.toString());
+    }
+    KeyValue[] ourKVs = res1.sorted();
+    KeyValue[] replicatedKVs = res2.sorted();
+    for (int i = 0; i < res1.size(); i++) {
+      if (!ourKVs[i].equals(replicatedKVs[i]) &&
+          !Bytes.equals(ourKVs[i].getValue(), replicatedKVs[i].getValue())) {
+        throw new Exception("This result was different: "
+            + res1.toString() + " compared to " + res2.toString());
+      }
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java
new file mode 100644
index 0000000..6843018
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.Closeable;
+import java.io.IOException;
+
+/**
+ * Interface for client-side scanning.
+ * Go to {@link HTable} to obtain instances.
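+ * <p>
+ * A minimal usage sketch; it assumes an existing {@link HTable} <code>table</code>:
+ * <pre>
+ *   ResultScanner scanner = table.getScanner(new Scan());
+ *   try {
+ *     for (Result result : scanner) {
+ *       // process each row's Result here
+ *     }
+ *   } finally {
+ *     scanner.close();
+ *   }
+ * </pre>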
+ */
+public interface ResultScanner extends Closeable, Iterable<Result> {
+
+  /**
+   * Grab the next row's worth of values. The scanner will return a Result.
+   * @return Result object if there is another row, null if the scanner is
+   * exhausted.
+   * @throws IOException e
+   */
+  public Result next() throws IOException;
+
+  /**
+   * @param nbRows number of rows to return
+   * @return Between zero and <code>nbRows</code> Results
+   * @throws IOException e
+   */
+  public Result [] next(int nbRows) throws IOException;
+
+  /**
+   * Closes the scanner and releases any resources it has allocated
+   */
+  public void close();
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java
new file mode 100644
index 0000000..89d2abe
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java
@@ -0,0 +1,69 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Exception thrown by HTable methods when an attempt to do something (like
+ * commit changes) fails after a bunch of retries.
+ */
+public class RetriesExhaustedException extends IOException {
+  private static final long serialVersionUID = 1876775844L;
+
+  public RetriesExhaustedException(final String msg) {
+    super(msg);
+  }
+
+  public RetriesExhaustedException(final String msg, final IOException e) {
+    super(msg, e);
+  }
+
+  /**
+   * Create a new RetriesExhaustedException from the list of prior failures.
+   * @param serverName name of HRegionServer
+   * @param regionName name of region
+   * @param row The row we were pursuing when we ran out of retries
+   * @param numTries The number of tries we made
+   * @param exceptions List of exceptions that failed before giving up
+   */
+  public RetriesExhaustedException(String serverName, final byte [] regionName,
+      final byte []  row, int numTries, List<Throwable> exceptions) {
+    super(getMessage(serverName, regionName, row, numTries, exceptions));
+  }
+
+  private static String getMessage(String serverName, final byte [] regionName,
+      final byte [] row,
+      int numTries, List<Throwable> exceptions) {
+    StringBuilder buffer = new StringBuilder("Trying to contact region server ");
+    buffer.append(serverName);
+    buffer.append(" for region ");
+    buffer.append(regionName == null? "": Bytes.toStringBinary(regionName));
+    buffer.append(", row '");
+    buffer.append(row == null? "": Bytes.toStringBinary(row));
+    buffer.append("', but failed after ");
+    buffer.append(numTries + 1);
+    buffer.append(" attempts.\nExceptions:\n");
+    for (Throwable t : exceptions) {
+      buffer.append(t.toString());
+      buffer.append("\n");
+    }
+    return buffer.toString();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java
new file mode 100644
index 0000000..6c62024
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java
@@ -0,0 +1,138 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HServerAddress;
+
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * This subclass of {@link org.apache.hadoop.hbase.client.RetriesExhaustedException}
+ * is thrown when we have more information about which rows were causing which
+ * exceptions on what servers.  You can call {@link #mayHaveClusterIssues()};
+ * if it returns false the failures were caused by problems with your input,
+ * otherwise the cluster may have issues.  You can iterate over the causes, rows and last
+ * known server addresses via {@link #getNumExceptions()} and
+ * {@link #getCause(int)}, {@link #getRow(int)} and {@link #getAddress(int)}.
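+ * <p>
+ * A minimal sketch of inspecting a caught instance; the <code>table</code>,
+ * <code>puts</code> and <code>LOG</code> handles are assumed to exist:
+ * <pre>
+ *   try {
+ *     table.put(puts);
+ *   } catch (RetriesExhaustedWithDetailsException e) {
+ *     for (int i = 0; i &lt; e.getNumExceptions(); i++) {
+ *       LOG.warn("Failed row " + Bytes.toStringBinary(e.getRow(i).getRow())
+ *         + " on " + e.getAddress(i), e.getCause(i));
+ *     }
+ *   }
+ * </pre>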
+ */
+public class RetriesExhaustedWithDetailsException extends RetriesExhaustedException {
+
+  List<Throwable> exceptions;
+  List<Row> actions;
+  List<HServerAddress> addresses;
+
+  public RetriesExhaustedWithDetailsException(List<Throwable> exceptions,
+                                              List<Row> actions,
+                                              List<HServerAddress> addresses) {
+    super("Failed " + exceptions.size() + " action" +
+        pluralize(exceptions) + ": " +
+        getDesc(exceptions,actions,addresses));
+
+    this.exceptions = exceptions;
+    this.actions = actions;
+    this.addresses = addresses;
+  }
+
+  public List<Throwable> getCauses() {
+    return exceptions;
+  }
+
+  public int getNumExceptions() {
+    return exceptions.size();
+  }
+
+  public Throwable getCause(int i) {
+    return exceptions.get(i);
+  }
+
+  public Row getRow(int i) {
+    return actions.get(i);
+  }
+
+  public HServerAddress getAddress(int i) {
+    return addresses.get(i);
+  }
+
+  public boolean mayHaveClusterIssues() {
+    boolean res = false;
+
+    // If any of the exceptions is not a DoNotRetryIOException, we may have cluster issues
+    for (Throwable t : exceptions) {
+      if ( !(t instanceof DoNotRetryIOException)) {
+        res = true;
+      }
+    }
+    return res;
+  }
+
+
+  public static String pluralize(Collection<?> c) {
+    return pluralize(c.size());
+  }
+
+  public static String pluralize(int c) {
+    return c > 1 ? "s" : "";
+  }
+
+  public static String getDesc(List<Throwable> exceptions,
+                               List<Row> actions,
+                               List<HServerAddress> addresses) {
+    String s = getDesc(classifyExs(exceptions));
+    s += "servers with issues: ";
+    Set<HServerAddress> uniqAddr = new HashSet<HServerAddress>();
+    uniqAddr.addAll(addresses);
+    for(HServerAddress addr : uniqAddr) {
+      s += addr + ", ";
+    }
+    return s;
+  }
+
+  public static Map<String, Integer> classifyExs(List<Throwable> ths) {
+    Map<String, Integer> cls = new HashMap<String, Integer>();
+    for (Throwable t : ths) {
+      if (t == null) continue;
+      String name = t.getClass().getSimpleName();
+      Integer i = cls.get(name);
+      if (i == null) {
+        i = 0;
+      }
+      i += 1;
+      cls.put(name, i);
+    }
+    return cls;
+  }
+
+  public static String getDesc(Map<String,Integer> classification) {
+    String s = "";
+    for (Map.Entry<String, Integer> e : classification.entrySet()) {
+      s += e.getKey() + ": " + e.getValue() + " time" +
+          pluralize(e.getValue()) + ", ";
+    }
+    return s;
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Row.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Row.java
new file mode 100644
index 0000000..cd332bd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Row.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * Has a row.
+ */
+public interface Row extends WritableComparable<Row> {
+  /**
+   * @return The row.
+   */
+  public byte [] getRow();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/RowLock.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/RowLock.java
new file mode 100644
index 0000000..56b0787
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/RowLock.java
@@ -0,0 +1,62 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+/**
+ * Holds row name and lock id.
+ */
+public class RowLock {
+  private byte [] row = null;
+  private long lockId = -1L;
+
+  /**
+   * Creates a RowLock from a row and lock id
+   * @param row row to lock on
+   * @param lockId the lock id
+   */
+  public RowLock(final byte [] row, final long lockId) {
+    this.row = row;
+    this.lockId = lockId;
+  }
+
+  /**
+   * Creates a RowLock with only a lock id
+   * @param lockId lock id
+   */
+  public RowLock(final long lockId) {
+    this.lockId = lockId;
+  }
+
+  /**
+   * Get the row for this RowLock
+   * @return the row
+   */
+  public byte [] getRow() {
+    return row;
+  }
+
+  /**
+   * Get the lock id from this RowLock
+   * @return the lock id
+   */
+  public long getLockId() {
+    return lockId;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/Scan.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/Scan.java
new file mode 100644
index 0000000..97e01f9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/Scan.java
@@ -0,0 +1,663 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.IncompatibleFilterException;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableFactories;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+/**
+ * Used to perform Scan operations.
+ * <p>
+ * All operations are identical to {@link Get} with the exception of
+ * instantiation.  Rather than specifying a single row, an optional startRow
+ * and stopRow may be defined.  If rows are not specified, the Scanner will
+ * iterate over all rows.
+ * <p>
+ * To scan everything for each row, instantiate a Scan object.
+ * <p>
+ * To modify scanner caching for just this scan, use {@link #setCaching(int) setCaching}.
+ * If caching is NOT set, we will use the caching value of the hosting
+ * {@link HTable}.  See {@link HTable#setScannerCaching(int)}.
+ * <p>
+ * To further define the scope of what to get when scanning, use the
+ * additional methods outlined below.
+ * <p>
+ * To get all columns from specific families, execute {@link #addFamily(byte[]) addFamily}
+ * for each family to retrieve.
+ * <p>
+ * To get specific columns, execute {@link #addColumn(byte[], byte[]) addColumn}
+ * for each column to retrieve.
+ * <p>
+ * To only retrieve columns within a specific range of version timestamps,
+ * execute {@link #setTimeRange(long, long) setTimeRange}.
+ * <p>
+ * To only retrieve columns with a specific timestamp, execute
+ * {@link #setTimeStamp(long) setTimeStamp}.
+ * <p>
+ * To limit the number of versions of each column to be returned, execute
+ * {@link #setMaxVersions(int) setMaxVersions}.
+ * <p>
+ * To limit the maximum number of values returned for each call to next(),
+ * execute {@link #setBatch(int) setBatch}.
+ * <p>
+ * To add a filter, execute {@link #setFilter(org.apache.hadoop.hbase.filter.Filter) setFilter}.
+ * <p>
+ * Expert: To explicitly disable server-side block caching for this scan,
+ * execute {@link #setCacheBlocks(boolean)}.
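+ * <p>
+ * A minimal usage sketch; it assumes an existing {@link HTable} <code>table</code>,
+ * and the row, family and qualifier names are illustrative:
+ * <pre>
+ *   Scan scan = new Scan(Bytes.toBytes("startRow"), Bytes.toBytes("stopRow"));
+ *   scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
+ *   scan.setMaxVersions(2);
+ *   ResultScanner scanner = table.getScanner(scan);
+ * </pre>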
+ */
+public class Scan implements Writable {
+  private static final byte SCAN_VERSION = (byte)1;
+  private byte [] startRow = HConstants.EMPTY_START_ROW;
+  private byte [] stopRow  = HConstants.EMPTY_END_ROW;
+  private int maxVersions = 1;
+  private int batch = -1;
+  /*
+   * -1 means no caching
+   */
+  private int caching = -1;
+  private boolean cacheBlocks = true;
+  private Filter filter = null;
+  private TimeRange tr = new TimeRange();
+  private Map<byte [], NavigableSet<byte []>> familyMap =
+    new TreeMap<byte [], NavigableSet<byte []>>(Bytes.BYTES_COMPARATOR);
+
+  /**
+   * Create a Scan operation across all rows.
+   */
+  public Scan() {}
+
+  /**
+   * Create a Scan operation starting at the specified row and applying the
+   * given server-side filter.
+   * @param startRow row to start scanner at or after
+   * @param filter filter to run on the server
+   */
+  public Scan(byte [] startRow, Filter filter) {
+    this(startRow);
+    this.filter = filter;
+  }
+
+  /**
+   * Create a Scan operation starting at the specified row.
+   * <p>
+   * If the specified row does not exist, the Scanner will start from the
+   * next closest row after the specified row.
+   * @param startRow row to start scanner at or after
+   */
+  public Scan(byte [] startRow) {
+    this.startRow = startRow;
+  }
+
+  /**
+   * Create a Scan operation for the range of rows specified.
+   * @param startRow row to start scanner at or after (inclusive)
+   * @param stopRow row to stop scanner before (exclusive)
+   */
+  public Scan(byte [] startRow, byte [] stopRow) {
+    this.startRow = startRow;
+    this.stopRow = stopRow;
+  }
+
+  /**
+   * Creates a new instance of this class while copying all values.
+   *
+   * @param scan  The scan instance to copy from.
+   * @throws IOException When copying the values fails.
+   */
+  public Scan(Scan scan) throws IOException {
+    startRow = scan.getStartRow();
+    stopRow  = scan.getStopRow();
+    maxVersions = scan.getMaxVersions();
+    batch = scan.getBatch();
+    caching = scan.getCaching();
+    cacheBlocks = scan.getCacheBlocks();
+    filter = scan.getFilter(); // clone?
+    TimeRange ctr = scan.getTimeRange();
+    tr = new TimeRange(ctr.getMin(), ctr.getMax());
+    Map<byte[], NavigableSet<byte[]>> fams = scan.getFamilyMap();
+    for (Map.Entry<byte[],NavigableSet<byte[]>> entry : fams.entrySet()) {
+      byte [] fam = entry.getKey();
+      NavigableSet<byte[]> cols = entry.getValue();
+      if (cols != null && cols.size() > 0) {
+        for (byte[] col : cols) {
+          addColumn(fam, col);
+        }
+      } else {
+        addFamily(fam);
+      }
+    }
+  }
+
+  /**
+   * Builds a scan object with the same specs as get.
+   * @param get get to model scan after
+   */
+  public Scan(Get get) {
+    this.startRow = get.getRow();
+    this.stopRow = get.getRow();
+    this.filter = get.getFilter();
+    this.cacheBlocks = get.getCacheBlocks();
+    this.maxVersions = get.getMaxVersions();
+    this.tr = get.getTimeRange();
+    this.familyMap = get.getFamilyMap();
+  }
+
+  public boolean isGetScan() {
+    return this.startRow != null && this.startRow.length > 0 &&
+      Bytes.equals(this.startRow, this.stopRow);
+  }
+
+  /**
+   * Get all columns from the specified family.
+   * <p>
+   * Overrides previous calls to addColumn for this family.
+   * @param family family name
+   * @return this
+   */
+  public Scan addFamily(byte [] family) {
+    familyMap.remove(family);
+    familyMap.put(family, null);
+    return this;
+  }
+
+  /**
+   * Get the column from the specified family with the specified qualifier.
+   * <p>
+   * Overrides previous calls to addFamily for this family.
+   * @param family family name
+   * @param qualifier column qualifier
+   * @return this
+   */
+  public Scan addColumn(byte [] family, byte [] qualifier) {
+    NavigableSet<byte []> set = familyMap.get(family);
+    if(set == null) {
+      set = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+    }
+    set.add(qualifier);
+    familyMap.put(family, set);
+
+    return this;
+  }
+
+  /**
+   * Get versions of columns only within the specified timestamp range,
+   * [minStamp, maxStamp).  Note, default maximum versions to return is 1.  If
+   * your time range spans more than one version and you want all versions
+   * returned, up the number of versions beyond the default.
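+   * <p>
+   * For example, assuming an existing <code>scan</code>, to see every version
+   * written in roughly the last hour:
+   * <pre>
+   *   long now = System.currentTimeMillis();
+   *   scan.setTimeRange(now - 3600 * 1000L, now).setMaxVersions();
+   * </pre>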
+   * @param minStamp minimum timestamp value, inclusive
+   * @param maxStamp maximum timestamp value, exclusive
+   * @throws IOException if invalid time range
+   * @see #setMaxVersions()
+   * @see #setMaxVersions(int)
+   * @return this
+   */
+  public Scan setTimeRange(long minStamp, long maxStamp)
+  throws IOException {
+    tr = new TimeRange(minStamp, maxStamp);
+    return this;
+  }
+
+  /**
+   * Get versions of columns with the specified timestamp. Note, default maximum
+   * versions to return is 1.  If your time range spans more than one version
+   * and you want all versions returned, up the number of versions beyond the
+   * default.
+   * @param timestamp version timestamp
+   * @see #setMaxVersions()
+   * @see #setMaxVersions(int)
+   * @return this
+   */
+  public Scan setTimeStamp(long timestamp) {
+    try {
+      tr = new TimeRange(timestamp, timestamp+1);
+    } catch(IOException e) {
+      // Will never happen
+    }
+    return this;
+  }
+
+  /**
+   * Set the start row of the scan.
+   * @param startRow row to start scan on, inclusive
+   * @return this
+   */
+  public Scan setStartRow(byte [] startRow) {
+    this.startRow = startRow;
+    return this;
+  }
+
+  /**
+   * Set the stop row.
+   * @param stopRow row to end at (exclusive)
+   * @return this
+   */
+  public Scan setStopRow(byte [] stopRow) {
+    this.stopRow = stopRow;
+    return this;
+  }
+
+  /**
+   * Get all available versions.
+   * @return this
+   */
+  public Scan setMaxVersions() {
+    this.maxVersions = Integer.MAX_VALUE;
+    return this;
+  }
+
+  /**
+   * Get up to the specified number of versions of each column.
+   * @param maxVersions maximum versions for each column
+   * @return this
+   */
+  public Scan setMaxVersions(int maxVersions) {
+    this.maxVersions = maxVersions;
+    return this;
+  }
+
+  /**
+   * Set the maximum number of values to return for each call to next()
+   * @param batch the maximum number of values
+   */
+  public void setBatch(int batch) {
+    if(this.hasFilter() && this.filter.hasFilterRow()) {
+      throw new IncompatibleFilterException(
+        "Cannot set batch on a scan using a filter" +
+        " that returns true for filter.hasFilterRow");
+    }
+    this.batch = batch;
+  }
+
+  /**
+   * Set the number of rows for caching that will be passed to scanners.
+   * If not set, the default setting from {@link HTable#getScannerCaching()} will apply.
+   * Higher caching values will enable faster scanners but will use more memory.
+   * @param caching the number of rows for caching
+   */
+  public void setCaching(int caching) {
+    this.caching = caching;
+  }
+
+  /**
+   * Apply the specified server-side filter when performing the Scan.
+   * @param filter filter to run on the server
+   * @return this
+   */
+  public Scan setFilter(Filter filter) {
+    this.filter = filter;
+    return this;
+  }
+
+  /**
+   * Set the familyMap.
+   * @param familyMap map of family to qualifier
+   * @return this
+   */
+  public Scan setFamilyMap(Map<byte [], NavigableSet<byte []>> familyMap) {
+    this.familyMap = familyMap;
+    return this;
+  }
+
+  /**
+   * Get the familyMap.
+   * @return familyMap
+   */
+  public Map<byte [], NavigableSet<byte []>> getFamilyMap() {
+    return this.familyMap;
+  }
+
+  /**
+   * @return the number of families in familyMap
+   */
+  public int numFamilies() {
+    if(hasFamilies()) {
+      return this.familyMap.size();
+    }
+    return 0;
+  }
+
+  /**
+   * @return true if familyMap is non-empty, false otherwise
+   */
+  public boolean hasFamilies() {
+    return !this.familyMap.isEmpty();
+  }
+
+  /**
+   * @return the keys of the familyMap
+   */
+  public byte[][] getFamilies() {
+    if(hasFamilies()) {
+      return this.familyMap.keySet().toArray(new byte[0][0]);
+    }
+    return null;
+  }
+
+  /**
+   * @return the startrow
+   */
+  public byte [] getStartRow() {
+    return this.startRow;
+  }
+
+  /**
+   * @return the stoprow
+   */
+  public byte [] getStopRow() {
+    return this.stopRow;
+  }
+
+  /**
+   * @return the max number of versions to fetch
+   */
+  public int getMaxVersions() {
+    return this.maxVersions;
+  }
+
+  /**
+   * @return maximum number of values to return for a single call to next()
+   */
+  public int getBatch() {
+    return this.batch;
+  }
+
+  /**
+   * @return the number of rows for caching that will be fetched when calling next on a scanner
+   */
+  public int getCaching() {
+    return this.caching;
+  }
+
+  /**
+   * @return TimeRange
+   */
+  public TimeRange getTimeRange() {
+    return this.tr;
+  }
+
+  /**
+   * @return the filter, or null if none has been set
+   */
+  public Filter getFilter() {
+    return filter;
+  }
+
+  /**
+   * @return true if a filter has been specified, false if not
+   */
+  public boolean hasFilter() {
+    return filter != null;
+  }
+
+  /**
+   * Set whether blocks should be cached for this Scan.
+   * <p>
+   * This is true by default.  When true, default settings of the table and
+   * family are used (this will never override caching blocks if the block
+   * cache is disabled for that family or entirely).
+   *
+   * @param cacheBlocks if false, default settings are overridden and blocks
+   * will not be cached
+   */
+  public void setCacheBlocks(boolean cacheBlocks) {
+    this.cacheBlocks = cacheBlocks;
+  }
+
+  /**
+   * Get whether blocks should be cached for this Scan.
+   * @return true if default caching should be used, false if blocks should not
+   * be cached
+   */
+  public boolean getCacheBlocks() {
+    return cacheBlocks;
+  }
+
+  /**
+   * @return String
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("startRow=");
+    sb.append(Bytes.toStringBinary(this.startRow));
+    sb.append(", stopRow=");
+    sb.append(Bytes.toStringBinary(this.stopRow));
+    sb.append(", maxVersions=");
+    sb.append(this.maxVersions);
+    sb.append(", batch=");
+    sb.append(this.batch);
+    sb.append(", caching=");
+    sb.append(this.caching);
+    sb.append(", cacheBlocks=");
+    sb.append(this.cacheBlocks);
+    sb.append(", timeRange=");
+    sb.append("[").append(this.tr.getMin()).append(",");
+    sb.append(this.tr.getMax()).append(")");
+    sb.append(", families=");
+    if(this.familyMap.size() == 0) {
+      sb.append("ALL");
+      return sb.toString();
+    }
+    boolean moreThanOne = false;
+    for(Map.Entry<byte [], NavigableSet<byte[]>> entry : this.familyMap.entrySet()) {
+      if(moreThanOne) {
+        sb.append("), ");
+      } else {
+        moreThanOne = true;
+        sb.append("{");
+      }
+      sb.append("(family=");
+      sb.append(Bytes.toStringBinary(entry.getKey()));
+      sb.append(", columns=");
+      if(entry.getValue() == null) {
+        sb.append("ALL");
+      } else {
+        sb.append("{");
+        boolean moreThanOneB = false;
+        for(byte [] column : entry.getValue()) {
+          if(moreThanOneB) {
+            sb.append(", ");
+          } else {
+            moreThanOneB = true;
+          }
+          sb.append(Bytes.toStringBinary(column));
+        }
+        sb.append("}");
+      }
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
+  @SuppressWarnings("unchecked")
+  private Writable createForName(String className) {
+    try {
+      Class<? extends Writable> clazz =
+        (Class<? extends Writable>) Class.forName(className);
+      return WritableFactories.newInstance(clazz, new Configuration());
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Can't find class " + className);
+    }
+  }
+
+  //Writable
+  public void readFields(final DataInput in)
+  throws IOException {
+    int version = in.readByte();
+    if (version > (int)SCAN_VERSION) {
+      throw new IOException("version not supported");
+    }
+    this.startRow = Bytes.readByteArray(in);
+    this.stopRow = Bytes.readByteArray(in);
+    this.maxVersions = in.readInt();
+    this.batch = in.readInt();
+    this.caching = in.readInt();
+    this.cacheBlocks = in.readBoolean();
+    if(in.readBoolean()) {
+      this.filter = (Filter)createForName(Bytes.toString(Bytes.readByteArray(in)));
+      this.filter.readFields(in);
+    }
+    this.tr = new TimeRange();
+    tr.readFields(in);
+    int numFamilies = in.readInt();
+    this.familyMap =
+      new TreeMap<byte [], NavigableSet<byte []>>(Bytes.BYTES_COMPARATOR);
+    for(int i=0; i<numFamilies; i++) {
+      byte [] family = Bytes.readByteArray(in);
+      int numColumns = in.readInt();
+      TreeSet<byte []> set = new TreeSet<byte []>(Bytes.BYTES_COMPARATOR);
+      for(int j=0; j<numColumns; j++) {
+        byte [] qualifier = Bytes.readByteArray(in);
+        set.add(qualifier);
+      }
+      this.familyMap.put(family, set);
+    }
+  }
+
+  public void write(final DataOutput out)
+  throws IOException {
+    out.writeByte(SCAN_VERSION);
+    Bytes.writeByteArray(out, this.startRow);
+    Bytes.writeByteArray(out, this.stopRow);
+    out.writeInt(this.maxVersions);
+    out.writeInt(this.batch);
+    out.writeInt(this.caching);
+    out.writeBoolean(this.cacheBlocks);
+    if(this.filter == null) {
+      out.writeBoolean(false);
+    } else {
+      out.writeBoolean(true);
+      Bytes.writeByteArray(out, Bytes.toBytes(filter.getClass().getName()));
+      filter.write(out);
+    }
+    tr.write(out);
+    out.writeInt(familyMap.size());
+    for(Map.Entry<byte [], NavigableSet<byte []>> entry : familyMap.entrySet()) {
+      Bytes.writeByteArray(out, entry.getKey());
+      NavigableSet<byte []> columnSet = entry.getValue();
+      if(columnSet != null){
+        out.writeInt(columnSet.size());
+        for(byte [] qualifier : columnSet) {
+          Bytes.writeByteArray(out, qualifier);
+        }
+      } else {
+        out.writeInt(0);
+      }
+    }
+  }
+
+  /**
+   * Parses a combined family and qualifier and adds either both or just the
+   * family in case there is no qualifier. This assumes the older colon
+   * divided notation, e.g. "data:contents" or "meta:".
+   * <p>
+   * Note: It will throw an error when the colon is missing.
+   *
+   * @param familyAndQualifier family and qualifier
+   * @return A reference to this instance.
+   * @throws IllegalArgumentException When the colon is missing.
+   * @deprecated use {@link #addColumn(byte[], byte[])} instead
+   */
+  public Scan addColumn(byte[] familyAndQualifier) {
+    byte [][] fq = KeyValue.parseColumn(familyAndQualifier);
+    if (fq.length > 1 && fq[1] != null && fq[1].length > 0) {
+      addColumn(fq[0], fq[1]);
+    } else {
+      addFamily(fq[0]);
+    }
+    return this;
+  }
+
+  /**
+   * Adds an array of columns specified using old format, family:qualifier.
+   * <p>
+   * Overrides previous calls to addFamily for any families in the input.
+   *
+   * @param columns array of columns, formatted as <code>family:qualifier</code>
+   * @deprecated issue multiple calls to {@link #addColumn(byte[], byte[])} instead
+   * @return this
+   */
+  public Scan addColumns(byte [][] columns) {
+    for (byte[] column : columns) {
+      addColumn(column);
+    }
+    return this;
+  }
+
+  /**
+   * Convenience method to help parse old-style column definitions (typically
+   * user entries on the command line), e.g. "data:contents mime:". The columns
+   * must be space delimited and always have a colon (":") to denote family
+   * and qualifier.
+   *
+   * @param columns  The columns to parse.
+   * @return A reference to this instance.
+   * @deprecated use {@link #addColumn(byte[], byte[])} instead
+   */
+  public Scan addColumns(String columns) {
+    String[] cols = columns.split(" ");
+    for (String col : cols) {
+      addColumn(Bytes.toBytes(col));
+    }
+    return this;
+  }
+
+  /**
+   * Helps to convert the binary column families and qualifiers to a text
+   * representation, e.g. "data:mimetype data:contents meta:". Binary values
+   * are properly encoded using {@link Bytes#toStringBinary(byte[])}.
+   *
+   * @return The columns in an old style string format.
+   * @deprecated
+   */
+  public String getInputColumns() {
+    StringBuilder cols = new StringBuilder("");
+    for (Map.Entry<byte[], NavigableSet<byte[]>> e :
+      familyMap.entrySet()) {
+      byte[] fam = e.getKey();
+      if (cols.length() > 0) cols.append(" ");
+      NavigableSet<byte[]> quals = e.getValue();
+      // check if this family has qualifiers
+      if (quals != null && quals.size() > 0) {
+        StringBuilder cs = new StringBuilder("");
+        for (byte[] qual : quals) {
+          if (cs.length() > 0) cs.append(" ");
+          // encode values to make parsing easier later
+          cs.append(Bytes.toStringBinary(fam)).append(":").append(Bytes.toStringBinary(qual));
+        }
+        cols.append(cs);
+      } else {
+        // only add the family but with old style delimiter
+        cols.append(Bytes.toStringBinary(fam)).append(":");
+      }
+    }
+    return cols.toString();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
new file mode 100644
index 0000000..5ea38b4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java
@@ -0,0 +1,154 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.ipc.RemoteException;
+
+
+/**
+ * Retries scanner operations such as create, next, etc.
+ * Used by {@link ResultScanner}s made by {@link HTable}.
+ */
+public class ScannerCallable extends ServerCallable<Result[]> {
+  private static final Log LOG = LogFactory.getLog(ScannerCallable.class);
+  private long scannerId = -1L;
+  private boolean instantiated = false;
+  private boolean closed = false;
+  private Scan scan;
+  private int caching = 1;
+
+  /**
+   * @param connection which connection
+   * @param tableName table callable is on
+   * @param scan the scan to execute
+   */
+  public ScannerCallable (HConnection connection, byte [] tableName, Scan scan) {
+    super(connection, tableName, scan.getStartRow());
+    this.scan = scan;
+  }
+
+  /**
+   * @param reload force reload of server location
+   * @throws IOException
+   */
+  @Override
+  public void instantiateServer(boolean reload) throws IOException {
+    if (!instantiated || reload) {
+      super.instantiateServer(reload);
+      instantiated = true;
+    }
+  }
+
+  /**
+   * @see java.util.concurrent.Callable#call()
+   */
+  public Result [] call() throws IOException {
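+    // Three cases: a close was requested (close the open scanner), no scanner
+    // is open yet (open one), otherwise fetch the next batch of Results.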
+    if (scannerId != -1L && closed) {
+      close();
+    } else if (scannerId == -1L && !closed) {
+      this.scannerId = openScanner();
+    } else {
+      Result [] rrs = null;
+      try {
+        rrs = server.next(scannerId, caching);
+      } catch (IOException e) {
+        IOException ioe = null;
+        if (e instanceof RemoteException) {
+          ioe = RemoteExceptionHandler.decodeRemoteException((RemoteException)e);
+        }
+        if (ioe == null) throw new IOException(e);
+        if (ioe instanceof NotServingRegionException) {
+          // Throw a DNRE so that we break out of cycle of calling NSRE
+          // when what we need is to open scanner against new location.
+          // Attach NSRE to signal client that it needs to resetup scanner.
+          throw new DoNotRetryIOException("Reset scanner", ioe);
+        } else {
+          // The outer layers will retry
+          throw ioe;
+        }
+      }
+      return rrs;
+    }
+    return null;
+  }
+
+  private void close() {
+    if (this.scannerId == -1L) {
+      return;
+    }
+    try {
+      this.server.close(this.scannerId);
+    } catch (IOException e) {
+      LOG.warn("Ignore, probably already closed", e);
+    }
+    this.scannerId = -1L;
+  }
+
+  protected long openScanner() throws IOException {
+    return this.server.openScanner(this.location.getRegionInfo().getRegionName(),
+      this.scan);
+  }
+
+  protected Scan getScan() {
+    return scan;
+  }
+
+  /**
+   * Call this when the next invocation of call should close the scanner
+   */
+  public void setClose() {
+    this.closed = true;
+  }
+
+  /**
+   * @return the HRegionInfo for the current region
+   */
+  public HRegionInfo getHRegionInfo() {
+    if (!instantiated) {
+      return null;
+    }
+    return location.getRegionInfo();
+  }
+
+  /**
+   * Get the number of rows that will be fetched on the next call to next()
+   * @return the number of rows for caching
+   */
+  public int getCaching() {
+    return caching;
+  }
+
+  /**
+   * Set the number of rows that will be fetched on the next call to next()
+   * @param caching the number of rows for caching
+   */
+  public void setCaching(int caching) {
+    this.caching = caching;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java
new file mode 100644
index 0000000..5a10b0e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown when a scanner has timed out.
+ */
+public class ScannerTimeoutException extends DoNotRetryIOException {
+
+  private static final long serialVersionUID = 8788838690290688313L;
+
+  /** default constructor */
+  ScannerTimeoutException() {
+    super();
+  }
+
+  /** @param s */
+  ScannerTimeoutException(String s) {
+    super(s);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java
new file mode 100644
index 0000000..6f22123
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/ServerCallable.java
@@ -0,0 +1,81 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+
+import java.io.IOException;
+import java.util.concurrent.Callable;
+
+/**
+ * Abstract class that implements Callable, used by retryable actions.
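+ * <p>A rough usage sketch, not a prescription: subclasses implement
+ * {@link #call()} against the <code>server</code> and <code>location</code>
+ * fields set up by {@link #instantiateServer(boolean)}, e.g. (hypothetical
+ * snippet, assuming HRegionInterface#get(byte[], Get)):
+ * <pre>
+ *   new ServerCallable&lt;Result&gt;(connection, tableName, get.getRow()) {
+ *     public Result call() throws IOException {
+ *       return server.get(location.getRegionInfo().getRegionName(), get);
+ *     }
+ *   };
+ * </pre>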
+ * @param <T> the class that the ServerCallable handles
+ */
+public abstract class ServerCallable<T> implements Callable<T> {
+  protected final HConnection connection;
+  protected final byte [] tableName;
+  protected final byte [] row;
+  protected HRegionLocation location;
+  protected HRegionInterface server;
+
+  /**
+   * @param connection connection callable is on
+   * @param tableName table name callable is on
+   * @param row row we are querying
+   */
+  public ServerCallable(HConnection connection, byte [] tableName, byte [] row) {
+    this.connection = connection;
+    this.tableName = tableName;
+    this.row = row;
+  }
+
+  /**
+   * Connect to the region server hosting this callable's row.
+   * @param reload set this to true if connection should re-find the region
+   * @throws IOException e
+   */
+  public void instantiateServer(boolean reload) throws IOException {
+    this.location = connection.getRegionLocation(tableName, row, reload);
+    this.server = connection.getHRegionConnection(location.getServerAddress());
+  }
+
+  /** @return the server name */
+  public String getServerName() {
+    if (location == null) {
+      return null;
+    }
+    return location.getServerAddress().toString();
+  }
+
+  /** @return the region name */
+  public byte[] getRegionName() {
+    if (location == null) {
+      return null;
+    }
+    return location.getRegionInfo().getRegionName();
+  }
+
+  /** @return the row */
+  public byte [] getRow() {
+    return row;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHColumnDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHColumnDescriptor.java
new file mode 100644
index 0000000..d99f02a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHColumnDescriptor.java
@@ -0,0 +1,93 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+
+/**
+ * Immutable HColumnDescriptor
+ */
+public class UnmodifyableHColumnDescriptor extends HColumnDescriptor {
+
+  /**
+   * @param desc wrapped
+   */
+  public UnmodifyableHColumnDescriptor (final HColumnDescriptor desc) {
+    super(desc);
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setValue(byte[], byte[])
+   */
+  @Override
+  public void setValue(byte[] key, byte[] value) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setValue(java.lang.String, java.lang.String)
+   */
+  @Override
+  public void setValue(String key, String value) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions(int)
+   */
+  @Override
+  public void setMaxVersions(int maxVersions) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setInMemory(boolean)
+   */
+  @Override
+  public void setInMemory(boolean inMemory) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setBlockCacheEnabled(boolean)
+   */
+  @Override
+  public void setBlockCacheEnabled(boolean blockCacheEnabled) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setTimeToLive(int)
+   */
+  @Override
+  public void setTimeToLive(int timeToLive) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HColumnDescriptor#setCompressionType(org.apache.hadoop.hbase.io.hfile.Compression.Algorithm)
+   */
+  @Override
+  public void setCompressionType(Compression.Algorithm type) {
+    throw new UnsupportedOperationException("HColumnDescriptor is read-only");
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java
new file mode 100644
index 0000000..23e7a6b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java
@@ -0,0 +1,51 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+
+class UnmodifyableHRegionInfo extends HRegionInfo {
+  /*
+   * Creates an unmodifyable copy of an HRegionInfo
+   *
+   * @param info
+   */
+  UnmodifyableHRegionInfo(HRegionInfo info) {
+    super(info);
+    this.tableDesc = new UnmodifyableHTableDescriptor(info.getTableDesc());
+  }
+
+  /**
+   * @param split set split status
+   */
+  @Override
+  public void setSplit(boolean split) {
+    throw new UnsupportedOperationException("HRegionInfo is read-only");
+  }
+
+  /**
+   * @param offLine set online - offline status
+   */
+  @Override
+  public void setOffline(boolean offLine) {
+    throw new UnsupportedOperationException("HRegionInfo is read-only");
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java
new file mode 100644
index 0000000..27d1faa
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java
@@ -0,0 +1,124 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+
+/**
+ * Read-only table descriptor.
+ */
+public class UnmodifyableHTableDescriptor extends HTableDescriptor {
+  /** Default constructor */
+  public UnmodifyableHTableDescriptor() {
+    super();
+  }
+
+  /*
+   * Create an unmodifyable copy of an HTableDescriptor
+   * @param desc
+   */
+  UnmodifyableHTableDescriptor(final HTableDescriptor desc) {
+    super(desc.getName(), getUnmodifyableFamilies(desc), desc.getValues());
+  }
+
+
+  /*
+   * @param desc
+   * @return Families as unmodifiable array.
+   */
+  private static HColumnDescriptor[] getUnmodifyableFamilies(
+      final HTableDescriptor desc) {
+    HColumnDescriptor [] f = new HColumnDescriptor[desc.getFamilies().size()];
+    int i = 0;
+    for (HColumnDescriptor c: desc.getFamilies()) {
+      f[i++] = c;
+    }
+    return f;
+  }
+
+  /**
+   * Does NOT add a column family. This object is immutable
+   * @param family HColumnDescriptor of family to add.
+   */
+  @Override
+  public void addFamily(final HColumnDescriptor family) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+  /**
+   * Does NOT remove a column family. This object is immutable
+   * @param column name of the family to remove
+   * @return nothing; this method always throws
+   */
+  @Override
+  public HColumnDescriptor removeFamily(final byte [] column) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HTableDescriptor#setReadOnly(boolean)
+   */
+  @Override
+  public void setReadOnly(boolean readOnly) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HTableDescriptor#setValue(byte[], byte[])
+   */
+  @Override
+  public void setValue(byte[] key, byte[] value) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HTableDescriptor#setValue(java.lang.String, java.lang.String)
+   */
+  @Override
+  public void setValue(String key, String value) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HTableDescriptor#setMaxFileSize(long)
+   */
+  @Override
+  public void setMaxFileSize(long maxFileSize) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.HTableDescriptor#setMemStoreFlushSize(long)
+   */
+  @Override
+  public void setMemStoreFlushSize(long memstoreFlushSize) {
+    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+  }
+
+//  /**
+//   * @see org.apache.hadoop.hbase.HTableDescriptor#addIndex(org.apache.hadoop.hbase.client.tableindexed.IndexSpecification)
+//   */
+//  @Override
+//  public void addIndex(IndexSpecification index) {
+//    throw new UnsupportedOperationException("HTableDescriptor is read-only");
+//  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/package-info.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/package-info.java
new file mode 100644
index 0000000..b00a556
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/package-info.java
@@ -0,0 +1,186 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+Provides HBase Client
+
+<h2>Table of Contents</h2>
+<ul>
+ <li><a href="#overview">Overview</a></li>
+<li><a href="#client_example">Example API Usage</a></li>
+</ul>
+
+ <h2><a name="overview">Overview</a></h2>
+ <p>To administer HBase, create and drop tables, list and alter tables,
+ use {@link org.apache.hadoop.hbase.client.HBaseAdmin}.  Once created, table access is via an instance
+ of {@link org.apache.hadoop.hbase.client.HTable}.  You add content to a table a row at a time.  To insert,
+ create an instance of a {@link org.apache.hadoop.hbase.client.Put} object.  Specify value, target column
+ and optionally a timestamp.  Commit your update using {@link org.apache.hadoop.hbase.client.HTable#put(Put)}.
+ To fetch your inserted value, use {@link org.apache.hadoop.hbase.client.Get}.  The Get can be specified to be broad -- get all
+ on a particular row -- or narrow; i.e. return only a single cell value.   After creating an instance of
+ Get, invoke {@link org.apache.hadoop.hbase.client.HTable#get(Get)}.  Use
+ {@link org.apache.hadoop.hbase.client.Scan} to set up a scanner -- cursor-like access.  After
+ creating and configuring your Scan instance, call {@link org.apache.hadoop.hbase.client.HTable#getScanner(Scan)} and then
+ invoke next on the returned object.  Both {@link org.apache.hadoop.hbase.client.HTable#get(Get)} and
+ {@link org.apache.hadoop.hbase.client.HTable#getScanner(Scan)} return a
+{@link org.apache.hadoop.hbase.client.Result}.
+A Result is a List of {@link org.apache.hadoop.hbase.KeyValue}s.  It has facility for packaging the return
+in different formats.
+ Use {@link org.apache.hadoop.hbase.client.Delete} to remove content.
+ You can remove individual cells or entire families, etc.  Pass it to
+ {@link org.apache.hadoop.hbase.client.HTable#delete(Delete)} to execute.
+ </p>
+ <p>Puts, Gets and Deletes take out a lock on the target row for the duration of their operation.
+ Concurrent modifications to a single row are serialized.  Gets and scans run concurrently without
+ interference from the row locks and are guaranteed not to return half-written rows.
+ </p>
+ <p>Client code accessing a cluster finds the cluster by querying ZooKeeper.
+ This means that the ZooKeeper quorum configuration must be on the client CLASSPATH.
+ Usually this means making sure the client can find your <code>hbase-site.xml</code>.
+ </p>
+
+<h2><a name="client_example">Example API Usage</a></h2>
+
+<p>Once you have a running HBase, you probably want a way to hook your application up to it.
+  If your application is in Java, then you should use the Java API. Here's an example of what
+  a simple client might look like.  This example assumes that you've created a table called
+  "myLittleHBaseTable" with a column family called "myLittleFamily".
+</p>
+
+<div style="background-color: #cccccc; padding: 2px">
+<blockquote><pre>
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+// Class that has nothing but a main.
+// Does a Put, Get and a Scan against an hbase table.
+public class MyLittleHBaseClient {
+  public static void main(String[] args) throws IOException {
+    // You need a configuration object to tell the client where to connect.
+    // When you create a HBaseConfiguration, it reads in whatever you've set
+    // into your hbase-site.xml and in hbase-default.xml, as long as these can
+    // be found on the CLASSPATH
+    Configuration config = HBaseConfiguration.create();
+
+    // This instantiates an HTable object that connects you to
+    // the "myLittleHBaseTable" table.
+    HTable table = new HTable(config, "myLittleHBaseTable");
+
+    // To add to a row, use Put.  A Put constructor takes the name of the row
+    // you want to insert into as a byte array.  In HBase, the Bytes class has
+    // utility for converting all kinds of java types to byte arrays.  In the
+    // below, we are converting the String "myLittleRow" into a byte array to
+    // use as a row key for our update. Once you have a Put instance, you can
+    // adorn it by setting the names of columns you want to update on the row,
+    // the timestamp to use in your update, etc. If no timestamp is given, the server
+    // applies current time to the edits.
+    Put p = new Put(Bytes.toBytes("myLittleRow"));
+
+    // To set the value you'd like to update in the row 'myLittleRow', specify
+    // the column family, column qualifier, and value of the table cell you'd
+    // like to update.  The column family must already exist in your table
+    // schema.  The qualifier can be anything.  All must be specified as byte
+    // arrays as hbase is all about byte arrays.  Let's pretend the table
+    // 'myLittleHBaseTable' was created with a family 'myLittleFamily'.
+    p.add(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"),
+      Bytes.toBytes("Some Value"));
+
+    // Once you've adorned your Put instance with all the updates you want to
+    // make, to commit it do the following (The HTable#put method takes the
+    // Put instance you've been building and pushes the changes you made into
+    // hbase)
+    table.put(p);
+
+    // Now, to retrieve the data we just wrote. The values that come back are
+    // Result instances. Generally, a Result is an object that will package up
+    // the hbase return into the form you find most palatable.
+    Get g = new Get(Bytes.toBytes("myLittleRow"));
+    Result r = table.get(g);
+    byte [] value = r.getValue(Bytes.toBytes("myLittleFamily"),
+      Bytes.toBytes("someQualifier"));
+    // If we convert the value bytes, we should get back 'Some Value', the
+    // value we inserted at this location.
+    String valueStr = Bytes.toString(value);
+    System.out.println("GET: " + valueStr);
+
+    // Sometimes, you won't know the row you're looking for. In this case, you
+    // use a Scanner. This will give you a cursor-like interface to the contents
+    // of the table.  To set up a Scanner, do like you did above making a Put
+    // and a Get, create a Scan.  Adorn it with column names, etc.
+    Scan s = new Scan();
+    s.addColumn(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"));
+    ResultScanner scanner = table.getScanner(s);
+    try {
+      // Scanners return Result instances.
+      // Now, for the actual iteration. One way is to use a while loop like so:
+      for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
+        // print out the row we found and the columns we were looking for
+        System.out.println("Found row: " + rr);
+      }
+
+      // The other approach is to use a foreach loop. Scanners are iterable!
+      // for (Result rr : scanner) {
+      //   System.out.println("Found row: " + rr);
+      // }
+    } finally {
+      // Make sure you close your scanners when you are done!
+      // That's why we have it inside a try/finally clause
+      scanner.close();
+    }
+  }
+}
+</pre></blockquote>
+</div>
+
+<p>There are many other methods for putting data into and getting data out of
+  HBase, but these examples should get you started. See the HTable javadoc for
+  more methods. Additionally, there are methods for managing tables in the
+  HBaseAdmin class.</p>
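+
+<p>For example, creating the table used above through HBaseAdmin might look
+  roughly like the following sketch (illustrative only, reusing the table and
+  family names from the example):
+</p>
+
+<div style="background-color: #cccccc; padding: 2px">
+<blockquote><pre>
+Configuration config = HBaseConfiguration.create();
+HBaseAdmin admin = new HBaseAdmin(config);
+HTableDescriptor desc = new HTableDescriptor("myLittleHBaseTable");
+desc.addFamily(new HColumnDescriptor("myLittleFamily"));
+admin.createTable(desc);
+</pre></blockquote>
+</div>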
+
+<p>If your client is NOT Java, then you should consider the Thrift or REST
+  libraries.</p>
+
+<h2><a name="related" >Related Documentation</a></h2>
+<ul>
+  <li><a href="http://hbase.org">HBase Home Page</a>
+  <li><a href="http://wiki.apache.org/hadoop/Hbase">HBase Wiki</a>
+  <li><a href="http://hadoop.apache.org/">Hadoop Home Page</a>
+</ul>
+*/
+package org.apache.hadoop.hbase.client;
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java b/0.90/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
new file mode 100644
index 0000000..d76e333
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
@@ -0,0 +1,172 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.replication;
+
+import java.io.IOException;
+
+import org.apache.commons.lang.NotImplementedException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * <p>
+ * This class provides the administrative interface to HBase cluster
+ * replication. In order to use it, the cluster and the client using
+ * ReplicationAdmin must be configured with <code>hbase.replication</code>
+ * set to true.
+ * </p>
+ * <p>
+ * Adding a new peer results in creating new outbound connections from every
+ * region server to a subset of region servers on the slave cluster. Each
+ * new stream of replication will start replicating from the beginning of the
+ * current HLog, meaning that edits from the past will be replicated.
+ * </p>
+ * <p>
+ * Removing a peer is a destructive and irreversible operation that stops
+ * all the replication streams for the given cluster and deletes the metadata
+ * used to keep track of the replication state.
+ * </p>
+ * <p>
+ * Enabling and disabling peers is currently not supported.
+ * </p>
+ * <p>
+ * As cluster replication is still experimental, a kill switch is provided
+ * in order to stop all replication-related operations, see
+ * {@link #setReplicating(boolean)}. When setting it back to true, the new
+ * state of all the replication streams will be unknown and may have holes.
+ * Use at your own risk.
+ * </p>
+ * <p>
+ * To see which commands are available in the shell, type
+ * <code>replication</code>.
+ * </p>
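+ * <p>
+ * A minimal usage sketch; the peer id and cluster key below are made-up
+ * example values, not defaults:
+ * <pre>
+ *   Configuration conf = HBaseConfiguration.create();
+ *   ReplicationAdmin admin = new ReplicationAdmin(conf);
+ *   // cluster key = quorum hosts : client port : znode parent of the slave
+ *   admin.addPeer("1", "slave-zk1,slave-zk2,slave-zk3:2181:/hbase");
+ * </pre>
+ * </p>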
+ */
+public class ReplicationAdmin {
+
+  private final ReplicationZookeeper replicationZk;
+  private final HConnection connection;
+
+  /**
+   * Constructor that creates a connection to the local ZooKeeper ensemble.
+   * @param conf Configuration to use
+   * @throws IOException if the connection to ZK cannot be made
+   * @throws RuntimeException if replication isn't enabled.
+   */
+  public ReplicationAdmin(Configuration conf) throws IOException {
+    if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY, false)) {
+      throw new RuntimeException("hbase.replication isn't true, please " +
+          "enable it in order to use replication");
+    }
+    this.connection = HConnectionManager.getConnection(conf);
+    ZooKeeperWatcher zkw = this.connection.getZooKeeperWatcher();
+    try {
+      this.replicationZk = new ReplicationZookeeper(this.connection, conf, zkw);
+    } catch (KeeperException e) {
+      throw new IOException("Unable to set up the ZooKeeper connection", e);
+    }
+  }
+
+  /**
+   * Add a new peer cluster to replicate to.
+   * @param id a short that identifies the cluster
+   * @param clusterKey the concatenation of the slave cluster's
+   * <code>hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent</code>
+   * @throws IllegalStateException if there's already one slave since
+   * multi-slave isn't supported yet.
+   */
+  public void addPeer(String id, String clusterKey) throws IOException {
+    this.replicationZk.addPeer(id, clusterKey);
+  }
+
+  /**
+   * Removes a peer cluster and stops the replication to it.
+   * @param id a short that identifies the cluster
+   */
+  public void removePeer(String id) throws IOException {
+    this.replicationZk.removePeer(id);
+  }
+
+  /**
+   * Restart the replication stream to the specified peer.
+   * @param id a short that identifies the cluster
+   */
+  public void enablePeer(String id) {
+    throw new NotImplementedException("Not implemented");
+  }
+
+  /**
+   * Stop the replication stream to the specified peer.
+   * @param id a short that identifies the cluster
+   */
+  public void disablePeer(String id) {
+    throw new NotImplementedException("Not implemented");
+  }
+
+  /**
+   * Get the number of slave clusters the local cluster has.
+   * @return number of slave clusters
+   */
+  public int getPeersCount() {
+    return this.replicationZk.listPeersIdsAndWatch().size();
+  }
+
+  /**
+   * Get the current status of the kill switch, if the cluster is replicating
+   * or not.
+   * @return true if the cluster is replicating, otherwise false
+   */
+  public boolean getReplicating() throws IOException {
+    try {
+      return this.replicationZk.getReplication();
+    } catch (KeeperException e) {
+      throw new IOException("Couldn't get the replication status", e);
+    }
+  }
+
+  /**
+   * Kill switch for all replication-related features
+   * @param newState true to start replication, false to stop it completely
+   * @return the previous state
+   */
+  public boolean setReplicating(boolean newState) throws IOException {
+    boolean prev = true;
+    try {
+      prev = getReplicating();
+      this.replicationZk.setReplicating(newState);
+    } catch (KeeperException e) {
+      throw new IOException("Unable to set the replication state", e);
+    }
+    return prev;
+  }
+
+  /**
+   * Get the ZK-support tool created and used by this object for replication.
+   * @return the ZK-support tool
+   */
+  ReplicationZookeeper getReplicationZk() {
+    return replicationZk;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java
new file mode 100644
index 0000000..b48b390
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java
@@ -0,0 +1,224 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.executor;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Server;
+
+
+/**
+ * Abstract base class for all HBase event handlers. Subclasses should
+ * implement the {@link #process()} method.  Subclasses should also do all
+ * necessary checks up in their constructor if possible -- check table exists,
+ * is disabled, etc. -- so they fail fast rather than later when process is
+ * running.  Do it this way because process may be invoked directly, but event
+ * handlers are also run in an executor context -- i.e. asynchronously -- and
+ * in this case, exceptions thrown at process time will not be seen by the
+ * invoker until we implement a call-back mechanism so the client can pick
+ * them up later.
+ * <p>
+ * Event handlers have an {@link EventType}.
+ * {@link EventType} is a list of ALL handler event types.  We need to keep
+ * a full list in one place -- and an enum is a good shorthand for an
+ * implementation -- because event handlers can be passed to executors when
+ * they are to be run asynchronously. The
+ * hbase executor, see {@link ExecutorService}, has a switch for passing
+ * event type to executor.
+ * <p>
+ * Event listeners can be installed and will be called pre- and post- process if
+ * this EventHandler is run in a Thread (it's a Runnable, so only if its {@link #run()}
+ * method gets called).  Implement
+ * {@link EventHandlerListener}s and register them using
+ * {@link #setListener(EventHandlerListener)}.
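+ * <p>A bare-bones subclass sketch (names are illustrative only):
+ * <pre>
+ *   class MyHandler extends EventHandler {
+ *     MyHandler(Server server, EventType eventType) {
+ *       super(server, eventType);
+ *     }
+ *     public void process() throws IOException {
+ *       // do the work for this event here
+ *     }
+ *   }
+ * </pre>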
+ * @see ExecutorService
+ */
+public abstract class EventHandler implements Runnable, Comparable<Runnable> {
+  private static final Log LOG = LogFactory.getLog(EventHandler.class);
+
+  // type of event this object represents
+  protected EventType eventType;
+
+  protected Server server;
+
+  // sequence id generator for default FIFO ordering of events
+  protected static AtomicLong seqids = new AtomicLong(0);
+
+  // sequence id for this event
+  private final long seqid;
+
+  // Listener to call pre- and post- processing.  May be null.
+  private EventHandlerListener listener;
+
+  /**
+   * This interface provides pre- and post-process hooks for events.
+   */
+  public interface EventHandlerListener {
+    /**
+     * Called before any event is processed
+     * @param event The event handler whose process method is about to be called.
+     */
+    public void beforeProcess(EventHandler event);
+    /**
+     * Called after any event is processed
+     * @param event The event handler whose process method has just been called.
+     */
+    public void afterProcess(EventHandler event);
+  }
+
+  /**
+   * List of all HBase event handler types.  Event types are named by a
+   * convention: event type names specify the component from which the event
+   * originated and then where it's destined -- e.g. the RS_ZK_ prefix means the
+   * event came from a regionserver destined for zookeeper -- and then what
+   * the event is; e.g. REGION_OPENING.
+   * 
+   * <p>We give the enums indices so we can add types later and keep them
+   * grouped together rather than have to add them always to the end as we
+   * would have to if we used raw enum ordinals.
+   */
+  public enum EventType {
+    // Messages originating from RS (NOTE: there is NO direct communication from
+    // RS to Master). These are a result of RS updates into ZK.
+    RS_ZK_REGION_CLOSING      (1),   // RS is in process of closing a region
+    RS_ZK_REGION_CLOSED       (2),   // RS has finished closing a region
+    RS_ZK_REGION_OPENING      (3),   // RS is in process of opening a region
+    RS_ZK_REGION_OPENED       (4),   // RS has finished opening a region
+
+    // Messages originating from Master to RS
+    M_RS_OPEN_REGION          (20),  // Master asking RS to open a region
+    M_RS_OPEN_ROOT            (21),  // Master asking RS to open root
+    M_RS_OPEN_META            (22),  // Master asking RS to open meta
+    M_RS_CLOSE_REGION         (23),  // Master asking RS to close a region
+    M_RS_CLOSE_ROOT           (24),  // Master asking RS to close root
+    M_RS_CLOSE_META           (25),  // Master asking RS to close meta
+
+    // Messages originating from Client to Master
+    C_M_DELETE_TABLE          (40),   // Client asking Master to delete a table
+    C_M_DISABLE_TABLE         (41),   // Client asking Master to disable a table
+    C_M_ENABLE_TABLE          (42),   // Client asking Master to enable a table
+    C_M_MODIFY_TABLE          (43),   // Client asking Master to modify a table
+    C_M_ADD_FAMILY            (44),   // Client asking Master to add family to table
+    C_M_DELETE_FAMILY         (45),   // Client asking Master to delete family of table
+    C_M_MODIFY_FAMILY         (46),   // Client asking Master to modify family of table
+
+    // Updates from master to ZK. This is done by the master and there is
+    // nothing to process by either Master or RS
+    M_ZK_REGION_OFFLINE       (50),  // Master adds this region as offline in ZK
+
+    // Master controlled events to be executed on the master
+    M_SERVER_SHUTDOWN         (70),  // Master is processing shutdown of a RS
+    M_META_SERVER_SHUTDOWN    (72);  // Master is processing shutdown of RS hosting a meta region (-ROOT- or .META.).
+
+    /**
+     * Constructor
+     */
+    EventType(int value) {}
+  }
+
+  /**
+   * Default base class constructor.
+   */
+  public EventHandler(Server server, EventType eventType) {
+    this.server = server;
+    this.eventType = eventType;
+    seqid = seqids.incrementAndGet();
+  }
+
+  public void run() {
+    try {
+      if (getListener() != null) getListener().beforeProcess(this);
+      process();
+      if (getListener() != null) getListener().afterProcess(this);
+    } catch(Throwable t) {
+      LOG.error("Caught throwable while processing event " + eventType, t);
+    }
+  }
+
+  /**
+   * This method is the main processing loop to be implemented by the various
+   * subclasses.
+   * @throws IOException
+   */
+  public abstract void process() throws IOException;
+
+  /**
+   * Return the event type
+   * @return The event type.
+   */
+  public EventType getEventType() {
+    return this.eventType;
+  }
+
+  /**
+   * Get the priority level for this handler instance.  This uses natural
+   * ordering so lower numbers are higher priority.
+   * <p>
+   * Lowest priority is Integer.MAX_VALUE.  Highest priority is 0.
+   * <p>
+   * Subclasses should override this method to allow prioritizing handlers.
+   * <p>
+   * Handlers with the same priority are handled in FIFO order.
+   * <p>
+   * @return Integer.MAX_VALUE by default, override to set higher priorities
+   */
+  public int getPriority() {
+    return Integer.MAX_VALUE;
+  }
+
+  /**
+   * @return This events' sequence id.
+   */
+  public long getSeqid() {
+    return this.seqid;
+  }
+
+  /**
+   * Default prioritized runnable comparator which implements a FIFO ordering.
+   * <p>
+   * Subclasses should not override this.  Instead, if they want to implement
+   * priority beyond FIFO, they should override {@link #getPriority()}.
+   */
+  @Override
+  public int compareTo(Runnable o) {
+    EventHandler eh = (EventHandler)o;
+    if(getPriority() != eh.getPriority()) {
+      return (getPriority() < eh.getPriority()) ? -1 : 1;
+    }
+    return (this.seqid < eh.seqid) ? -1 : 1;
+  }
+
+  /**
+   * @return Current listener or null if none set.
+   */
+  public synchronized EventHandlerListener getListener() {
+    return listener;
+  }
+
+  /**
+   * @param listener Listener to call pre- and post- {@link #process()}.
+   */
+  public synchronized void setListener(EventHandlerListener listener) {
+    this.listener = listener;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java b/0.90/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java
new file mode 100644
index 0000000..6914c69
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java
@@ -0,0 +1,286 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.executor;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.executor.EventHandler.EventHandlerListener;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * This is a generic executor service. This component abstracts a
+ * threadpool, a queue to which {@link EventHandler}s can be submitted,
+ * and a <code>Runnable</code> that handles the object that is added to the queue.
+ *
+ * <p>In order to create a new service, create an instance of this class and
+ * then call {@link #startExecutorService(ExecutorType, int)} for each executor
+ * type you need.  When done call {@link #shutdown()}.
+ *
+ * <p>In order to use the service created above, call
+ * {@link #submit(EventHandler)}. Register pre- and post- processing listeners
+ * by registering your implementation of {@link EventHandler.EventHandlerListener}
+ * with {@link #registerListener(EventHandler.EventType, EventHandler.EventHandlerListener)}.  Be sure
+ * to deregister your listener when done via {@link #unregisterListener(EventHandler.EventType)}.
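+ *
+ * <p>A minimal usage sketch; the executor type, thread count and handler
+ * variable below are illustrative values only:
+ * <pre>
+ *   ExecutorService service = new ExecutorService("server-1");
+ *   service.startExecutorService(ExecutorType.MASTER_OPEN_REGION, 5);
+ *   service.submit(eventHandler);   // some EventHandler instance
+ *   service.shutdown();
+ * </pre>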
+ */
+public class ExecutorService {
+  private static final Log LOG = LogFactory.getLog(ExecutorService.class);
+
+  // holds all the executors created, in a map addressable by their names
+  private final ConcurrentHashMap<String, Executor> executorMap =
+    new ConcurrentHashMap<String, Executor>();
+
+  // listeners that are called before and after an event is processed
+  private ConcurrentHashMap<EventHandler.EventType, EventHandlerListener> eventHandlerListeners =
+    new ConcurrentHashMap<EventHandler.EventType, EventHandlerListener>();
+
+  // Name of the server hosting this executor service.
+  private final String servername;
+
+  /**
+   * The following is a list of all executor types, both those that run in the
+   * master and those that run in the regionserver.
+   */
+  public enum ExecutorType {
+
+    // Master executor services
+    MASTER_CLOSE_REGION        (1),
+    MASTER_OPEN_REGION         (2),
+    MASTER_SERVER_OPERATIONS   (3),
+    MASTER_TABLE_OPERATIONS    (4),
+    MASTER_RS_SHUTDOWN         (5),
+    MASTER_META_SERVER_OPERATIONS (6),
+
+    // RegionServer executor services
+    RS_OPEN_REGION             (20),
+    RS_OPEN_ROOT               (21),
+    RS_OPEN_META               (22),
+    RS_CLOSE_REGION            (23),
+    RS_CLOSE_ROOT              (24),
+    RS_CLOSE_META              (25);
+
+    ExecutorType(int value) {}
+
+    /**
+     * @param serverName
+     * @return Conflation of the executor type and the passed servername.
+     */
+    String getExecutorName(String serverName) {
+      return this.toString() + "-" + serverName;
+    }
+  }
+
+  /**
+   * Returns the executor type (which selects the thread pool instance) for the
+   * passed event handler type.
+   * @param type EventHandler type.
+   */
+  public ExecutorType getExecutorServiceType(final EventHandler.EventType type) {
+    switch(type) {
+      // Master executor services
+
+      case RS_ZK_REGION_CLOSED:
+        return ExecutorType.MASTER_CLOSE_REGION;
+
+      case RS_ZK_REGION_OPENED:
+        return ExecutorType.MASTER_OPEN_REGION;
+
+      case M_SERVER_SHUTDOWN:
+        return ExecutorType.MASTER_SERVER_OPERATIONS;
+
+      case M_META_SERVER_SHUTDOWN:
+        return ExecutorType.MASTER_META_SERVER_OPERATIONS;
+
+      case C_M_DELETE_TABLE:
+      case C_M_DISABLE_TABLE:
+      case C_M_ENABLE_TABLE:
+      case C_M_MODIFY_TABLE:
+        return ExecutorType.MASTER_TABLE_OPERATIONS;
+
+      // RegionServer executor services
+
+      case M_RS_OPEN_REGION:
+        return ExecutorType.RS_OPEN_REGION;
+
+      case M_RS_OPEN_ROOT:
+        return ExecutorType.RS_OPEN_ROOT;
+
+      case M_RS_OPEN_META:
+        return ExecutorType.RS_OPEN_META;
+
+      case M_RS_CLOSE_REGION:
+        return ExecutorType.RS_CLOSE_REGION;
+
+      case M_RS_CLOSE_ROOT:
+        return ExecutorType.RS_CLOSE_ROOT;
+
+      case M_RS_CLOSE_META:
+        return ExecutorType.RS_CLOSE_META;
+
+      default:
+        throw new RuntimeException("Unhandled event type " + type);
+    }
+  }
+
+  /**
+   * Default constructor.
+   * @param servername Name of the hosting server.
+   */
+  public ExecutorService(final String servername) {
+    super();
+    this.servername = servername;
+  }
+
+  /**
+   * Start an executor service with a given name. If there was a service already
+   * started with the same name, this throws a RuntimeException.
+   * @param name Name of the service to start.
+   */
+  void startExecutorService(String name, int maxThreads) {
+    if (this.executorMap.get(name) != null) {
+      throw new RuntimeException("An executor service with the name " + name +
+        " is already running!");
+    }
+    Executor hbes = new Executor(name, maxThreads, this.eventHandlerListeners);
+    if (this.executorMap.putIfAbsent(name, hbes) != null) {
+      throw new RuntimeException("An executor service with the name " + name +
+      " is already running (2)!");
+    }
+    LOG.debug("Starting executor service name=" + name +
+      ", corePoolSize=" + hbes.threadPoolExecutor.getCorePoolSize() +
+      ", maxPoolSize=" + hbes.threadPoolExecutor.getMaximumPoolSize());
+  }
+
+  boolean isExecutorServiceRunning(String name) {
+    return this.executorMap.containsKey(name);
+  }
+
+  public void shutdown() {
+    for(Entry<String, Executor> entry: this.executorMap.entrySet()) {
+      List<Runnable> wasRunning =
+        entry.getValue().threadPoolExecutor.shutdownNow();
+      if (!wasRunning.isEmpty()) {
+        LOG.info(entry.getKey() + " had " + wasRunning + " on shutdown");
+      }
+    }
+    this.executorMap.clear();
+  }
+
+  Executor getExecutor(final ExecutorType type) {
+    return getExecutor(type.getExecutorName(this.servername));
+  }
+
+  Executor getExecutor(String name) {
+    Executor executor = this.executorMap.get(name);
+    if (executor == null) {
+      LOG.debug("Executor service [" + name + "] not found in " + this.executorMap);
+    }
+    return executor;
+  }
+
+
+  public void startExecutorService(final ExecutorType type, final int maxThreads) {
+    String name = type.getExecutorName(this.servername);
+    if (isExecutorServiceRunning(name)) {
+      LOG.debug("Executor service " + toString() + " already running on " +
+        this.servername);
+      return;
+    }
+    startExecutorService(name, maxThreads);
+  }
+
+  public void submit(final EventHandler eh) {
+    getExecutor(getExecutorServiceType(eh.getEventType())).submit(eh);
+  }
+
+  /**
+   * Subscribe to updates before and after processing instances of
+   * {@link EventHandler.EventType}.  Currently only one listener per
+   * event type.
+   * @param type Type of event we're registering listener for
+   * @param listener The listener to run.
+   */
+  public void registerListener(final EventHandler.EventType type,
+      final EventHandlerListener listener) {
+    this.eventHandlerListeners.put(type, listener);
+  }
+
+  /**
+   * Stop receiving updates before and after processing instances of
+   * {@link EventHandler.EventType}
+   * @param type Type of event we're registering listener for
+   * @return The listener we removed or null if we did not remove it.
+   */
+  public EventHandlerListener unregisterListener(final EventHandler.EventType type) {
+    return this.eventHandlerListeners.remove(type);
+  }
+
+  /**
+   * Executor instance.
+   */
+  static class Executor {
+    // how long to retain excess threads
+    final long keepAliveTimeInMillis = 1000;
+    // the thread pool executor that services the requests
+    final ThreadPoolExecutor threadPoolExecutor;
+    // work queue to use - unbounded queue
+    final BlockingQueue<Runnable> q = new LinkedBlockingQueue<Runnable>();
+    private final String name;
+    private final Map<EventHandler.EventType, EventHandlerListener> eventHandlerListeners;
+
+    protected Executor(String name, int maxThreads,
+        final Map<EventHandler.EventType, EventHandlerListener> eventHandlerListeners) {
+      this.name = name;
+      this.eventHandlerListeners = eventHandlerListeners;
+      // create the thread pool executor
+      this.threadPoolExecutor = new ThreadPoolExecutor(maxThreads, maxThreads,
+          keepAliveTimeInMillis, TimeUnit.MILLISECONDS, q);
+      // name the threads for this threadpool
+      ThreadFactoryBuilder tfb = new ThreadFactoryBuilder();
+      tfb.setNameFormat(this.name + "-%d");
+      this.threadPoolExecutor.setThreadFactory(tfb.build());
+    }
+
+    /**
+     * Submit the event to the queue for handling.
+     * @param event
+     */
+    void submit(final EventHandler event) {
+      // If there is a listener for this type, make sure we call the before
+      // and after process methods.
+      EventHandlerListener listener =
+        this.eventHandlerListeners.get(event.getEventType());
+      if (listener != null) {
+        event.setListener(listener);
+      }
+      this.threadPoolExecutor.execute(event);
+    }
+  }
+}
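
A minimal usage sketch for the executor wrapper above (illustrative, not part of the patch). The server name and pool size are made up, and the import path of ExecutorType is assumed here, since it is not shown in this hunk.

import org.apache.hadoop.hbase.executor.ExecutorService;
import org.apache.hadoop.hbase.executor.ExecutorService.ExecutorType; // assumed location of the enum

public class ExecutorServiceSketch {
  public static void main(String[] args) {
    // One ExecutorService per hosting server; the name is only used to
    // derive per-server executor names.
    ExecutorService services = new ExecutorService("example-master");
    // Start a small pool for master table operations (enable/disable/modify).
    services.startExecutorService(ExecutorType.MASTER_TABLE_OPERATIONS, 3);
    // EventHandler instances would normally be queued with services.submit(handler);
    // they are routed to the matching pool via getExecutorServiceType().
    services.shutdown();
  }
}
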
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/executor/RegionTransitionData.java b/0.90/src/main/java/org/apache/hadoop/hbase/executor/RegionTransitionData.java
new file mode 100644
index 0000000..5e3cc27
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/executor/RegionTransitionData.java
@@ -0,0 +1,210 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.executor;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Data serialized into ZooKeeper for region transitions.
+ */
+public class RegionTransitionData implements Writable {
+  /**
+   * Type of transition event (offline, opening, opened, closing, closed).
+   * Required.
+   */
+  private EventType eventType;
+
+  /** Region being transitioned.  Required. */
+  private byte [] regionName;
+
+  /** Server event originated from.  Optional. */
+  private String serverName;
+
+  /** Time the event was created.  Required but automatically set. */
+  private long stamp;
+
+  /**
+   * Writable constructor.  Do not use directly.
+   */
+  public RegionTransitionData() {}
+
+  /**
+   * Construct data for a new region transition event with the specified event
+   * type and region name.
+   *
+   * <p>Used when the server name is not known (the master is setting it).  This
+   * happens during cluster startup or during failure scenarios.  When
+   * processing a failed regionserver, the master assigns the regions from that
+   * server to other servers even though the region was never 'closed'.  During
+   * master failover, the new master may have regions stuck in transition
+   * without a destination, so it may have to set regions offline and generate a new
+   * assignment.
+   *
+   * <p>Since only the master uses this constructor, the type should always be
+   * {@link EventType#M_ZK_REGION_OFFLINE}.
+   *
+   * @param eventType type of event
+   * @param regionName name of region as per <code>HRegionInfo#getRegionName()</code>
+   */
+  public RegionTransitionData(EventType eventType, byte [] regionName) {
+    this(eventType, regionName, null);
+  }
+
+  /**
+   * Construct data for a new region transition event with the specified event
+   * type, region name, and server name.
+   *
+   * <p>Used when the server name is known (a regionserver is setting it).
+   *
+   * <p>Valid types for this constructor are {@link EventType#RS_ZK_REGION_CLOSING},
+   * {@link EventType#RS_ZK_REGION_CLOSED}, {@link EventType#RS_ZK_REGION_OPENING},
+   * and {@link EventType#RS_ZK_REGION_OPENED}.
+   *
+   * @param eventType type of event
+   * @param regionName name of region as per <code>HRegionInfo#getRegionName()</code>
+   * @param serverName name of server setting data
+   */
+  public RegionTransitionData(EventType eventType, byte [] regionName,
+      String serverName) {
+    this.eventType = eventType;
+    this.stamp = System.currentTimeMillis();
+    this.regionName = regionName;
+    this.serverName = serverName;
+  }
+
+  /**
+   * Gets the type of region transition event.
+   *
+   * <p>One of:
+   * <ul>
+   * <li>{@link EventType#M_ZK_REGION_OFFLINE}
+   * <li>{@link EventType#RS_ZK_REGION_CLOSING}
+   * <li>{@link EventType#RS_ZK_REGION_CLOSED}
+   * <li>{@link EventType#RS_ZK_REGION_OPENING}
+   * <li>{@link EventType#RS_ZK_REGION_OPENED}
+   * </ul>
+   * @return type of region transition event
+   */
+  public EventType getEventType() {
+    return eventType;
+  }
+
+  /**
+   * Gets the name of the region being transitioned.
+   *
+   * <p>Region name is required so this never returns null.
+   * @return region name, the result of a call to <code>HRegionInfo#getRegionName()</code>
+   */
+  public byte [] getRegionName() {
+    return regionName;
+  }
+
+  /**
+   * Gets the server the event originated from.  If null, this event originated
+   * from the master.
+   *
+   * @return server name of originating regionserver, or null if from master
+   */
+  public String getServerName() {
+    return serverName;
+  }
+
+  /**
+   * Gets the timestamp when this event was created.
+   *
+   * @return stamp event was created
+   */
+  public long getStamp() {
+    return stamp;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    // the event type byte
+    eventType = EventType.values()[in.readShort()];
+    // the timestamp
+    stamp = in.readLong();
+    // the encoded name of the region being transitioned
+    regionName = Bytes.readByteArray(in);
+    // remaining fields are optional so prefixed with boolean
+    // the name of the regionserver sending the data
+    if(in.readBoolean()) {
+      serverName = in.readUTF();
+    } else {
+      serverName = null;
+    }
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeShort(eventType.ordinal());
+    out.writeLong(System.currentTimeMillis());
+    Bytes.writeByteArray(out, regionName);
+    // remaining fields are optional so prefixed with boolean
+    out.writeBoolean(serverName != null);
+    if(serverName != null) {
+      out.writeUTF(serverName);
+    }
+  }
+
+  /**
+   * Get the bytes for this instance.  Throws a {@link RuntimeException} if
+   * there is an error serializing this instance because it represents a code
+   * bug.
+   * @return binary representation of this instance
+   */
+  public byte [] getBytes() {
+    try {
+      return Writables.getBytes(this);
+    } catch(IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * Get an instance from bytes.  Throws a {@link RuntimeException} if
+   * there is an error deserializing the instance from bytes because it
+   * represents a code bug.
+   * @param bytes binary representation of this instance
+   * @return instance of this class
+   */
+  public static RegionTransitionData fromBytes(byte [] bytes) {
+    try {
+      RegionTransitionData data = new RegionTransitionData();
+      Writables.getWritable(bytes, data);
+      return data;
+    } catch(IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public String toString() {
+    return "region=" + Bytes.toString(regionName) + ", server=" + serverName +
+      ", state=" + eventType;
+  }
+}
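
A small round-trip sketch of the serialization above (illustrative, not part of the patch); the region name and server name strings are fabricated.

import org.apache.hadoop.hbase.executor.EventHandler.EventType;
import org.apache.hadoop.hbase.executor.RegionTransitionData;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionTransitionDataSketch {
  public static void main(String[] args) {
    // A regionserver reporting that it has opened a region.
    RegionTransitionData data = new RegionTransitionData(
        EventType.RS_ZK_REGION_OPENED,
        Bytes.toBytes("example-region-name"),
        "example-rs,60020,1291000000000");
    // Serialize as it would be stored in ZooKeeper, then read it back.
    byte[] zkData = data.getBytes();
    RegionTransitionData copy = RegionTransitionData.fromBytes(zkData);
    System.out.println(copy);  // region=..., server=..., state=RS_ZK_REGION_OPENED
  }
}
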
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
new file mode 100644
index 0000000..dd4cc26
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+/**
+ * A binary comparator which lexicographically compares against the specified
+ * byte array using {@link org.apache.hadoop.hbase.util.Bytes#compareTo(byte[], byte[])}.
+ */
+public class BinaryComparator extends WritableByteArrayComparable {
+
+  /** Nullary constructor for Writable, do not use */
+  public BinaryComparator() { }
+
+  /**
+   * Constructor
+   * @param value value
+   */
+  public BinaryComparator(byte[] value) {
+    super(value);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java
new file mode 100644
index 0000000..7db9965
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java
@@ -0,0 +1,53 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A comparator which compares against a specified byte array, but only compares
+ * up to the length of this byte array. For the rest it is similar to
+ * {@link BinaryComparator}.
+ */
+public class BinaryPrefixComparator extends WritableByteArrayComparable {
+
+  /** Nullary constructor for Writable, do not use */
+  public BinaryPrefixComparator() { }
+
+  /**
+   * Constructor
+   * @param value value
+   */
+  public BinaryPrefixComparator(byte[] value) {
+    super(value);
+  }
+
+  @Override
+  public int compareTo(byte [] value) {
+    if (this.value.length <= value.length) {
+      return Bytes.compareTo(this.value, 0, this.value.length, value, 0,
+          this.value.length);
+    } else {
+      return Bytes.compareTo(this.value, value);
+    }
+  }
+
+}
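
A quick illustration (not part of the patch) of the prefix semantics described above, contrasted with the plain BinaryComparator; the byte values are arbitrary.

import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.BinaryPrefixComparator;
import org.apache.hadoop.hbase.util.Bytes;

public class ComparatorSketch {
  public static void main(String[] args) {
    BinaryPrefixComparator prefix = new BinaryPrefixComparator(Bytes.toBytes("an"));
    // Prints 0: only the first two bytes of "anti" are compared against "an".
    System.out.println(prefix.compareTo(Bytes.toBytes("anti")));

    BinaryComparator whole = new BinaryComparator(Bytes.toBytes("an"));
    // Prints a negative value: the full arrays are compared and "an" < "anti".
    System.out.println(whole.compareTo(Bytes.toBytes("anti")));
  }
}
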
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java
new file mode 100644
index 0000000..306ed21
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java
@@ -0,0 +1,80 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+/**
+ * Simple filter that returns first N columns on row only.
+ * This filter was written to test filters in Get and as soon as it gets
+ * its quota of columns, {@link #filterAllRemaining()} returns true.  This
+ * makes this filter unsuitable as a Scan filter.
+ */
+public class ColumnCountGetFilter extends FilterBase {
+  private int limit = 0;
+  private int count = 0;
+
+  /**
+   * Used during serialization.
+   * Do not use.
+   */
+  public ColumnCountGetFilter() {
+    super();
+  }
+
+  public ColumnCountGetFilter(final int n) {
+    this.limit = n;
+  }
+
+  public int getLimit() {
+    return limit;
+  }
+
+  @Override
+  public boolean filterAllRemaining() {
+    return this.count > this.limit;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    this.count++;
+    return filterAllRemaining() ? ReturnCode.SKIP: ReturnCode.INCLUDE;
+  }
+
+  @Override
+  public void reset() {
+    this.count = 0;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    this.limit = in.readInt();
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(this.limit);
+  }
+}
\ No newline at end of file
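
A usage sketch for a Get, assuming a running cluster; the table name, row key, and column count are illustrative only.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnCountGetSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Get get = new Get(Bytes.toBytes("row1"));
    // Return at most the first 10 columns of the row.
    get.setFilter(new ColumnCountGetFilter(10));
    Result result = table.get(get);
    System.out.println("columns returned: " + result.size());
  }
}
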
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
new file mode 100644
index 0000000..aa30bbf
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java
@@ -0,0 +1,84 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * A filter, based on the ColumnCountGetFilter, that takes two arguments: limit and offset.
+ * This filter can be used for row-based indexing, where references to other tables are stored across many columns,
+ * in order to enable efficient lookups and paginated results for end users.
+ */
+public class ColumnPaginationFilter extends FilterBase
+{
+  private int limit = 0;
+  private int offset = 0;
+  private int count = 0;
+
+  /**
+   * Used during serialization. Do not use.
+   */
+  public ColumnPaginationFilter()
+  {
+    super();
+  }
+
+  public ColumnPaginationFilter(final int limit, final int offset)
+  {
+    this.limit = limit;
+    this.offset = offset;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v)
+  {
+    if(count >= offset + limit)
+    {
+      return ReturnCode.NEXT_ROW;
+    }
+
+    ReturnCode code = count < offset ? ReturnCode.SKIP : ReturnCode.INCLUDE;
+    count++;
+    return code;
+  }
+
+  @Override
+  public void reset()
+  {
+    this.count = 0;
+  }
+
+  public void readFields(DataInput in) throws IOException
+  {
+    this.limit = in.readInt();
+    this.offset = in.readInt();
+  }
+
+  public void write(DataOutput out) throws IOException
+  {
+    out.writeInt(this.limit);
+    out.writeInt(this.offset);
+  }
+}
\ No newline at end of file
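
A sketch of paging through wide rows with the filter above, assuming a running cluster; the table name and page size are illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPaginationFilter;

public class ColumnPaginationSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "links");
    Scan scan = new Scan();
    // Second "page" of columns per row: skip the first 20, return the next 20.
    scan.setFilter(new ColumnPaginationFilter(20, 20));
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(row.size() + " columns on this page of the row");
    }
    scanner.close();
  }
}
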
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java
new file mode 100644
index 0000000..2848915
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java
@@ -0,0 +1,94 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.DataInput;
+
+/**
+ * This filter is used for selecting only those keys with columns that match
+ * a particular prefix. For example, if prefix is 'an', it will pass keys with
+ * columns like 'and', 'anti' but not keys with columns like 'ball', 'act'.
+ */
+public class ColumnPrefixFilter extends FilterBase {
+  protected byte [] prefix = null;
+
+  public ColumnPrefixFilter() {
+    super();
+  }
+
+  public ColumnPrefixFilter(final byte [] prefix) {
+    this.prefix = prefix;
+  }
+
+  public byte[] getPrefix() {
+    return prefix;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue kv) {
+    if (this.prefix == null || kv.getBuffer() == null) {
+      return ReturnCode.INCLUDE;
+    } else {
+      return filterColumn(kv.getBuffer(), kv.getQualifierOffset(), kv.getQualifierLength());
+    }
+  }
+
+  public ReturnCode filterColumn(byte[] buffer, int qualifierOffset, int qualifierLength) {
+    if (qualifierLength < prefix.length) {
+      int cmp = Bytes.compareTo(buffer, qualifierOffset, qualifierLength, this.prefix, 0,
+          qualifierLength);
+      if (cmp <= 0) {
+        return ReturnCode.SEEK_NEXT_USING_HINT;
+      } else {
+        return ReturnCode.NEXT_ROW;
+      }
+    } else {
+      int cmp = Bytes.compareTo(buffer, qualifierOffset, this.prefix.length, this.prefix, 0,
+          this.prefix.length);
+      if (cmp < 0) {
+        return ReturnCode.SEEK_NEXT_USING_HINT;
+      } else if (cmp > 0) {
+        return ReturnCode.NEXT_ROW;
+      } else {
+        return ReturnCode.INCLUDE;
+      }
+    }
+  }
+
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, this.prefix);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.prefix = Bytes.readByteArray(in);
+  }
+
+  public KeyValue getNextKeyHint(KeyValue kv) {
+    return KeyValue.createFirstOnRow(
+        kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(), kv.getBuffer(),
+        kv.getFamilyOffset(), kv.getFamilyLength(), prefix, 0, prefix.length);
+  }
+}
\ No newline at end of file
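
A sketch of the 'an' example from the javadoc above, assuming a running cluster; the table and family names are made up.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ColumnPrefixSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));
    // Only qualifiers beginning with "an" (e.g. "and", "anti") come back.
    scan.setFilter(new ColumnPrefixFilter(Bytes.toBytes("an")));
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(Bytes.toString(row.getRow()) + ": " + row.size() + " matching columns");
    }
    scanner.close();
  }
}
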
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java
new file mode 100644
index 0000000..6d73439
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java
@@ -0,0 +1,139 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+
+/**
+ * This is a generic filter to be used to filter by comparison.  It takes an
+ * operator (equal, greater, not equal, etc) and a byte [] comparator.
+ * <p>
+ * To filter by row key, use {@link RowFilter}.
+ * <p>
+ * To filter by column qualifier, use {@link QualifierFilter}.
+ * <p>
+ * To filter by value, use {@link SingleColumnValueFilter}.
+ * <p>
+ * These filters can be wrapped with {@link SkipFilter} and {@link WhileMatchFilter}
+ * to add more control.
+ * <p>
+ * Multiple filters can be combined using {@link FilterList}.
+ */
+public abstract class CompareFilter extends FilterBase {
+
+  /** Comparison operators. */
+  public enum CompareOp {
+    /** less than */
+    LESS,
+    /** less than or equal to */
+    LESS_OR_EQUAL,
+    /** equals */
+    EQUAL,
+    /** not equal */
+    NOT_EQUAL,
+    /** greater than or equal to */
+    GREATER_OR_EQUAL,
+    /** greater than */
+    GREATER,
+    /** no operation */
+    NO_OP,
+  }
+
+  protected CompareOp compareOp;
+  protected WritableByteArrayComparable comparator;
+
+  /**
+   * Writable constructor, do not use.
+   */
+  public CompareFilter() {
+  }
+
+  /**
+   * Constructor.
+   * @param compareOp the compare op for row matching
+   * @param comparator the comparator for row matching
+   */
+  public CompareFilter(final CompareOp compareOp,
+      final WritableByteArrayComparable comparator) {
+    this.compareOp = compareOp;
+    this.comparator = comparator;
+  }
+
+  /**
+   * @return operator
+   */
+  public CompareOp getOperator() {
+    return compareOp;
+  }
+
+  /**
+   * @return the comparator
+   */
+  public WritableByteArrayComparable getComparator() {
+    return comparator;
+  }
+
+  protected boolean doCompare(final CompareOp compareOp,
+      final WritableByteArrayComparable comparator, final byte [] data,
+      final int offset, final int length) {
+    if (compareOp == CompareOp.NO_OP) {
+      return true;
+    }
+    int compareResult =
+      comparator.compareTo(Arrays.copyOfRange(data, offset,
+        offset + length));
+    switch (compareOp) {
+      case LESS:
+        return compareResult <= 0;
+      case LESS_OR_EQUAL:
+        return compareResult < 0;
+      case EQUAL:
+        return compareResult != 0;
+      case NOT_EQUAL:
+        return compareResult == 0;
+      case GREATER_OR_EQUAL:
+        return compareResult > 0;
+      case GREATER:
+        return compareResult >= 0;
+      default:
+        throw new RuntimeException("Unknown Compare op " +
+          compareOp.name());
+    }
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    compareOp = CompareOp.valueOf(in.readUTF());
+    comparator = (WritableByteArrayComparable)
+      HbaseObjectWritable.readObject(in, null);
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeUTF(compareOp.name());
+    HbaseObjectWritable.writeObject(out, comparator,
+      WritableByteArrayComparable.class, null);
+  }
+}
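
CompareFilter itself is abstract; a sketch using one concrete subclass, RowFilter, with the operator and comparator machinery above. The table name and row key bound are illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class CompareFilterSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    // Keep only rows whose key is lexicographically <= "row-500".
    scan.setFilter(new RowFilter(CompareOp.LESS_OR_EQUAL,
        new BinaryComparator(Bytes.toBytes("row-500"))));
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(Bytes.toString(row.getRow()));
    }
    scanner.close();
  }
}
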
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java
new file mode 100644
index 0000000..9a07a20
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java
@@ -0,0 +1,182 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A filter for adding inter-column timestamp matching.
+ * Only cells with a correspondingly timestamped entry in
+ * the target column will be retained.
+ * Not compatible with Scan.setBatch as operations need
+ * full rows for correct filtering.
+ */
+public class DependentColumnFilter extends CompareFilter {
+
+  protected byte[] columnFamily;
+  protected byte[] columnQualifier;
+  protected boolean dropDependentColumn;
+
+  protected Set<Long> stampSet = new HashSet<Long>();
+  
+  /**
+   * Should only be used for writable
+   */
+  public DependentColumnFilter() {
+  }
+  
+  /**
+   * Build a dependent column filter with value checking.
+   * Dependent column values will be compared using the supplied
+   * compareOp and comparator; for their usage,
+   * refer to {@link CompareFilter}.
+   * 
+   * @param family dependent column family
+   * @param qualifier dependent column qualifier
+   * @param dropDependentColumn whether the dependent column's keyvalues should be discarded after matching
+   * @param valueCompareOp comparison op 
+   * @param valueComparator comparator
+   */
+  public DependentColumnFilter(final byte [] family, final byte[] qualifier,
+		  final boolean dropDependentColumn, final CompareOp valueCompareOp,
+	      final WritableByteArrayComparable valueComparator) {
+    // set up the comparator   
+    super(valueCompareOp, valueComparator);
+    this.columnFamily = family;
+    this.columnQualifier = qualifier;
+    this.dropDependentColumn = dropDependentColumn;
+  }
+  
+  /**
+   * Constructor for DependentColumn filter.
+   * Keyvalues for which no keyvalue from the target column
+   * with the same timestamp exists will be dropped.
+   * 
+   * @param family name of target column family
+   * @param qualifier name of column qualifier
+   */
+  public DependentColumnFilter(final byte [] family, final byte [] qualifier) {
+    this(family, qualifier, false);
+  }
+  
+  /**
+   * Constructor for DependentColumn filter.
+   * Keyvalues for which no keyvalue from the target column
+   * with the same timestamp exists will be dropped.
+   * 
+   * @param family name of dependent column family
+   * @param qualifier name of dependent qualifier
+   * @param dropDependentColumn whether the dependent columns keyvalues should be discarded
+   */
+  public DependentColumnFilter(final byte [] family, final byte [] qualifier,
+      final boolean dropDependentColumn) {
+    this(family, qualifier, dropDependentColumn, CompareOp.NO_OP, null);
+  }
+  
+  
+  @Override
+  public boolean filterAllRemaining() {
+    return false;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    // Check if the column and qualifier match
+    if (!v.matchingColumn(this.columnFamily, this.columnQualifier)) {
+      // include non-matches for the time being, they'll be discarded afterwards
+      return ReturnCode.INCLUDE;
+    }
+    // If it doesn't pass the op, skip it
+    if (comparator != null &&
+        doCompare(compareOp, comparator, v.getValue(), 0, v.getValueLength())) {
+      return ReturnCode.SKIP;
+    }
+    stampSet.add(v.getTimestamp());
+    if (dropDependentColumn) {
+      return ReturnCode.SKIP;
+    }
+    return ReturnCode.INCLUDE;
+  }
+
+  @Override
+  public void filterRow(List<KeyValue> kvs) {
+    Iterator<KeyValue> it = kvs.iterator();
+    KeyValue kv;
+    while(it.hasNext()) {
+      kv = it.next();
+      if(!stampSet.contains(kv.getTimestamp())) {
+        it.remove();
+      }
+    }
+  }
+
+  @Override
+  public boolean hasFilterRow() {
+    return true;
+  }
+  
+  @Override
+  public boolean filterRow() {
+    return false;
+  }
+
+  @Override
+  public boolean filterRowKey(byte[] buffer, int offset, int length) {
+    return false;
+  }
+
+  @Override
+  public void reset() {
+    stampSet.clear();    
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    super.readFields(in);
+    this.columnFamily = Bytes.readByteArray(in);
+    if (this.columnFamily.length == 0) {
+      this.columnFamily = null;
+    }
+    this.columnQualifier = Bytes.readByteArray(in);
+    if (this.columnQualifier.length == 0) {
+      this.columnQualifier = null;
+    }
+    this.dropDependentColumn = in.readBoolean();
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    super.write(out);
+    Bytes.writeByteArray(out, this.columnFamily);
+    Bytes.writeByteArray(out, this.columnQualifier);
+    out.writeBoolean(this.dropDependentColumn);    
+  }
+
+}
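
A sketch of the dependent-column matching described above, assuming a running cluster; "cf" and "marker" are made-up names.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.DependentColumnFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class DependentColumnSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    // Keep only cells whose timestamp also appears on cf:marker in the same
    // row; true = drop the cf:marker cells from the result as well.
    // Do not combine with Scan.setBatch: the filter needs whole rows.
    scan.setFilter(new DependentColumnFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("marker"), true));
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(row);
    }
    scanner.close();
  }
}
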
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java
new file mode 100644
index 0000000..03e16c4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java
@@ -0,0 +1,67 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * This filter is used to filter based on the column family. It takes an
+ * operator (equal, greater, not equal, etc) and a byte [] comparator for the
+ * column family portion of a key.
+ * <p/>
+ * This filter can be wrapped with {@link org.apache.hadoop.hbase.filter.WhileMatchFilter} and {@link org.apache.hadoop.hbase.filter.SkipFilter}
+ * to add more control.
+ * <p/>
+ * Multiple filters can be combined using {@link org.apache.hadoop.hbase.filter.FilterList}.
+ * <p/>
+ * If the column family is already known, use {@link org.apache.hadoop.hbase.client.Get#addFamily(byte[])}
+ * directly rather than a filter.
+ */
+public class FamilyFilter extends CompareFilter {
+  /**
+   * Writable constructor, do not use.
+   */
+  public FamilyFilter() {
+  }
+
+  /**
+   * Constructor.
+   *
+   * @param familyCompareOp  the compare op for column family matching
+   * @param familyComparator the comparator for column family matching
+   */
+  public FamilyFilter(final CompareOp familyCompareOp,
+                      final WritableByteArrayComparable familyComparator) {
+      super(familyCompareOp, familyComparator);
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    int familyLength = v.getFamilyLength();
+    if (familyLength > 0) {
+      if (doCompare(this.compareOp, this.comparator, v.getBuffer(),
+          v.getFamilyOffset(), familyLength)) {
+        return ReturnCode.SKIP;
+      }
+    }
+    return ReturnCode.INCLUDE;
+  }
+}
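
A sketch of family matching with the comparator machinery above, assuming a running cluster; table and family names are illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FamilyFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilyFilterSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    // Keep only cells whose family equals "cf"; for a known family,
    // Scan.addFamily is the cheaper choice, as the javadoc above notes.
    scan.setFilter(new FamilyFilter(CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes("cf"))));
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(row);
    }
    scanner.close();
  }
}
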
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/Filter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
new file mode 100644
index 0000000..e6408b7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/Filter.java
@@ -0,0 +1,151 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.io.Writable;
+
+import java.util.List;
+
+/**
+ * Interface for row and column filters directly applied within the regionserver.
+ * A filter can expect the following call sequence:
+ *<ul>
+ * <li>{@link #reset()}</li>
+ * <li>{@link #filterAllRemaining()} -> true indicates the scan is over; false, keep going.</li>
+ * <li>{@link #filterRowKey(byte[],int,int)} -> true to drop this row,
+ * if false, we will also call</li>
+ * <li>{@link #filterKeyValue(KeyValue)} -> true to drop this key/value</li>
+ * <li>{@link #filterRow(List)} -> allows direct modification of the final list to be submitted</li>
+ * <li>{@link #filterRow()} -> last chance to drop entire row based on the sequence of
+ * {@link #filterKeyValue(KeyValue)} calls. Eg: filter a row if it doesn't contain a specified column.
+ * </li>
+ * </ul>
+ *
+ * Filter instances are created one per region/scan.  This interface replaces
+ * the old RowFilterInterface.
+ *
+ * When implementing your own filters, consider inheriting {@link FilterBase} to help
+ * you reduce boilerplate.
+ * 
+ * @see FilterBase
+ */
+public interface Filter extends Writable {
+  /**
+   * Reset the state of the filter between rows.
+   */
+  public void reset();
+
+  /**
+   * Filters a row based on the row key. If this returns true, the entire
+   * row will be excluded.  If false, each KeyValue in the row will be
+   * passed to {@link #filterKeyValue(KeyValue)} below.
+   *
+   * @param buffer buffer containing row key
+   * @param offset offset into buffer where row key starts
+   * @param length length of the row key
+   * @return true, remove entire row, false, include the row (maybe).
+   */
+  public boolean filterRowKey(byte [] buffer, int offset, int length);
+
+  /**
+   * If this returns true, the scan will terminate.
+   *
+   * @return true to end scan, false to continue.
+   */
+  public boolean filterAllRemaining();
+
+  /**
+   * A way to filter based on the column family, column qualifier and/or the
+   * column value. Return code is described below.  This allows filters to
+   * filter only a certain number of columns, then terminate without matching every
+   * column.
+   *
+   * If your filter returns <code>ReturnCode.NEXT_ROW</code>, it should return
+   * <code>ReturnCode.NEXT_ROW</code> until {@link #reset()} is called
+   * just in case the caller calls for the next row.
+   *
+   * @param v the KeyValue in question
+   * @return code as described below
+   * @see Filter.ReturnCode
+   */
+  public ReturnCode filterKeyValue(KeyValue v);
+
+  /**
+   * Return codes for filterValue().
+   */
+  public enum ReturnCode {
+    /**
+     * Include the KeyValue
+     */
+    INCLUDE,
+    /**
+     * Skip this KeyValue
+     */
+    SKIP,
+    /**
+     * Skip this column. Go to the next column in this row.
+     */
+    NEXT_COL,
+    /**
+     * Done with columns, skip to next row. Note that filterRow() will
+     * still be called.
+     */
+    NEXT_ROW,
+    /**
+     * Seek to next key which is given as hint by the filter.
+     */
+    SEEK_NEXT_USING_HINT,
+}
+
+  /**
+   * Chance to alter the list of keyvalues to be submitted.
+   * Modifications to the list will be carried through to the returned result.
+   * @param kvs the list of keyvalues to be filtered
+   */
+  public void filterRow(List<KeyValue> kvs);
+
+  /**
+   * @return True if this filter actively uses filterRow(List).
+   * Primarily used to check for conflicts with scans (such as scans
+   * that do not read a full row at a time).
+   */
+  public boolean hasFilterRow();
+
+  /**
+   * Last chance to veto row based on previous {@link #filterKeyValue(KeyValue)}
+   * calls. The filter needs to retain state and then return a particular value for
+   * this call if it wishes to exclude a row when a certain column is missing
+   * (for example).
+   * @return true to exclude row, false to include row.
+   */
+  public boolean filterRow();
+
+  /**
+   * If the filter returns the match code SEEK_NEXT_USING_HINT, then
+   * it should also tell which key it must seek to next.
+   * After receiving the match code SEEK_NEXT_USING_HINT, the QueryMatcher would
+   * call this function to find out which key it must next seek to.
+   * @return KeyValue which must be seeked to next. Return null if the filter is
+   * not sure which key to seek to next.
+   */
+  public KeyValue getNextKeyHint(KeyValue currentKV);
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java
new file mode 100644
index 0000000..fc775c4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java
@@ -0,0 +1,122 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance                                                                       with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.util.List;
+
+/**
+ * Abstract base class to help you implement new Filters.  Common "ignore" or NOOP type
+ * methods can go here, helping to reduce boilerplate in an ever-expanding filter
+ * library.
+ *
+ * If you could instantiate FilterBase, it would end up being a "null" filter -
+ * that is one that never filters anything.
+ */
+public abstract class FilterBase implements Filter {
+
+  /**
+   * Filters that are purely stateless and do nothing in their reset() methods can inherit
+   * this null/empty implementation.
+   *
+   * @inheritDoc
+   */
+  @Override
+  public void reset() {
+  }
+
+  /**
+   * Filters that do not filter by row key can inherit this implementation that
+   * never filters anything. (ie: returns false).
+   *
+   * @inheritDoc
+   */
+  @Override
+  public boolean filterRowKey(byte [] buffer, int offset, int length) {
+    return false;
+  }
+
+  /**
+   * Filters that never filter all remaining can inherit this implementation that
+   * never stops the filter early.
+   *
+   * @inheritDoc
+   */
+  @Override
+  public boolean filterAllRemaining() {
+    return false;
+  }
+
+  /**
+   * Filters that don't filter by key value can inherit this implementation that
+   * includes all KeyValues.
+   *
+   * @inheritDoc
+   */
+  @Override
+  public ReturnCode filterKeyValue(KeyValue ignored) {
+    return ReturnCode.INCLUDE;
+  }
+
+  /**
+   * Filters that never filter by modifying the returned List of KeyValues can
+   * inherit this implementation that does nothing.
+   *
+   * @inheritDoc
+   */
+  @Override
+  public void filterRow(List<KeyValue> ignored) {
+  }
+
+  /**
+   * Filters that never filter by modifying the returned List of KeyValues can
+   * inherit this implementation that returns false.
+   *
+   * @inheritDoc
+   */
+  @Override
+  public boolean hasFilterRow() {
+    return false;
+  }
+
+  /**
+   * Filters that never filter by rows based on previously gathered state from
+   * {@link #filterKeyValue(KeyValue)} can inherit this implementation that
+   * never filters a row.
+   *
+   * @inheritDoc
+   */
+  @Override
+  public boolean filterRow() {
+    return false;
+  }
+
+  /**
+   * Filters that are not sure which key must be seeked to next can inherit
+   * this implementation that, by default, returns a null KeyValue.
+   *
+   * @inheritDoc
+   */
+  public KeyValue getNextKeyHint(KeyValue currentKV) {
+    return null;
+  }
+
+}
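
A sketch of a hypothetical custom filter built on FilterBase: only filterKeyValue and the Writable methods need to be supplied; everything else falls back to the no-op defaults above. To run server side, such a class would have to be on the regionserver classpath.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;

/** Hypothetical example: skips cells whose value is empty, keeps the rest. */
public class NonEmptyValueFilter extends FilterBase {

  @Override
  public ReturnCode filterKeyValue(KeyValue v) {
    return v.getValueLength() == 0 ? ReturnCode.SKIP : ReturnCode.INCLUDE;
  }

  // No state to serialize, but the Writable methods are still required.
  public void write(DataOutput out) throws IOException {
  }

  public void readFields(DataInput in) throws IOException {
  }
}
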
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
new file mode 100644
index 0000000..211cb56
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
@@ -0,0 +1,255 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+/**
+ * Implementation of {@link Filter} that represents an ordered List of Filters
+ * which will be evaluated with a specified boolean operator {@link Operator#MUST_PASS_ALL}
+ * (<code>!AND</code>) or {@link Operator#MUST_PASS_ONE} (<code>!OR</code>).
+ * Since you can use Filter Lists as children of Filter Lists, you can create a
+ * hierarchy of filters to be evaluated.
+ * Defaults to {@link Operator#MUST_PASS_ALL}.
+ * <p>TODO: Fix creation of Configuration on serialization and deserialization.
+ */
+public class FilterList implements Filter {
+  /** set operator */
+  public static enum Operator {
+    /** !AND */
+    MUST_PASS_ALL,
+    /** !OR */
+    MUST_PASS_ONE
+  }
+
+  private static final Configuration conf = HBaseConfiguration.create();
+  private Operator operator = Operator.MUST_PASS_ALL;
+  private List<Filter> filters = new ArrayList<Filter>();
+
+  /**
+   * Default constructor, filters nothing. Required though for RPC
+   * deserialization.
+   */
+  public FilterList() {
+    super();
+  }
+
+  /**
+   * Constructor that takes a set of {@link Filter}s. The default operator
+   * MUST_PASS_ALL is assumed.
+   *
+   * @param rowFilters list of filters
+   */
+  public FilterList(final List<Filter> rowFilters) {
+    this.filters = rowFilters;
+  }
+
+  /**
+   * Constructor that takes an operator.
+   *
+   * @param operator Operator to process filter set with.
+   */
+  public FilterList(final Operator operator) {
+    this.operator = operator;
+  }
+
+  /**
+   * Constructor that takes a set of {@link Filter}s and an operator.
+   *
+   * @param operator Operator to process filter set with.
+   * @param rowFilters Set of row filters.
+   */
+  public FilterList(final Operator operator, final List<Filter> rowFilters) {
+    this.filters = rowFilters;
+    this.operator = operator;
+  }
+
+  /**
+   * Get the operator.
+   *
+   * @return operator
+   */
+  public Operator getOperator() {
+    return operator;
+  }
+
+  /**
+   * Get the filters.
+   *
+   * @return filters
+   */
+  public List<Filter> getFilters() {
+    return filters;
+  }
+
+  /**
+   * Add a filter.
+   *
+   * @param filter another filter
+   */
+  public void addFilter(Filter filter) {
+    this.filters.add(filter);
+  }
+
+  @Override
+  public void reset() {
+    for (Filter filter : filters) {
+      filter.reset();
+    }
+  }
+
+  @Override
+  public boolean filterRowKey(byte[] rowKey, int offset, int length) {
+    for (Filter filter : filters) {
+      if (this.operator == Operator.MUST_PASS_ALL) {
+        if (filter.filterAllRemaining() ||
+            filter.filterRowKey(rowKey, offset, length)) {
+          return true;
+        }
+      } else if (this.operator == Operator.MUST_PASS_ONE) {
+        if (!filter.filterAllRemaining() &&
+            !filter.filterRowKey(rowKey, offset, length)) {
+          return false;
+        }
+      }
+    }
+    return this.operator == Operator.MUST_PASS_ONE;
+  }
+
+  @Override
+  public boolean filterAllRemaining() {
+    for (Filter filter : filters) {
+      if (filter.filterAllRemaining()) {
+        if (operator == Operator.MUST_PASS_ALL) {
+          return true;
+        }
+      } else {
+        if (operator == Operator.MUST_PASS_ONE) {
+          return false;
+        }
+      }
+    }
+    return operator == Operator.MUST_PASS_ONE;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    ReturnCode rc = operator == Operator.MUST_PASS_ONE?
+        ReturnCode.SKIP: ReturnCode.INCLUDE;
+    for (Filter filter : filters) {
+      if (operator == Operator.MUST_PASS_ALL) {
+        if (filter.filterAllRemaining()) {
+          return ReturnCode.NEXT_ROW;
+        }
+        switch (filter.filterKeyValue(v)) {
+        case INCLUDE:
+          continue;
+        case NEXT_ROW:
+        case SKIP:
+          return ReturnCode.SKIP;
+        }
+      } else if (operator == Operator.MUST_PASS_ONE) {
+        if (filter.filterAllRemaining()) {
+          continue;
+        }
+
+        switch (filter.filterKeyValue(v)) {
+        case INCLUDE:
+          rc = ReturnCode.INCLUDE;
+          // must continue here to evaluate all filters
+        case NEXT_ROW:
+        case SKIP:
+          // continue;
+        }
+      }
+    }
+    return rc;
+  }
+
+  @Override
+  public void filterRow(List<KeyValue> kvs) {
+    for (Filter filter : filters) {
+      filter.filterRow(kvs);
+    }
+  }
+
+  @Override
+  public boolean hasFilterRow() {
+    for (Filter filter : filters) {
+      if (filter.hasFilterRow()) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  @Override
+  public boolean filterRow() {
+    for (Filter filter : filters) {
+      if (operator == Operator.MUST_PASS_ALL) {
+        if (filter.filterAllRemaining() || filter.filterRow()) {
+          return true;
+        }
+      } else if (operator == Operator.MUST_PASS_ONE) {
+        if (!filter.filterAllRemaining()
+            && !filter.filterRow()) {
+          return false;
+        }
+      }
+    }
+    return  operator == Operator.MUST_PASS_ONE;
+  }
+
+  public void readFields(final DataInput in) throws IOException {
+    byte opByte = in.readByte();
+    operator = Operator.values()[opByte];
+    int size = in.readInt();
+    if (size > 0) {
+      filters = new ArrayList<Filter>(size);
+      for (int i = 0; i < size; i++) {
+        Filter filter = (Filter)HbaseObjectWritable.readObject(in, conf);
+        filters.add(filter);
+      }
+    }
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeByte(operator.ordinal());
+    out.writeInt(filters.size());
+    for (Filter filter : filters) {
+      HbaseObjectWritable.writeObject(out, filter, Writable.class, conf);
+    }
+  }
+
+  @Override
+  public KeyValue getNextKeyHint(KeyValue currentKV) {
+    return null;
+  }
+}
\ No newline at end of file
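
A sketch combining two of the filters in this package with MUST_PASS_ALL, assuming a running cluster; the stop row is illustrative.

import java.util.Arrays;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterListSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    // MUST_PASS_ALL: every child filter must let a cell through.  Here: scan
    // up to and including "row-500", returning only the first cell of each row.
    FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ALL,
        Arrays.<Filter>asList(
            new InclusiveStopFilter(Bytes.toBytes("row-500")),
            new FirstKeyOnlyFilter()));
    Scan scan = new Scan();
    scan.setFilter(list);
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(Bytes.toString(row.getRow()));
    }
    scanner.close();
  }
}
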
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java
new file mode 100644
index 0000000..36170bf
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java
@@ -0,0 +1,55 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.DataInput;
+import java.util.List;
+
+/**
+ * A filter that will only return the first KV from each row.
+ * <p>
+ * This filter can be used to more efficiently perform row count operations.
+ */
+public class FirstKeyOnlyFilter extends FilterBase {
+  private boolean foundKV = false;
+
+  public FirstKeyOnlyFilter() {
+  }
+
+  public void reset() {
+    foundKV = false;
+  }
+
+  public ReturnCode filterKeyValue(KeyValue v) {
+    if(foundKV) return ReturnCode.NEXT_ROW;
+    foundKV = true;
+    return ReturnCode.INCLUDE;
+  }
+
+  public void write(DataOutput out) throws IOException {
+  }
+
+  public void readFields(DataInput in) throws IOException {
+  }
+}
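
A row-count sketch of the use case the javadoc above points at, assuming a running cluster; the table name is illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;

public class RowCountSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    Scan scan = new Scan();
    // Only the first KeyValue of each row is returned, which keeps the amount
    // of data shipped to the client for a row count to a minimum.
    scan.setFilter(new FirstKeyOnlyFilter());
    ResultScanner scanner = table.getScanner(scan);
    long rows = 0;
    for (Result r : scanner) {
      rows++;
    }
    scanner.close();
    System.out.println("rows=" + rows);
  }
}
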
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
new file mode 100644
index 0000000..6148f35
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java
@@ -0,0 +1,82 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * A Filter that stops after the given row.  There is no "RowStopFilter" because
+ * the Scan spec allows you to specify a stop row.
+ *
+ * Use this filter to include the stop row, eg: [A,Z].
+ */
+public class InclusiveStopFilter extends FilterBase {
+  private byte [] stopRowKey;
+  private boolean done = false;
+
+  public InclusiveStopFilter() {
+    super();
+  }
+
+  public InclusiveStopFilter(final byte [] stopRowKey) {
+    this.stopRowKey = stopRowKey;
+  }
+
+  public byte[] getStopRowKey() {
+    return this.stopRowKey;
+  }
+
+  public boolean filterRowKey(byte[] buffer, int offset, int length) {
+    if (buffer == null) {
+      //noinspection RedundantIfStatement
+      if (this.stopRowKey == null) {
+        return true; //filter...
+      }
+      return false;
+    }
+    // if stopRowKey < buffer (the row is past the stop row), filter the row.
+    int cmp = Bytes.compareTo(stopRowKey, 0, stopRowKey.length,
+      buffer, offset, length);
+
+    if(cmp < 0) {
+      done = true;
+    }
+    return done;
+  }
+
+  public boolean filterAllRemaining() {
+    return done;
+  }
+
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, this.stopRowKey);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.stopRowKey = Bytes.readByteArray(in);
+  }
+}
\ No newline at end of file
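
A sketch of making the end of a scan range inclusive, assuming a running cluster; the row keys are illustrative.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class InclusiveStopSketch {
  public static void main(String[] args) throws Exception {
    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
    // A stop row given to Scan is exclusive; this filter instead stops the
    // scan after "row-z" has been returned, i.e. the range is [row-a, row-z].
    Scan scan = new Scan(Bytes.toBytes("row-a"));
    scan.setFilter(new InclusiveStopFilter(Bytes.toBytes("row-z")));
    ResultScanner scanner = table.getScanner(scan);
    for (Result row : scanner) {
      System.out.println(Bytes.toString(row.getRow()));
    }
    scanner.close();
  }
}
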
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
new file mode 100644
index 0000000..75edf19
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/IncompatibleFilterException.java
@@ -0,0 +1,40 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+/**
+ * Used to indicate a filter incompatibility
+ */
+public class IncompatibleFilterException extends RuntimeException {
+  private static final long serialVersionUID = 3236763276623198231L;
+
+/** constructor */
+  public IncompatibleFilterException() {
+    super();
+  }
+
+  /**
+   * constructor
+   * @param s message
+   */
+  public IncompatibleFilterException(String s) {
+    super(s);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
new file mode 100644
index 0000000..14b8e31
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/InvalidRowFilterException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+/**
+ * Used to indicate an invalid RowFilter.
+ */
+public class InvalidRowFilterException extends RuntimeException {
+  private static final long serialVersionUID = 2667894046345657865L;
+
+
+  /** constructor */
+  public InvalidRowFilterException() {
+    super();
+  }
+
+  /**
+   * constructor
+   * @param s message
+   */
+  public InvalidRowFilterException(String s) {
+    super(s);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
new file mode 100644
index 0000000..d417efd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java
@@ -0,0 +1,54 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A filter that will only return the key component of each KV (the value will
+ * be rewritten as empty).
+ * <p>
+ * This filter can be used to grab all of the keys without having to also grab
+ * the values.
+ */
+public class KeyOnlyFilter extends FilterBase {
+
+  boolean lenAsVal;
+  public KeyOnlyFilter() { this(false); }
+  public KeyOnlyFilter(boolean lenAsVal) { this.lenAsVal = lenAsVal; }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue kv) {
+    kv.convertToKeyOnly(this.lenAsVal);
+    return ReturnCode.INCLUDE;
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeBoolean(this.lenAsVal);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.lenAsVal = in.readBoolean();
+  }
+}
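A brief sketch of fetching keys without their values, assuming the same open table as in the earlier sketches.

Scan scan = new Scan();
scan.setFilter(new KeyOnlyFilter());   // values come back empty (or as 4-byte lengths when lenAsVal is true)
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {
  for (KeyValue kv : r.raw()) {
    System.out.println(kv + " valueLength=" + kv.getValueLength());
  }
}
scanner.close();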
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java
new file mode 100644
index 0000000..b5e4dd3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java
@@ -0,0 +1,81 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Implementation of Filter interface that limits results to a specific page
+ * size. It terminates scanning once the number of filter-passed rows exceeds
+ * the given page size.
+ * <p>
+ * Note that this filter cannot guarantee that the number of results returned
+ * to a client is less than or equal to the page size, because the filter is
+ * applied separately on different region servers. It does however optimize the
+ * scan of individual HRegions by making sure that the page size is never
+ * exceeded locally.
+ */
+public class PageFilter extends FilterBase {
+  private long pageSize = Long.MAX_VALUE;
+  private int rowsAccepted = 0;
+
+  /**
+   * Default constructor, filters nothing. Required though for RPC
+   * deserialization.
+   */
+  public PageFilter() {
+    super();
+  }
+
+  /**
+   * Constructor that takes a maximum page size.
+   *
+   * @param pageSize Maximum result size.
+   */
+  public PageFilter(final long pageSize) {
+    this.pageSize = pageSize;
+  }
+
+  public long getPageSize() {
+    return pageSize;
+  }
+
+  public boolean filterAllRemaining() {
+    return this.rowsAccepted >= this.pageSize;
+  }
+
+  public boolean filterRow() {
+    this.rowsAccepted++;
+    return this.rowsAccepted > this.pageSize;
+  }
+
+  public void readFields(final DataInput in) throws IOException {
+    this.pageSize = in.readLong();
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeLong(pageSize);
+  }
+}
\ No newline at end of file
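A sketch of paging with a client-side guard, since (as the class comment notes) the server-side limit only holds per region; the page size of 10 and the surrounding table setup are assumptions.

Scan scan = new Scan();
scan.setFilter(new PageFilter(10));
ResultScanner scanner = table.getScanner(scan);
int count = 0;
for (Result r : scanner) {
  // process r ...
  if (++count >= 10) {
    break;   // enforce the page size on the client as well
  }
}
scanner.close();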
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
new file mode 100644
index 0000000..063d068
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java
@@ -0,0 +1,77 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.DataInput;
+import java.util.List;
+
+/**
+ * Pass results that have the same row prefix.
+ */
+public class PrefixFilter extends FilterBase {
+  protected byte [] prefix = null;
+  protected boolean passedPrefix = false;
+
+  public PrefixFilter(final byte [] prefix) {
+    this.prefix = prefix;
+  }
+
+  public PrefixFilter() {
+    super();
+  }
+
+  public byte[] getPrefix() {
+    return prefix;
+  }
+
+  public boolean filterRowKey(byte[] buffer, int offset, int length) {
+    if (buffer == null || this.prefix == null)
+      return true;
+    if (length < prefix.length)
+      return true;
+    // if the buffer starts with the prefix, return false => pass row
+    // else return true => filter row
+    // if we are already past the prefix, set the flag so filterAllRemaining() stops the scan
+    int cmp = Bytes.compareTo(buffer, offset, this.prefix.length, this.prefix, 0,
+        this.prefix.length);
+    if(cmp > 0) {
+      passedPrefix = true;
+    }
+    return cmp != 0;
+  }
+
+  public boolean filterAllRemaining() {
+    return passedPrefix;
+  }
+
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, this.prefix);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.prefix = Bytes.readByteArray(in);
+  }
+}
\ No newline at end of file
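A usage sketch; starting the scan at the prefix itself is an optimization so rows sorting before the prefix are never read. The prefix value is illustrative.

byte[] prefix = Bytes.toBytes("user_");
Scan scan = new Scan(prefix);             // start at the prefix ...
scan.setFilter(new PrefixFilter(prefix)); // ... and stop passing rows once past it
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {
  // only rows whose key starts with "user_" arrive here
}
scanner.close();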
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java
new file mode 100644
index 0000000..2625eb1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java
@@ -0,0 +1,68 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Get;
+
+/**
+ * This filter is used to filter based on the column qualifier. It takes an
+ * operator (equal, greater, not equal, etc) and a byte [] comparator for the
+ * column qualifier portion of a key.
+ * <p>
+ * This filter can be wrapped with {@link WhileMatchFilter} and {@link SkipFilter}
+ * to add more control.
+ * <p>
+ * Multiple filters can be combined using {@link FilterList}.
+ * <p>
+ * If an already known column qualifier is looked for, use {@link Get#addColumn}
+ * directly rather than a filter.
+ */
+public class QualifierFilter extends CompareFilter {
+
+  /**
+   * Writable constructor, do not use.
+   */
+  public QualifierFilter() {
+  }
+
+  /**
+   * Constructor.
+   * @param qualifierCompareOp the compare op for column qualifier matching
+   * @param qualifierComparator the comparator for column qualifier matching
+   */
+  public QualifierFilter(final CompareOp qualifierCompareOp,
+      final WritableByteArrayComparable qualifierComparator) {
+    super(qualifierCompareOp, qualifierComparator);
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    int qualifierLength = v.getQualifierLength();
+    if (qualifierLength > 0) {
+      if (doCompare(this.compareOp, this.comparator, v.getBuffer(),
+          v.getQualifierOffset(), qualifierLength)) {
+        return ReturnCode.SKIP;
+      }
+    }
+    return ReturnCode.INCLUDE;
+  }
+}
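A minimal sketch, assuming a column qualifier named "status"; BinaryComparator is the exact-match comparator from this same package.

Scan scan = new Scan();
scan.setFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL,
    new BinaryComparator(Bytes.toBytes("status"))));
// only cells whose qualifier equals "status" are returned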
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java
new file mode 100644
index 0000000..6c37898
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java
@@ -0,0 +1,120 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.nio.charset.IllegalCharsetNameException;
+import java.util.regex.Pattern;
+
+/**
+ * This comparator is for use with {@link CompareFilter} implementations, such
+ * as {@link RowFilter}, {@link QualifierFilter}, and {@link ValueFilter}, for
+ * filtering based on the value of a given column. Use it to test if a given
+ * regular expression matches a cell value in the column.
+ * <p>
+ * Only EQUAL or NOT_EQUAL comparisons are valid with this comparator.
+ * <p>
+ * For example:
+ * <p>
+ * <pre>
+ * ValueFilter vf = new ValueFilter(CompareOp.EQUAL,
+ *     new RegexStringComparator(
+ *       // v4 IP address
+ *       "(((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3,3}" +
+ *         "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))(\\/[0-9]+)?" +
+ *         "|" +
+ *       // v6 IP address
+ *       "((([\\dA-Fa-f]{1,4}:){7}[\\dA-Fa-f]{1,4})(:([\\d]{1,3}.)" +
+ *         "{3}[\\d]{1,3})?)(\\/[0-9]+)?"));
+ * </pre>
+ */
+public class RegexStringComparator extends WritableByteArrayComparable {
+
+  private static final Log LOG = LogFactory.getLog(RegexStringComparator.class);
+
+  private Charset charset = Charset.forName(HConstants.UTF8_ENCODING);
+
+  private Pattern pattern;
+
+  /** Nullary constructor for Writable, do not use */
+  public RegexStringComparator() { }
+
+  /**
+   * Constructor
+   * @param expr a valid regular expression
+   */
+  public RegexStringComparator(String expr) {
+    super(Bytes.toBytes(expr));
+    this.pattern = Pattern.compile(expr, Pattern.DOTALL);
+  }
+
+  /**
+   * Specifies the {@link Charset} to use to convert the row key to a String.
+   * <p>
+   * The row key needs to be converted to a String in order to be matched
+   * against the regular expression.  This method controls which charset is
+   * used to do this conversion.
+   * <p>
+   * If the row key is made of arbitrary bytes, the charset {@code ISO-8859-1}
+   * is recommended.
+   * @param charset The charset to use.
+   */
+  public void setCharset(final Charset charset) {
+    this.charset = charset;
+  }
+
+  @Override
+  public int compareTo(byte[] value) {
+    // Use find() for subsequence match instead of matches() (full sequence
+    // match) to adhere to the principle of least surprise.
+    return pattern.matcher(new String(value, charset)).find() ? 0 : 1;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    final String expr = in.readUTF();
+    this.value = Bytes.toBytes(expr);
+    this.pattern = Pattern.compile(expr, Pattern.DOTALL); // keep flags consistent with the constructor
+    final String charset = in.readUTF();
+    if (charset.length() > 0) {
+      try {
+        this.charset = Charset.forName(charset);
+      } catch (IllegalCharsetNameException e) {
+        LOG.error("invalid charset", e);
+      }
+    }
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeUTF(pattern.toString());
+    out.writeUTF(charset.name());
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java
new file mode 100644
index 0000000..9d9d0a4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java
@@ -0,0 +1,86 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+
+import java.util.List;
+
+/**
+ * This filter is used to filter based on the key. It takes an operator
+ * (equal, greater, not equal, etc) and a byte [] comparator for the row
+ * portion of a key.
+ * <p>
+ * This filter can be wrapped with {@link WhileMatchFilter} to add more control.
+ * <p>
+ * Multiple filters can be combined using {@link FilterList}.
+ * <p>
+ * If an already known row range needs to be scanned, use {@link Scan} start
+ * and stop rows directly rather than a filter.
+ */
+public class RowFilter extends CompareFilter {
+
+  private boolean filterOutRow = false;
+
+  /**
+   * Writable constructor, do not use.
+   */
+  public RowFilter() {
+    super();
+  }
+
+  /**
+   * Constructor.
+   * @param rowCompareOp the compare op for row matching
+   * @param rowComparator the comparator for row matching
+   */
+  public RowFilter(final CompareOp rowCompareOp,
+      final WritableByteArrayComparable rowComparator) {
+    super(rowCompareOp, rowComparator);
+  }
+
+  @Override
+  public void reset() {
+    this.filterOutRow = false;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    if(this.filterOutRow) {
+      return ReturnCode.NEXT_ROW;
+    }
+    return ReturnCode.INCLUDE;
+  }
+
+  @Override
+  public boolean filterRowKey(byte[] data, int offset, int length) {
+    if(doCompare(this.compareOp, this.comparator, data, offset, length)) {
+      this.filterOutRow = true;
+    }
+    return this.filterOutRow;
+  }
+
+  @Override
+  public boolean filterRow() {
+    return this.filterOutRow;
+  }
+}
\ No newline at end of file
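A minimal sketch pairing RowFilter with the RegexStringComparator defined earlier in this patch; the pattern is illustrative.

Scan scan = new Scan();
scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
    new RegexStringComparator("^user_[0-9]+$")));
// for a single known row key, a Get (or Scan start/stop rows) is cheaper than a filter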
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java
new file mode 100644
index 0000000..cda1ccf
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java
@@ -0,0 +1,88 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+
+/**
+ * A {@link Filter} that checks a single column value, but does not emit the
+ * tested column. This will enable a performance boost over
+ * {@link SingleColumnValueFilter} when the tested column value is not actually
+ * needed by the caller (other than for the filtering itself).
+ */
+public class SingleColumnValueExcludeFilter extends SingleColumnValueFilter {
+
+  /**
+   * Writable constructor, do not use.
+   */
+  public SingleColumnValueExcludeFilter() {
+    super();
+  }
+
+  /**
+   * Constructor for binary compare of the value of a single column. If the
+   * column is found and the condition passes, all columns of the row will be
+   * emitted; except for the tested column value. If the column is not found or
+   * the condition fails, the row will not be emitted.
+   *
+   * @param family name of column family
+   * @param qualifier name of column qualifier
+   * @param compareOp operator
+   * @param value value to compare column values against
+   */
+  public SingleColumnValueExcludeFilter(byte[] family, byte[] qualifier,
+      CompareOp compareOp, byte[] value) {
+    super(family, qualifier, compareOp, value);
+  }
+
+  /**
+   * Constructor for binary compare of the value of a single column. If the
+   * column is found and the condition passes, all columns of the row will be
+   * emitted; except for the tested column value. If the condition fails, the
+   * row will not be emitted.
+   * <p>
+   * Use the filterIfColumnMissing flag to set whether the rest of the columns
+   * in a row will be emitted if the specified column to check is not found in
+   * the row.
+   *
+   * @param family name of column family
+   * @param qualifier name of column qualifier
+   * @param compareOp operator
+   * @param comparator Comparator to use.
+   */
+  public SingleColumnValueExcludeFilter(byte[] family, byte[] qualifier,
+      CompareOp compareOp, WritableByteArrayComparable comparator) {
+    super(family, qualifier, compareOp, comparator);
+  }
+
+  public ReturnCode filterKeyValue(KeyValue keyValue) {
+    ReturnCode superRetCode = super.filterKeyValue(keyValue);
+    if (superRetCode == ReturnCode.INCLUDE) {
+      // If the current column is actually the tested column,
+      // we will skip it instead.
+      if (keyValue.matchingColumn(this.columnFamily, this.columnQualifier)) {
+        return ReturnCode.SKIP;
+      }
+    }
+    return superRetCode;
+  }
+}
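A short sketch of the "test but do not return" behaviour; the family, qualifier, and value names are hypothetical.

Scan scan = new Scan();
scan.setFilter(new SingleColumnValueExcludeFilter(Bytes.toBytes("cf"),
    Bytes.toBytes("flag"), CompareFilter.CompareOp.EQUAL, Bytes.toBytes("on")));
// rows where cf:flag equals "on" are emitted, minus the tested cf:flag cell itself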
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java
new file mode 100644
index 0000000..24ea37f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java
@@ -0,0 +1,276 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * This filter is used to filter cells based on value. It takes a {@link CompareFilter.CompareOp}
+ * operator (equal, greater, not equal, etc), and either a byte [] value or
+ * a WritableByteArrayComparable.
+ * <p>
+ * If we have a byte [] value then we just do a lexicographic compare. For
+ * example, if passed value is 'b' and cell has 'a' and the compare operator
+ * is LESS, then we will filter out this cell (return true).  If this is not
+ * sufficient (eg you want to deserialize a long and then compare it to a fixed
+ * long value), then you can pass in your own comparator instead.
+ * <p>
+ * You must also specify a family and qualifier.  Only the value of this column
+ * will be tested. When using this filter on a {@link Scan} with specified
+ * inputs, the column to be tested should also be added as input (otherwise
+ * the filter will regard the column as missing).
+ * <p>
+ * To prevent the entire row from being emitted if the column is not found
+ * on a row, use {@link #setFilterIfMissing}.
+ * Otherwise, if the column is found, the entire row will be emitted only if
+ * the value passes.  If the value fails, the row will be filtered out.
+ * <p>
+ * In order to test values of previous versions (timestamps), set
+ * {@link #setLatestVersionOnly} to false. The default is true, meaning that
+ * only the latest version's value is tested and all previous versions are ignored.
+ * <p>
+ * To filter based on the value of all scanned columns, use {@link ValueFilter}.
+ */
+public class SingleColumnValueFilter extends FilterBase {
+  static final Log LOG = LogFactory.getLog(SingleColumnValueFilter.class);
+
+  protected byte [] columnFamily;
+  protected byte [] columnQualifier;
+  private CompareOp compareOp;
+  private WritableByteArrayComparable comparator;
+  private boolean foundColumn = false;
+  private boolean matchedColumn = false;
+  private boolean filterIfMissing = false;
+  private boolean latestVersionOnly = true;
+
+  /**
+   * Writable constructor, do not use.
+   */
+  public SingleColumnValueFilter() {
+  }
+
+  /**
+   * Constructor for binary compare of the value of a single column.  If the
+   * column is found and the condition passes, all columns of the row will be
+   * emitted.  If the column is not found or the condition fails, the row will
+   * not be emitted.
+   *
+   * @param family name of column family
+   * @param qualifier name of column qualifier
+   * @param compareOp operator
+   * @param value value to compare column values against
+   */
+  public SingleColumnValueFilter(final byte [] family, final byte [] qualifier,
+      final CompareOp compareOp, final byte[] value) {
+    this(family, qualifier, compareOp, new BinaryComparator(value));
+  }
+
+  /**
+   * Constructor for binary compare of the value of a single column.  If the
+   * column is found and the condition passes, all columns of the row will be
+   * emitted.  If the condition fails, the row will not be emitted.
+   * <p>
+   * Use the filterIfColumnMissing flag to set whether the rest of the columns
+   * in a row will be emitted if the specified column to check is not found in
+   * the row.
+   *
+   * @param family name of column family
+   * @param qualifier name of column qualifier
+   * @param compareOp operator
+   * @param comparator Comparator to use.
+   */
+  public SingleColumnValueFilter(final byte [] family, final byte [] qualifier,
+      final CompareOp compareOp, final WritableByteArrayComparable comparator) {
+    this.columnFamily = family;
+    this.columnQualifier = qualifier;
+    this.compareOp = compareOp;
+    this.comparator = comparator;
+  }
+
+  /**
+   * @return operator
+   */
+  public CompareOp getOperator() {
+    return compareOp;
+  }
+
+  /**
+   * @return the comparator
+   */
+  public WritableByteArrayComparable getComparator() {
+    return comparator;
+  }
+
+  /**
+   * @return the family
+   */
+  public byte[] getFamily() {
+    return columnFamily;
+  }
+
+  /**
+   * @return the qualifier
+   */
+  public byte[] getQualifier() {
+    return columnQualifier;
+  }
+
+  public ReturnCode filterKeyValue(KeyValue keyValue) {
+    // System.out.println("REMOVE KEY=" + keyValue.toString() + ", value=" + Bytes.toString(keyValue.getValue()));
+    if (this.matchedColumn) {
+      // We already found and matched the single column, all keys now pass
+      return ReturnCode.INCLUDE;
+    } else if (this.latestVersionOnly && this.foundColumn) {
+      // We found but did not match the single column, skip to next row
+      return ReturnCode.NEXT_ROW;
+    }
+    if (!keyValue.matchingColumn(this.columnFamily, this.columnQualifier)) {
+      return ReturnCode.INCLUDE;
+    }
+    foundColumn = true;
+    if (filterColumnValue(keyValue.getBuffer(),
+        keyValue.getValueOffset(), keyValue.getValueLength())) {
+      return this.latestVersionOnly? ReturnCode.NEXT_ROW: ReturnCode.INCLUDE;
+    }
+    this.matchedColumn = true;
+    return ReturnCode.INCLUDE;
+  }
+
+  private boolean filterColumnValue(final byte [] data, final int offset,
+      final int length) {
+    // TODO: Can this filter take a rawcomparator so don't have to make this
+    // byte array copy?
+    int compareResult =
+      this.comparator.compareTo(Arrays.copyOfRange(data, offset, offset + length));
+    switch (this.compareOp) {
+    case LESS:
+      return compareResult <= 0;
+    case LESS_OR_EQUAL:
+      return compareResult < 0;
+    case EQUAL:
+      return compareResult != 0;
+    case NOT_EQUAL:
+      return compareResult == 0;
+    case GREATER_OR_EQUAL:
+      return compareResult > 0;
+    case GREATER:
+      return compareResult >= 0;
+    default:
+      throw new RuntimeException("Unknown Compare op " + compareOp.name());
+    }
+  }
+
+  public boolean filterRow() {
+    // If column was found, return false if it was matched, true if it was not
+    // If column not found, return true if we filter if missing, false if not
+    return this.foundColumn? !this.matchedColumn: this.filterIfMissing;
+  }
+
+  public void reset() {
+    foundColumn = false;
+    matchedColumn = false;
+  }
+
+  /**
+   * Get whether entire row should be filtered if column is not found.
+   * @return true if row should be skipped if column not found, false if row
+   * should be let through anyways
+   */
+  public boolean getFilterIfMissing() {
+    return filterIfMissing;
+  }
+
+  /**
+   * Set whether entire row should be filtered if column is not found.
+   * <p>
+   * If true, the entire row will be skipped if the column is not found.
+   * <p>
+   * If false, the row will pass if the column is not found.  This is default.
+   * @param filterIfMissing flag
+   */
+  public void setFilterIfMissing(boolean filterIfMissing) {
+    this.filterIfMissing = filterIfMissing;
+  }
+
+  /**
+   * Get whether only the latest version of the column value should be compared.
+   * If true, the row will be returned only if the latest version of the column
+   * value matches. If false, the row will be returned if any version of the
+   * column value matches. The default is true.
+   * @return return value
+   */
+  public boolean getLatestVersionOnly() {
+    return latestVersionOnly;
+  }
+
+  /**
+   * Set whether only the latest version of the column value should be compared.
+   * If true, the row will be returned only if the latest version of the column
+   * value matches. If false, the row will be returned if any version of the
+   * column value matches. The default is true.
+   * @param latestVersionOnly flag
+   */
+  public void setLatestVersionOnly(boolean latestVersionOnly) {
+    this.latestVersionOnly = latestVersionOnly;
+  }
+
+  public void readFields(final DataInput in) throws IOException {
+    this.columnFamily = Bytes.readByteArray(in);
+    if(this.columnFamily.length == 0) {
+      this.columnFamily = null;
+    }
+    this.columnQualifier = Bytes.readByteArray(in);
+    if(this.columnQualifier.length == 0) {
+      this.columnQualifier = null;
+    }
+    this.compareOp = CompareOp.valueOf(in.readUTF());
+    this.comparator =
+      (WritableByteArrayComparable)HbaseObjectWritable.readObject(in, null);
+    this.foundColumn = in.readBoolean();
+    this.matchedColumn = in.readBoolean();
+    this.filterIfMissing = in.readBoolean();
+    this.latestVersionOnly = in.readBoolean();
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, this.columnFamily);
+    Bytes.writeByteArray(out, this.columnQualifier);
+    out.writeUTF(compareOp.name());
+    HbaseObjectWritable.writeObject(out, comparator,
+        WritableByteArrayComparable.class, null);
+    out.writeBoolean(foundColumn);
+    out.writeBoolean(matchedColumn);
+    out.writeBoolean(filterIfMissing);
+    out.writeBoolean(latestVersionOnly);
+  }
+}
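A sketch of the common pattern; the family, qualifier, and compared value are hypothetical.

SingleColumnValueFilter filter = new SingleColumnValueFilter(Bytes.toBytes("cf"),
    Bytes.toBytes("status"), CompareFilter.CompareOp.EQUAL, Bytes.toBytes("active"));
filter.setFilterIfMissing(true);   // drop rows that do not have cf:status at all
Scan scan = new Scan();
scan.setFilter(filter);
// if the scan restricts its columns, remember to include cf:status, or the
// filter will treat the column as missing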
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java
new file mode 100644
index 0000000..72a91bb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java
@@ -0,0 +1,101 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * A wrapper filter that filters an entire row if any of the KeyValue checks do
+ * not pass.
+ * <p>
+ * For example, suppose all columns in a row represent weights of different
+ * things, with the values being the actual weights, and we want to filter out
+ * the entire row if any of its weights are zero.  In this case, we want to
+ * prevent a row from being emitted if a single key is filtered.  Combine this
+ * filter with a {@link ValueFilter}:
+ * <p>
+ * <pre>
+ * scan.setFilter(new SkipFilter(new ValueFilter(CompareOp.NOT_EQUAL,
+ *     new BinaryComparator(Bytes.toBytes(0)))));
+ * </pre>
+ * Any row that contains a column whose value is 0 will be filtered out.
+ * Without this filter, the other non-zero valued columns in the row would still
+ * be emitted.
+ */
+public class SkipFilter extends FilterBase {
+  private boolean filterRow = false;
+  private Filter filter;
+
+  public SkipFilter() {
+    super();
+  }
+
+  public SkipFilter(Filter filter) {
+    this.filter = filter;
+  }
+
+  public Filter getFilter() {
+    return filter;
+  }
+
+  public void reset() {
+    filter.reset();
+    filterRow = false;
+  }
+
+  private void changeFR(boolean value) {
+    filterRow = filterRow || value;
+  }
+
+  public ReturnCode filterKeyValue(KeyValue v) {
+    ReturnCode c = filter.filterKeyValue(v);
+    changeFR(c != ReturnCode.INCLUDE);
+    return c;
+  }
+
+  public boolean filterRow() {
+    return filterRow;
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeUTF(this.filter.getClass().getName());
+    this.filter.write(out);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    String className = in.readUTF();
+    try {
+      this.filter = (Filter)(Class.forName(className).newInstance());
+      this.filter.readFields(in);
+    } catch (InstantiationException e) {
+      throw new RuntimeException("Failed deserialize.", e);
+    } catch (IllegalAccessException e) {
+      throw new RuntimeException("Failed deserialize.", e);
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Failed deserialize.", e);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java
new file mode 100644
index 0000000..4869dd1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java
@@ -0,0 +1,83 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+/**
+ * This comparator is for use with {@link CompareFilter} implementations such
+ * as {@link ValueFilter} and {@link SingleColumnValueFilter}, for filtering
+ * based on the value of a given column. Use it to test if a given substring
+ * appears in a cell value in the column. The comparison is case insensitive.
+ * <p>
+ * Only EQUAL or NOT_EQUAL tests are valid with this comparator.
+ * <p>
+ * For example:
+ * <p>
+ * <pre>
+ * ValueFilter vf = new ValueFilter(CompareOp.EQUAL,
+ *     new SubstringComparator("substr"));
+ * </pre>
+ */
+public class SubstringComparator extends WritableByteArrayComparable {
+
+  private String substr;
+
+  /** Nullary constructor for Writable, do not use */
+  public SubstringComparator() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param substr the substring
+   */
+  public SubstringComparator(String substr) {
+    super(Bytes.toBytes(substr.toLowerCase()));
+    this.substr = substr.toLowerCase();
+  }
+
+  @Override
+  public byte[] getValue() {
+    return Bytes.toBytes(substr);
+  }
+
+  @Override
+  public int compareTo(byte[] value) {
+    return Bytes.toString(value).toLowerCase().contains(substr) ? 0 : 1;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    String substr = in.readUTF();
+    this.value = Bytes.toBytes(substr);
+    this.substr = substr;
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    out.writeUTF(substr);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java
new file mode 100644
index 0000000..e9ccf5e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java
@@ -0,0 +1,91 @@
+package org.apache.hadoop.hbase.filter;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * Filter that returns only cells whose timestamp (version) is
+ * in the specified list of timestamps (versions).
+ * <p>
+ * Note: Use of this filter overrides any time range/time stamp
+ * options specified using {@link org.apache.hadoop.hbase.client.Get#setTimeRange(long, long)},
+ * {@link org.apache.hadoop.hbase.client.Scan#setTimeRange(long, long)}, {@link org.apache.hadoop.hbase.client.Get#setTimeStamp(long)},
+ * or {@link org.apache.hadoop.hbase.client.Scan#setTimeStamp(long)}.
+ */
+public class TimestampsFilter extends FilterBase {
+
+  TreeSet<Long> timestamps;
+
+  // Used during scans to hint the scan to stop early
+  // once the timestamps fall below the minTimeStamp.
+  long minTimeStamp = Long.MAX_VALUE;
+
+  /**
+   * Used during deserialization. Do not use otherwise.
+   */
+  public TimestampsFilter() {
+    super();
+  }
+
+  /**
+   * Constructor for filter that retains only those
+   * cells whose timestamp (version) is in the specified
+   * list of timestamps.
+   *
+   * @param timestamps list of timestamps (versions) to be retained
+   */
+  public TimestampsFilter(List<Long> timestamps) {
+    this.timestamps = new TreeSet<Long>(timestamps);
+    init();
+  }
+
+  private void init() {
+    if (this.timestamps.size() > 0) {
+      minTimeStamp = this.timestamps.first();
+    }
+  }
+
+  /**
+   * Gets the minimum timestamp requested by filter.
+   * @return  minimum timestamp requested by filter.
+   */
+  public long getMin() {
+    return minTimeStamp;
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    if (this.timestamps.contains(v.getTimestamp())) {
+      return ReturnCode.INCLUDE;
+    } else if (v.getTimestamp() < minTimeStamp) {
+      // The remaining versions of this column have timestamps smaller than
+      // every timestamp in the filter, so none of them can match.
+      return ReturnCode.NEXT_COL;
+    }
+    return ReturnCode.SKIP;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    int numTimestamps = in.readInt();
+    this.timestamps = new TreeSet<Long>();
+    for (int idx = 0; idx < numTimestamps; idx++) {
+      this.timestamps.add(in.readLong());
+    }
+    init();
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    int numTimestamps = this.timestamps.size();
+    out.writeInt(numTimestamps);
+    for (Long timestamp : this.timestamps) {
+      out.writeLong(timestamp);
+    }
+  }
+}
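A sketch of selecting cells at specific versions; the timestamps are placeholders, and an import of java.util.Arrays is assumed.

Scan scan = new Scan();
scan.setMaxVersions();   // consider all stored versions, not just the latest
scan.setFilter(new TimestampsFilter(Arrays.asList(1293840000000L, 1293926400000L)));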
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java
new file mode 100644
index 0000000..cc80c66
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/ValueFilter.java
@@ -0,0 +1,64 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * This filter is used to filter based on column value. It takes an
+ * operator (equal, greater, not equal, etc) and a byte [] comparator for the
+ * cell value.
+ * <p>
+ * This filter can be wrapped with {@link WhileMatchFilter} and {@link SkipFilter}
+ * to add more control.
+ * <p>
+ * Multiple filters can be combined using {@link FilterList}.
+ * <p>
+ * To test the value of a single qualifier when scanning multiple qualifiers,
+ * use {@link SingleColumnValueFilter}.
+ */
+public class ValueFilter extends CompareFilter {
+
+  /**
+   * Writable constructor, do not use.
+   */
+  public ValueFilter() {
+  }
+
+  /**
+   * Constructor.
+   * @param valueCompareOp the compare op for value matching
+   * @param valueComparator the comparator for value matching
+   */
+  public ValueFilter(final CompareOp valueCompareOp,
+      final WritableByteArrayComparable valueComparator) {
+    super(valueCompareOp, valueComparator);
+  }
+
+  @Override
+  public ReturnCode filterKeyValue(KeyValue v) {
+    if (doCompare(this.compareOp, this.comparator, v.getBuffer(),
+        v.getValueOffset(), v.getValueLength())) {
+      return ReturnCode.SKIP;
+    }
+    return ReturnCode.INCLUDE;
+  }
+}
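A sketch combining ValueFilter with the SubstringComparator from this package; the substring is illustrative.

Scan scan = new Scan();
scan.setFilter(new ValueFilter(CompareFilter.CompareOp.EQUAL,
    new SubstringComparator("error")));
// every cell the scan touches is tested; cells whose value does not contain
// "error" (case insensitive) are skipped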
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/WhileMatchFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/WhileMatchFilter.java
new file mode 100644
index 0000000..bad3c68
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/WhileMatchFilter.java
@@ -0,0 +1,102 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * A wrapper filter that returns true from {@link #filterAllRemaining()} as soon
+ * as any of the wrapped filter's {@link Filter#filterRowKey(byte[], int, int)},
+ * {@link Filter#filterKeyValue(org.apache.hadoop.hbase.KeyValue)},
+ * {@link org.apache.hadoop.hbase.filter.Filter#filterRow()} or
+ * {@link org.apache.hadoop.hbase.filter.Filter#filterAllRemaining()} methods
+ * returns true.
+ */
+public class WhileMatchFilter extends FilterBase {
+  private boolean filterAllRemaining = false;
+  private Filter filter;
+
+  public WhileMatchFilter() {
+    super();
+  }
+
+  public WhileMatchFilter(Filter filter) {
+    this.filter = filter;
+  }
+
+  public Filter getFilter() {
+    return filter;
+  }
+
+  public void reset() {
+    this.filter.reset();
+  }
+
+  private void changeFAR(boolean value) {
+    filterAllRemaining = filterAllRemaining || value;
+  }
+
+  public boolean filterAllRemaining() {
+    return this.filterAllRemaining || this.filter.filterAllRemaining();
+  }
+
+  public boolean filterRowKey(byte[] buffer, int offset, int length) {
+    boolean value = filter.filterRowKey(buffer, offset, length);
+    changeFAR(value);
+    return value;
+  }
+
+  public ReturnCode filterKeyValue(KeyValue v) {
+    ReturnCode c = filter.filterKeyValue(v);
+    changeFAR(c != ReturnCode.INCLUDE);
+    return c;
+  }
+
+  public boolean filterRow() {
+    boolean filterRow = this.filter.filterRow();
+    changeFAR(filterRow);
+    return filterRow;
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeUTF(this.filter.getClass().getName());
+    this.filter.write(out);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    String className = in.readUTF();
+    try {
+      this.filter = (Filter)(Class.forName(className).newInstance());
+      this.filter.readFields(in);
+    } catch (InstantiationException e) {
+      throw new RuntimeException("Failed deserialize.", e);
+    } catch (IllegalAccessException e) {
+      throw new RuntimeException("Failed deserialize.", e);
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException("Failed deserialize.", e);
+    }
+  }
+}
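A sketch of the early-out behaviour; the boundary row key is hypothetical.

Scan scan = new Scan();
scan.setFilter(new WhileMatchFilter(new RowFilter(
    CompareFilter.CompareOp.LESS_OR_EQUAL, new BinaryComparator(Bytes.toBytes("row-100")))));
// once a row key sorts after "row-100" the wrapper reports filterAllRemaining(),
// so the region server stops scanning instead of merely dropping later rows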
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java
new file mode 100644
index 0000000..309d49c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+/** Base class, combines Comparable<byte []> and Writable. */
+public abstract class WritableByteArrayComparable implements Writable, Comparable<byte[]> {
+
+  byte[] value;
+
+  /**
+   * Nullary constructor, for Writable
+   */
+  public WritableByteArrayComparable() { }
+
+  /**
+   * Constructor.
+   * @param value the value to compare against
+   */
+  public WritableByteArrayComparable(byte [] value) {
+    this.value = value;
+  }
+
+  public byte[] getValue() {
+    return value;
+  }
+
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    value = Bytes.readByteArray(in);
+  }
+
+  @Override
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, value);
+  }
+
+  @Override
+  public int compareTo(byte [] value) {
+    return Bytes.compareTo(this.value, value);
+  }
+
+}
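A sketch of a custom comparator built on this base class; the class name is hypothetical, and in a real deployment the class must also be on the region server classpath so it can be deserialized there. readFields and write are inherited from the base class.

import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
import org.apache.hadoop.hbase.util.Bytes;

public class CaseInsensitiveEqualsComparator extends WritableByteArrayComparable {
  /** Nullary constructor, required for Writable deserialization. */
  public CaseInsensitiveEqualsComparator() {
    super();
  }

  public CaseInsensitiveEqualsComparator(byte[] value) {
    super(value);
  }

  /** Returns 0 (a match) when the given bytes equal the stored value, ignoring case. */
  @Override
  public int compareTo(byte[] other) {
    return Bytes.toString(getValue()).equalsIgnoreCase(Bytes.toString(other)) ? 0 : 1;
  }
}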
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/filter/package-info.java b/0.90/src/main/java/org/apache/hadoop/hbase/filter/package-info.java
new file mode 100644
index 0000000..73ccef8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/filter/package-info.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * Provides row-level filters applied to HRegion scan results during calls to
+ * {@link org.apache.hadoop.hbase.client.ResultScanner#next()}.
+
+<p>
+Filters run the extent of a table unless you wrap your filter in a
+{@link org.apache.hadoop.hbase.filter.WhileMatchFilter}.
+The latter returns as soon as the filter stops matching.
+</p>
+<p>Do not rely on filters carrying state across rows; it is not reliable in
+current HBase as we have no handlers in place for when regions split, close, or
+servers crash.
+</p>
+*/
+package org.apache.hadoop.hbase.filter;
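A sketch combining several of the filters above through a FilterList (also part of this package, though not shown in the hunks here); the names and values are illustrative, and imports of java.util.ArrayList and java.util.List are assumed.

// rows must start with "user_" AND have cf:status == "active"
List<Filter> filters = new ArrayList<Filter>();
filters.add(new PrefixFilter(Bytes.toBytes("user_")));
filters.add(new SingleColumnValueFilter(Bytes.toBytes("cf"), Bytes.toBytes("status"),
    CompareFilter.CompareOp.EQUAL, Bytes.toBytes("active")));
Scan scan = new Scan();
scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL, filters));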
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/CodeToClassAndBack.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/CodeToClassAndBack.java
new file mode 100644
index 0000000..61a1c5e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/CodeToClassAndBack.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.util.*;
+
+/**
+ * A static interface.
+ * Instead of having this code in the HbaseMapWritable code, where it would
+ * block the possibility of altering the variables and changing their types,
+ * it is put here in this static interface where the static final Maps are
+ * loaded one time. Only byte[] is supported at this time.
+ */
+public interface CodeToClassAndBack {
+  /**
+   * Static map that contains mapping from code to class
+   */
+  public static final Map<Byte, Class<?>> CODE_TO_CLASS =
+    new HashMap<Byte, Class<?>>();
+
+  /**
+   * Static map that contains mapping from class to code
+   */
+  public static final Map<Class<?>, Byte> CLASS_TO_CODE =
+    new HashMap<Class<?>, Byte>();
+
+  /**
+   * Class list for supported classes
+   */
+  public Class<?>[] classList = {byte[].class};
+
+  /**
+   * The static loader that is used instead of the static constructor in
+   * HbaseMapWritable.
+   */
+  public InternalStaticLoader sl =
+    new InternalStaticLoader(classList, CODE_TO_CLASS, CLASS_TO_CODE);
+
+  /**
+   * Class that loads the static maps with their values.
+   */
+  public class InternalStaticLoader{
+    InternalStaticLoader(Class<?>[] classList,
+        Map<Byte,Class<?>> CODE_TO_CLASS, Map<Class<?>, Byte> CLASS_TO_CODE){
+      byte code = 1;
+      for(int i=0; i<classList.length; i++){
+        CLASS_TO_CODE.put(classList[i], code);
+        CODE_TO_CLASS.put(code, classList[i]);
+        code++;
+      }
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
new file mode 100644
index 0000000..40be649
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
@@ -0,0 +1,271 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A facade for a {@link org.apache.hadoop.hbase.io.hfile.HFile.Reader} that serves up
+ * either the top or bottom half of an HFile, where 'bottom' is the first half
+ * of the file containing the keys that sort lowest and 'top' is the second half
+ * of the file with keys that sort greater than those of the bottom half.
+ * The top includes the split file's midkey, or the key that follows if the
+ * midkey does not exist in the file.
+ *
+ * <p>This type works in tandem with the {@link Reference} type.  This class
+ * is used when reading, while Reference is used when writing.
+ *
+ * <p>This file is not splittable.  Calls to {@link #midkey()} return null.
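+ *
+ * <p>A minimal, hypothetical sketch of opening the half of a parent store file
+ * named by a reference ('fs', 'referencePath' and 'hfilePath' are assumed):
+ * <pre>
+ * Reference r = Reference.read(fs, referencePath);    // top or bottom half
+ * HalfStoreFileReader half =
+ *   new HalfStoreFileReader(fs, hfilePath, null, r);  // null: no block cache
+ * HFileScanner scanner = half.getScanner(false, false);
+ * if (scanner.seekTo()) {
+ *   do {
+ *     KeyValue kv = scanner.getKeyValue();
+ *     // process kv
+ *   } while (scanner.next());
+ * }
+ * </pre>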
+ */
+public class HalfStoreFileReader extends StoreFile.Reader {
+  final Log LOG = LogFactory.getLog(HalfStoreFileReader.class);
+  final boolean top;
+  // This is the key we split around.  It's the first possible entry on a row:
+  // i.e. empty column and a timestamp of LATEST_TIMESTAMP.
+  protected final byte [] splitkey;
+
+  /**
+   * @param fs
+   * @param p
+   * @param c
+   * @param r
+   * @throws IOException
+   */
+  public HalfStoreFileReader(final FileSystem fs, final Path p, final BlockCache c,
+    final Reference r)
+  throws IOException {
+    super(fs, p, c, false);
+    // This is not the actual midkey for this half-file; it's just the border
+    // around which we split top and bottom.  Have to look in the files to find
+    // the actual last and first keys for bottom and top halves.  Half-files
+    // don't have an actual midkey themselves. No midkey is how we indicate the
+    // file is not splittable.
+    this.splitkey = r.getSplitKey();
+    // Is it top or bottom half?
+    this.top = Reference.isTopFileRegion(r.getFileRegion());
+  }
+
+  protected boolean isTop() {
+    return this.top;
+  }
+
+  @Override
+  public HFileScanner getScanner(final boolean cacheBlocks, final boolean pread) {
+    final HFileScanner s = super.getScanner(cacheBlocks, pread);
+    return new HFileScanner() {
+      final HFileScanner delegate = s;
+      public boolean atEnd = false;
+
+      public ByteBuffer getKey() {
+        if (atEnd) return null;
+        return delegate.getKey();
+      }
+
+      public String getKeyString() {
+        if (atEnd) return null;
+
+        return delegate.getKeyString();
+      }
+
+      public ByteBuffer getValue() {
+        if (atEnd) return null;
+
+        return delegate.getValue();
+      }
+
+      public String getValueString() {
+        if (atEnd) return null;
+
+        return delegate.getValueString();
+      }
+
+      public KeyValue getKeyValue() {
+        if (atEnd) return null;
+
+        return delegate.getKeyValue();
+      }
+
+      public boolean next() throws IOException {
+        if (atEnd) return false;
+
+        boolean b = delegate.next();
+        if (!b) {
+          return b;
+        }
+        // constrain the bottom.
+        if (!top) {
+          ByteBuffer bb = getKey();
+          if (getComparator().compare(bb.array(), bb.arrayOffset(), bb.limit(),
+              splitkey, 0, splitkey.length) >= 0) {
+            atEnd = true;
+            return false;
+          }
+        }
+        return true;
+      }
+
+      public boolean seekBefore(byte[] key) throws IOException {
+        return seekBefore(key, 0, key.length);
+      }
+
+      public boolean seekBefore(byte [] key, int offset, int length)
+      throws IOException {
+        if (top) {
+          if (getComparator().compare(key, offset, length, splitkey, 0,
+              splitkey.length) < 0) {
+            return false;
+          }
+        } else {
+          if (getComparator().compare(key, offset, length, splitkey, 0,
+              splitkey.length) >= 0) {
+            return seekBefore(splitkey, 0, splitkey.length);
+          }
+        }
+        return this.delegate.seekBefore(key, offset, length);
+      }
+
+      public boolean seekTo() throws IOException {
+        if (top) {
+          int r = this.delegate.seekTo(splitkey);
+          if (r < 0) {
+            // midkey is < first key in file
+            return this.delegate.seekTo();
+          }
+          if (r > 0) {
+            return this.delegate.next();
+          }
+          return true;
+        }
+
+        boolean b = delegate.seekTo();
+        if (!b) {
+          return b;
+        }
+        // Check key.
+        ByteBuffer k = this.delegate.getKey();
+        return this.delegate.getReader().getComparator().
+          compare(k.array(), k.arrayOffset(), k.limit(),
+            splitkey, 0, splitkey.length) < 0;
+      }
+
+      public int seekTo(byte[] key) throws IOException {
+        return seekTo(key, 0, key.length);
+      }
+
+      public int seekTo(byte[] key, int offset, int length) throws IOException {
+        if (top) {
+          if (getComparator().compare(key, offset, length, splitkey, 0,
+              splitkey.length) < 0) {
+            return -1;
+          }
+        } else {
+          if (getComparator().compare(key, offset, length, splitkey, 0,
+              splitkey.length) >= 0) {
+            // we would place the scanner in the second half.
+            // it might be an error to return false here ever...
+            boolean res = delegate.seekBefore(splitkey, 0, splitkey.length);
+            if (!res) {
+              throw new IOException("Seeking for a key in bottom of file, but key exists in top of file, failed on seekBefore(midkey)");
+            }
+            return 1;
+          }
+        }
+        return delegate.seekTo(key, offset, length);
+      }
+
+      @Override
+      public int reseekTo(byte[] key) throws IOException {
+        return reseekTo(key, 0, key.length);
+      }
+
+      @Override
+      public int reseekTo(byte[] key, int offset, int length)
+      throws IOException {
+        //This function is identical to the corresponding seekTo function except
+        //that we call reseekTo (and not seekTo) on the delegate.
+        if (top) {
+          if (getComparator().compare(key, offset, length, splitkey, 0,
+              splitkey.length) < 0) {
+            return -1;
+          }
+        } else {
+          if (getComparator().compare(key, offset, length, splitkey, 0,
+              splitkey.length) >= 0) {
+            // we would place the scanner in the second half.
+            // it might be an error to return false here ever...
+            boolean res = delegate.seekBefore(splitkey, 0, splitkey.length);
+            if (!res) {
+              throw new IOException("Seeking for a key in bottom of file, but" +
+                  " key exists in top of file, failed on seekBefore(midkey)");
+            }
+            return 1;
+          }
+        }
+        if (atEnd) {
+          // skip the 'reseek' and just return 1.
+          return 1;
+        }
+        return delegate.reseekTo(key, offset, length);
+      }
+
+      public org.apache.hadoop.hbase.io.hfile.HFile.Reader getReader() {
+        return this.delegate.getReader();
+      }
+
+      public boolean isSeeked() {
+        return this.delegate.isSeeked();
+      }
+    };
+  }
+
+  @Override
+  public byte[] getLastKey() {
+    if (top) {
+      return super.getLastKey();
+    }
+    // Get a scanner that caches the block and that uses pread.
+    HFileScanner scanner = getScanner(true, true);
+    try {
+      if (scanner.seekBefore(this.splitkey)) {
+        return Bytes.toBytes(scanner.getKey());
+      }
+    } catch (IOException e) {
+      LOG.warn("Failed seekBefore " + Bytes.toString(this.splitkey), e);
+    }
+    return null;
+  }
+
+  @Override
+  public byte[] midkey() throws IOException {
+    // Returns null to indicate file is not splitable.
+    return null;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java
new file mode 100644
index 0000000..45eb495
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/HbaseMapWritable.java
@@ -0,0 +1,221 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * A Writable map.
+ * Like {@link org.apache.hadoop.io.MapWritable} but dumb: it will fail
+ * if passed a value type that it has not already been told about. It has been
+ * primed with HBase Writables and byte [].  Keys are always byte arrays.
+ *
+ * @param <K> <byte []> key  TODO: Parameter K is never used, could be removed.
+ * @param <V> value Expects a Writable or byte [].
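+ *
+ * <p>Illustrative sketch of typical use (the key and value below are made up;
+ * 'out' and 'in' are assumed DataOutput/DataInput over the same bytes):
+ * <pre>
+ * HbaseMapWritable<byte [], byte []> map =
+ *   new HbaseMapWritable<byte [], byte []>();
+ * map.put(Bytes.toBytes("qualifier"), Bytes.toBytes("value"));
+ * map.write(out);
+ * HbaseMapWritable<byte [], byte []> copy =
+ *   new HbaseMapWritable<byte [], byte []>();
+ * copy.readFields(in);   // 'copy' now holds the same entry
+ * </pre>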
+ */
+public class HbaseMapWritable <K,V>
+implements SortedMap<byte[],V>, Configurable, Writable, CodeToClassAndBack{
+  private AtomicReference<Configuration> conf = null;
+  protected SortedMap<byte [], V> instance = null;
+
+  /**
+   * The default constructor where a TreeMap is used
+   */
+   public HbaseMapWritable(){
+     this (new TreeMap<byte [], V>(Bytes.BYTES_COMPARATOR));
+   }
+
+  /**
+   * Constructor where another SortedMap can be used
+   *
+   * @param map the SortedMap to be used
+   */
+  public HbaseMapWritable(SortedMap<byte[], V> map){
+    conf = new AtomicReference<Configuration>();
+    instance = map;
+  }
+
+
+  /** @return the conf */
+  public Configuration getConf() {
+    return conf.get();
+  }
+
+  /** @param conf the conf to set */
+  public void setConf(Configuration conf) {
+    this.conf.set(conf);
+  }
+
+  public void clear() {
+    instance.clear();
+  }
+
+  public boolean containsKey(Object key) {
+    return instance.containsKey(key);
+  }
+
+  public boolean containsValue(Object value) {
+    return instance.containsValue(value);
+  }
+
+  public Set<Entry<byte [], V>> entrySet() {
+    return instance.entrySet();
+  }
+
+  public V get(Object key) {
+    return instance.get(key);
+  }
+
+  public boolean isEmpty() {
+    return instance.isEmpty();
+  }
+
+  public Set<byte []> keySet() {
+    return instance.keySet();
+  }
+
+  public int size() {
+    return instance.size();
+  }
+
+  public Collection<V> values() {
+    return instance.values();
+  }
+
+  public void putAll(Map<? extends byte [], ? extends V> m) {
+    this.instance.putAll(m);
+  }
+
+  public V remove(Object key) {
+    return this.instance.remove(key);
+  }
+
+  public V put(byte [] key, V value) {
+    return this.instance.put(key, value);
+  }
+
+  public Comparator<? super byte[]> comparator() {
+    return this.instance.comparator();
+  }
+
+  public byte[] firstKey() {
+    return this.instance.firstKey();
+  }
+
+  public SortedMap<byte[], V> headMap(byte[] toKey) {
+    return this.instance.headMap(toKey);
+  }
+
+  public byte[] lastKey() {
+    return this.instance.lastKey();
+  }
+
+  public SortedMap<byte[], V> subMap(byte[] fromKey, byte[] toKey) {
+    return this.instance.subMap(fromKey, toKey);
+  }
+
+  public SortedMap<byte[], V> tailMap(byte[] fromKey) {
+    return this.instance.tailMap(fromKey);
+  }
+
+  // Writable
+
+  /** @return the Class class for the specified id */
+  @SuppressWarnings("boxing")
+  protected Class<?> getClass(byte id) {
+    return CODE_TO_CLASS.get(id);
+  }
+
+  /** @return the id for the specified Class */
+  @SuppressWarnings("boxing")
+  protected byte getId(Class<?> clazz) {
+    Byte b = CLASS_TO_CODE.get(clazz);
+    if (b == null) {
+      throw new NullPointerException("Nothing for : " + clazz);
+    }
+    return b;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return this.instance.toString();
+  }
+
+  public void write(DataOutput out) throws IOException {
+    // Write out the number of entries in the map
+    out.writeInt(this.instance.size());
+    // Then write out each key/value pair
+    for (Map.Entry<byte [], V> e: instance.entrySet()) {
+      Bytes.writeByteArray(out, e.getKey());
+      Byte id = getId(e.getValue().getClass());
+      out.writeByte(id);
+      Object value = e.getValue();
+      if (value instanceof byte []) {
+        Bytes.writeByteArray(out, (byte [])value);
+      } else {
+        ((Writable)value).write(out);
+      }
+    }
+  }
+
+  @SuppressWarnings("unchecked")
+  public void readFields(DataInput in) throws IOException {
+    // First clear the map.  Otherwise we will just accumulate
+    // entries every time this method is called.
+    this.instance.clear();
+    // Read the number of entries in the map
+    int entries = in.readInt();
+    // Then read each key/value pair
+    for (int i = 0; i < entries; i++) {
+      byte [] key = Bytes.readByteArray(in);
+      byte id = in.readByte();
+      Class clazz = getClass(id);
+      V value = null;
+      if (clazz.equals(byte [].class)) {
+        byte [] bytes = Bytes.readByteArray(in);
+        value = (V)bytes;
+      } else {
+        Writable w = (Writable)ReflectionUtils.
+          newInstance(clazz, getConf());
+        w.readFields(in);
+        value = (V)w;
+      }
+      this.instance.put(key, value);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
new file mode 100644
index 0000000..4ca5ad4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/HbaseObjectWritable.java
@@ -0,0 +1,564 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Array;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.MultiPut;
+import org.apache.hadoop.hbase.client.MultiPutResponse;
+import org.apache.hadoop.hbase.client.MultiAction;
+import org.apache.hadoop.hbase.client.Action;
+import org.apache.hadoop.hbase.client.MultiResponse;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
+import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.DependentColumnFilter;
+import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
+import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
+import org.apache.hadoop.hbase.filter.KeyOnlyFilter;
+import org.apache.hadoop.hbase.filter.PageFilter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.QualifierFilter;
+import org.apache.hadoop.hbase.filter.RowFilter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueExcludeFilter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.filter.SkipFilter;
+import org.apache.hadoop.hbase.filter.ValueFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableFactories;
+
+/**
+ * This is a customized version of the polymorphic Hadoop
+ * {@link ObjectWritable}.  It removes UTF8 (HADOOP-414).
+ * Using {@link Text} instead of UTF-8 saves ~2% CPU between reading and writing
+ * objects running a short sequentialWrite Performance Evaluation test just in
+ * ObjectWritable alone; more when we're doing randomRead-ing.  Other
+ * optimizations include passing codes for classes instead of the actual class
+ * names themselves.  This means the class needs amendment when non-Writable
+ * classes are introduced -- if passed a Writable for which we have no code, we
+ * fall back to the old-school passing of the class name -- but with codes the
+ * savings are large, particularly when cell data is small (if less than a
+ * couple of kilobytes, the encoding/decoding of the class name and the
+ * reflection needed to instantiate the class cost more than the cell handling
+ * itself).
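+ *
+ * <p>A minimal, illustrative sketch of a round trip through this class
+ * ('out', 'in' and 'conf' are assumed; 'in' reads the bytes written to 'out'):
+ * <pre>
+ * Put put = new Put(Bytes.toBytes("row"));
+ * // Writes the class code(s) for Put followed by the serialized Put.
+ * HbaseObjectWritable.writeObject(out, put, Put.class, conf);
+ * // Reads the code, resolves it back to Put and deserializes the instance.
+ * Put copy = (Put) HbaseObjectWritable.readObject(in, conf);
+ * </pre>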
+ */
+public class HbaseObjectWritable implements Writable, WritableWithSize, Configurable {
+  protected final static Log LOG = LogFactory.getLog(HbaseObjectWritable.class);
+
+  // Here we maintain two static maps of classes to code and vice versa.
+  // Add new classes+codes as wanted or figure way to auto-generate these
+  // maps from the HMasterInterface.
+  static final Map<Byte, Class<?>> CODE_TO_CLASS =
+    new HashMap<Byte, Class<?>>();
+  static final Map<Class<?>, Byte> CLASS_TO_CODE =
+    new HashMap<Class<?>, Byte>();
+  // Special code that means 'not-encoded'; in this case we do old school
+  // sending of the class name using reflection, etc.
+  private static final byte NOT_ENCODED = 0;
+  static {
+    byte code = NOT_ENCODED + 1;
+    // Primitive types.
+    addToMap(Boolean.TYPE, code++);
+    addToMap(Byte.TYPE, code++);
+    addToMap(Character.TYPE, code++);
+    addToMap(Short.TYPE, code++);
+    addToMap(Integer.TYPE, code++);
+    addToMap(Long.TYPE, code++);
+    addToMap(Float.TYPE, code++);
+    addToMap(Double.TYPE, code++);
+    addToMap(Void.TYPE, code++);
+
+    // Other java types
+    addToMap(String.class, code++);
+    addToMap(byte [].class, code++);
+    addToMap(byte [][].class, code++);
+
+    // Hadoop types
+    addToMap(Text.class, code++);
+    addToMap(Writable.class, code++);
+    addToMap(Writable [].class, code++);
+    addToMap(HbaseMapWritable.class, code++);
+    addToMap(NullInstance.class, code++);
+
+    // Hbase types
+    addToMap(HColumnDescriptor.class, code++);
+    addToMap(HConstants.Modify.class, code++);
+    addToMap(HMsg.class, code++);
+    addToMap(HMsg[].class, code++);
+    addToMap(HRegion.class, code++);
+    addToMap(HRegion[].class, code++);
+    addToMap(HRegionInfo.class, code++);
+    addToMap(HRegionInfo[].class, code++);
+    addToMap(HServerAddress.class, code++);
+    addToMap(HServerInfo.class, code++);
+    addToMap(HTableDescriptor.class, code++);
+    addToMap(MapWritable.class, code++);
+
+    //
+    // HBASE-880
+    //
+    addToMap(ClusterStatus.class, code++);
+    addToMap(Delete.class, code++);
+    addToMap(Get.class, code++);
+    addToMap(KeyValue.class, code++);
+    addToMap(KeyValue[].class, code++);
+    addToMap(Put.class, code++);
+    addToMap(Put[].class, code++);
+    addToMap(Result.class, code++);
+    addToMap(Result[].class, code++);
+    addToMap(Scan.class, code++);
+
+    addToMap(WhileMatchFilter.class, code++);
+    addToMap(PrefixFilter.class, code++);
+    addToMap(PageFilter.class, code++);
+    addToMap(InclusiveStopFilter.class, code++);
+    addToMap(ColumnCountGetFilter.class, code++);
+    addToMap(SingleColumnValueFilter.class, code++);
+    addToMap(SingleColumnValueExcludeFilter.class, code++);
+    addToMap(BinaryComparator.class, code++);
+    addToMap(CompareFilter.class, code++);
+    addToMap(RowFilter.class, code++);
+    addToMap(ValueFilter.class, code++);
+    addToMap(QualifierFilter.class, code++);
+    addToMap(SkipFilter.class, code++);
+    addToMap(WritableByteArrayComparable.class, code++);
+    addToMap(FirstKeyOnlyFilter.class, code++);
+    addToMap(DependentColumnFilter.class, code++);
+
+    addToMap(Delete [].class, code++);
+
+    addToMap(MultiPut.class, code++);
+    addToMap(MultiPutResponse.class, code++);
+
+    addToMap(HLog.Entry.class, code++);
+    addToMap(HLog.Entry[].class, code++);
+    addToMap(HLogKey.class, code++);
+
+    addToMap(List.class, code++);
+
+    addToMap(NavigableSet.class, code++);
+    addToMap(ColumnPrefixFilter.class, code++);
+
+    // Multi
+    addToMap(Row.class, code++);
+    addToMap(Action.class, code++);
+    addToMap(MultiAction.class, code++);
+    addToMap(MultiResponse.class, code++);
+
+    addToMap(Increment.class, code++);
+
+    addToMap(KeyOnlyFilter.class, code++);
+
+  }
+
+  private Class<?> declaredClass;
+  private Object instance;
+  private Configuration conf;
+
+  /** default constructor for writable */
+  public HbaseObjectWritable() {
+    super();
+  }
+
+  /**
+   * @param instance
+   */
+  public HbaseObjectWritable(Object instance) {
+    set(instance);
+  }
+
+  /**
+   * @param declaredClass
+   * @param instance
+   */
+  public HbaseObjectWritable(Class<?> declaredClass, Object instance) {
+    this.declaredClass = declaredClass;
+    this.instance = instance;
+  }
+
+  /** @return the instance, or null if none. */
+  public Object get() { return instance; }
+
+  /** @return the class this is meant to be. */
+  public Class<?> getDeclaredClass() { return declaredClass; }
+
+  /**
+   * Reset the instance.
+   * @param instance
+   */
+  public void set(Object instance) {
+    this.declaredClass = instance.getClass();
+    this.instance = instance;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return "OW[class=" + declaredClass + ",value=" + instance + "]";
+  }
+
+
+  public void readFields(DataInput in) throws IOException {
+    readObject(in, this, this.conf);
+  }
+
+  public void write(DataOutput out) throws IOException {
+    writeObject(out, instance, declaredClass, conf);
+  }
+
+  public long getWritableSize() {
+    return getWritableSize(instance, declaredClass, conf);
+  }
+
+  private static class NullInstance extends Configured implements Writable {
+    Class<?> declaredClass;
+    /** default constructor for writable */
+    @SuppressWarnings("unused")
+    public NullInstance() { super(null); }
+
+    /**
+     * @param declaredClass
+     * @param conf
+     */
+    public NullInstance(Class<?> declaredClass, Configuration conf) {
+      super(conf);
+      this.declaredClass = declaredClass;
+    }
+
+    public void readFields(DataInput in) throws IOException {
+      this.declaredClass = CODE_TO_CLASS.get(in.readByte());
+    }
+
+    public void write(DataOutput out) throws IOException {
+      writeClassCode(out, this.declaredClass);
+    }
+  }
+
+  /**
+   * Write out the code byte for passed Class.
+   * @param out
+   * @param c
+   * @throws IOException
+   */
+  static void writeClassCode(final DataOutput out, final Class<?> c)
+  throws IOException {
+    Byte code = CLASS_TO_CODE.get(c);
+    if (code == null ) {
+      if ( List.class.isAssignableFrom(c)) {
+        code = CLASS_TO_CODE.get(List.class);
+      }
+      else if (Writable.class.isAssignableFrom(c)) {
+        code = CLASS_TO_CODE.get(Writable.class);
+      }
+    }
+    if (code == null) {
+      LOG.error("Unsupported type " + c);
+      StackTraceElement[] els = new Exception().getStackTrace();
+      for(StackTraceElement elem : els) {
+        LOG.error(elem.getMethodName());
+      }
+//          new Exception().getStackTrace()[0].getMethodName());
+//      throw new IOException(new Exception().getStackTrace()[0].getMethodName());
+      throw new UnsupportedOperationException("No code for unexpected " + c);
+    }
+    out.writeByte(code);
+  }
+
+
+  public static long getWritableSize(Object instance, Class declaredClass,
+                                     Configuration conf) {
+    long size = Bytes.SIZEOF_BYTE; // code
+    if (instance == null) {
+      return 0L;
+    }
+
+    if (declaredClass.isArray()) {
+      if (declaredClass.equals(Result[].class)) {
+
+        return size + Result.getWriteArraySize((Result[])instance);
+      }
+    }
+    if (declaredClass.equals(Result.class)) {
+      Result r = (Result) instance;
+      // one extra class code for writable instance.
+      return r.getWritableSize() + size + Bytes.SIZEOF_BYTE;
+    }
+    return 0L; // no hint is the default.
+  }
+  /**
+   * Write a {@link Writable}, {@link String}, primitive type, or an array of
+   * the preceding.
+   * @param out
+   * @param instance
+   * @param declaredClass
+   * @param conf
+   * @throws IOException
+   */
+  @SuppressWarnings("unchecked")
+  public static void writeObject(DataOutput out, Object instance,
+                                 Class declaredClass,
+                                 Configuration conf)
+  throws IOException {
+
+    Object instanceObj = instance;
+    Class declClass = declaredClass;
+
+    if (instanceObj == null) {                       // null
+      instanceObj = new NullInstance(declClass, conf);
+      declClass = Writable.class;
+    }
+    writeClassCode(out, declClass);
+    if (declClass.isArray()) {                // array
+      // If bytearray, just dump it out -- avoid the recursion and
+      // byte-at-a-time we were previously doing.
+      if (declClass.equals(byte [].class)) {
+        Bytes.writeByteArray(out, (byte [])instanceObj);
+      } else if(declClass.equals(Result [].class)) {
+        Result.writeArray(out, (Result [])instanceObj);
+      } else {
+        int length = Array.getLength(instanceObj);
+        out.writeInt(length);
+        for (int i = 0; i < length; i++) {
+          writeObject(out, Array.get(instanceObj, i),
+                    declClass.getComponentType(), conf);
+        }
+      }
+    } else if (List.class.isAssignableFrom(declClass)) {
+      List list = (List)instanceObj;
+      int length = list.size();
+      out.writeInt(length);
+      for (int i = 0; i < length; i++) {
+        writeObject(out, list.get(i),
+                  list.get(i).getClass(), conf);
+      }
+    } else if (declClass == String.class) {   // String
+      Text.writeString(out, (String)instanceObj);
+    } else if (declClass.isPrimitive()) {     // primitive type
+      if (declClass == Boolean.TYPE) {        // boolean
+        out.writeBoolean(((Boolean)instanceObj).booleanValue());
+      } else if (declClass == Character.TYPE) { // char
+        out.writeChar(((Character)instanceObj).charValue());
+      } else if (declClass == Byte.TYPE) {    // byte
+        out.writeByte(((Byte)instanceObj).byteValue());
+      } else if (declClass == Short.TYPE) {   // short
+        out.writeShort(((Short)instanceObj).shortValue());
+      } else if (declClass == Integer.TYPE) { // int
+        out.writeInt(((Integer)instanceObj).intValue());
+      } else if (declClass == Long.TYPE) {    // long
+        out.writeLong(((Long)instanceObj).longValue());
+      } else if (declClass == Float.TYPE) {   // float
+        out.writeFloat(((Float)instanceObj).floatValue());
+      } else if (declClass == Double.TYPE) {  // double
+        out.writeDouble(((Double)instanceObj).doubleValue());
+      } else if (declClass == Void.TYPE) {    // void
+      } else {
+        throw new IllegalArgumentException("Not a primitive: "+declClass);
+      }
+    } else if (declClass.isEnum()) {         // enum
+      Text.writeString(out, ((Enum)instanceObj).name());
+    } else if (Writable.class.isAssignableFrom(declClass)) { // Writable
+      Class <?> c = instanceObj.getClass();
+      Byte code = CLASS_TO_CODE.get(c);
+      if (code == null) {
+        out.writeByte(NOT_ENCODED);
+        Text.writeString(out, c.getName());
+      } else {
+        writeClassCode(out, c);
+      }
+      ((Writable)instanceObj).write(out);
+    } else {
+      throw new IOException("Can't write: "+instanceObj+" as "+declClass);
+    }
+  }
+
+
+  /**
+   * Read a {@link Writable}, {@link String}, primitive type, or an array of
+   * the preceding.
+   * @param in
+   * @param conf
+   * @return the object
+   * @throws IOException
+   */
+  public static Object readObject(DataInput in, Configuration conf)
+    throws IOException {
+    return readObject(in, null, conf);
+  }
+
+  /**
+   * Read a {@link Writable}, {@link String}, primitive type, or an array of
+   * the preceding.
+   * @param in
+   * @param objectWritable
+   * @param conf
+   * @return the object
+   * @throws IOException
+   */
+  @SuppressWarnings("unchecked")
+  public static Object readObject(DataInput in,
+      HbaseObjectWritable objectWritable, Configuration conf)
+  throws IOException {
+    Class<?> declaredClass = CODE_TO_CLASS.get(in.readByte());
+    Object instance;
+    if (declaredClass.isPrimitive()) {            // primitive types
+      if (declaredClass == Boolean.TYPE) {             // boolean
+        instance = Boolean.valueOf(in.readBoolean());
+      } else if (declaredClass == Character.TYPE) {    // char
+        instance = Character.valueOf(in.readChar());
+      } else if (declaredClass == Byte.TYPE) {         // byte
+        instance = Byte.valueOf(in.readByte());
+      } else if (declaredClass == Short.TYPE) {        // short
+        instance = Short.valueOf(in.readShort());
+      } else if (declaredClass == Integer.TYPE) {      // int
+        instance = Integer.valueOf(in.readInt());
+      } else if (declaredClass == Long.TYPE) {         // long
+        instance = Long.valueOf(in.readLong());
+      } else if (declaredClass == Float.TYPE) {        // float
+        instance = Float.valueOf(in.readFloat());
+      } else if (declaredClass == Double.TYPE) {       // double
+        instance = Double.valueOf(in.readDouble());
+      } else if (declaredClass == Void.TYPE) {         // void
+        instance = null;
+      } else {
+        throw new IllegalArgumentException("Not a primitive: "+declaredClass);
+      }
+    } else if (declaredClass.isArray()) {              // array
+      if (declaredClass.equals(byte [].class)) {
+        instance = Bytes.readByteArray(in);
+      } else if(declaredClass.equals(Result [].class)) {
+        instance = Result.readArray(in);
+      } else {
+        int length = in.readInt();
+        instance = Array.newInstance(declaredClass.getComponentType(), length);
+        for (int i = 0; i < length; i++) {
+          Array.set(instance, i, readObject(in, conf));
+        }
+      }
+    } else if (List.class.isAssignableFrom(declaredClass)) {              // List
+      int length = in.readInt();
+      instance = new ArrayList(length);
+      for (int i = 0; i < length; i++) {
+        ((ArrayList)instance).add(readObject(in, conf));
+      }
+    } else if (declaredClass == String.class) {        // String
+      instance = Text.readString(in);
+    } else if (declaredClass.isEnum()) {         // enum
+      instance = Enum.valueOf((Class<? extends Enum>) declaredClass,
+        Text.readString(in));
+    } else {                                      // Writable
+      Class instanceClass = null;
+      Byte b = in.readByte();
+      if (b.byteValue() == NOT_ENCODED) {
+        String className = Text.readString(in);
+        try {
+          instanceClass = getClassByName(conf, className);
+        } catch (ClassNotFoundException e) {
+          LOG.error("Can't find class " + className, e);
+          throw new IOException("Can't find class " + className, e);
+        }
+      } else {
+        instanceClass = CODE_TO_CLASS.get(b);
+      }
+      Writable writable = WritableFactories.newInstance(instanceClass, conf);
+      try {
+        writable.readFields(in);
+      } catch (Exception e) {
+        LOG.error("Error in readFields", e);
+        throw new IOException("Error in readFields" , e);
+      }
+      instance = writable;
+      if (instanceClass == NullInstance.class) {  // null
+        declaredClass = ((NullInstance)instance).declaredClass;
+        instance = null;
+      }
+    }
+    if (objectWritable != null) {                 // store values
+      objectWritable.declaredClass = declaredClass;
+      objectWritable.instance = instance;
+    }
+    return instance;
+  }
+
+  @SuppressWarnings("unchecked")
+  private static Class getClassByName(Configuration conf, String className)
+  throws ClassNotFoundException {
+    if(conf != null) {
+      return conf.getClassByName(className);
+    }
+    ClassLoader cl = Thread.currentThread().getContextClassLoader();
+    if(cl == null) {
+      cl = HbaseObjectWritable.class.getClassLoader();
+    }
+    return Class.forName(className, true, cl);
+  }
+
+  private static void addToMap(final Class<?> clazz, final byte code) {
+    CLASS_TO_CODE.put(clazz, code);
+    CODE_TO_CLASS.put(code, clazz);
+  }
+
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+  }
+
+  public Configuration getConf() {
+    return this.conf;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/HeapSize.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/HeapSize.java
new file mode 100644
index 0000000..bd78846
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/HeapSize.java
@@ -0,0 +1,47 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+/**
+ * Implementations can be asked for an estimate of their size in bytes.
+ * <p>
+ * Useful for sizing caches.  It is a given that implementation approximations
+ * do not account for 32 vs 64 bit, nor for different VM implementations.
+ * <p>
+ * An Object's size is determined by the non-static data members in it,
+ * as well as the fixed {@link Object} overhead.
+ * <p>
+ * For example:
+ * <pre>
+ * public class SampleObject implements HeapSize {
+ *
+ *   int [] numbers;
+ *   int x;
+ * }
+ * </pre>
+ */
+public interface HeapSize {
+  /**
+   * @return Approximate 'exclusive deep size' of the implementing object,
+   * including the payload and the hosting object itself.
+   */
+  public long heapSize();
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java
new file mode 100644
index 0000000..0cd5213
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java
@@ -0,0 +1,269 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.io.WritableComparator;
+
+/**
+ * A byte sequence that is usable as a key or value.  It is based on
+ * {@link org.apache.hadoop.io.BytesWritable}, only this class is NOT resizable
+ * and DOES NOT distinguish between the size of the sequence and the current
+ * capacity as {@link org.apache.hadoop.io.BytesWritable} does. Hence it is
+ * comparatively 'immutable'. When creating a new instance of this class,
+ * the underlying byte [] is not copied, just referenced.  The backing
+ * buffer is accessed when we go to serialize.
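+ *
+ * <p>Illustrative sketch (the row value below is made up):
+ * <pre>
+ * byte [] row = Bytes.toBytes("row-1");
+ * ImmutableBytesWritable key = new ImmutableBytesWritable(row);
+ * // No copy is made: 'key' wraps the same backing array as 'row'.
+ * byte [] copy = key.copyBytes();   // an actual copy of the wrapped bytes
+ * </pre>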
+ */
+public class ImmutableBytesWritable
+implements WritableComparable<ImmutableBytesWritable> {
+  private byte[] bytes;
+  private int offset;
+  private int length;
+
+  /**
+   * Create a zero-size sequence.
+   */
+  public ImmutableBytesWritable() {
+    super();
+  }
+
+  /**
+   * Create a ImmutableBytesWritable using the byte array as the initial value.
+   * @param bytes This array becomes the backing storage for the object.
+   */
+  public ImmutableBytesWritable(byte[] bytes) {
+    this(bytes, 0, bytes.length);
+  }
+
+  /**
+   * Set the new ImmutableBytesWritable to the contents of the passed
+   * <code>ibw</code>.
+   * @param ibw the value to set this ImmutableBytesWritable to.
+   */
+  public ImmutableBytesWritable(final ImmutableBytesWritable ibw) {
+    this(ibw.get(), 0, ibw.getSize());
+  }
+
+  /**
+   * Set the value to a given byte range
+   * @param bytes the new byte range to set to
+   * @param offset the offset in newData to start at
+   * @param length the number of bytes in the range
+   */
+  public ImmutableBytesWritable(final byte[] bytes, final int offset,
+      final int length) {
+    this.bytes = bytes;
+    this.offset = offset;
+    this.length = length;
+  }
+
+  /**
+   * Get the data from the BytesWritable.
+   * @return The data is only valid between offset and offset+length.
+   */
+  public byte [] get() {
+    if (this.bytes == null) {
+      throw new IllegalStateException("Uninitialized. Null constructor " +
+        "called w/o accompanying readFields invocation");
+    }
+    return this.bytes;
+  }
+
+  /**
+   * @param b Use passed bytes as backing array for this instance.
+   */
+  public void set(final byte [] b) {
+    set(b, 0, b.length);
+  }
+
+  /**
+   * @param b Use passed bytes as backing array for this instance.
+   * @param offset
+   * @param length
+   */
+  public void set(final byte [] b, final int offset, final int length) {
+    this.bytes = b;
+    this.offset = offset;
+    this.length = length;
+  }
+
+  /**
+   * @return the number of valid bytes in the buffer
+   */
+  public int getSize() {
+    if (this.bytes == null) {
+      throw new IllegalStateException("Uninitialized. Null constructor " +
+        "called w/o accompanying readFields invocation");
+    }
+    return this.length;
+  }
+
+  /**
+   * @return the number of valid bytes in the buffer
+   */
+  //Should probably deprecate getSize() so that we keep the same calls for all
+  //byte []
+  public int getLength() {
+    if (this.bytes == null) {
+      throw new IllegalStateException("Uninitialized. Null constructor " +
+        "called w/o accompanying readFields invocation");
+    }
+    return this.length;
+  }
+
+  /**
+   * @return offset
+   */
+  public int getOffset(){
+    return this.offset;
+  }
+
+  public void readFields(final DataInput in) throws IOException {
+    this.length = in.readInt();
+    this.bytes = new byte[this.length];
+    in.readFully(this.bytes, 0, this.length);
+    this.offset = 0;
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeInt(this.length);
+    out.write(this.bytes, this.offset, this.length);
+  }
+
+  // Below methods copied from BytesWritable
+  @Override
+  public int hashCode() {
+    int hash = 1;
+    for (int i = offset; i < offset + length; i++)
+      hash = (31 * hash) + (int)bytes[i];
+    return hash;
+  }
+
+  /**
+   * Define the sort order of the BytesWritable.
+   * @param that The other bytes writable
+   * @return Positive if left is bigger than right, 0 if they are equal, and
+   *         negative if left is smaller than right.
+   */
+  public int compareTo(ImmutableBytesWritable that) {
+    return WritableComparator.compareBytes(
+      this.bytes, this.offset, this.length,
+      that.bytes, that.offset, that.length);
+  }
+
+  /**
+   * Compares the bytes in this object to the specified byte array
+   * @param that
+   * @return Positive if left is bigger than right, 0 if they are equal, and
+   *         negative if left is smaller than right.
+   */
+  public int compareTo(final byte [] that) {
+    return WritableComparator.compareBytes(
+      this.bytes, this.offset, this.length,
+      that, 0, that.length);
+  }
+
+  /**
+   * @see java.lang.Object#equals(java.lang.Object)
+   */
+  @Override
+  public boolean equals(Object right_obj) {
+    if (right_obj instanceof byte []) {
+      return compareTo((byte [])right_obj) == 0;
+    }
+    if (right_obj instanceof ImmutableBytesWritable) {
+      return compareTo((ImmutableBytesWritable)right_obj) == 0;
+    }
+    return false;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder(3*this.bytes.length);
+    for (int idx = offset; idx < offset + length; idx++) {
+      // if not the first, put a blank separator in
+      if (idx != offset) {
+        sb.append(' ');
+      }
+      String num = Integer.toHexString(0xff & bytes[idx]);
+      // if it is only one digit, add a leading 0.
+      if (num.length() < 2) {
+        sb.append('0');
+      }
+      sb.append(num);
+    }
+    return sb.toString();
+  }
+
+  /** A Comparator optimized for ImmutableBytesWritable.
+   */
+  public static class Comparator extends WritableComparator {
+    private BytesWritable.Comparator comparator =
+      new BytesWritable.Comparator();
+
+    /** constructor */
+    public Comparator() {
+      super(ImmutableBytesWritable.class);
+    }
+
+    /**
+     * @see org.apache.hadoop.io.WritableComparator#compare(byte[], int, int, byte[], int, int)
+     */
+    @Override
+    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
+      return comparator.compare(b1, s1, l1, b2, s2, l2);
+    }
+  }
+
+  static { // register this comparator
+    WritableComparator.define(ImmutableBytesWritable.class, new Comparator());
+  }
+
+  /**
+   * @param array List of byte [].
+   * @return Array of byte [].
+   */
+  public static byte [][] toArray(final List<byte []> array) {
+    // List#toArray doesn't work on lists of byte [].
+    byte[][] results = new byte[array.size()][];
+    for (int i = 0; i < array.size(); i++) {
+      results[i] = array.get(i);
+    }
+    return results;
+  }
+
+  /**
+   * Returns a copy of the bytes referred to by this writable
+   */
+  public byte[] copyBytes() {
+    return Arrays.copyOfRange(bytes, offset, offset+length);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/Reference.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/Reference.java
new file mode 100644
index 0000000..219203c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/Reference.java
@@ -0,0 +1,156 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * A reference to the top or bottom half of a store file.  The file referenced
+ * lives under a different region.  References are made at region split time.
+ *
+ * <p>References work with a special half store file type.  References know how
+ * to write out the reference format in the file system and are what is juggled
+ * when references are mixed in with direct store files.  The half store file
+ * type is used when reading the referred-to file.
+ *
+ * <p>References to store files located in some other region look like
+ * this in the file system:
+ * <code>1278437856009925445.3323223323</code>,
+ * i.e. an id followed by the hash of the referenced region.
+ * Note, a region is itself not splittable if it has instances of store file
+ * references.  References are cleaned up by compactions.
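+ *
+ * <p>A minimal, hypothetical sketch of writing and re-reading a reference
+ * ('fs', 'splitRow' and 'referencePath' are assumed):
+ * <pre>
+ * Reference top = new Reference(splitRow, Reference.Range.top);
+ * top.write(fs, referencePath);           // persist the reference file
+ * Reference readBack = Reference.read(fs, referencePath);
+ * boolean isTop = Reference.isTopFileRegion(readBack.getFileRegion());
+ * </pre>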
+ */
+public class Reference implements Writable {
+  private byte [] splitkey;
+  private Range region;
+
+  /**
+   * For split HStoreFiles, it specifies if the file covers the lower half or
+   * the upper half of the key range
+   */
+  public static enum Range {
+    /** HStoreFile contains upper half of key range */
+    top,
+    /** HStoreFile contains lower half of key range */
+    bottom
+  }
+
+  /**
+   * Constructor
+   * @param splitRow This is row we are splitting around.
+   * @param fr
+   */
+  public Reference(final byte [] splitRow, final Range fr) {
+    this.splitkey = splitRow == null?
+      null: KeyValue.createFirstOnRow(splitRow).getKey();
+    this.region = fr;
+  }
+
+  /**
+   * Used by serializations.
+   */
+  public Reference() {
+    this(null, Range.bottom);
+  }
+
+  /**
+   *
+   * @return Range
+   */
+  public Range getFileRegion() {
+    return this.region;
+  }
+
+  /**
+   * @return splitKey
+   */
+  public byte [] getSplitKey() {
+    return splitkey;
+  }
+
+  /**
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return "" + this.region;
+  }
+
+  // Make it serializable.
+
+  public void write(DataOutput out) throws IOException {
+    // Write true if we're doing top of the file.
+    out.writeBoolean(isTopFileRegion(this.region));
+    Bytes.writeByteArray(out, this.splitkey);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    boolean tmp = in.readBoolean();
+    // If true, set region to top.
+    this.region = tmp? Range.top: Range.bottom;
+    this.splitkey = Bytes.readByteArray(in);
+  }
+
+  public static boolean isTopFileRegion(final Range r) {
+    return r.equals(Range.top);
+  }
+
+  public Path write(final FileSystem fs, final Path p)
+  throws IOException {
+    FSUtils.create(fs, p);
+    FSDataOutputStream out = fs.create(p);
+    try {
+      write(out);
+    } finally {
+      out.close();
+    }
+    return p;
+  }
+
+  /**
+   * Read a Reference from FileSystem.
+   * @param fs
+   * @param p
+   * @return New Reference made from passed <code>p</code>
+   * @throws IOException
+   */
+  public static Reference read(final FileSystem fs, final Path p)
+  throws IOException {
+    FSDataInputStream in = fs.open(p);
+    try {
+      Reference r = new Reference();
+      r.readFields(in);
+      return r;
+    } finally {
+      in.close();
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
new file mode 100644
index 0000000..12a9b68
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/TimeRange.java
@@ -0,0 +1,189 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.io.Writable;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Represents an interval of version timestamps.
+ * <p>
+ * Evaluated according to minStamp <= timestamp < maxStamp
+ * or [minStamp,maxStamp) in interval notation.
+ * <p>
+ * Only used internally; should not be accessed directly by clients.
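+ * <p>
+ * A small, illustrative example of the interval semantics:
+ * <pre>
+ * TimeRange tr = new TimeRange(10L, 20L);   // [10, 20)
+ * tr.withinTimeRange(10L);                  // true  (inclusive lower bound)
+ * tr.withinTimeRange(20L);                  // false (exclusive upper bound)
+ * tr.compare(25L);                          // 1: above the range
+ * </pre>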
+ */
+public class TimeRange implements Writable {
+  private long minStamp = 0L;
+  private long maxStamp = Long.MAX_VALUE;
+  private boolean allTime = false;
+
+  /**
+   * Default constructor.
+   * Represents interval [0, Long.MAX_VALUE) (allTime)
+   */
+  public TimeRange() {
+    allTime = true;
+  }
+
+  /**
+   * Represents interval [minStamp, Long.MAX_VALUE)
+   * @param minStamp the minimum timestamp value, inclusive
+   */
+  public TimeRange(long minStamp) {
+    this.minStamp = minStamp;
+  }
+
+  /**
+   * Represents interval [minStamp, Long.MAX_VALUE)
+   * @param minStamp the minimum timestamp value, inclusive
+   */
+  public TimeRange(byte [] minStamp) {
+    this.minStamp = Bytes.toLong(minStamp);
+  }
+
+  /**
+   * Represents interval [minStamp, maxStamp)
+   * @param minStamp the minimum timestamp, inclusive
+   * @param maxStamp the maximum timestamp, exclusive
+   * @throws IOException
+   */
+  public TimeRange(long minStamp, long maxStamp)
+  throws IOException {
+    if(maxStamp < minStamp) {
+      throw new IOException("maxStamp is smaller than minStamp");
+    }
+    this.minStamp = minStamp;
+    this.maxStamp = maxStamp;
+  }
+
+  /**
+   * Represents interval [minStamp, maxStamp)
+   * @param minStamp the minimum timestamp, inclusive
+   * @param maxStamp the maximum timestamp, exclusive
+   * @throws IOException
+   */
+  public TimeRange(byte [] minStamp, byte [] maxStamp)
+  throws IOException {
+    this(Bytes.toLong(minStamp), Bytes.toLong(maxStamp));
+  }
+
+  /**
+   * @return the smallest timestamp that should be considered
+   */
+  public long getMin() {
+    return minStamp;
+  }
+
+  /**
+   * @return the biggest timestamp that should be considered
+   */
+  public long getMax() {
+    return maxStamp;
+  }
+
+  /**
+   * Check if the specified timestamp is within this TimeRange.
+   * <p>
+   * Returns true if within interval [minStamp, maxStamp), false
+   * if not.
+   * @param bytes timestamp to check
+   * @param offset offset into the bytes
+   * @return true if within TimeRange, false if not
+   */
+  public boolean withinTimeRange(byte [] bytes, int offset) {
+    if (allTime) return true;
+    return withinTimeRange(Bytes.toLong(bytes, offset));
+  }
+
+  /**
+   * Check if the specified timestamp is within this TimeRange.
+   * <p>
+   * Returns true if within interval [minStamp, maxStamp), false
+   * if not.
+   * @param timestamp timestamp to check
+   * @return true if within TimeRange, false if not
+   */
+  public boolean withinTimeRange(long timestamp) {
+    if(allTime) return true;
+    // check that timestamp is within [minStamp, maxStamp)
+    return (minStamp <= timestamp && timestamp < maxStamp);
+  }
+
+  /**
+   * Check if the specified timestamp is within or after this TimeRange.
+   * <p>
+   * Returns true if the timestamp is within or after the interval
+   * [minStamp, maxStamp), i.e. if minStamp is less than or equal to the
+   * timestamp; false if not.
+   * @param timestamp timestamp to check
+   * @return true if within or after TimeRange, false if not
+   */
+  public boolean withinOrAfterTimeRange(long timestamp) {
+    if(allTime) return true;
+    // check if >= minStamp
+    return (timestamp >= minStamp);
+  }
+
+  /**
+   * Compare the timestamp to this timerange.
+   * @param timestamp the timestamp to compare against this range
+   * @return -1 if timestamp is less than timerange,
+   * 0 if timestamp is within timerange,
+   * 1 if timestamp is greater than timerange
+   */
+  public int compare(long timestamp) {
+    if (timestamp < minStamp) {
+      return -1;
+    } else if (timestamp >= maxStamp) {
+      return 1;
+    } else {
+      return 0;
+    }
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("maxStamp=");
+    sb.append(this.maxStamp);
+    sb.append(", minStamp=");
+    sb.append(this.minStamp);
+    return sb.toString();
+  }
+
+  //Writable
+  public void readFields(final DataInput in) throws IOException {
+    this.minStamp = in.readLong();
+    this.maxStamp = in.readLong();
+    this.allTime = in.readBoolean();
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeLong(minStamp);
+    out.writeLong(maxStamp);
+    out.writeBoolean(this.allTime);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/WritableWithSize.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/WritableWithSize.java
new file mode 100644
index 0000000..f8aefa1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/WritableWithSize.java
@@ -0,0 +1,36 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+/**
+ * An optional interface to 'size' writables.
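+ * <p>
+ * A minimal illustrative implementor (hypothetical class; a real implementor
+ * would normally also be a Writable):
+ * <pre>
+ *   class SizedPayload implements WritableWithSize {
+ *     private final byte [] payload = new byte [128];
+ *     public long getWritableSize() {
+ *       return payload.length;  // bytes write() is expected to produce
+ *     }
+ *   }
+ * </pre>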
+ */
+public interface WritableWithSize {
+  /**
+   * Provide a size hint to the caller. write() should ideally
+   * not go beyond this if at all possible.
+   *
+   * You can return 0 if there is no size hint.
+   *
+   * @return the size of the writable
+   */
+  public long getWritableSize();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
new file mode 100644
index 0000000..3ef0780
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
@@ -0,0 +1,56 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.nio.ByteBuffer;
+
+/**
+ * Block cache interface.
+ * TODO: Add filename or hash of filename to block cache key.
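+ * <p>
+ * Sketch of typical use (illustrative only; <code>cache</code> is any
+ * implementation and <code>buf</code> holds the block bytes):
+ * <pre>
+ *   cache.cacheBlock("somefile_0", buf);                   // default: not in-memory
+ *   ByteBuffer cached = cache.getBlock("somefile_0", true);
+ *   if (cached == null) {
+ *     // cache miss; read the block from the HFile instead
+ *   }
+ * </pre>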
+ */
+public interface BlockCache {
+  /**
+   * Add block to cache.
+   * @param blockName Zero-based file block number.
+   * @param buf The block contents wrapped in a ByteBuffer.
+   * @param inMemory Whether block should be treated as in-memory
+   */
+  public void cacheBlock(String blockName, ByteBuffer buf, boolean inMemory);
+
+  /**
+   * Add block to cache (defaults to not in-memory).
+   * @param blockName Zero-based file block number.
+   * @param buf The block contents wrapped in a ByteBuffer.
+   */
+  public void cacheBlock(String blockName, ByteBuffer buf);
+
+  /**
+   * Fetch block from cache.
+   * @param blockName Block number to fetch.
+   * @param caching Whether this request has caching enabled (used for stats)
+   * @return Block or null if block is not in the cache.
+   */
+  public ByteBuffer getBlock(String blockName, boolean caching);
+
+  /**
+   * Shutdown the cache.
+   */
+  public void shutdown();
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/BoundedRangeFileInputStream.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/BoundedRangeFileInputStream.java
new file mode 100644
index 0000000..f7da04d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/BoundedRangeFileInputStream.java
@@ -0,0 +1,149 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+
+/**
+ * BoundedRangeFileInputStream abstracts a contiguous region of a Hadoop
+ * FSDataInputStream as a regular input stream. One can create multiple
+ * BoundedRangeFileInputStream on top of the same FSDataInputStream and they
+ * would not interfere with each other.
+ * Copied from hadoop-3315 tfile.
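+ * <p>
+ * Illustrative use (a sketch only; <code>fsdis</code>, <code>offset</code> and
+ * <code>length</code> are assumed to come from the caller, and the class is
+ * package-private, so this only works from within this package):
+ * <pre>
+ *   InputStream block =
+ *     new BoundedRangeFileInputStream(fsdis, offset, length, true);
+ *   // All reads are confined to [offset, offset + length) of fsdis.
+ * </pre>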
+ */
+class BoundedRangeFileInputStream extends InputStream {
+
+  private FSDataInputStream in;
+  private long pos;
+  private long end;
+  private long mark;
+  private final byte[] oneByte = new byte[1];
+  private final boolean pread;
+
+  /**
+   * Constructor
+   *
+   * @param in
+   *          The FSDataInputStream we connect to.
+   * @param offset
+   *          Beginning offset of the region.
+   * @param length
+   *          Length of the region.
+   * @param pread If true, use Filesystem positional read rather than seek+read.
+   *
+   *          The actual length of the region may be smaller if (off_begin +
+   *          length) goes beyond the end of FS input stream.
+   */
+  public BoundedRangeFileInputStream(FSDataInputStream in, long offset,
+      long length, final boolean pread) {
+    if (offset < 0 || length < 0) {
+      throw new IndexOutOfBoundsException("Invalid offset/length: " + offset
+          + "/" + length);
+    }
+
+    this.in = in;
+    this.pos = offset;
+    this.end = offset + length;
+    this.mark = -1;
+    this.pread = pread;
+  }
+
+  @Override
+  public int available() throws IOException {
+    int avail = in.available();
+    if (pos + avail > end) {
+      avail = (int) (end - pos);
+    }
+
+    return avail;
+  }
+
+  @Override
+  public int read() throws IOException {
+    int ret = read(oneByte);
+    if (ret == 1) return oneByte[0] & 0xff;
+    return -1;
+  }
+
+  @Override
+  public int read(byte[] b) throws IOException {
+    return read(b, 0, b.length);
+  }
+
+  @Override
+  public int read(byte[] b, int off, int len) throws IOException {
+    if ((off | len | (off + len) | (b.length - (off + len))) < 0) {
+      throw new IndexOutOfBoundsException();
+    }
+
+    int n = (int) Math.min(Integer.MAX_VALUE, Math.min(len, (end - pos)));
+    if (n == 0) return -1;
+    int ret = 0;
+    if (this.pread) {
+      ret = in.read(pos, b, off, n);
+    } else {
+      synchronized (in) {
+        in.seek(pos);
+        ret = in.read(b, off, n);
+      }
+    }
+    if (ret < 0) {
+      end = pos;
+      return -1;
+    }
+    pos += ret;
+    return ret;
+  }
+
+  @Override
+  /*
+   * We may skip beyond the end of the file.
+   */
+  public long skip(long n) throws IOException {
+    long len = Math.min(n, end - pos);
+    pos += len;
+    return len;
+  }
+
+  @Override
+  public void mark(int readlimit) {
+    mark = pos;
+  }
+
+  @Override
+  public void reset() throws IOException {
+    if (mark < 0) throw new IOException("Resetting to invalid mark");
+    pos = mark;
+  }
+
+  @Override
+  public boolean markSupported() {
+    return true;
+  }
+
+  @Override
+  public void close() {
+    // Invalidate the state of the stream.
+    in = null;
+    pos = end;
+    mark = -1;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlock.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlock.java
new file mode 100644
index 0000000..fa0a79d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlock.java
@@ -0,0 +1,111 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+
+/**
+ * Represents an entry in the {@link LruBlockCache}.
+ *
+ * <p>Makes the block memory-aware with {@link HeapSize} and Comparable
+ * to sort by access time for the LRU.  It also takes care of priority by
+ * either instantiating as in-memory or handling the transition from single
+ * to multiple access.
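+ *
+ * <p>Priority transitions, illustrated (values are hypothetical;
+ * <code>buf</code> is any ByteBuffer):
+ * <pre>
+ *   CachedBlock cb = new CachedBlock("block-0", buf, 1L);        // SINGLE
+ *   cb.access(2L);                                               // promoted to MULTI
+ *   CachedBlock mem = new CachedBlock("block-1", buf, 1L, true); // MEMORY, stays MEMORY
+ * </pre>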
+ */
+public class CachedBlock implements HeapSize, Comparable<CachedBlock> {
+
+  public final static long PER_BLOCK_OVERHEAD = ClassSize.align(
+    ClassSize.OBJECT + (3 * ClassSize.REFERENCE) + (2 * Bytes.SIZEOF_LONG) +
+    ClassSize.STRING + ClassSize.BYTE_BUFFER);
+
+  static enum BlockPriority {
+    /**
+     * Accessed a single time (used for scan-resistance)
+     */
+    SINGLE,
+    /**
+     * Accessed multiple times
+     */
+    MULTI,
+    /**
+     * Block from in-memory store
+     */
+    MEMORY
+  };
+
+  private final String blockName;
+  private final ByteBuffer buf;
+  private volatile long accessTime;
+  private long size;
+  private BlockPriority priority;
+
+  public CachedBlock(String blockName, ByteBuffer buf, long accessTime) {
+    this(blockName, buf, accessTime, false);
+  }
+
+  public CachedBlock(String blockName, ByteBuffer buf, long accessTime,
+      boolean inMemory) {
+    this.blockName = blockName;
+    this.buf = buf;
+    this.accessTime = accessTime;
+    this.size = ClassSize.align(blockName.length()) +
+    ClassSize.align(buf.capacity()) + PER_BLOCK_OVERHEAD;
+    if(inMemory) {
+      this.priority = BlockPriority.MEMORY;
+    } else {
+      this.priority = BlockPriority.SINGLE;
+    }
+  }
+
+  /**
+   * Block has been accessed.  Update its local access time.
+   */
+  public void access(long accessTime) {
+    this.accessTime = accessTime;
+    if(this.priority == BlockPriority.SINGLE) {
+      this.priority = BlockPriority.MULTI;
+    }
+  }
+
+  public long heapSize() {
+    return size;
+  }
+
+  public int compareTo(CachedBlock that) {
+    if(this.accessTime == that.accessTime) return 0;
+    return this.accessTime < that.accessTime ? 1 : -1;
+  }
+
+  public ByteBuffer getBuffer() {
+    return this.buf;
+  }
+
+  public String getName() {
+    return this.blockName;
+  }
+
+  public BlockPriority getPriority() {
+    return this.priority;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlockQueue.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlockQueue.java
new file mode 100644
index 0000000..963bc8f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/CachedBlockQueue.java
@@ -0,0 +1,104 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.LinkedList;
+import java.util.PriorityQueue;
+
+import org.apache.hadoop.hbase.io.HeapSize;
+
+/**
+ * A memory-bound queue that will grow until an element brings
+ * total size >= maxSize.  From then on, only entries that are sorted larger
+ * than the smallest current entry will be inserted/replaced.
+ *
+ * <p>Use this when you want to find the largest elements (according to their
+ * ordering, not their heap size) that consume as close to the specified
+ * maxSize as possible.  Default behavior is to grow just above rather than
+ * just below specified max.
+ *
+ * <p>Object used in this queue must implement {@link HeapSize} as well as
+ * {@link Comparable}.
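+ *
+ * <p>Illustrative use (a sketch; <code>candidateBlocks</code> is a hypothetical
+ * collection of cached blocks and the sizes are arbitrary):
+ * <pre>
+ *   CachedBlockQueue q = new CachedBlockQueue(1024 * 1024, 64 * 1024);
+ *   for (CachedBlock cb : candidateBlocks) q.add(cb);
+ *   LinkedList&lt;CachedBlock&gt; largest = q.get();  // descending order
+ * </pre>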
+ */
+public class CachedBlockQueue implements HeapSize {
+
+  private PriorityQueue<CachedBlock> queue;
+
+  private long heapSize;
+  private long maxSize;
+
+  /**
+   * @param maxSize the target size of elements in the queue
+   * @param blockSize expected average size of blocks
+   */
+  public CachedBlockQueue(long maxSize, long blockSize) {
+    int initialSize = (int)(maxSize / blockSize);
+    if(initialSize == 0) initialSize++;
+    queue = new PriorityQueue<CachedBlock>(initialSize);
+    heapSize = 0;
+    this.maxSize = maxSize;
+  }
+
+  /**
+   * Attempt to add the specified cached block to this queue.
+   *
+   * <p>If the queue is smaller than the max size, or if the specified element
+   * is ordered before the smallest element in the queue, the element will be
+   * added to the queue.  Otherwise, there is no side effect of this call.
+   * @param cb block to try to add to the queue
+   */
+  public void add(CachedBlock cb) {
+    if(heapSize < maxSize) {
+      queue.add(cb);
+      heapSize += cb.heapSize();
+    } else {
+      CachedBlock head = queue.peek();
+      if(cb.compareTo(head) > 0) {
+        heapSize += cb.heapSize();
+        heapSize -= head.heapSize();
+        if(heapSize > maxSize) {
+          queue.poll();
+        } else {
+          heapSize += head.heapSize();
+        }
+        queue.add(cb);
+      }
+    }
+  }
+
+  /**
+   * @return a sorted List of all elements in this queue, in descending order
+   */
+  public LinkedList<CachedBlock> get() {
+    LinkedList<CachedBlock> blocks = new LinkedList<CachedBlock>();
+    while (!queue.isEmpty()) {
+      blocks.addFirst(queue.poll());
+    }
+    return blocks;
+  }
+
+  /**
+   * Total size of all elements in this queue.
+   * @return size of all elements currently in queue, in bytes
+   */
+  public long heapSize() {
+    return heapSize;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/Compression.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/Compression.java
new file mode 100644
index 0000000..a5f97fd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/Compression.java
@@ -0,0 +1,279 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.io.compress.CodecPool;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.CompressionInputStream;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.compress.DefaultCodec;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Compression-related utilities: codec lookup and compressed stream creation.
+ * Copied from hadoop-3315 tfile.
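+ * <p>
+ * Illustrative codec lookup and stream creation (a sketch only;
+ * <code>rawOut</code> is any OutputStream and error handling is omitted):
+ * <pre>
+ *   Compression.Algorithm algo = Compression.getCompressionAlgorithmByName("gz");
+ *   Compressor c = algo.getCompressor();
+ *   OutputStream os = algo.createCompressionStream(rawOut, c, 0);
+ *   // ... write compressed data to os, then flush/close ...
+ *   algo.returnCompressor(c);
+ * </pre>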
+ */
+public final class Compression {
+  static final Log LOG = LogFactory.getLog(Compression.class);
+
+  /**
+   * Prevent the instantiation of class.
+   */
+  private Compression() {
+    super();
+  }
+
+  static class FinishOnFlushCompressionStream extends FilterOutputStream {
+    public FinishOnFlushCompressionStream(CompressionOutputStream cout) {
+      super(cout);
+    }
+
+    @Override
+    public void write(byte b[], int off, int len) throws IOException {
+      out.write(b, off, len);
+    }
+
+    @Override
+    public void flush() throws IOException {
+      CompressionOutputStream cout = (CompressionOutputStream) out;
+      cout.finish();
+      cout.flush();
+      cout.resetState();
+    }
+  }
+
+  /**
+   * Compression algorithms. The ordinal of these cannot change or else you
+   * risk breaking all existing HFiles out there.  Even the ones that are
+   * not compressed! (They use the NONE algorithm)
+   */
+  public static enum Algorithm {
+    LZO("lzo") {
+      // Use base type to avoid compile-time dependencies.
+      private transient CompressionCodec lzoCodec;
+
+      @Override
+      CompressionCodec getCodec() {
+        if (lzoCodec == null) {
+          Configuration conf = new Configuration();
+          conf.setBoolean("hadoop.native.lib", true);
+          try {
+            Class<?> externalCodec =
+                ClassLoader.getSystemClassLoader().loadClass("com.hadoop.compression.lzo.LzoCodec");
+            lzoCodec = (CompressionCodec) ReflectionUtils.newInstance(externalCodec, conf);
+          } catch (ClassNotFoundException e) {
+            throw new RuntimeException(e);
+          }
+        }
+        return lzoCodec;
+      }
+    },
+    GZ("gz") {
+      private transient GzipCodec codec;
+
+      @Override
+      DefaultCodec getCodec() {
+        if (codec == null) {
+          Configuration conf = new Configuration();
+          conf.setBoolean("hadoop.native.lib", true);
+          codec = new GzipCodec();
+          codec.setConf(conf);
+        }
+
+        return codec;
+      }
+    },
+
+    NONE("none") {
+      @Override
+      DefaultCodec getCodec() {
+        return null;
+      }
+
+      @Override
+      public synchronized InputStream createDecompressionStream(
+          InputStream downStream, Decompressor decompressor,
+          int downStreamBufferSize) throws IOException {
+        if (downStreamBufferSize > 0) {
+          return new BufferedInputStream(downStream, downStreamBufferSize);
+        }
+        // else {
+        //   // Make sure we bypass FSInputChecker buffer.
+        //   return new BufferedInputStream(downStream, 1024);
+        // }
+        return downStream;
+      }
+
+      @Override
+      public synchronized OutputStream createCompressionStream(
+          OutputStream downStream, Compressor compressor,
+          int downStreamBufferSize) throws IOException {
+        if (downStreamBufferSize > 0) {
+          return new BufferedOutputStream(downStream, downStreamBufferSize);
+        }
+
+        return downStream;
+      }
+    };
+
+    private final String compressName;
+    // data input buffer size to absorb small reads from application.
+    private static final int DATA_IBUF_SIZE = 1 * 1024;
+    // data output buffer size to absorb small writes from application.
+    private static final int DATA_OBUF_SIZE = 4 * 1024;
+
+    Algorithm(String name) {
+      this.compressName = name;
+    }
+
+    abstract CompressionCodec getCodec();
+
+    public InputStream createDecompressionStream(
+        InputStream downStream, Decompressor decompressor,
+        int downStreamBufferSize) throws IOException {
+      CompressionCodec codec = getCodec();
+      // Set the internal buffer size to read from down stream.
+      if (downStreamBufferSize > 0) {
+        Configurable c = (Configurable) codec;
+        c.getConf().setInt("io.file.buffer.size", downStreamBufferSize);
+      }
+      CompressionInputStream cis =
+          codec.createInputStream(downStream, decompressor);
+      BufferedInputStream bis2 = new BufferedInputStream(cis, DATA_IBUF_SIZE);
+      return bis2;
+
+    }
+
+    public OutputStream createCompressionStream(
+        OutputStream downStream, Compressor compressor, int downStreamBufferSize)
+        throws IOException {
+      CompressionCodec codec = getCodec();
+      OutputStream bos1 = null;
+      if (downStreamBufferSize > 0) {
+        bos1 = new BufferedOutputStream(downStream, downStreamBufferSize);
+      }
+      else {
+        bos1 = downStream;
+      }
+      Configurable c = (Configurable) codec;
+      c.getConf().setInt("io.file.buffer.size", 32 * 1024);
+      CompressionOutputStream cos =
+          codec.createOutputStream(bos1, compressor);
+      BufferedOutputStream bos2 =
+          new BufferedOutputStream(new FinishOnFlushCompressionStream(cos),
+              DATA_OBUF_SIZE);
+      return bos2;
+    }
+
+    public Compressor getCompressor() {
+      CompressionCodec codec = getCodec();
+      if (codec != null) {
+        Compressor compressor = CodecPool.getCompressor(codec);
+        if (compressor != null) {
+          if (compressor.finished()) {
+            // Somebody returns the compressor to CodecPool but is still using
+            // it.
+            LOG.warn("Compressor obtained from CodecPool is already finished()");
+            // throw new AssertionError(
+            // "Compressor obtained from CodecPool is already finished()");
+          }
+          compressor.reset();
+        }
+        return compressor;
+      }
+      return null;
+    }
+
+    public void returnCompressor(Compressor compressor) {
+      if (compressor != null) {
+        CodecPool.returnCompressor(compressor);
+      }
+    }
+
+    public Decompressor getDecompressor() {
+      CompressionCodec codec = getCodec();
+      if (codec != null) {
+        Decompressor decompressor = CodecPool.getDecompressor(codec);
+        if (decompressor != null) {
+          if (decompressor.finished()) {
+            // Somebody returns the decompressor to CodecPool but is still using
+            // it.
+            LOG.warn("Decompressor obtained from CodecPool is already finished()");
+            // throw new AssertionError(
+            // "Decompressor obtained from CodecPool is already finished()");
+          }
+          decompressor.reset();
+        }
+        return decompressor;
+      }
+
+      return null;
+    }
+
+    public void returnDecompressor(Decompressor decompressor) {
+      if (decompressor != null) {
+        CodecPool.returnDecompressor(decompressor);
+      }
+    }
+
+    public String getName() {
+      return compressName;
+    }
+  }
+
+  public static Algorithm getCompressionAlgorithmByName(String compressName) {
+    Algorithm[] algos = Algorithm.class.getEnumConstants();
+
+    for (Algorithm a : algos) {
+      if (a.getName().equals(compressName)) {
+        return a;
+      }
+    }
+
+    throw new IllegalArgumentException(
+        "Unsupported compression algorithm name: " + compressName);
+  }
+
+  static String[] getSupportedAlgorithms() {
+    Algorithm[] algos = Algorithm.class.getEnumConstants();
+
+    String[] ret = new String[algos.length];
+    int i = 0;
+    for (Algorithm a : algos) {
+      ret[i++] = a.getName();
+    }
+
+    return ret;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
new file mode 100644
index 0000000..e34c334
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
@@ -0,0 +1,1989 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.BufferedInputStream;
+import java.io.Closeable;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.hbase.KeyValue.KeyComparator;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.regionserver.TimeRangeTracker;
+import org.apache.hadoop.hbase.util.BloomFilter;
+import org.apache.hadoop.hbase.util.ByteBloomFilter;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.hbase.util.CompressionTest;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.compress.Compressor;
+import org.apache.hadoop.io.compress.Decompressor;
+
+/**
+ * File format for hbase.
+ * A file of sorted key/value pairs. Both keys and values are byte arrays.
+ * <p>
+ * The memory footprint of a HFile includes the following (below is taken from the
+ * <a
+ * href="https://issues.apache.org/jira/browse/HADOOP-3315">TFile</a> documentation
+ * but applies also to HFile):
+ * <ul>
+ * <li>Some constant overhead of reading or writing a compressed block.
+ * <ul>
+ * <li>Each compressed block requires one compression/decompression codec for
+ * I/O.
+ * <li>Temporary space to buffer the key.
+ * <li>Temporary space to buffer the value.
+ * </ul>
+ * <li>HFile index, which is proportional to the total number of Data Blocks.
+ * The total amount of memory needed to hold the index can be estimated as
+ * (56+AvgKeySize)*NumBlocks.
+ * </ul>
+ * Suggestions on performance optimization.
+ * <ul>
+ * <li>Minimum block size. We recommend a setting of minimum block size between
+ * 8KB and 1MB for general usage. A larger block size is preferred if files are
+ * primarily for sequential access. However, it would lead to inefficient random
+ * access (because there is more data to decompress). Smaller blocks are good
+ * for random access, but require more memory to hold the block index, and may
+ * be slower to create (because we must flush the compressor stream at the
+ * conclusion of each data block, which leads to an FS I/O flush). Further, due
+ * to the internal caching in Compression codec, the smallest possible block
+ * size would be around 20KB-30KB.
+ * <li>The current implementation does not offer true multi-threading for
+ * reading. The implementation uses FSDataInputStream seek()+read(), which is
+ * shown to be much faster than positioned-read call in single thread mode.
+ * However, it also means that if multiple threads attempt to access the same
+ * HFile (using multiple scanners) simultaneously, the actual I/O is carried out
+ * sequentially even if they access different DFS blocks (Reexamine! pread seems
+ * to be 10% faster than seek+read in my testing -- stack).
+ * <li>Compression codec. Use "none" if the data is not very compressible (by
+ * compressible, I mean a compression ratio of at least 2:1). Generally, use "lzo"
+ * as the starting point for experimenting. "gz" offers a slightly better
+ * compression ratio than "lzo", but requires 4x the CPU to compress and 2x the
+ * CPU to decompress, compared to "lzo".
+ * </ul>
+ *
+ * For more on the background behind HFile, see <a
+ * href="https://issues.apache.org/jira/browse/HBASE-61">HBASE-61</a>.
+ * <p>
+ * File is made of data blocks followed by meta data blocks (if any), a fileinfo
+ * block, data block index, meta data block index, and a fixed size trailer
+ * which records the offsets at which file changes content type.
+ * <pre>&lt;data blocks>&lt;meta blocks>&lt;fileinfo>&lt;data index>&lt;meta index>&lt;trailer></pre>
+ * Each block has a bit of magic at its start.  Block are comprised of
+ * key/values.  In data blocks, they are both byte arrays.  Metadata blocks are
+ * a String key and a byte array value.  An empty file looks like this:
+ * <pre>&lt;fileinfo>&lt;trailer></pre>.  That is, there are neither data nor meta
+ * blocks present.
+ * <p>
+ * TODO: Do scanners need to be able to take a start and end row?
+ * TODO: Should BlockIndex know the name of its file?  Should it have a Path
+ * that points at its file say for the case where an index lives apart from
+ * an HFile instance?
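+ * <p>
+ * Illustrative write/read round trip (a sketch only; <code>fs</code> and
+ * <code>path</code> are assumed to exist and error handling is omitted):
+ * <pre>
+ *   HFile.Writer w = new HFile.Writer(fs, path);
+ *   w.append(Bytes.toBytes("key1"), Bytes.toBytes("value1"));
+ *   w.close();
+ *
+ *   HFile.Reader r = new HFile.Reader(fs, path, null, false);
+ *   r.loadFileInfo();
+ *   HFileScanner scanner = r.getScanner(false, true);
+ *   scanner.seekTo(Bytes.toBytes("key1"));
+ *   r.close();
+ * </pre>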
+ */
+public class HFile {
+  static final Log LOG = LogFactory.getLog(HFile.class);
+
+  /* These values are more or less arbitrary, and they are used as a
+   * form of check to make sure the file isn't completely corrupt.
+   */
+  final static byte [] DATABLOCKMAGIC =
+    {'D', 'A', 'T', 'A', 'B', 'L', 'K', 42 };
+  final static byte [] INDEXBLOCKMAGIC =
+    { 'I', 'D', 'X', 'B', 'L', 'K', 41, 43 };
+  final static byte [] METABLOCKMAGIC =
+    { 'M', 'E', 'T', 'A', 'B', 'L', 'K', 99 };
+  final static byte [] TRAILERBLOCKMAGIC =
+    { 'T', 'R', 'A', 'B', 'L', 'K', 34, 36 };
+
+  /**
+   * Maximum length of key in HFile.
+   */
+  public final static int MAXIMUM_KEY_LENGTH = Integer.MAX_VALUE;
+
+  /**
+   * Default blocksize for hfile.
+   */
+  public final static int DEFAULT_BLOCKSIZE = 64 * 1024;
+
+  /**
+   * Default compression: none.
+   */
+  public final static Compression.Algorithm DEFAULT_COMPRESSION_ALGORITHM =
+    Compression.Algorithm.NONE;
+  /** Default compression name: none. */
+  public final static String DEFAULT_COMPRESSION =
+    DEFAULT_COMPRESSION_ALGORITHM.getName();
+
+  // For measuring latency of "typical" reads and writes
+  private static volatile long readOps;
+  private static volatile long readTime;
+  private static volatile long writeOps;
+  private static volatile long writeTime;
+
+  public static final long getReadOps() {
+    long ret = readOps;
+    readOps = 0;
+    return ret;
+  }
+
+  public static final long getReadTime() {
+    long ret = readTime;
+    readTime = 0;
+    return ret;
+  }
+
+  public static final long getWriteOps() {
+    long ret = writeOps;
+    writeOps = 0;
+    return ret;
+  }
+
+  public static final long getWriteTime() {
+    long ret = writeTime;
+    writeTime = 0;
+    return ret;
+  }
+
+  /**
+   * HFile Writer.
+   */
+  public static class Writer implements Closeable {
+    // FileSystem stream to write on.
+    private FSDataOutputStream outputStream;
+    // True if we opened the <code>outputStream</code> (and so will close it).
+    private boolean closeOutputStream;
+
+    // Name for this object used when logging or in toString.  Is either
+    // the result of a toString on stream or else toString of passed file Path.
+    protected String name;
+
+    // Total uncompressed bytes, maybe calculate a compression ratio later.
+    private long totalBytes = 0;
+
+    // Total # of key/value entries, i.e. how many times append() was called.
+    private int entryCount = 0;
+
+    // Used calculating average key and value lengths.
+    private long keylength = 0;
+    private long valuelength = 0;
+
+    // Used to ensure we write in order.
+    private final RawComparator<byte []> comparator;
+
+    // A stream made per block written.
+    private DataOutputStream out;
+
+    // Number of uncompressed bytes per block.  Reinitialized when we start
+    // new block.
+    private int blocksize;
+
+    // Offset where the current block began.
+    private long blockBegin;
+
+    // First key in a block (Not first key in file).
+    private byte [] firstKey = null;
+
+    // Key previously appended.  Becomes the last key in the file.
+    private byte [] lastKeyBuffer = null;
+    private int lastKeyOffset = -1;
+    private int lastKeyLength = -1;
+
+    // See {@link BlockIndex}. Below four fields are used to write the block
+    // index.
+    ArrayList<byte[]> blockKeys = new ArrayList<byte[]>();
+    // Block offset in backing stream.
+    ArrayList<Long> blockOffsets = new ArrayList<Long>();
+    // Raw (decompressed) data size.
+    ArrayList<Integer> blockDataSizes = new ArrayList<Integer>();
+
+    // Meta block system.
+    private ArrayList<byte []> metaNames = new ArrayList<byte []>();
+    private ArrayList<Writable> metaData = new ArrayList<Writable>();
+
+    // Used compression.  Used even if no compression -- 'none'.
+    private final Compression.Algorithm compressAlgo;
+    private Compressor compressor;
+
+    // Special datastructure to hold fileinfo.
+    private FileInfo fileinfo = new FileInfo();
+
+    // May be null if we were passed a stream.
+    private Path path = null;
+
+    /**
+     * Constructor that uses all defaults for compression and block size.
+     * @param fs
+     * @param path
+     * @throws IOException
+     */
+    public Writer(FileSystem fs, Path path)
+    throws IOException {
+      this(fs, path, DEFAULT_BLOCKSIZE, (Compression.Algorithm) null, null);
+    }
+
+    /**
+     * Constructor that takes a Path.
+     * @param fs
+     * @param path
+     * @param blocksize
+     * @param compress
+     * @param comparator
+     * @throws IOException
+     */
+    public Writer(FileSystem fs, Path path, int blocksize,
+      String compress, final KeyComparator comparator)
+    throws IOException {
+      this(fs, path, blocksize,
+        compress == null? DEFAULT_COMPRESSION_ALGORITHM:
+          Compression.getCompressionAlgorithmByName(compress),
+        comparator);
+    }
+
+    /**
+     * Constructor that takes a Path.
+     * @param fs
+     * @param path
+     * @param blocksize
+     * @param compress
+     * @param comparator
+     * @throws IOException
+     */
+    public Writer(FileSystem fs, Path path, int blocksize,
+      Compression.Algorithm compress,
+      final KeyComparator comparator)
+    throws IOException {
+      this(fs.create(path), blocksize, compress, comparator);
+      this.closeOutputStream = true;
+      this.name = path.toString();
+      this.path = path;
+    }
+
+    /**
+     * Constructor that takes a stream.
+     * @param ostream Stream to use.
+     * @param blocksize
+     * @param compress
+     * @param c RawComparator to use.
+     * @throws IOException
+     */
+    public Writer(final FSDataOutputStream ostream, final int blocksize,
+      final String  compress, final KeyComparator c)
+    throws IOException {
+      this(ostream, blocksize,
+        Compression.getCompressionAlgorithmByName(compress), c);
+    }
+
+    /**
+     * Constructor that takes a stream.
+     * @param ostream Stream to use.
+     * @param blocksize
+     * @param compress
+     * @param c
+     * @throws IOException
+     */
+    public Writer(final FSDataOutputStream ostream, final int blocksize,
+      final Compression.Algorithm  compress, final KeyComparator c)
+    throws IOException {
+      this.outputStream = ostream;
+      this.closeOutputStream = false;
+      this.blocksize = blocksize;
+      this.comparator = c == null? Bytes.BYTES_RAWCOMPARATOR: c;
+      this.name = this.outputStream.toString();
+      this.compressAlgo = compress == null?
+        DEFAULT_COMPRESSION_ALGORITHM: compress;
+    }
+
+    /*
+     * If at block boundary, opens new block.
+     * @throws IOException
+     */
+    private void checkBlockBoundary() throws IOException {
+      if (this.out != null && this.out.size() < blocksize) return;
+      finishBlock();
+      newBlock();
+    }
+
+    /*
+     * Do the cleanup if a current block.
+     * @throws IOException
+     */
+    private void finishBlock() throws IOException {
+      if (this.out == null) return;
+      long now = System.currentTimeMillis();
+
+      int size = releaseCompressingStream(this.out);
+      this.out = null;
+      blockKeys.add(firstKey);
+      blockOffsets.add(Long.valueOf(blockBegin));
+      blockDataSizes.add(Integer.valueOf(size));
+      this.totalBytes += size;
+
+      writeTime += System.currentTimeMillis() - now;
+      writeOps++;
+    }
+
+    /*
+     * Ready a new block for writing.
+     * @throws IOException
+     */
+    private void newBlock() throws IOException {
+      // This is where the next block begins.
+      blockBegin = outputStream.getPos();
+      this.out = getCompressingStream();
+      this.out.write(DATABLOCKMAGIC);
+      firstKey = null;
+    }
+
+    /*
+     * Sets up a compressor and creates a compression stream on top of
+     * this.outputStream.  Get one per block written.
+     * @return A compressing stream; if 'none' compression, returned stream
+     * does not compress.
+     * @throws IOException
+     * @see {@link #releaseCompressingStream(DataOutputStream)}
+     */
+    private DataOutputStream getCompressingStream() throws IOException {
+      this.compressor = compressAlgo.getCompressor();
+      // Get new DOS compression stream.  In tfile, the DOS is not closed,
+      // just finished, and that seems to be fine over there.  TODO: Check
+      // no memory retention of the DOS.  Should I disable the 'flush' on the
+      // DOS as the BCFile over in tfile does?  It wants to make it so flushes
+      // don't go through to the underlying compressed stream.  Flush on the
+      // compressed downstream should be only when done.  I was going to but
+      // looks like when we call flush in here, its legitimate flush that
+      // should go through to the compressor.
+      OutputStream os =
+        this.compressAlgo.createCompressionStream(this.outputStream,
+        this.compressor, 0);
+      return new DataOutputStream(os);
+    }
+
+    /*
+     * Let go of block compressor and compressing stream gotten in call
+     * {@link #getCompressingStream}.
+     * @param dos
+     * @return How much was written on this stream since it was taken out.
+     * @see #getCompressingStream()
+     * @throws IOException
+     */
+    private int releaseCompressingStream(final DataOutputStream dos)
+    throws IOException {
+      dos.flush();
+      this.compressAlgo.returnCompressor(this.compressor);
+      this.compressor = null;
+      return dos.size();
+    }
+
+    /**
+     * Add a meta block to the end of the file. Call before close().
+     * Metadata blocks are expensive.  Fill one with a bunch of serialized data
+     * rather than do a metadata block per metadata instance.  If metadata is
+     * small, consider adding to file info using
+     * {@link #appendFileInfo(byte[], byte[])}
+     * @param metaBlockName name of the block
+     * @param content will call readFields to get data later (DO NOT REUSE)
+     */
+    public void appendMetaBlock(String metaBlockName, Writable content) {
+      byte[] key = Bytes.toBytes(metaBlockName);
+      int i;
+      for (i = 0; i < metaNames.size(); ++i) {
+        // stop when the current key is greater than our own
+        byte[] cur = metaNames.get(i);
+        if (Bytes.BYTES_RAWCOMPARATOR.compare(cur, 0, cur.length, 
+            key, 0, key.length) > 0) {
+          break;
+        }
+      }
+      metaNames.add(i, key);
+      metaData.add(i, content);
+    }
+
+    /**
+     * Add to the file info.  Added key value can be gotten out of the return
+     * from {@link Reader#loadFileInfo()}.
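+     * <p>Illustrative (the key and value names are hypothetical and
+     * <code>w</code> is an open Writer; keys starting with the reserved
+     * prefix are rejected):
+     * <pre>
+     *   w.appendFileInfo(Bytes.toBytes("someKey"), Bytes.toBytes("someValue"));
+     * </pre>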
+     * @param k Key
+     * @param v Value
+     * @throws IOException
+     */
+    public void appendFileInfo(final byte [] k, final byte [] v)
+    throws IOException {
+      appendFileInfo(this.fileinfo, k, v, true);
+    }
+
+    static FileInfo appendFileInfo(FileInfo fi, final byte [] k, final byte [] v,
+      final boolean checkPrefix)
+    throws IOException {
+      if (k == null || v == null) {
+        throw new NullPointerException("Neither key nor value may be null");
+      }
+      if (checkPrefix &&
+          Bytes.startsWith(k, FileInfo.RESERVED_PREFIX_BYTES)) {
+        throw new IOException("Keys with a " + FileInfo.RESERVED_PREFIX +
+          " are reserved");
+      }
+      fi.put(k, v);
+      return fi;
+    }
+
+    /**
+     * @return Path or null if we were passed a stream rather than a Path.
+     */
+    public Path getPath() {
+      return this.path;
+    }
+
+    @Override
+    public String toString() {
+      return "writer=" + this.name + ", compression=" +
+        this.compressAlgo.getName();
+    }
+
+    /**
+     * Add key/value to file.
+     * Keys must be added in an order that agrees with the Comparator passed
+     * on construction.
+     * @param kv KeyValue to add.  Cannot be empty or null.
+     * @throws IOException
+     */
+    public void append(final KeyValue kv)
+    throws IOException {
+      append(kv.getBuffer(), kv.getKeyOffset(), kv.getKeyLength(),
+        kv.getBuffer(), kv.getValueOffset(), kv.getValueLength());
+    }
+
+    /**
+     * Add key/value to file.
+     * Keys must be added in an order that agrees with the Comparator passed
+     * on construction.
+     * @param key Key to add.  Cannot be empty or null.
+     * @param value Value to add.  Cannot be empty or null.
+     * @throws IOException
+     */
+    public void append(final byte [] key, final byte [] value)
+    throws IOException {
+      append(key, 0, key.length, value, 0, value.length);
+    }
+
+    /**
+     * Add key/value to file.
+     * Keys must be added in an order that agrees with the Comparator passed
+     * on construction.
+     * @param key
+     * @param koffset
+     * @param klength
+     * @param value
+     * @param voffset
+     * @param vlength
+     * @throws IOException
+     */
+    private void append(final byte [] key, final int koffset, final int klength,
+        final byte [] value, final int voffset, final int vlength)
+    throws IOException {
+      boolean dupKey = checkKey(key, koffset, klength);
+      checkValue(value, voffset, vlength);
+      if (!dupKey) {
+        checkBlockBoundary();
+      }
+      // Write length of key and value and then actual key and value bytes.
+      this.out.writeInt(klength);
+      this.keylength += klength;
+      this.out.writeInt(vlength);
+      this.valuelength += vlength;
+      this.out.write(key, koffset, klength);
+      this.out.write(value, voffset, vlength);
+      // Are we the first key in this block?
+      if (this.firstKey == null) {
+        // Copy the key.
+        this.firstKey = new byte [klength];
+        System.arraycopy(key, koffset, this.firstKey, 0, klength);
+      }
+      this.lastKeyBuffer = key;
+      this.lastKeyOffset = koffset;
+      this.lastKeyLength = klength;
+      this.entryCount ++;
+    }
+
+    /*
+     * @param key Key to check.
+     * @return the flag of duplicate Key or not
+     * @throws IOException
+     */
+    private boolean checkKey(final byte [] key, final int offset, final int length)
+    throws IOException {
+      boolean dupKey = false;
+
+      if (key == null || length <= 0) {
+        throw new IOException("Key cannot be null or empty");
+      }
+      if (length > MAXIMUM_KEY_LENGTH) {
+        throw new IOException("Key length " + length + " > " +
+          MAXIMUM_KEY_LENGTH);
+      }
+      if (this.lastKeyBuffer != null) {
+        int keyComp = this.comparator.compare(this.lastKeyBuffer, this.lastKeyOffset,
+            this.lastKeyLength, key, offset, length);
+        if (keyComp > 0) {
+          throw new IOException("Added a key not lexically larger than" +
+            " previous key=" + Bytes.toStringBinary(key, offset, length) +
+            ", lastkey=" + Bytes.toStringBinary(this.lastKeyBuffer, this.lastKeyOffset,
+                this.lastKeyLength));
+        } else if (keyComp == 0) {
+          dupKey = true;
+        }
+      }
+      return dupKey;
+    }
+
+    private void checkValue(final byte [] value, final int offset,
+        final int length) throws IOException {
+      if (value == null) {
+        throw new IOException("Value cannot be null");
+      }
+    }
+
+    public long getTotalBytes() {
+      return this.totalBytes;
+    }
+
+    public void close() throws IOException {
+      if (this.outputStream == null) {
+        return;
+      }
+      // Write out the end of the data blocks, then write meta data blocks.
+      // followed by fileinfo, data block index and meta block index.
+
+      finishBlock();
+
+      FixedFileTrailer trailer = new FixedFileTrailer();
+
+      // Write out the metadata blocks if any.
+      ArrayList<Long> metaOffsets = null;
+      ArrayList<Integer> metaDataSizes = null;
+      if (metaNames.size() > 0) {
+        metaOffsets = new ArrayList<Long>(metaNames.size());
+        metaDataSizes = new ArrayList<Integer>(metaNames.size());
+        for (int i = 0; i < metaNames.size(); ++i) {
+          // store the beginning offset
+          long curPos = outputStream.getPos();
+          metaOffsets.add(curPos);
+          // write the metadata content
+          DataOutputStream dos = getCompressingStream();
+          dos.write(METABLOCKMAGIC);
+          metaData.get(i).write(dos);
+          int size = releaseCompressingStream(dos);
+          // store the metadata size
+          metaDataSizes.add(size);
+        }
+      }
+
+      // Write fileinfo.
+      trailer.fileinfoOffset = writeFileInfo(this.outputStream);
+
+      // Write the data block index.
+      trailer.dataIndexOffset = BlockIndex.writeIndex(this.outputStream,
+        this.blockKeys, this.blockOffsets, this.blockDataSizes);
+
+      // Meta block index.
+      if (metaNames.size() > 0) {
+        trailer.metaIndexOffset = BlockIndex.writeIndex(this.outputStream,
+          this.metaNames, metaOffsets, metaDataSizes);
+      }
+
+      // Now finish off the trailer.
+      trailer.dataIndexCount = blockKeys.size();
+      trailer.metaIndexCount = metaNames.size();
+
+      trailer.totalUncompressedBytes = totalBytes;
+      trailer.entryCount = entryCount;
+
+      trailer.compressionCodec = this.compressAlgo.ordinal();
+
+      trailer.serialize(outputStream);
+
+      if (this.closeOutputStream) {
+        this.outputStream.close();
+        this.outputStream = null;
+      }
+    }
+
+    /*
+     * Add last bits of metadata to fileinfo and then write it out.
+     * Reader will be expecting to find all below.
+     * @param o Stream to write on.
+     * @return Position at which we started writing.
+     * @throws IOException
+     */
+    private long writeFileInfo(FSDataOutputStream o) throws IOException {
+      if (this.lastKeyBuffer != null) {
+        // Make a copy.  The copy is stuffed into HMapWritable.  Needs a clean
+        // byte buffer.  Won't take a tuple.
+        byte [] b = new byte[this.lastKeyLength];
+        System.arraycopy(this.lastKeyBuffer, this.lastKeyOffset, b, 0,
+          this.lastKeyLength);
+        appendFileInfo(this.fileinfo, FileInfo.LASTKEY, b, false);
+      }
+      int avgKeyLen = this.entryCount == 0? 0:
+        (int)(this.keylength/this.entryCount);
+      appendFileInfo(this.fileinfo, FileInfo.AVG_KEY_LEN,
+        Bytes.toBytes(avgKeyLen), false);
+      int avgValueLen = this.entryCount == 0? 0:
+        (int)(this.valuelength/this.entryCount);
+      appendFileInfo(this.fileinfo, FileInfo.AVG_VALUE_LEN,
+        Bytes.toBytes(avgValueLen), false);
+      appendFileInfo(this.fileinfo, FileInfo.COMPARATOR,
+        Bytes.toBytes(this.comparator.getClass().getName()), false);
+      long pos = o.getPos();
+      this.fileinfo.write(o);
+      return pos;
+    }
+  }
+
+  /**
+   * HFile Reader.
+   */
+  public static class Reader implements Closeable {
+    // Stream to read from.
+    private FSDataInputStream istream;
+    // True if we should close istream when done.  We don't close it if we
+    // didn't open it.
+    private boolean closeIStream;
+
+    // These are read in when the file info is loaded.
+    HFile.BlockIndex blockIndex;
+    private BlockIndex metaIndex;
+    FixedFileTrailer trailer;
+    private volatile boolean fileInfoLoaded = false;
+
+    // Filled when we read in the trailer.
+    private Compression.Algorithm compressAlgo;
+
+    // Last key in the file.  Filled in when we read in the file info
+    private byte [] lastkey = null;
+    // Stats read in when we load file info.
+    private int avgKeyLen = -1;
+    private int avgValueLen = -1;
+
+    // Used to ensure we seek correctly.
+    RawComparator<byte []> comparator;
+
+    // Size of this file.
+    private final long fileSize;
+
+    // Block cache to use.
+    private final BlockCache cache;
+    public int cacheHits = 0;
+    public int blockLoads = 0;
+    public int metaLoads = 0;
+
+    // Whether file is from in-memory store
+    private boolean inMemory = false;
+
+    // Name for this object used when logging or in toString.  Is either
+    // the result of a toString on the stream or else is toString of passed
+    // file Path plus metadata key/value pairs.
+    protected String name;
+
+    /**
+     * Opens an HFile.  You must load the file info, by calling
+     * {@link #loadFileInfo()}, before you can use the reader.
+     *
+     * @param fs filesystem to load from
+     * @param path path within said filesystem
+     * @param cache block cache. Pass null if none.
+     * @throws IOException
+     */
+    public Reader(FileSystem fs, Path path, BlockCache cache, boolean inMemory)
+    throws IOException {
+      this(fs.open(path), fs.getFileStatus(path).getLen(), cache, inMemory);
+      this.closeIStream = true;
+      this.name = path.toString();
+    }
+
+    /**
+     * Opens an HFile.  You must load the index, by calling
+     * {@link #loadFileInfo()}, before you can use the reader.
+     *
+     * @param fsdis input stream.  Caller is responsible for closing the passed
+     * stream.
+     * @param size Length of the stream.
+     * @param cache block cache. Pass null if none.
+     * @throws IOException
+     */
+    public Reader(final FSDataInputStream fsdis, final long size,
+        final BlockCache cache, final boolean inMemory) {
+      this.cache = cache;
+      this.fileSize = size;
+      this.istream = fsdis;
+      this.closeIStream = false;
+      this.name = this.istream == null? "": this.istream.toString();
+      this.inMemory = inMemory;
+    }
+
+    @Override
+    public String toString() {
+      return "reader=" + this.name +
+          (!isFileInfoLoaded()? "":
+            ", compression=" + this.compressAlgo.getName() +
+            ", inMemory=" + this.inMemory +
+            ", firstKey=" + toStringFirstKey() +
+            ", lastKey=" + toStringLastKey() +
+            ", avgKeyLen=" + this.avgKeyLen +
+            ", avgValueLen=" + this.avgValueLen +
+            ", entries=" + this.trailer.entryCount +
+            ", length=" + this.fileSize);
+    }
+
+    protected String toStringFirstKey() {
+      return KeyValue.keyToString(getFirstKey());
+    }
+
+    protected String toStringLastKey() {
+      return KeyValue.keyToString(getLastKey());
+    }
+
+    public long length() {
+      return this.fileSize;
+    }
+
+    public boolean inMemory() {
+      return this.inMemory;
+    }
+
+    private byte[] readAllIndex(final FSDataInputStream in, final long indexOffset,
+        final int indexSize) throws IOException {
+      byte[] allIndex = new byte[indexSize];
+      in.seek(indexOffset);
+      IOUtils.readFully(in, allIndex, 0, allIndex.length);
+      return allIndex;
+    }
+
+    /**
+     * Read in the index and file info.
+     * @return A map of fileinfo data.
+     * See {@link Writer#appendFileInfo(byte[], byte[])}.
+     * @throws IOException
+     */
+    public Map<byte [], byte []> loadFileInfo()
+    throws IOException {
+      this.trailer = readTrailer();
+
+      // Read in the fileinfo and get what we need from it.
+      this.istream.seek(this.trailer.fileinfoOffset);
+      FileInfo fi = new FileInfo();
+      fi.readFields(this.istream);
+      this.lastkey = fi.get(FileInfo.LASTKEY);
+      this.avgKeyLen = Bytes.toInt(fi.get(FileInfo.AVG_KEY_LEN));
+      this.avgValueLen = Bytes.toInt(fi.get(FileInfo.AVG_VALUE_LEN));
+      String clazzName = Bytes.toString(fi.get(FileInfo.COMPARATOR));
+      this.comparator = getComparator(clazzName);
+
+      int allIndexSize = (int)(this.fileSize - this.trailer.dataIndexOffset - FixedFileTrailer.trailerSize());
+      byte[] dataAndMetaIndex = readAllIndex(this.istream, this.trailer.dataIndexOffset, allIndexSize);
+
+      ByteArrayInputStream bis = new ByteArrayInputStream(dataAndMetaIndex);
+      DataInputStream dis = new DataInputStream(bis);
+
+      // Read in the data index.
+      this.blockIndex =
+          BlockIndex.readIndex(this.comparator, dis, this.trailer.dataIndexCount);
+
+      // Read in the metadata index.
+      if (trailer.metaIndexCount > 0) {
+        this.metaIndex = BlockIndex.readIndex(Bytes.BYTES_RAWCOMPARATOR, dis,
+            this.trailer.metaIndexCount);
+      }
+      this.fileInfoLoaded = true;
+
+      dis.close();
+
+      return fi;
+    }
+
+    boolean isFileInfoLoaded() {
+      return this.fileInfoLoaded;
+    }
+
+    @SuppressWarnings("unchecked")
+    private RawComparator<byte []> getComparator(final String clazzName)
+    throws IOException {
+      if (clazzName == null || clazzName.length() == 0) {
+        return null;
+      }
+      try {
+        return (RawComparator<byte []>)Class.forName(clazzName).newInstance();
+      } catch (InstantiationException e) {
+        throw new IOException(e);
+      } catch (IllegalAccessException e) {
+        throw new IOException(e);
+      } catch (ClassNotFoundException e) {
+        throw new IOException(e);
+      }
+    }
+
+    /* Read the trailer off the input stream.  As side effect, sets the
+     * compression algorithm.
+     * @return Populated FixedFileTrailer.
+     * @throws IOException
+     */
+    private FixedFileTrailer readTrailer() throws IOException {
+      FixedFileTrailer fft = new FixedFileTrailer();
+      long seekPoint = this.fileSize - FixedFileTrailer.trailerSize();
+      this.istream.seek(seekPoint);
+      fft.deserialize(this.istream);
+      // Set up the codec.
+      this.compressAlgo =
+        Compression.Algorithm.values()[fft.compressionCodec];
+
+      CompressionTest.testCompression(this.compressAlgo);
+
+      return fft;
+    }
+
+    /**
+     * Create a Scanner on this file.  No seeks or reads are done on creation.
+     * Call {@link HFileScanner#seekTo(byte[])} to position and start the read.
+     * There is nothing to clean up in a Scanner. Letting go of your references
+     * to the scanner is sufficient.
+     * @param pread Use positional read rather than seek+read if true (pread is
+     * better for random reads, seek+read is better for scanning).
+     * @param cacheBlocks True if we should cache blocks read in by this scanner.
+     * @return Scanner on this file.
+     */
+    public HFileScanner getScanner(boolean cacheBlocks, final boolean pread) {
+      return new Scanner(this, cacheBlocks, pread);
+    }
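+
+    /*
+     * Usage sketch (hypothetical helper, illustrative only): the expected
+     * call sequence is construct, loadFileInfo(), getScanner(), seekTo(),
+     * then next() until exhausted.
+     */
+    static void exampleScanAll(final FileSystem fs, final Path path)
+    throws IOException {
+      Reader reader = new Reader(fs, path, null, false);
+      reader.loadFileInfo();
+      HFileScanner scanner = reader.getScanner(false, false);
+      if (scanner.seekTo()) {
+        do {
+          System.out.println(scanner.getKeyValue());
+        } while (scanner.next());
+      }
+      reader.close();
+    }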
+
+    /**
+     * @param key Key to search.
+     * @param offset Offset of the key in the passed byte array.
+     * @param length Length of the key.
+     * @return Block number of the block containing the key, or -1 if the key
+     * is not in this file.
+     */
+    protected int blockContainingKey(final byte [] key, int offset, int length) {
+      if (blockIndex == null) {
+        throw new RuntimeException("Block index not loaded");
+      }
+      return blockIndex.blockContainingKey(key, offset, length);
+    }
+    /**
+     * @param metaBlockName
+     * @param cacheBlock Add block to cache, if found
+     * @return Block wrapped in a ByteBuffer
+     * @throws IOException
+     */
+    public ByteBuffer getMetaBlock(String metaBlockName, boolean cacheBlock)
+    throws IOException {
+      if (trailer.metaIndexCount == 0) {
+        return null; // there are no meta blocks
+      }
+      if (metaIndex == null) {
+        throw new IOException("Meta index not loaded");
+      }
+
+      byte [] mbname = Bytes.toBytes(metaBlockName);
+      int block = metaIndex.blockContainingKey(mbname, 0, mbname.length);
+      if (block == -1)
+        return null;
+      long blockSize;
+      if (block == metaIndex.count - 1) {
+        blockSize = trailer.fileinfoOffset - metaIndex.blockOffsets[block];
+      } else {
+        blockSize = metaIndex.blockOffsets[block+1] - metaIndex.blockOffsets[block];
+      }
+
+      long now = System.currentTimeMillis();
+
+      // Per meta key from any given file, synchronize reads for said block
+      synchronized (metaIndex.blockKeys[block]) {
+        metaLoads++;
+        // Check cache for block.  If found return.
+        if (cache != null) {
+          ByteBuffer cachedBuf = cache.getBlock(name + "meta" + block,
+              cacheBlock);
+          if (cachedBuf != null) {
+            // Return a distinct 'shallow copy' of the block,
+            // so pos doesn't get messed with by the scanner
+            cacheHits++;
+            return cachedBuf.duplicate();
+          }
+          // Cache Miss, please load.
+        }
+
+        ByteBuffer buf = decompress(metaIndex.blockOffsets[block],
+          longToInt(blockSize), metaIndex.blockDataSizes[block], true);
+        byte [] magic = new byte[METABLOCKMAGIC.length];
+        buf.get(magic, 0, magic.length);
+
+        if (! Arrays.equals(magic, METABLOCKMAGIC)) {
+          throw new IOException("Meta magic is bad in block " + block);
+        }
+
+        // Create a new ByteBuffer 'shallow copy' to hide the magic header
+        buf = buf.slice();
+
+        readTime += System.currentTimeMillis() - now;
+        readOps++;
+
+        // Cache the block
+        if(cacheBlock && cache != null) {
+          cache.cacheBlock(name + "meta" + block, buf.duplicate(), inMemory);
+        }
+
+        return buf;
+      }
+    }
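+
+    /*
+     * Usage sketch (hypothetical helper, illustrative only; the block name
+     * "BLOOM_FILTER_META" is the one this class's main() tool looks for).
+     * Passing cacheBlock=false keeps a one-off read from populating the cache.
+     */
+    ByteBuffer exampleGetBloomMeta() throws IOException {
+      return getMetaBlock("BLOOM_FILTER_META", false);
+    }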
+
+    /**
+     * Read in a file block.
+     * @param block Index of block to read.
+     * @param cacheBlock True if we should cache the block once it is read.
+     * @param pread Use positional read instead of seek+read (positional is
+     * better for random reads whereas seek+read is better for scanning).
+     * @return Block wrapped in a ByteBuffer.
+     * @throws IOException
+     */
+    ByteBuffer readBlock(int block, boolean cacheBlock, final boolean pread)
+    throws IOException {
+      if (blockIndex == null) {
+        throw new IOException("Block index not loaded");
+      }
+      if (block < 0 || block >= blockIndex.count) {
+        throw new IOException("Requested block is out of range: " + block +
+          ", max: " + blockIndex.count);
+      }
+      // For any given block from any given file, synchronize reads for said
+      // block.
+      // Without a cache this synchronization is needless overhead; with one,
+      // it stops concurrent readers from loading (and caching) the same block
+      // twice.
+      synchronized (blockIndex.blockKeys[block]) {
+        blockLoads++;
+        // Check cache for block.  If found return.
+        if (cache != null) {
+          ByteBuffer cachedBuf = cache.getBlock(name + block, cacheBlock);
+          if (cachedBuf != null) {
+            // Return a distinct 'shallow copy' of the block,
+            // so pos doesn't get messed with by the scanner
+            cacheHits++;
+            return cachedBuf.duplicate();
+          }
+          // Carry on, please load.
+        }
+
+        // Load block from filesystem.
+        long now = System.currentTimeMillis();
+        long onDiskBlockSize;
+        if (block == blockIndex.count - 1) {
+          // last block!  The end of data block is first meta block if there is
+          // one or if there isn't, the fileinfo offset.
+          long offset = this.metaIndex != null?
+            this.metaIndex.blockOffsets[0]: this.trailer.fileinfoOffset;
+          onDiskBlockSize = offset - blockIndex.blockOffsets[block];
+        } else {
+          onDiskBlockSize = blockIndex.blockOffsets[block+1] -
+          blockIndex.blockOffsets[block];
+        }
+        ByteBuffer buf = decompress(blockIndex.blockOffsets[block],
+          longToInt(onDiskBlockSize), this.blockIndex.blockDataSizes[block],
+          pread);
+
+        byte [] magic = new byte[DATABLOCKMAGIC.length];
+        buf.get(magic, 0, magic.length);
+        if (!Arrays.equals(magic, DATABLOCKMAGIC)) {
+          throw new IOException("Data magic is bad in block " + block);
+        }
+
+        // 'shallow copy' to hide the header
+        // NOTE: you WILL GET BIT if you call buf.array() but don't start
+        //       reading at buf.arrayOffset()
+        buf = buf.slice();
+
+        readTime += System.currentTimeMillis() - now;
+        readOps++;
+
+        // Cache the block
+        if(cacheBlock && cache != null) {
+          cache.cacheBlock(name + block, buf.duplicate(), inMemory);
+        }
+
+        return buf;
+      }
+    }
+
+    /*
+     * Decompress <code>compressedSize</code> bytes off the backing
+     * FSDataInputStream.
+     * @param offset
+     * @param compressedSize
+     * @param decompressedSize
+     *
+     * @return A ByteBuffer holding the decompressed block.
+     * @throws IOException
+     */
+    private ByteBuffer decompress(final long offset, final int compressedSize,
+      final int decompressedSize, final boolean pread)
+    throws IOException {
+      Decompressor decompressor = null;
+      ByteBuffer buf = null;
+      try {
+        decompressor = this.compressAlgo.getDecompressor();
+        // The bounded-range input stream appears to be needed to stop the
+        // decompressor reading into the next block -- it grabs a bunch of
+        // data without regard to whether it is coming to the end of a
+        // decompression.
+
+        // We use a buffer of DEFAULT_BLOCKSIZE size.  This might be extreme.
+        // Could maybe do with less. Study and figure it out: TODO
+        InputStream is = this.compressAlgo.createDecompressionStream(
+            new BufferedInputStream(
+                new BoundedRangeFileInputStream(this.istream, offset, compressedSize,
+                                                pread),
+                Math.min(DEFAULT_BLOCKSIZE, compressedSize)),
+            decompressor, 0);
+        buf = ByteBuffer.allocate(decompressedSize);
+        IOUtils.readFully(is, buf.array(), 0, buf.capacity());
+        is.close();
+      } finally {
+        if (null != decompressor) {
+          this.compressAlgo.returnDecompressor(decompressor);
+        }
+      }
+      return buf;
+    }
+
+    /**
+     * @return First key in the file.  May be null if file has no entries.
+     * Note that this is not the first row key, but rather the byte form of
+     * the first KeyValue's key.
+     */
+    public byte [] getFirstKey() {
+      if (blockIndex == null) {
+        throw new RuntimeException("Block index not loaded");
+      }
+      return this.blockIndex.isEmpty()? null: this.blockIndex.blockKeys[0];
+    }
+
+    /**
+     * @return the first row key, or null if the file is empty.
+     * TODO move this to StoreFile after Ryan's patch goes in
+     * to eliminate KeyValue here
+     */
+    public byte[] getFirstRowKey() {
+      byte[] firstKey = getFirstKey();
+      if (firstKey == null) return null;
+      return KeyValue.createKeyValueFromKey(firstKey).getRow();
+    }
+
+    /**
+     * @return number of KV entries in this HFile
+     */
+    public int getEntries() {
+      if (!this.isFileInfoLoaded()) {
+        throw new RuntimeException("File info not loaded");
+      }
+      return this.trailer.entryCount;
+    }
+
+    /**
+     * @return Last key in the file.  May be null if file has no entries.
+     * Note that this is not the last row key, but rather the byte form of
+     * the last KeyValue's key.
+     */
+    public byte [] getLastKey() {
+      if (!isFileInfoLoaded()) {
+        throw new RuntimeException("Load file info first");
+      }
+      return this.blockIndex.isEmpty()? null: this.lastkey;
+    }
+
+    /**
+     * @return the last row key, or null if the file is empty.
+     * TODO move this to StoreFile after Ryan's patch goes in
+     * to eliminate KeyValue here
+     */
+    public byte[] getLastRowKey() {
+      byte[] lastKey = getLastKey();
+      if (lastKey == null) return null;
+      return KeyValue.createKeyValueFromKey(lastKey).getRow();
+    }
+
+    /**
+     * @return Number of key entries in this HFile's filter.  Returns the KV
+     * count if there is no filter.
+     */
+    public int getFilterEntries() {
+      return getEntries();
+    }
+
+    /**
+     * @return Comparator.
+     */
+    public RawComparator<byte []> getComparator() {
+      return this.comparator;
+    }
+
+    /**
+     * @return Combined heap size of the data and meta block indexes.
+     */
+    public long indexSize() {
+      return (this.blockIndex != null? this.blockIndex.heapSize(): 0) +
+        ((this.metaIndex != null)? this.metaIndex.heapSize(): 0);
+    }
+
+    /**
+     * @return Midkey for this file.  We work with block boundaries only so
+     * returned midkey is an approximation only.
+     * @throws IOException
+     */
+    public byte [] midkey() throws IOException {
+      if (!isFileInfoLoaded() || this.blockIndex.isEmpty()) {
+        return null;
+      }
+      return this.blockIndex.midkey();
+    }
+
+    public void close() throws IOException {
+      if (this.closeIStream && this.istream != null) {
+        this.istream.close();
+        this.istream = null;
+      }
+    }
+
+    public String getName() {
+      return name;
+    }
+
+    /*
+     * Implementation of {@link HFileScanner} interface.
+     */
+    protected static class Scanner implements HFileScanner {
+      private final Reader reader;
+      private ByteBuffer block;
+      private int currBlock;
+
+      private final boolean cacheBlocks;
+      private final boolean pread;
+
+      private int currKeyLen = 0;
+      private int currValueLen = 0;
+
+      public int blockFetches = 0;
+
+      public Scanner(Reader r, boolean cacheBlocks, final boolean pread) {
+        this.reader = r;
+        this.cacheBlocks = cacheBlocks;
+        this.pread = pread;
+      }
+
+      public KeyValue getKeyValue() {
+        if(this.block == null) {
+          return null;
+        }
+        return new KeyValue(this.block.array(),
+            this.block.arrayOffset() + this.block.position() - 8,
+            this.currKeyLen+this.currValueLen+8);
+      }
+
+      public ByteBuffer getKey() {
+        if (this.block == null || this.currKeyLen == 0) {
+          throw new RuntimeException("you need to seekTo() before calling getKey()");
+        }
+        ByteBuffer keyBuff = this.block.slice();
+        keyBuff.limit(this.currKeyLen);
+        keyBuff.rewind();
+        // Do keyBuff.asReadOnly()?
+        return keyBuff;
+      }
+
+      public ByteBuffer getValue() {
+        if (block == null || currKeyLen == 0) {
+          throw new RuntimeException("you need to seekTo() before calling getValue()");
+        }
+        // TODO: Could this be done with one ByteBuffer rather than create two?
+        ByteBuffer valueBuff = this.block.slice();
+        valueBuff.position(this.currKeyLen);
+        valueBuff = valueBuff.slice();
+        valueBuff.limit(currValueLen);
+        valueBuff.rewind();
+        return valueBuff;
+      }
+
+      public boolean next() throws IOException {
+        // LOG.debug("rem:" + block.remaining() + " p:" + block.position() +
+        // " kl: " + currKeyLen + " kv: " + currValueLen);
+        if (block == null) {
+          throw new IOException("Next called on non-seeked scanner");
+        }
+        block.position(block.position() + currKeyLen + currValueLen);
+        if (block.remaining() <= 0) {
+          // LOG.debug("Fetch next block");
+          currBlock++;
+          if (currBlock >= reader.blockIndex.count) {
+            // damn we are at the end
+            currBlock = 0;
+            block = null;
+            return false;
+          }
+          block = reader.readBlock(this.currBlock, this.cacheBlocks, this.pread);
+          currKeyLen = Bytes.toInt(block.array(), block.arrayOffset()+block.position(), 4);
+          currValueLen = Bytes.toInt(block.array(), block.arrayOffset()+block.position()+4, 4);
+          block.position(block.position()+8);
+          blockFetches++;
+          return true;
+        }
+        // LOG.debug("rem:" + block.remaining() + " p:" + block.position() +
+        // " kl: " + currKeyLen + " kv: " + currValueLen);
+        currKeyLen = Bytes.toInt(block.array(), block.arrayOffset()+block.position(), 4);
+        currValueLen = Bytes.toInt(block.array(), block.arrayOffset()+block.position()+4, 4);
+        block.position(block.position()+8);
+        return true;
+      }
+
+      public int seekTo(byte [] key) throws IOException {
+        return seekTo(key, 0, key.length);
+      }
+
+      public int seekTo(byte[] key, int offset, int length) throws IOException {
+        int b = reader.blockContainingKey(key, offset, length);
+        if (b < 0) return -1; // falls before the beginning of the file! :-(
+        // Avoid re-reading the same block (that'd be dumb).
+        loadBlock(b, true);
+        return blockSeek(key, offset, length, false);
+      }
+
+      public int reseekTo(byte [] key) throws IOException {
+        return reseekTo(key, 0, key.length);
+      }
+
+      public int reseekTo(byte[] key, int offset, int length)
+        throws IOException {
+
+        if (this.block != null && this.currKeyLen != 0) {
+          ByteBuffer bb = getKey();
+          int compared = this.reader.comparator.compare(key, offset, length,
+              bb.array(), bb.arrayOffset(), bb.limit());
+          if (compared < 1) {
+            //If the required key is less than or equal to current key, then
+            //don't do anything.
+            return compared;
+          }
+        }
+
+        int b = reader.blockContainingKey(key, offset, length);
+        if (b < 0) {
+          return -1;
+        }
+        loadBlock(b, false);
+        return blockSeek(key, offset, length, false);
+      }
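+
+      /*
+       * Usage sketch (hypothetical helper, illustrative only): how a caller
+       * might interpret the seekTo() return codes documented above to land
+       * on the first key that is greater than or equal to the passed key.
+       */
+      boolean exampleSeekAtOrAfter(final byte [] key) throws IOException {
+        int result = seekTo(key);
+        if (result < 0) {
+          // The passed key falls before the first key; go to the file start.
+          return seekTo();
+        }
+        if (result > 0) {
+          // Landed on the last key smaller than the passed key; step forward.
+          return next();
+        }
+        return true; // exact match
+      }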
+
+      /**
+       * Within a loaded block, seek to the last key that is smaller than
+       * (or equal to, when seekBefore is false) the key we are interested in.
+       *
+       * A note on seekBefore: if seekBefore is true AND the first key in the
+       * block equals <code>key</code>, an exception will be thrown.
+       * @param key key to find
+       * @param offset offset of the key in the passed byte array
+       * @param length length of the key
+       * @param seekBefore find the key before the exact match.
+       * @return 0 on an exact match, 1 otherwise (the scanner is left on the
+       * last key smaller than the passed key).
+       */
+      private int blockSeek(byte[] key, int offset, int length, boolean seekBefore) {
+        int klen, vlen;
+        int lastLen = 0;
+        do {
+          klen = block.getInt();
+          vlen = block.getInt();
+          int comp = this.reader.comparator.compare(key, offset, length,
+            block.array(), block.arrayOffset() + block.position(), klen);
+          if (comp == 0) {
+            if (seekBefore) {
+              block.position(block.position() - lastLen - 16);
+              currKeyLen = block.getInt();
+              currValueLen = block.getInt();
+              return 1; // non exact match.
+            }
+            currKeyLen = klen;
+            currValueLen = vlen;
+            return 0; // indicate exact match
+          }
+          if (comp < 0) {
+            // go back one key:
+            block.position(block.position() - lastLen - 16);
+            currKeyLen = block.getInt();
+            currValueLen = block.getInt();
+            return 1;
+          }
+          block.position(block.position() + klen + vlen);
+          lastLen = klen + vlen ;
+        } while(block.remaining() > 0);
+        // ok we are at the end, so go back a littleeeeee....
+        // The 8 below is intentionally different from the 16s above.
+        // Do the math and you'll figure it out.
+        block.position(block.position() - lastLen - 8);
+        currKeyLen = block.getInt();
+        currValueLen = block.getInt();
+        return 1; // didn't exactly find it.
+      }
+
+      public boolean seekBefore(byte [] key) throws IOException {
+        return seekBefore(key, 0, key.length);
+      }
+
+      public boolean seekBefore(byte[] key, int offset, int length)
+      throws IOException {
+        int b = reader.blockContainingKey(key, offset, length);
+        if (b < 0)
+          return false; // key is before the start of the file.
+
+        // Question: does this block begin with 'key'?
+        if (this.reader.comparator.compare(reader.blockIndex.blockKeys[b],
+            0, reader.blockIndex.blockKeys[b].length,
+            key, offset, length) == 0) {
+          // Ok the key we're interested in is the first of the block, so go back one.
+          if (b == 0) {
+            // we have a 'problem', the key we want is the first of the file.
+            return false;
+          }
+          b--;
+          // TODO shortcut: seek forward in this block to the last key of the block.
+        }
+        loadBlock(b, true);
+        blockSeek(key, offset, length, true);
+        return true;
+      }
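+
+      /*
+       * Usage sketch (hypothetical helper, illustrative only): fetch the
+       * KeyValue immediately preceding the passed key, if any.
+       */
+      KeyValue exampleGetPrevious(final byte [] key) throws IOException {
+        return seekBefore(key, 0, key.length)? getKeyValue(): null;
+      }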
+
+      public String getKeyString() {
+        return Bytes.toStringBinary(block.array(), block.arrayOffset() +
+          block.position(), currKeyLen);
+      }
+
+      public String getValueString() {
+        return Bytes.toString(block.array(), block.arrayOffset() +
+          block.position() + currKeyLen, currValueLen);
+      }
+
+      public Reader getReader() {
+        return this.reader;
+      }
+
+      public boolean isSeeked(){
+        return this.block != null;
+      }
+
+      public boolean seekTo() throws IOException {
+        if (this.reader.blockIndex.isEmpty()) {
+          return false;
+        }
+        if (block != null && currBlock == 0) {
+          block.rewind();
+          currKeyLen = block.getInt();
+          currValueLen = block.getInt();
+          return true;
+        }
+        currBlock = 0;
+        block = reader.readBlock(this.currBlock, this.cacheBlocks, this.pread);
+        currKeyLen = block.getInt();
+        currValueLen = block.getInt();
+        blockFetches++;
+        return true;
+      }
+
+      private void loadBlock(int bloc, boolean rewind) throws IOException {
+        if (block == null) {
+          block = reader.readBlock(bloc, this.cacheBlocks, this.pread);
+          currBlock = bloc;
+          blockFetches++;
+        } else {
+          if (bloc != currBlock) {
+            block = reader.readBlock(bloc, this.cacheBlocks, this.pread);
+            currBlock = bloc;
+            blockFetches++;
+          } else {
+            // we are already in the same block, just rewind to seek again.
+            if (rewind) {
+              block.rewind();
+            }
+            else {
+              // Go back by (size of keylength + size of valuelength) = 8 bytes
+              block.position(block.position()-8);
+            }
+          }
+        }
+      }
+
+      @Override
+      public String toString() {
+        return "HFileScanner for reader " + String.valueOf(reader);
+      }
+    }
+
+    public String getTrailerInfo() {
+      return trailer.toString();
+    }
+  }
+
+  /*
+   * The HFile has a fixed trailer which contains offsets to other variable
+   * parts of the file.  Also includes basic metadata on this file.
+   */
+  private static class FixedFileTrailer {
+    // Offset to the fileinfo data, a small block of vitals.
+    long fileinfoOffset;
+    // Offset to the data block index.
+    long dataIndexOffset;
+    // How many data index entries are there (aka: data block count)
+    int dataIndexCount;
+    // Offset to the meta block index.
+    long metaIndexOffset;
+    // How many meta block index entries (aka: meta block count)
+    int metaIndexCount;
+    long totalUncompressedBytes;
+    int entryCount;
+    int compressionCodec;
+    int version = 1;
+
+    FixedFileTrailer() {
+      super();
+    }
+
+    static int trailerSize() {
+      // Keep this up to date...
+      return
+      ( Bytes.SIZEOF_INT * 5 ) +
+      ( Bytes.SIZEOF_LONG * 4 ) +
+      TRAILERBLOCKMAGIC.length;
+    }
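+
+    /*
+     * On-disk trailer layout, written by serialize() and read back by
+     * deserialize() below: TRAILERBLOCKMAGIC, fileinfoOffset (long),
+     * dataIndexOffset (long), dataIndexCount (int), metaIndexOffset (long),
+     * metaIndexCount (int), totalUncompressedBytes (long), entryCount (int),
+     * compressionCodec (int), version (int) -- that is, 4 longs + 5 ints
+     * plus the magic, which is what trailerSize() adds up.
+     */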
+
+    void serialize(DataOutputStream outputStream) throws IOException {
+      outputStream.write(TRAILERBLOCKMAGIC);
+      outputStream.writeLong(fileinfoOffset);
+      outputStream.writeLong(dataIndexOffset);
+      outputStream.writeInt(dataIndexCount);
+      outputStream.writeLong(metaIndexOffset);
+      outputStream.writeInt(metaIndexCount);
+      outputStream.writeLong(totalUncompressedBytes);
+      outputStream.writeInt(entryCount);
+      outputStream.writeInt(compressionCodec);
+      outputStream.writeInt(version);
+    }
+
+    void deserialize(DataInputStream inputStream) throws IOException {
+      byte [] header = new byte[TRAILERBLOCKMAGIC.length];
+      inputStream.readFully(header);
+      if ( !Arrays.equals(header, TRAILERBLOCKMAGIC)) {
+        throw new IOException("Trailer 'header' is wrong; does the trailer " +
+          "size match content?");
+      }
+      fileinfoOffset         = inputStream.readLong();
+      dataIndexOffset        = inputStream.readLong();
+      dataIndexCount         = inputStream.readInt();
+
+      metaIndexOffset        = inputStream.readLong();
+      metaIndexCount         = inputStream.readInt();
+
+      totalUncompressedBytes = inputStream.readLong();
+      entryCount             = inputStream.readInt();
+      compressionCodec       = inputStream.readInt();
+      version                = inputStream.readInt();
+
+      if (version != 1) {
+        throw new IOException("Wrong version: " + version);
+      }
+    }
+
+    @Override
+    public String toString() {
+      return "fileinfoOffset=" + fileinfoOffset +
+      ", dataIndexOffset=" + dataIndexOffset +
+      ", dataIndexCount=" + dataIndexCount +
+      ", metaIndexOffset=" + metaIndexOffset +
+      ", metaIndexCount=" + metaIndexCount +
+      ", totalBytes=" + totalUncompressedBytes +
+      ", entryCount=" + entryCount +
+      ", version=" + version;
+    }
+  }
+
+  /*
+   * The block index for an HFile.
+   * Used when reading.
+   */
+  static class BlockIndex implements HeapSize {
+    // How many actual items are there?  This is also the next insert location.
+    int count = 0;
+    byte [][] blockKeys;
+    long [] blockOffsets;
+    int [] blockDataSizes;
+    int size = 0;
+
+    /* Needed when doing lookup on blocks.
+     */
+    final RawComparator<byte []> comparator;
+
+    /*
+     * Disallow the default (no-argument) constructor.
+     */
+    @SuppressWarnings("unused")
+    private BlockIndex() {
+      this(null);
+    }
+
+
+    /**
+     * @param c comparator used to compare keys.
+     */
+    BlockIndex(final RawComparator<byte []>c) {
+      this.comparator = c;
+      // Guess that cost of three arrays + this object is 4 * 8 bytes.
+      this.size += (4 * 8);
+    }
+
+    /**
+     * @return True if block index is empty.
+     */
+    boolean isEmpty() {
+      return this.blockKeys.length <= 0;
+    }
+
+    /**
+     * Adds a new entry in the block index.
+     *
+     * @param key Last key in the block
+     * @param offset file offset where the block is stored
+     * @param dataSize the uncompressed data size
+     */
+    void add(final byte[] key, final long offset, final int dataSize) {
+      blockOffsets[count] = offset;
+      blockKeys[count] = key;
+      blockDataSizes[count] = dataSize;
+      count++;
+      this.size += (Bytes.SIZEOF_INT * 2 + key.length);
+    }
+
+    /**
+     * @param key Key to find
+     * @param offset Offset of the key in the passed byte array
+     * @param length Length of the key
+     * @return Index of the block containing <code>key</code>, or -1 if this
+     * file does not contain the requested key.
+     */
+    int blockContainingKey(final byte[] key, int offset, int length) {
+      int pos = Bytes.binarySearch(blockKeys, key, offset, length, this.comparator);
+      if (pos < 0) {
+        pos ++;
+        pos *= -1;
+        if (pos == 0) {
+          // falls before the beginning of the file.
+          return -1;
+        }
+        // When switched to "first key in block" index, binarySearch now returns
+        // the block with a firstKey < key.  This means the value we want is potentially
+        // in the next block.
+        pos --; // in previous block.
+
+        return pos;
+      }
+      // wow, a perfect hit, how unlikely?
+      return pos;
+    }
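+
+    /*
+     * Illustrative example of the lookup above (keys are hypothetical):
+     * with block first-keys {"a", "f", "m"}, a search for "g" misses in the
+     * binary search and returns block 1 (first key "f"), the block that
+     * could contain "g".  A search for "f" is an exact hit and returns
+     * block 1 directly.  Any key sorting before "a" returns -1.
+     */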
+
+    /*
+     * @return File midkey.  Inexact.  Operates on block boundaries.  Does
+     * not go into blocks.
+     */
+    byte [] midkey() throws IOException {
+      int pos = ((this.count - 1)/2);              // middle of the index
+      if (pos < 0) {
+        throw new IOException("HFile empty");
+      }
+      return this.blockKeys[pos];
+    }
+
+    /*
+     * Write out index. Whatever we write here must jibe with what
+     * BlockIndex#readIndex is expecting.  Make sure the two ends of the
+     * index serialization match.
+     * @param o
+     * @param keys
+     * @param offsets
+     * @param sizes
+     * @return Position at which we entered the index.
+     * @throws IOException
+     */
+    static long writeIndex(final FSDataOutputStream o,
+      final List<byte []> keys, final List<Long> offsets,
+      final List<Integer> sizes)
+    throws IOException {
+      long pos = o.getPos();
+      // Don't write an index if nothing in the index.
+      if (keys.size() > 0) {
+        o.write(INDEXBLOCKMAGIC);
+        // Write the index.
+        for (int i = 0; i < keys.size(); ++i) {
+          o.writeLong(offsets.get(i).longValue());
+          o.writeInt(sizes.get(i).intValue());
+          byte [] key = keys.get(i);
+          Bytes.writeByteArray(o, key);
+        }
+      }
+      return pos;
+    }
+
+    /*
+     * Read in the index from the passed stream.
+     * Must match what was written by writeIndex in the Writer.close.
+     * @param c Comparator to use.
+     * @param in Stream positioned at the start of the index.
+     * @param indexSize Count of entries in the index.
+     * @return The read-in BlockIndex.
+     * @throws IOException
+     */
+    static BlockIndex readIndex(final RawComparator<byte []> c,
+        DataInputStream in, final int indexSize)
+    throws IOException {
+      BlockIndex bi = new BlockIndex(c);
+      bi.blockOffsets = new long[indexSize];
+      bi.blockKeys = new byte[indexSize][];
+      bi.blockDataSizes = new int[indexSize];
+      // If index size is zero, no index was written.
+      if (indexSize > 0) {
+        byte [] magic = new byte[INDEXBLOCKMAGIC.length];
+        in.readFully(magic);
+        if (!Arrays.equals(magic, INDEXBLOCKMAGIC)) {
+          throw new IOException("Index block magic is wrong: " +
+            Arrays.toString(magic));
+        }
+        for (int i = 0; i < indexSize; ++i ) {
+          long offset   = in.readLong();
+          int dataSize  = in.readInt();
+          byte [] key = Bytes.readByteArray(in);
+          bi.add(key, offset, dataSize);
+        }
+      }
+      return bi;
+    }
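+
+    /*
+     * On-disk index layout, shared by writeIndex and readIndex above:
+     * INDEXBLOCKMAGIC, then for each block a long offset, an int
+     * uncompressed data size, and the block's key written with
+     * Bytes.writeByteArray (vint length followed by the key bytes).
+     * Nothing at all is written when the index is empty.
+     */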
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder();
+      sb.append("size=" + count);
+      for (int i = 0; i < count ; i++) {
+        sb.append(", ");
+        sb.append("key=").append(Bytes.toStringBinary(blockKeys[i])).
+          append(", offset=").append(blockOffsets[i]).
+          append(", dataSize=" + blockDataSizes[i]);
+      }
+      return sb.toString();
+    }
+
+    public long heapSize() {
+      long heapsize = ClassSize.align(ClassSize.OBJECT +
+          2 * Bytes.SIZEOF_INT + (3 + 1) * ClassSize.REFERENCE);
+      //Calculating the size of blockKeys
+      if(blockKeys != null) {
+        //Adding array + references overhead
+        heapsize += ClassSize.align(ClassSize.ARRAY +
+            blockKeys.length * ClassSize.REFERENCE);
+        //Adding bytes
+        for(byte [] bs : blockKeys) {
+          heapsize += ClassSize.align(ClassSize.ARRAY + bs.length);
+        }
+      }
+      if(blockOffsets != null) {
+        heapsize += ClassSize.align(ClassSize.ARRAY +
+            blockOffsets.length * Bytes.SIZEOF_LONG);
+      }
+      if(blockDataSizes != null) {
+        heapsize += ClassSize.align(ClassSize.ARRAY +
+            blockDataSizes.length * Bytes.SIZEOF_INT);
+      }
+
+      return ClassSize.align(heapsize);
+    }
+
+  }
+
+  /*
+   * Metadata for this file.  Conjured by the writer.  Read in by the reader.
+   */
+  static class FileInfo extends HbaseMapWritable<byte [], byte []> {
+    static final String RESERVED_PREFIX = "hfile.";
+    static final byte[] RESERVED_PREFIX_BYTES = Bytes.toBytes(RESERVED_PREFIX);
+    static final byte [] LASTKEY = Bytes.toBytes(RESERVED_PREFIX + "LASTKEY");
+    static final byte [] AVG_KEY_LEN =
+      Bytes.toBytes(RESERVED_PREFIX + "AVG_KEY_LEN");
+    static final byte [] AVG_VALUE_LEN =
+      Bytes.toBytes(RESERVED_PREFIX + "AVG_VALUE_LEN");
+    static final byte [] COMPARATOR =
+      Bytes.toBytes(RESERVED_PREFIX + "COMPARATOR");
+
+    /*
+     * Constructor.
+     */
+    FileInfo() {
+      super();
+    }
+  }
+
+  /**
+   * Return true if the given file info key is reserved for internal
+   * use by HFile.
+   */
+  public static boolean isReservedFileInfoKey(byte[] key) {
+    return Bytes.startsWith(key, FileInfo.RESERVED_PREFIX_BYTES);
+  }
+
+
+  /**
+   * Get names of supported compression algorithms. The names are acceptable
+   * to HFile.Writer.
+   *
+   * @return Array of strings, each represents a supported compression
+   *         algorithm. Currently, the following compression algorithms are
+   *         supported.
+   *         <ul>
+   *         <li>"none" - No compression.
+   *         <li>"gz" - GZIP compression.
+   *         </ul>
+   */
+  public static String[] getSupportedCompressionAlgorithms() {
+    return Compression.getSupportedAlgorithms();
+  }
+
+  // Utility methods.
+  /*
+   * @param l Long to convert to an int.
+   * @return <code>l</code> cast as an int.
+   */
+  static int longToInt(final long l) {
+    // Expecting the size() of a block not to exceed 4GB. Assuming the
+    // size() will wrap to a negative integer if it exceeds 2GB (from TFile).
+    return (int)(l & 0x00000000ffffffffL);
+  }
+
+  /**
+   * Returns all files belonging to the given region directory. Could return an
+   * empty list.
+   *
+   * @param fs  The file system reference.
+   * @param regionDir  The region directory to scan.
+   * @return The list of files found.
+   * @throws IOException When scanning the files fails.
+   */
+  static List<Path> getStoreFiles(FileSystem fs, Path regionDir)
+  throws IOException {
+    List<Path> res = new ArrayList<Path>();
+    PathFilter dirFilter = new FSUtils.DirFilter(fs);
+    FileStatus[] familyDirs = fs.listStatus(regionDir, dirFilter);
+    for(FileStatus dir : familyDirs) {
+      FileStatus[] files = fs.listStatus(dir.getPath());
+      for (FileStatus file : files) {
+        if (!file.isDir()) {
+          res.add(file.getPath());
+        }
+      }
+    }
+    return res;
+  }
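+
+  /*
+   * Usage sketch (hypothetical helper, illustrative only): print every store
+   * file found under a region directory.
+   */
+  static void examplePrintStoreFiles(final FileSystem fs, final Path regionDir)
+  throws IOException {
+    for (Path p : getStoreFiles(fs, regionDir)) {
+      System.out.println(p);
+    }
+  }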
+
+  public static void main(String []args) throws IOException {
+    try {
+      // create options
+      Options options = new Options();
+      options.addOption("v", "verbose", false, "Verbose output; emits file and meta data delimiters");
+      options.addOption("p", "printkv", false, "Print key/value pairs");
+      options.addOption("m", "printmeta", false, "Print meta data of file");
+      options.addOption("k", "checkrow", false,
+        "Enable row order check; looks for out-of-order keys");
+      options.addOption("a", "checkfamily", false, "Enable family check");
+      options.addOption("f", "file", true,
+        "File to scan. Pass full-path; e.g. hdfs://a:9000/hbase/.META./12/34");
+      options.addOption("r", "region", true,
+        "Region to scan. Pass region name; e.g. '.META.,,1'");
+      if (args.length == 0) {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp("HFile ", options, true);
+        System.exit(-1);
+      }
+      CommandLineParser parser = new PosixParser();
+      CommandLine cmd = parser.parse(options, args);
+      boolean verbose = cmd.hasOption("v");
+      boolean printKeyValue = cmd.hasOption("p");
+      boolean printMeta = cmd.hasOption("m");
+      boolean checkRow = cmd.hasOption("k");
+      boolean checkFamily = cmd.hasOption("a");
+      // get configuration, file system and get list of files
+      Configuration conf = HBaseConfiguration.create();
+      conf.set("fs.defaultFS",
+        conf.get(org.apache.hadoop.hbase.HConstants.HBASE_DIR));
+      conf.set("fs.default.name",
+        conf.get(org.apache.hadoop.hbase.HConstants.HBASE_DIR));
+      FileSystem fs = FileSystem.get(conf);
+      ArrayList<Path> files = new ArrayList<Path>();
+      if (cmd.hasOption("f")) {
+        files.add(new Path(cmd.getOptionValue("f")));
+      }
+      if (cmd.hasOption("r")) {
+        String regionName = cmd.getOptionValue("r");
+        byte[] rn = Bytes.toBytes(regionName);
+        byte[][] hri = HRegionInfo.parseRegionName(rn);
+        Path rootDir = FSUtils.getRootDir(conf);
+        Path tableDir = new Path(rootDir, Bytes.toString(hri[0]));
+        String enc = HRegionInfo.encodeRegionName(rn);
+        Path regionDir = new Path(tableDir, enc);
+        if (verbose) System.out.println("region dir -> " + regionDir);
+        List<Path> regionFiles = getStoreFiles(fs, regionDir);
+        if (verbose) System.out.println("Number of region files found -> " +
+          regionFiles.size());
+        if (verbose) {
+          int i = 1;
+          for (Path p : regionFiles) {
+            System.out.println("Found file[" + i++ + "] -> " + p);
+          }
+        }
+        files.addAll(regionFiles);
+      }
+      // iterate over all files found
+      for (Path file : files) {
+        if (verbose) System.out.println("Scanning -> " + file);
+        if (!fs.exists(file)) {
+          System.err.println("ERROR, file doesnt exist: " + file);
+          continue;
+        }
+        // create reader and load file info
+        HFile.Reader reader = new HFile.Reader(fs, file, null, false);
+        Map<byte[],byte[]> fileInfo = reader.loadFileInfo();
+        // scan over file and read key/value's and check if requested
+        HFileScanner scanner = reader.getScanner(false, false);
+        scanner.seekTo();
+        KeyValue pkv = null;
+        int count = 0;
+        do {
+          KeyValue kv = scanner.getKeyValue();
+          // dump key value
+          if (printKeyValue) {
+            System.out.println("K: " + kv +
+              " V: " + Bytes.toStringBinary(kv.getValue()));
+          }
+          // check if rows are in order
+          if (checkRow && pkv != null) {
+            if (Bytes.compareTo(pkv.getRow(), kv.getRow()) > 0) {
+              System.err.println("WARNING, previous row is greater then" +
+                " current row\n\tfilename -> " + file +
+                "\n\tprevious -> " + Bytes.toStringBinary(pkv.getKey()) +
+                "\n\tcurrent  -> " + Bytes.toStringBinary(kv.getKey()));
+            }
+          }
+          // check if families are consistent
+          if (checkFamily) {
+            String fam = Bytes.toString(kv.getFamily());
+            if (!file.toString().contains(fam)) {
+              System.err.println("WARNING, filename does not match kv family," +
+                "\n\tfilename -> " + file +
+                "\n\tkeyvalue -> " + Bytes.toStringBinary(kv.getKey()));
+            }
+            if (pkv != null && Bytes.compareTo(pkv.getFamily(), kv.getFamily()) != 0) {
+              System.err.println("WARNING, previous kv has different family" +
+                " compared to current key\n\tfilename -> " + file +
+                "\n\tprevious -> " +  Bytes.toStringBinary(pkv.getKey()) +
+                "\n\tcurrent  -> " + Bytes.toStringBinary(kv.getKey()));
+            }
+          }
+          pkv = kv;
+          count++;
+        } while (scanner.next());
+        if (verbose || printKeyValue) {
+          System.out.println("Scanned kv count -> " + count);
+        }
+        // print meta data
+        if (printMeta) {
+          System.out.println("Block index size as per heapsize: " + reader.indexSize());
+          System.out.println(reader.toString());
+          System.out.println(reader.getTrailerInfo());
+          System.out.println("Fileinfo:");
+          for (Map.Entry<byte[], byte[]> e : fileInfo.entrySet()) {
+            System.out.print(Bytes.toString(e.getKey()) + " = " );
+            if (Bytes.compareTo(e.getKey(), Bytes.toBytes("MAX_SEQ_ID_KEY"))==0) {
+              long seqid = Bytes.toLong(e.getValue());
+              System.out.println(seqid);
+            } else if (Bytes.compareTo(e.getKey(),
+                Bytes.toBytes("TIMERANGE")) == 0) {
+              TimeRangeTracker timeRangeTracker = new TimeRangeTracker();
+              Writables.copyWritable(e.getValue(), timeRangeTracker);
+              System.out.println(timeRangeTracker.getMinimumTimestamp() +
+                  "...." + timeRangeTracker.getMaximumTimestamp());
+            } else if (Bytes.compareTo(e.getKey(), FileInfo.AVG_KEY_LEN) == 0 ||
+                Bytes.compareTo(e.getKey(), FileInfo.AVG_VALUE_LEN) == 0) {
+              System.out.println(Bytes.toInt(e.getValue()));
+            } else {
+              System.out.println(Bytes.toStringBinary(e.getValue()));
+            }
+          }
+
+          //Printing bloom information
+          ByteBuffer b = reader.getMetaBlock("BLOOM_FILTER_META", false);
+          if (b!= null) {
+            BloomFilter bloomFilter = new ByteBloomFilter(b);
+            System.out.println("BloomSize: " + bloomFilter.getByteSize());
+            System.out.println("No of Keys in bloom: " +
+                bloomFilter.getKeyCount());
+            System.out.println("Max Keys for bloom: " +
+                bloomFilter.getMaxKeys());
+          } else {
+            System.out.println("Could not get bloom data from meta block");
+          }
+        }
+        reader.close();
+      }
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
new file mode 100644
index 0000000..b06878f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java
@@ -0,0 +1,146 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * A scanner allows you to position yourself within an HFile and
+ * scan through it.  It allows you to reposition yourself as well.
+ *
+ * <p>A scanner doesn't always have a key/value that it is pointing to
+ * when it is first created and before
+ * {@link #seekTo()}/{@link #seekTo(byte[])} are called.
+ * In this case, {@link #getKey()}/{@link #getValue()} return null.  At most
+ * other times, a key and value will be available.  The general pattern is that
+ * you position the Scanner using the seekTo variants and then getKey and
+ * getValue.
+ */
+public interface HFileScanner {
+  /**
+   * Seek to or just before the passed <code>key</code>.  Examine the return
+   * code to figure whether we found the key or not.
+   * Consider the key stream of all the keys in the file,
+   * <code>k[0] .. k[n]</code>, where there are n keys in the file.
+   * @param key Key to find.
+   * @return -1, if key < k[0], no position;
+   * 0, such that k[i] = key and scanner is left in position i; and
+   * 1, such that k[i] < key, and scanner is left in position i.
+   * The scanner will position itself between k[i] and k[i+1] where
+   * k[i] < key <= k[i+1].
+   * If there is no key k[i+1] greater than or equal to the input key, then the
+   * scanner will position itself at the end of the file and next() will return
+   * false when it is called.
+   * @throws IOException
+   */
+  public int seekTo(byte[] key) throws IOException;
+  public int seekTo(byte[] key, int offset, int length) throws IOException;
+  /**
+   * Reseek to or just before the passed <code>key</code>. Similar to seekTo
+   * except that this can be called even if the scanner is not at the beginning
+   * of a file.
+   * This can be used to seek only to keys which come after the current position
+   * of the scanner.
+   * Consider the key stream of all the keys in the file,
+   * <code>k[0] .. k[n]</code>, where there are n keys in the file after
+   * current position of HFileScanner.
+   * The scanner will position itself between k[i] and k[i+1] where
+   * k[i] < key <= k[i+1].
+   * If there is no key k[i+1] greater than or equal to the input key, then the
+   * scanner will position itself at the end of the file and next() will return
+   * false when it is called.
+   * @param key Key to find (should be non-null)
+   * @return -1, if key < k[0], no position;
+   * 0, such that k[i] = key and scanner is left in position i; and
+   * 1, such that k[i] < key, and scanner is left in position i.
+   * @throws IOException
+   */
+  public int reseekTo(byte[] key) throws IOException;
+  public int reseekTo(byte[] key, int offset, int length) throws IOException;
+  /**
+   * Consider the key stream of all the keys in the file,
+   * <code>k[0] .. k[n]</code>, where there are n keys in the file.
+   * @param key Key to find
+   * @return false if key <= k[0] or true with scanner in position 'i' such
+   * that: k[i] < key.  Furthermore: there may be a k[i+1], such that
+   * k[i] < key <= k[i+1] but there may also NOT be a k[i+1], and next() will
+   * return false (EOF).
+   * @throws IOException
+   */
+  public boolean seekBefore(byte [] key) throws IOException;
+  public boolean seekBefore(byte []key, int offset, int length) throws IOException;
+  /**
+   * Positions this scanner at the start of the file.
+   * @return False if empty file; i.e. a call to next would return false and
+   * the current key and value are undefined.
+   * @throws IOException
+   */
+  public boolean seekTo() throws IOException;
+  /**
+   * Scans to the next entry in the file.
+   * @return Returns false if you are at the end, otherwise true if there is
+   * more in the file.
+   * @throws IOException
+   */
+  public boolean next() throws IOException;
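+  /*
+   * Typical read loop against this interface (illustrative sketch only;
+   * <code>reader</code> is assumed to be an already opened HFile.Reader):
+   *
+   *   HFileScanner scanner = reader.getScanner(false, true);
+   *   if (scanner.seekTo()) {                  // position at the first key
+   *     do {
+   *       ByteBuffer key = scanner.getKey();   // view over the current key
+   *       ByteBuffer value = scanner.getValue();
+   *       // ... examine key and value ...
+   *     } while (scanner.next());
+   *   }
+   */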
+  /**
+   * Gets a buffer view to the current key. You must call
+   * {@link #seekTo(byte[])} before this method.
+   * @return byte buffer for the key. The limit is set to the key size, and the
+   * position is 0, the start of the buffer view.
+   */
+  public ByteBuffer getKey();
+  /**
+   * Gets a buffer view to the current value.  You must call
+   * {@link #seekTo(byte[])} before this method.
+   *
+   * @return byte buffer for the value. The limit is set to the value size, and
+   * the position is 0, the start of the buffer view.
+   */
+  public ByteBuffer getValue();
+  /**
+   * @return Instance of {@link KeyValue}.
+   */
+  public KeyValue getKeyValue();
+  /**
+   * Convenience method to get a copy of the key as a string - interpreting the
+   * bytes as UTF8. You must call {@link #seekTo(byte[])} before this method.
+   * @return key as a string
+   */
+  public String getKeyString();
+  /**
+   * Convenience method to get a copy of the value as a string - interpreting
+   * the bytes as UTF8. You must call {@link #seekTo(byte[])} before this method.
+   * @return value as a string
+   */
+  public String getValueString();
+  /**
+   * @return Reader that underlies this Scanner instance.
+   */
+  public HFile.Reader getReader();
+  /**
+   * @return True if the scanner has had one of the seek calls invoked; i.e.
+   * {@link #seekBefore(byte[])} or {@link #seekTo()} or {@link #seekTo(byte[])}.
+   * Otherwise returns false.
+   */
+  public boolean isSeeked();
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
new file mode 100644
index 0000000..4ecad53
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
@@ -0,0 +1,710 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.lang.ref.WeakReference;
+import java.nio.ByteBuffer;
+import java.util.LinkedList;
+import java.util.PriorityQueue;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * A block cache implementation that is memory-aware using {@link HeapSize},
+ * memory-bound using an LRU eviction algorithm, and concurrent: backed by a
+ * {@link ConcurrentHashMap} and with a non-blocking eviction thread giving
+ * constant-time {@link #cacheBlock} and {@link #getBlock} operations.<p>
+ *
+ * Contains three levels of block priority to allow for
+ * scan-resistance and in-memory families.  A block is added with an inMemory
+ * flag if necessary, otherwise a block becomes a single access priority.  Once
+ * a block is accessed again, it changes to multiple access.  This is used
+ * to prevent scans from thrashing the cache, adding a least-frequently-used
+ * element to the eviction algorithm.<p>
+ *
+ * Each priority is given its own chunk of the total cache to ensure
+ * fairness during eviction.  Each priority will retain close to its maximum
+ * size, however, if any priority is not using its entire chunk the others
+ * are able to grow beyond their chunk size.<p>
+ *
+ * Instantiated at a minimum with the total size and average block size.
+ * All sizes are in bytes.  The block size is not especially important as this
+ * cache is fully dynamic in its sizing of blocks.  It is only used for
+ * pre-allocating data structures and in initial heap estimation of the map.<p>
+ *
+ * The detailed constructor defines the sizes for the three priorities (they
+ * should total to the maximum size defined).  It also sets the levels that
+ * trigger and control the eviction thread.<p>
+ *
+ * The acceptable size is the cache size level which triggers the eviction
+ * process to start.  It evicts enough blocks to get the size below the
+ * minimum size specified.<p>
+ *
+ * Eviction happens in a separate thread and involves a single full-scan
+ * of the map.  It determines how many bytes must be freed to reach the minimum
+ * size, and then while scanning determines the fewest least-recently-used
+ * blocks necessary from each of the three priorities (would be 3 times bytes
+ * to free).  It then uses the priority chunk sizes to evict fairly according
+ * to the relative sizes and usage.
+ */
+public class LruBlockCache implements BlockCache, HeapSize {
+
+  static final Log LOG = LogFactory.getLog(LruBlockCache.class);
+
+  /** Default Configuration Parameters*/
+
+  /** Backing Concurrent Map Configuration */
+  static final float DEFAULT_LOAD_FACTOR = 0.75f;
+  static final int DEFAULT_CONCURRENCY_LEVEL = 16;
+
+  /** Eviction thresholds */
+  static final float DEFAULT_MIN_FACTOR = 0.75f;
+  static final float DEFAULT_ACCEPTABLE_FACTOR = 0.85f;
+
+  /** Priority buckets */
+  static final float DEFAULT_SINGLE_FACTOR = 0.25f;
+  static final float DEFAULT_MULTI_FACTOR = 0.50f;
+  static final float DEFAULT_MEMORY_FACTOR = 0.25f;
+
+  /** Statistics thread */
+  static final int statThreadPeriod = 60 * 5;
+
+  /** Concurrent map (the cache) */
+  private final ConcurrentHashMap<String,CachedBlock> map;
+
+  /** Eviction lock (locked when eviction in process) */
+  private final ReentrantLock evictionLock = new ReentrantLock(true);
+
+  /** Volatile boolean to track if we are in an eviction process or not */
+  private volatile boolean evictionInProgress = false;
+
+  /** Eviction thread */
+  private final EvictionThread evictionThread;
+
+  /** Statistics thread schedule pool (for heavy debugging, could remove) */
+  private final ScheduledExecutorService scheduleThreadPool =
+    Executors.newScheduledThreadPool(1);
+
+  /** Current size of cache */
+  private final AtomicLong size;
+
+  /** Current number of cached elements */
+  private final AtomicLong elements;
+
+  /** Cache access count (sequential ID) */
+  private final AtomicLong count;
+
+  /** Cache statistics */
+  private final CacheStats stats;
+
+  /** Maximum allowable size of cache (block put if size > max, evict) */
+  private long maxSize;
+
+  /** Approximate block size */
+  private long blockSize;
+
+  /** Acceptable size of cache (no evictions if size < acceptable) */
+  private float acceptableFactor;
+
+  /** Minimum threshold of cache (when evicting, evict until size < min) */
+  private float minFactor;
+
+  /** Single access bucket size */
+  private float singleFactor;
+
+  /** Multiple access bucket size */
+  private float multiFactor;
+
+  /** In-memory bucket size */
+  private float memoryFactor;
+
+  /** Overhead of the structure itself */
+  private long overhead;
+
+  /**
+   * Default constructor.  Specify maximum size and expected average block
+   * size (approximation is fine).
+   *
+   * <p>All other factors will be calculated based on defaults specified in
+   * this class.
+   * @param maxSize maximum size of cache, in bytes
+   * @param blockSize approximate size of each block, in bytes
+   */
+  public LruBlockCache(long maxSize, long blockSize) {
+    this(maxSize, blockSize, true);
+  }
+
+  /**
+   * Constructor used for testing.  Allows disabling of the eviction thread.
+   */
+  public LruBlockCache(long maxSize, long blockSize, boolean evictionThread) {
+    this(maxSize, blockSize, evictionThread,
+        (int)Math.ceil(1.2*maxSize/blockSize),
+        DEFAULT_LOAD_FACTOR, DEFAULT_CONCURRENCY_LEVEL,
+        DEFAULT_MIN_FACTOR, DEFAULT_ACCEPTABLE_FACTOR,
+        DEFAULT_SINGLE_FACTOR, DEFAULT_MULTI_FACTOR,
+        DEFAULT_MEMORY_FACTOR);
+  }
+
+  /**
+   * Configurable constructor.  Use this constructor if not using defaults.
+   * @param maxSize maximum size of this cache, in bytes
+   * @param blockSize expected average size of blocks, in bytes
+   * @param evictionThread whether to run evictions in a bg thread or not
+   * @param mapInitialSize initial size of backing ConcurrentHashMap
+   * @param mapLoadFactor initial load factor of backing ConcurrentHashMap
+   * @param mapConcurrencyLevel initial concurrency factor for backing CHM
+   * @param minFactor percentage of total size that eviction will evict until
+   * @param acceptableFactor percentage of total size that triggers eviction
+   * @param singleFactor percentage of total size for single-access blocks
+   * @param multiFactor percentage of total size for multiple-access blocks
+   * @param memoryFactor percentage of total size for in-memory blocks
+   */
+  public LruBlockCache(long maxSize, long blockSize, boolean evictionThread,
+      int mapInitialSize, float mapLoadFactor, int mapConcurrencyLevel,
+      float minFactor, float acceptableFactor,
+      float singleFactor, float multiFactor, float memoryFactor) {
+    if(singleFactor + multiFactor + memoryFactor != 1) {
+      throw new IllegalArgumentException("Single, multi, and memory factors " +
+          " should total 1.0");
+    }
+    if(minFactor >= acceptableFactor) {
+      throw new IllegalArgumentException("minFactor must be smaller than acceptableFactor");
+    }
+    if(minFactor >= 1.0f || acceptableFactor >= 1.0f) {
+      throw new IllegalArgumentException("all factors must be < 1");
+    }
+    this.maxSize = maxSize;
+    this.blockSize = blockSize;
+    map = new ConcurrentHashMap<String,CachedBlock>(mapInitialSize,
+        mapLoadFactor, mapConcurrencyLevel);
+    this.minFactor = minFactor;
+    this.acceptableFactor = acceptableFactor;
+    this.singleFactor = singleFactor;
+    this.multiFactor = multiFactor;
+    this.memoryFactor = memoryFactor;
+    this.stats = new CacheStats();
+    this.count = new AtomicLong(0);
+    this.elements = new AtomicLong(0);
+    this.overhead = calculateOverhead(maxSize, blockSize, mapConcurrencyLevel);
+    this.size = new AtomicLong(this.overhead);
+    if(evictionThread) {
+      this.evictionThread = new EvictionThread(this);
+      this.evictionThread.start(); // FindBugs SC_START_IN_CTOR
+    } else {
+      this.evictionThread = null;
+    }
+    this.scheduleThreadPool.scheduleAtFixedRate(new StatisticsThread(this),
+        statThreadPeriod, statThreadPeriod, TimeUnit.SECONDS);
+  }
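+
+  /* A minimal usage sketch (sizes here are assumptions, not recommendations):
+   * construct the cache with a total size and an approximate block size and
+   * let every other factor fall back to the defaults defined above.
+   *
+   *   LruBlockCache cache = new LruBlockCache(
+   *       64 * 1024 * 1024L,   // maxSize: 64 MB
+   *       64 * 1024L);         // blockSize: ~64 KB per block (approximate)
+   *   // "buffer" stands for a ByteBuffer the caller has already filled
+   *   cache.cacheBlock("somefile_0", buffer);
+   *   ByteBuffer cached = cache.getBlock("somefile_0", true);
+   */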
+
+  public void setMaxSize(long maxSize) {
+    this.maxSize = maxSize;
+    if(this.size.get() > acceptableSize() && !evictionInProgress) {
+      runEviction();
+    }
+  }
+
+  // BlockCache implementation
+
+  /**
+   * Cache the block with the specified name and buffer.
+   * <p>
+   * It is assumed this will NEVER be called on an already cached block.  If
+   * that is attempted, a RuntimeException is thrown; callers are expected to
+   * check the cache before inserting.
+   * @param blockName block name
+   * @param buf block buffer
+   * @param inMemory if block is in-memory
+   */
+  public void cacheBlock(String blockName, ByteBuffer buf, boolean inMemory) {
+    CachedBlock cb = map.get(blockName);
+    if(cb != null) {
+      throw new RuntimeException("Cached an already cached block");
+    }
+    cb = new CachedBlock(blockName, buf, count.incrementAndGet(), inMemory);
+    long newSize = size.addAndGet(cb.heapSize());
+    map.put(blockName, cb);
+    elements.incrementAndGet();
+    if(newSize > acceptableSize() && !evictionInProgress) {
+      runEviction();
+    }
+  }
+
+  /**
+   * Cache the block with the specified name and buffer.
+   * <p>
+   * It is assumed this will NEVER be called on an already cached block.  If
+   * that is attempted, a RuntimeException is thrown; callers are expected to
+   * check the cache before inserting.
+   * @param blockName block name
+   * @param buf block buffer
+   */
+  public void cacheBlock(String blockName, ByteBuffer buf) {
+    cacheBlock(blockName, buf, false);
+  }
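+
+  /* Caller-side sketch of the contract above (hypothetical names): check the
+   * cache before inserting, and only cache what was not already present.
+   *
+   *   ByteBuffer cached = cache.getBlock(name, true);
+   *   if (cached == null) {
+   *     cached = readBlockFromDisk(name);  // assumed helper, not part of this class
+   *     cache.cacheBlock(name, cached);
+   *   }
+   */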
+
+  /**
+   * Get the buffer of the block with the specified name.
+   * @param blockName block name
+   * @return buffer of specified block name, or null if not in cache
+   */
+  public ByteBuffer getBlock(String blockName, boolean caching) {
+    CachedBlock cb = map.get(blockName);
+    if(cb == null) {
+      stats.miss(caching);
+      return null;
+    }
+    stats.hit(caching);
+    cb.access(count.incrementAndGet());
+    return cb.getBuffer();
+  }
+
+  protected long evictBlock(CachedBlock block) {
+    map.remove(block.getName());
+    size.addAndGet(-1 * block.heapSize());
+    elements.decrementAndGet();
+    stats.evicted();
+    return block.heapSize();
+  }
+
+  /**
+   * Trigger an eviction run.  Evicts inline when no eviction thread is
+   * configured, otherwise wakes the eviction thread.
+   */
+  private void runEviction() {
+    if(evictionThread == null) {
+      evict();
+    } else {
+      evictionThread.evict();
+    }
+  }
+
+  /**
+   * Eviction method.
+   */
+  void evict() {
+
+    // Ensure only one eviction at a time
+    if(!evictionLock.tryLock()) return;
+
+    try {
+      evictionInProgress = true;
+      long currentSize = this.size.get();
+      long bytesToFree = currentSize - minSize();
+
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Block cache LRU eviction started; Attempting to free " +
+          StringUtils.byteDesc(bytesToFree) + " of total=" +
+          StringUtils.byteDesc(currentSize));
+      }
+
+      if(bytesToFree <= 0) return;
+
+      // Instantiate priority buckets
+      BlockBucket bucketSingle = new BlockBucket(bytesToFree, blockSize,
+          singleSize());
+      BlockBucket bucketMulti = new BlockBucket(bytesToFree, blockSize,
+          multiSize());
+      BlockBucket bucketMemory = new BlockBucket(bytesToFree, blockSize,
+          memorySize());
+
+      // Scan entire map putting into appropriate buckets
+      for(CachedBlock cachedBlock : map.values()) {
+        switch(cachedBlock.getPriority()) {
+          case SINGLE: {
+            bucketSingle.add(cachedBlock);
+            break;
+          }
+          case MULTI: {
+            bucketMulti.add(cachedBlock);
+            break;
+          }
+          case MEMORY: {
+            bucketMemory.add(cachedBlock);
+            break;
+          }
+        }
+      }
+
+      PriorityQueue<BlockBucket> bucketQueue =
+        new PriorityQueue<BlockBucket>(3);
+
+      bucketQueue.add(bucketSingle);
+      bucketQueue.add(bucketMulti);
+      bucketQueue.add(bucketMemory);
+
+      int remainingBuckets = 3;
+      long bytesFreed = 0;
+
+      BlockBucket bucket;
+      while((bucket = bucketQueue.poll()) != null) {
+        long overflow = bucket.overflow();
+        if(overflow > 0) {
+          long bucketBytesToFree = Math.min(overflow,
+            (bytesToFree - bytesFreed) / remainingBuckets);
+          bytesFreed += bucket.free(bucketBytesToFree);
+        }
+        remainingBuckets--;
+      }
+
+      if (LOG.isDebugEnabled()) {
+        long single = bucketSingle.totalSize();
+        long multi = bucketMulti.totalSize();
+        long memory = bucketMemory.totalSize();
+        LOG.debug("Block cache LRU eviction completed; " +
+          "freed=" + StringUtils.byteDesc(bytesFreed) + ", " +
+          "total=" + StringUtils.byteDesc(this.size.get()) + ", " +
+          "single=" + StringUtils.byteDesc(single) + ", " +
+          "multi=" + StringUtils.byteDesc(multi) + ", " +
+          "memory=" + StringUtils.byteDesc(memory));
+      }
+    } finally {
+      stats.evict();
+      evictionInProgress = false;
+      evictionLock.unlock();
+    }
+  }
+
+  /**
+   * Used to group blocks into priority buckets.  There will be a BlockBucket
+   * for each priority (single, multi, memory).  Once bucketed, the eviction
+   * algorithm takes the appropriate number of elements out of each according
+   * to configuration parameters and their relative sizes.
+   */
+  private class BlockBucket implements Comparable<BlockBucket> {
+
+    private CachedBlockQueue queue;
+    private long totalSize = 0;
+    private long bucketSize;
+
+    public BlockBucket(long bytesToFree, long blockSize, long bucketSize) {
+      this.bucketSize = bucketSize;
+      queue = new CachedBlockQueue(bytesToFree, blockSize);
+      totalSize = 0;
+    }
+
+    public void add(CachedBlock block) {
+      totalSize += block.heapSize();
+      queue.add(block);
+    }
+
+    public long free(long toFree) {
+      LinkedList<CachedBlock> blocks = queue.get();
+      long freedBytes = 0;
+      for(CachedBlock cb: blocks) {
+        freedBytes += evictBlock(cb);
+        if(freedBytes >= toFree) {
+          return freedBytes;
+        }
+      }
+      return freedBytes;
+    }
+
+    public long overflow() {
+      return totalSize - bucketSize;
+    }
+
+    public long totalSize() {
+      return totalSize;
+    }
+
+    public int compareTo(BlockBucket that) {
+      if(this.overflow() == that.overflow()) return 0;
+      return this.overflow() > that.overflow() ? 1 : -1;
+    }
+  }
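+
+  /* Worked example of the bucketing above (numbers are assumptions): with
+   * maxSize=100MB, minFactor=0.75 and the default single/multi/memory factors
+   * of 0.25/0.50/0.25, the bucket limits are 18.75MB/37.5MB/18.75MB.  If an
+   * eviction run must free 10MB and only the multi bucket is over its limit,
+   * the loop in evict() takes all 10MB from the multi bucket; if several
+   * buckets overflow, the remaining shortfall is split between them.
+   */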
+
+  /**
+   * Get the maximum size of this cache.
+   * @return max size in bytes
+   */
+  public long getMaxSize() {
+    return this.maxSize;
+  }
+
+  /**
+   * Get the current size of this cache.
+   * @return current size in bytes
+   */
+  public long getCurrentSize() {
+    return this.size.get();
+  }
+
+  /**
+   * Get the amount of free space left in this cache.
+   * @return free space in bytes
+   */
+  public long getFreeSize() {
+    return getMaxSize() - getCurrentSize();
+  }
+
+  /**
+   * Get the size of this cache (number of cached blocks)
+   * @return number of cached blocks
+   */
+  public long size() {
+    return this.elements.get();
+  }
+
+  /**
+   * Get the number of eviction runs that have occurred
+   */
+  public long getEvictionCount() {
+    return this.stats.getEvictionCount();
+  }
+
+  /**
+   * Get the number of blocks that have been evicted during the lifetime
+   * of this cache.
+   */
+  public long getEvictedCount() {
+    return this.stats.getEvictedCount();
+  }
+
+  /*
+   * Eviction thread.  Sits in waiting state until an eviction is triggered
+   * when the cache size grows above the acceptable level.<p>
+   *
+   * Thread is triggered into action by {@link LruBlockCache#runEviction()}
+   */
+  private static class EvictionThread extends Thread {
+    private WeakReference<LruBlockCache> cache;
+
+    public EvictionThread(LruBlockCache cache) {
+      super("LruBlockCache.EvictionThread");
+      setDaemon(true);
+      this.cache = new WeakReference<LruBlockCache>(cache);
+    }
+
+    @Override
+    public void run() {
+      while(true) {
+        synchronized(this) {
+          try {
+            this.wait();
+          } catch(InterruptedException e) {}
+        }
+        LruBlockCache cache = this.cache.get();
+        if(cache == null) break;
+        cache.evict();
+      }
+    }
+    public void evict() {
+      synchronized(this) {
+        this.notify(); // FindBugs NN_NAKED_NOTIFY
+      }
+    }
+  }
+
+  /*
+   * Statistics thread.  Periodically prints the cache statistics to the log.
+   */
+  static class StatisticsThread extends Thread {
+    LruBlockCache lru;
+
+    public StatisticsThread(LruBlockCache lru) {
+      super("LruBlockCache.StatisticsThread");
+      setDaemon(true);
+      this.lru = lru;
+    }
+    @Override
+    public void run() {
+      lru.logStats();
+    }
+  }
+
+  public void logStats() {
+    if (!LOG.isDebugEnabled()) return;
+    // Log size
+    long totalSize = heapSize();
+    long freeSize = maxSize - totalSize;
+    LruBlockCache.LOG.debug("LRU Stats: " +
+        "total=" + StringUtils.byteDesc(totalSize) + ", " +
+        "free=" + StringUtils.byteDesc(freeSize) + ", " +
+        "max=" + StringUtils.byteDesc(this.maxSize) + ", " +
+        "blocks=" + size() +", " +
+        "accesses=" + stats.getRequestCount() + ", " +
+        "hits=" + stats.getHitCount() + ", " +
+        "hitRatio=" + StringUtils.formatPercent(stats.getHitRatio(), 2) + "%, "+
+        "cachingAccesses=" + stats.getRequestCachingCount() + ", " +
+        "cachingHits=" + stats.getHitCachingCount() + ", " +
+        "cachingHitsRatio=" +
+          StringUtils.formatPercent(stats.getHitCachingRatio(), 2) + "%, " +
+        "evictions=" + stats.getEvictionCount() + ", " +
+        "evicted=" + stats.getEvictedCount() + ", " +
+        "evictedPerRun=" + stats.evictedPerEviction());
+  }
+
+  /**
+   * Get counter statistics for this cache.
+   *
+   * <p>Includes: total accesses, hits, misses, evicted blocks, and runs
+   * of the eviction processes.
+   */
+  public CacheStats getStats() {
+    return this.stats;
+  }
+
+  public static class CacheStats {
+    /** The number of getBlock requests that were cache hits */
+    private final AtomicLong hitCount = new AtomicLong(0);
+    /**
+     * The number of getBlock requests that were cache hits, but only from
+     * requests that were set to use the block cache.  This is because all reads
+     * attempt to read from the block cache even if they will not put new blocks
+     * into the block cache.  See HBASE-2253 for more information.
+     */
+    private final AtomicLong hitCachingCount = new AtomicLong(0);
+    /** The number of getBlock requests that were cache misses */
+    private final AtomicLong missCount = new AtomicLong(0);
+    /**
+     * The number of getBlock requests that were cache misses, but only from
+     * requests that were set to use the block cache.
+     */
+    private final AtomicLong missCachingCount = new AtomicLong(0);
+    /** The number of times an eviction has occurred */
+    private final AtomicLong evictionCount = new AtomicLong(0);
+    /** The total number of blocks that have been evicted */
+    private final AtomicLong evictedCount = new AtomicLong(0);
+
+    public void miss(boolean caching) {
+      missCount.incrementAndGet();
+      if (caching) missCachingCount.incrementAndGet();
+    }
+
+    public void hit(boolean caching) {
+      hitCount.incrementAndGet();
+      if (caching) hitCachingCount.incrementAndGet();
+    }
+
+    public void evict() {
+      evictionCount.incrementAndGet();
+    }
+
+    public void evicted() {
+      evictedCount.incrementAndGet();
+    }
+
+    public long getRequestCount() {
+      return getHitCount() + getMissCount();
+    }
+
+    public long getRequestCachingCount() {
+      return getHitCachingCount() + getMissCachingCount();
+    }
+
+    public long getMissCount() {
+      return missCount.get();
+    }
+
+    public long getMissCachingCount() {
+      return missCachingCount.get();
+    }
+
+    public long getHitCount() {
+      return hitCount.get();
+    }
+
+    public long getHitCachingCount() {
+      return hitCachingCount.get();
+    }
+
+    public long getEvictionCount() {
+      return evictionCount.get();
+    }
+
+    public long getEvictedCount() {
+      return evictedCount.get();
+    }
+
+    public double getHitRatio() {
+      return ((float)getHitCount()/(float)getRequestCount());
+    }
+
+    public double getHitCachingRatio() {
+      return ((float)getHitCachingCount()/(float)getRequestCachingCount());
+    }
+
+    public double getMissRatio() {
+      return ((float)getMissCount()/(float)getRequestCount());
+    }
+
+    public double getMissCachingRatio() {
+      return ((float)getMissCachingCount()/(float)getRequestCachingCount());
+    }
+
+    public double evictedPerEviction() {
+      return ((float)getEvictedCount()/(float)getEvictionCount());
+    }
+  }
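+
+  /* Reading the statistics (sketch): ratios are derived from the raw
+   * counters, e.g. getHitRatio() == hitCount / (hitCount + missCount).
+   *
+   *   CacheStats s = cache.getStats();
+   *   double overallHitRatio = s.getHitRatio();        // all getBlock() calls
+   *   double cachingHitRatio = s.getHitCachingRatio(); // caching reads only
+   */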
+
+  public final static long CACHE_FIXED_OVERHEAD = ClassSize.align(
+      (3 * Bytes.SIZEOF_LONG) + (8 * ClassSize.REFERENCE) +
+      (5 * Bytes.SIZEOF_FLOAT) + Bytes.SIZEOF_BOOLEAN
+      + ClassSize.OBJECT);
+
+  // HeapSize implementation
+  public long heapSize() {
+    return getCurrentSize();
+  }
+
+  public static long calculateOverhead(long maxSize, long blockSize, int concurrency){
+    // FindBugs ICAST_INTEGER_MULTIPLY_CAST_TO_LONG
+    return CACHE_FIXED_OVERHEAD + ClassSize.CONCURRENT_HASHMAP +
+        ((long)Math.ceil(maxSize*1.2/blockSize)
+            * ClassSize.CONCURRENT_HASHMAP_ENTRY) +
+        (concurrency * ClassSize.CONCURRENT_HASHMAP_SEGMENT);
+  }
+
+  // Simple calculators of sizes given factors and maxSize
+
+  private long acceptableSize() {
+    return (long)Math.floor(this.maxSize * this.acceptableFactor);
+  }
+  private long minSize() {
+    return (long)Math.floor(this.maxSize * this.minFactor);
+  }
+  private long singleSize() {
+    return (long)Math.floor(this.maxSize * this.singleFactor * this.minFactor);
+  }
+  private long multiSize() {
+    return (long)Math.floor(this.maxSize * this.multiFactor * this.minFactor);
+  }
+  private long memorySize() {
+    return (long)Math.floor(this.maxSize * this.memoryFactor * this.minFactor);
+  }
+
+  public void shutdown() {
+    this.scheduleThreadPool.shutdown();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java
new file mode 100644
index 0000000..088333f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/io/hfile/SimpleBlockCache.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.lang.ref.ReferenceQueue;
+import java.lang.ref.SoftReference;
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Map;
+
+
+/**
+ * Simple one RFile soft reference cache.
+ */
+public class SimpleBlockCache implements BlockCache {
+  private static class Ref extends SoftReference<ByteBuffer> {
+    public String blockId;
+    public Ref(String blockId, ByteBuffer buf, ReferenceQueue q) {
+      super(buf, q);
+      this.blockId = blockId;
+    }
+  }
+  private Map<String,Ref> cache =
+    new HashMap<String,Ref>();
+
+  private ReferenceQueue q = new ReferenceQueue();
+  public int dumps = 0;
+
+  /**
+   * Constructor
+   */
+  public SimpleBlockCache() {
+    super();
+  }
+
+  void processQueue() {
+    Ref r;
+    while ( (r = (Ref)q.poll()) != null) {
+      cache.remove(r.blockId);
+      dumps++;
+    }
+  }
+
+  /**
+   * @return the size
+   */
+  public synchronized int size() {
+    processQueue();
+    return cache.size();
+  }
+
+  public synchronized ByteBuffer getBlock(String blockName, boolean caching) {
+    processQueue(); // clear out some crap.
+    Ref ref = cache.get(blockName);
+    if (ref == null)
+      return null;
+    return ref.get();
+  }
+
+  public synchronized void cacheBlock(String blockName, ByteBuffer buf) {
+    cache.put(blockName, new Ref(blockName, buf, q));
+  }
+
+  public synchronized void cacheBlock(String blockName, ByteBuffer buf,
+      boolean inMemory) {
+    cache.put(blockName, new Ref(blockName, buf, q));
+  }
+
+  public void shutdown() {
+    // noop
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/ByteBufferOutputStream.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/ByteBufferOutputStream.java
new file mode 100644
index 0000000..4d8ecbd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/ByteBufferOutputStream.java
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.nio.ByteBuffer;
+
+/**
+ * Not thread safe!
+ */
+public class ByteBufferOutputStream extends OutputStream {
+
+  protected ByteBuffer buf;
+
+  public ByteBufferOutputStream(int capacity) {
+    this(capacity, false);
+  }
+
+  public ByteBufferOutputStream(int capacity, boolean useDirectByteBuffer) {
+    if (useDirectByteBuffer) {
+      buf = ByteBuffer.allocateDirect(capacity);
+    } else {
+      buf = ByteBuffer.allocate(capacity);
+    }
+  }
+
+  public int size() {
+    return buf.position();
+  }
+
+  /**
+   * This flips the underlying BB so be sure to use it _last_!
+   * @return ByteBuffer
+   */
+  public ByteBuffer getByteBuffer() {
+    buf.flip();
+    return buf;
+  }
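+
+  /* Typical usage (sketch): write everything first, then call getByteBuffer()
+   * exactly once, since it flips the underlying buffer for reading.
+   *
+   *   ByteBufferOutputStream bbos = new ByteBufferOutputStream(1024);
+   *   bbos.write(Bytes.toBytes("payload"));   // example payload
+   *   ByteBuffer bb = bbos.getByteBuffer();   // position=0, limit=bytes written
+   */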
+
+  private void checkSizeAndGrow(int extra) {
+    if ( (buf.position() + extra) > buf.limit()) {
+      // size calculation is complex, because we could overflow negative,
+      // and/or not allocate enough space. this fixes that.
+      int newSize = (int)Math.min((((long)buf.capacity()) * 2),
+          (long)(Integer.MAX_VALUE));
+      newSize = Math.max(newSize, buf.position() + extra);
+
+      ByteBuffer newBuf = ByteBuffer.allocate(newSize);
+      buf.flip();
+      newBuf.put(buf);
+      buf = newBuf;
+    }
+  }
+
+  // OutputStream
+  @Override
+  public void write(int b) throws IOException {
+    checkSizeAndGrow(Bytes.SIZEOF_BYTE);
+
+    buf.put((byte)b);
+  }
+
+  @Override
+  public void write(byte[] b) throws IOException {
+    checkSizeAndGrow(b.length);
+
+    buf.put(b);
+  }
+
+  @Override
+  public void write(byte[] b, int off, int len) throws IOException {
+    checkSizeAndGrow(len);
+
+    buf.put(b, off, len);
+  }
+
+  @Override
+  public void flush() throws IOException {
+    // noop
+  }
+
+  @Override
+  public void close() throws IOException {
+    // noop again. heh
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
new file mode 100644
index 0000000..10d38de
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseClient.java
@@ -0,0 +1,908 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.ReflectionUtils;
+
+import javax.net.SocketFactory;
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.FilterInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.ConnectException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+import java.net.SocketTimeoutException;
+import java.net.UnknownHostException;
+import java.util.Hashtable;
+import java.util.Iterator;
+import java.util.Map.Entry;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+/** A client for an IPC service.  IPC calls take a single {@link Writable} as a
+ * parameter, and return a {@link Writable} as their value.  A service runs on
+ * a port and is defined by a parameter class and a value class.
+ *
+ * <p>This is the org.apache.hadoop.ipc.Client renamed as HBaseClient and
+ * moved into this package so it can access package-private methods.
+ *
+ * @see HBaseServer
+ */
+public class HBaseClient {
+
+  private static final Log LOG =
+    LogFactory.getLog("org.apache.hadoop.ipc.HBaseClient");
+  protected final Hashtable<ConnectionId, Connection> connections =
+    new Hashtable<ConnectionId, Connection>();
+
+  protected final Class<? extends Writable> valueClass;   // class of call values
+  protected int counter;                            // counter for call ids
+  protected final AtomicBoolean running = new AtomicBoolean(true); // if client runs
+  final protected Configuration conf;
+  final protected int maxIdleTime; // connections will be culled if idle for
+                           // more than maxIdleTime msecs
+  final protected int maxRetries; //the max. no. of retries for socket connections
+  final protected long failureSleep; // Time to sleep before retry on failure.
+  protected final boolean tcpNoDelay; // if T then disable Nagle's Algorithm
+  protected final boolean tcpKeepAlive; // if T then use keepalives
+  protected int pingInterval; // how often to send a ping to the server, in msecs
+
+  protected final SocketFactory socketFactory;           // how to create sockets
+  private int refCount = 1;
+
+  final private static String PING_INTERVAL_NAME = "ipc.ping.interval";
+  final static int DEFAULT_PING_INTERVAL = 60000; // 1 min
+  final static int PING_CALL_ID = -1;
+
+  /**
+   * set the ping interval value in configuration
+   *
+   * @param conf Configuration
+   * @param pingInterval the ping interval
+   */
+  @SuppressWarnings({"UnusedDeclaration"})
+  public static void setPingInterval(Configuration conf, int pingInterval) {
+    conf.setInt(PING_INTERVAL_NAME, pingInterval);
+  }
+
+  /**
+   * Get the ping interval from configuration;
+   * If not set in the configuration, return the default value.
+   *
+   * @param conf Configuration
+   * @return the ping interval
+   */
+  static int getPingInterval(Configuration conf) {
+    return conf.getInt(PING_INTERVAL_NAME, DEFAULT_PING_INTERVAL);
+  }
+
+  /**
+   * Increment this client's reference count
+   *
+   */
+  synchronized void incCount() {
+    refCount++;
+  }
+
+  /**
+   * Decrement this client's reference count
+   *
+   */
+  synchronized void decCount() {
+    refCount--;
+  }
+
+  /**
+   * Return if this client has no reference
+   *
+   * @return true if this client has no reference; false otherwise
+   */
+  synchronized boolean isZeroReference() {
+    return refCount==0;
+  }
+
+  /** A call waiting for a value. */
+  private class Call {
+    final int id;                                       // call id
+    final Writable param;                               // parameter
+    Writable value;                               // value, null if error
+    IOException error;                            // exception, null if value
+    boolean done;                                 // true when call is done
+
+    protected Call(Writable param) {
+      this.param = param;
+      synchronized (HBaseClient.this) {
+        this.id = counter++;
+      }
+    }
+
+    /** Indicate when the call is complete and the
+     * value or error is available.  Notifies by default.  */
+    protected synchronized void callComplete() {
+      this.done = true;
+      notify();                                 // notify caller
+    }
+
+    /** Set the exception when there is an error.
+     * Notify the caller the call is done.
+     *
+     * @param error exception thrown by the call; either local or remote
+     */
+    public synchronized void setException(IOException error) {
+      this.error = error;
+      callComplete();
+    }
+
+    /** Set the return value when there is no error.
+     * Notify the caller the call is done.
+     *
+     * @param value return value of the call.
+     */
+    public synchronized void setValue(Writable value) {
+      this.value = value;
+      callComplete();
+    }
+  }
+
+  /** Thread that reads responses and notifies callers.  Each connection owns a
+   * socket connected to a remote address.  Calls are multiplexed through this
+   * socket: responses may be delivered out of order. */
+  private class Connection extends Thread {
+    private ConnectionId remoteId;
+    private Socket socket = null;                 // connected socket
+    private DataInputStream in;
+    private DataOutputStream out;
+
+    // currently active calls
+    private final Hashtable<Integer, Call> calls = new Hashtable<Integer, Call>();
+    private final AtomicLong lastActivity = new AtomicLong();// last I/O activity time
+    protected final AtomicBoolean shouldCloseConnection = new AtomicBoolean();  // indicate if the connection is closed
+    private IOException closeException; // close reason
+
+    public Connection(InetSocketAddress address) throws IOException {
+      this(new ConnectionId(address, null, 0));
+    }
+
+    public Connection(ConnectionId remoteId) throws IOException {
+      if (remoteId.getAddress().isUnresolved()) {
+        throw new UnknownHostException("unknown host: " +
+                                       remoteId.getAddress().getHostName());
+      }
+      this.remoteId = remoteId;
+      UserGroupInformation ticket = remoteId.getTicket();
+      this.setName("IPC Client (" + socketFactory.hashCode() +") connection to " +
+        remoteId.getAddress().toString() +
+        ((ticket==null)?" from an unknown user": (" from " + ticket.getUserName())));
+      this.setDaemon(true);
+    }
+
+    /** Update lastActivity with the current time. */
+    private void touch() {
+      lastActivity.set(System.currentTimeMillis());
+    }
+
+    /**
+     * Add a call to this connection's call queue and notify
+     * a listener; synchronized.
+     * Returns false if called during shutdown.
+     * @param call to add
+     * @return true if the call was added.
+     */
+    protected synchronized boolean addCall(Call call) {
+      if (shouldCloseConnection.get())
+        return false;
+      calls.put(call.id, call);
+      notify();
+      return true;
+    }
+
+    /** This class sends a ping to the remote side when a read times out.
+     * If no failure is detected, it retries until at least
+     * a byte is read.
+     */
+    private class PingInputStream extends FilterInputStream {
+      /* constructor */
+      protected PingInputStream(InputStream in) {
+        super(in);
+      }
+
+      /* Process a timeout exception.
+       * If the connection is not being closed, the client is still running,
+       * and no explicit rpcTimeout is set, send a ping; otherwise rethrow
+       * the timeout exception.
+       */
+      private void handleTimeout(SocketTimeoutException e) throws IOException {
+        if (shouldCloseConnection.get() || !running.get() || 
+            remoteId.rpcTimeout > 0) {
+          throw e;
+        }
+        sendPing();
+      }
+
+      /** Read a byte from the stream.
+       * Send a ping if the read times out. Retries if no failure is detected
+       * until a byte is read.
+       * @throws IOException for any IO problem other than socket timeout
+       */
+      @Override
+      public int read() throws IOException {
+        do {
+          try {
+            return super.read();
+          } catch (SocketTimeoutException e) {
+            handleTimeout(e);
+          }
+        } while (true);
+      }
+
+      /** Read bytes into a buffer starting from offset <code>off</code>.
+       * Send a ping if the read times out. Retries if no failure is detected
+       * until a byte is read.
+       *
+       * @return the total number of bytes read; -1 if the connection is closed.
+       */
+      @Override
+      public int read(byte[] buf, int off, int len) throws IOException {
+        do {
+          try {
+            return super.read(buf, off, len);
+          } catch (SocketTimeoutException e) {
+            handleTimeout(e);
+          }
+        } while (true);
+      }
+    }
+
+    /** Connect to the server and set up the I/O streams. It then sends
+     * a header to the server and starts
+     * the connection thread that waits for responses.
+     * @throws java.io.IOException e
+     */
+    protected synchronized void setupIOstreams() throws IOException {
+      if (socket != null || shouldCloseConnection.get()) {
+        return;
+      }
+
+      short ioFailures = 0;
+      short timeoutFailures = 0;
+      try {
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Connecting to "+remoteId.getAddress());
+        }
+        while (true) {
+          try {
+            this.socket = socketFactory.createSocket();
+            this.socket.setTcpNoDelay(tcpNoDelay);
+            this.socket.setKeepAlive(tcpKeepAlive);
+            // connection time out is 20s
+            NetUtils.connect(this.socket, remoteId.getAddress(), 20000);
+            if (remoteId.rpcTimeout > 0) {
+              pingInterval = remoteId.rpcTimeout; // overwrite pingInterval
+            }
+            this.socket.setSoTimeout(pingInterval);
+            break;
+          } catch (SocketTimeoutException toe) {
+            handleConnectionFailure(timeoutFailures++, maxRetries, toe);
+          } catch (IOException ie) {
+            handleConnectionFailure(ioFailures++, maxRetries, ie);
+          }
+        }
+        this.in = new DataInputStream(new BufferedInputStream
+            (new PingInputStream(NetUtils.getInputStream(socket))));
+        this.out = new DataOutputStream
+            (new BufferedOutputStream(NetUtils.getOutputStream(socket)));
+        writeHeader();
+
+        // update last activity time
+        touch();
+
+        // start the receiver thread after the socket connection has been set up
+        start();
+      } catch (IOException e) {
+        markClosed(e);
+        close();
+
+        throw e;
+      }
+    }
+
+    /* Handle connection failures
+     *
+     * If the current number of retries is equal to the max number of retries,
+     * stop retrying and throw the exception; otherwise back off for
+     * failureSleep ms and try connecting again.
+     *
+     * This method is only called from inside setupIOstreams(), which is
+     * synchronized. Hence the sleep is synchronized; the locks will be retained.
+     *
+     * @param curRetries current number of retries
+     * @param maxRetries max number of retries allowed
+     * @param ioe failure reason
+     * @throws IOException if max number of retries is reached
+     */
+    private void handleConnectionFailure(
+        int curRetries, int maxRetries, IOException ioe) throws IOException {
+      // close the current connection
+      if (socket != null) { // could be null if the socket creation failed
+        try {
+          socket.close();
+        } catch (IOException e) {
+          LOG.warn("Not able to close a socket", e);
+        }
+      }
+      // set socket to null so that the next call to setupIOstreams
+      // can start the process of connect all over again.
+      socket = null;
+
+      // throw the exception if the maximum number of retries is reached
+      if (curRetries >= maxRetries) {
+        throw ioe;
+      }
+
+      // otherwise back off and retry
+      try {
+        Thread.sleep(failureSleep);
+      } catch (InterruptedException ignored) {}
+
+      LOG.info("Retrying connect to server: " + remoteId.getAddress() +
+        " after sleeping " + failureSleep + "ms. Already tried " + curRetries +
+        " time(s).");
+    }
+
+    /* Write the header for each connection.
+     * "out" is not synchronized because only the first thread does this.
+     */
+    private void writeHeader() throws IOException {
+      out.write(HBaseServer.HEADER.array());
+      out.write(HBaseServer.CURRENT_VERSION);
+      //When there are more fields we can have ConnectionHeader Writable.
+      DataOutputBuffer buf = new DataOutputBuffer();
+      ObjectWritable.writeObject(buf, remoteId.getTicket(),
+                                 UserGroupInformation.class, conf);
+      int bufLen = buf.getLength();
+      out.writeInt(bufLen);
+      out.write(buf.getData(), 0, bufLen);
+    }
+
+    /* Wait until someone signals us to start reading an RPC response, the
+     * connection has been idle too long, it is marked as to be closed,
+     * or the client is marked as not running.
+     *
+     * Return true if it is time to read a response; false otherwise.
+     */
+    @SuppressWarnings({"ThrowableInstanceNeverThrown"})
+    private synchronized boolean waitForWork() {
+      if (calls.isEmpty() && !shouldCloseConnection.get()  && running.get())  {
+        long timeout = maxIdleTime-
+              (System.currentTimeMillis()-lastActivity.get());
+        if (timeout>0) {
+          try {
+            wait(timeout);
+          } catch (InterruptedException ignored) {}
+        }
+      }
+
+      if (!calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
+        return true;
+      } else if (shouldCloseConnection.get()) {
+        return false;
+      } else if (calls.isEmpty()) { // idle connection closed or stopped
+        markClosed(null);
+        return false;
+      } else { // get stopped but there are still pending requests
+        markClosed((IOException)new IOException().initCause(
+            new InterruptedException()));
+        return false;
+      }
+    }
+
+    public InetSocketAddress getRemoteAddress() {
+      return remoteId.getAddress();
+    }
+
+    /* Send a ping to the server if the time elapsed
+     * since last I/O activity is equal to or greater than the ping interval
+     */
+    protected synchronized void sendPing() throws IOException {
+      long curTime = System.currentTimeMillis();
+      if ( curTime - lastActivity.get() >= pingInterval) {
+        lastActivity.set(curTime);
+        //noinspection SynchronizeOnNonFinalField
+        synchronized (this.out) {
+          out.writeInt(PING_CALL_ID);
+          out.flush();
+        }
+      }
+    }
+
+    @Override
+    public void run() {
+      if (LOG.isDebugEnabled())
+        LOG.debug(getName() + ": starting, having connections "
+            + connections.size());
+
+      try {
+        while (waitForWork()) {//wait here for work - read or close connection
+          receiveResponse();
+        }
+      } catch (Throwable t) {
+        LOG.warn("Unexpected exception receiving call responses", t);
+        markClosed(new IOException("Unexpected exception receiving call responses", t));
+      }
+
+      close();
+
+      if (LOG.isDebugEnabled())
+        LOG.debug(getName() + ": stopped, remaining connections "
+            + connections.size());
+    }
+
+    /* Initiates a call by sending the parameter to the remote server.
+     * Note: this is not called from the Connection thread, but by other
+     * threads.
+     */
+    protected void sendParam(Call call) {
+      if (shouldCloseConnection.get()) {
+        return;
+      }
+
+      DataOutputBuffer d=null;
+      try {
+        //noinspection SynchronizeOnNonFinalField
+        synchronized (this.out) { // FindBugs IS2_INCONSISTENT_SYNC
+          if (LOG.isDebugEnabled())
+            LOG.debug(getName() + " sending #" + call.id);
+
+          //for serializing the
+          //data to be written
+          d = new DataOutputBuffer();
+          d.writeInt(0xdeadbeef); // placeholder for data length
+          d.writeInt(call.id);
+          call.param.write(d);
+          byte[] data = d.getData();
+          int dataLength = d.getLength();
+          // fill in the placeholder
+          Bytes.putInt(data, 0, dataLength - 4);
+          out.write(data, 0, dataLength);
+          out.flush();
+        }
+      } catch(IOException e) {
+        markClosed(e);
+      } finally {
+        //the buffer is just an in-memory buffer, but it is still polite to
+        // close early
+        IOUtils.closeStream(d);
+      }
+    }
+
+    /* Receive a response.
+     * Because there is only one receiver thread, no synchronization on "in" is needed.
+     */
+    private void receiveResponse() {
+      if (shouldCloseConnection.get()) {
+        return;
+      }
+      touch();
+
+      try {
+        int id = in.readInt();                    // try to read an id
+
+        if (LOG.isDebugEnabled())
+          LOG.debug(getName() + " got value #" + id);
+
+        Call call = calls.get(id);
+
+        boolean isError = in.readBoolean();     // read if error
+        if (isError) {
+          //noinspection ThrowableInstanceNeverThrown
+          call.setException(new RemoteException( WritableUtils.readString(in),
+              WritableUtils.readString(in)));
+          calls.remove(id);
+        } else {
+          Writable value = ReflectionUtils.newInstance(valueClass, conf);
+          value.readFields(in);                 // read value
+          call.setValue(value);
+          calls.remove(id);
+        }
+      } catch (IOException e) {
+        markClosed(e);
+      }
+    }
+
+    private synchronized void markClosed(IOException e) {
+      if (shouldCloseConnection.compareAndSet(false, true)) {
+        closeException = e;
+        notifyAll();
+      }
+    }
+
+    /** Close the connection. */
+    private synchronized void close() {
+      if (!shouldCloseConnection.get()) {
+        LOG.error("The connection is not in the closed state");
+        return;
+      }
+
+      // release the resources
+      // first thing to do: take the connection out of the connection list
+      synchronized (connections) {
+        if (connections.get(remoteId) == this) {
+          connections.remove(remoteId);
+        }
+      }
+
+      // close the streams and therefore the socket
+      IOUtils.closeStream(out);
+      IOUtils.closeStream(in);
+
+      // clean up all calls
+      if (closeException == null) {
+        if (!calls.isEmpty()) {
+          LOG.warn(
+              "A connection is closed for no cause and calls are not empty");
+
+          // clean up calls anyway
+          closeException = new IOException("Unexpected closed connection");
+          cleanupCalls();
+        }
+      } else {
+        // log the info
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("closing ipc connection to " + remoteId.address + ": " +
+              closeException.getMessage(),closeException);
+        }
+
+        // cleanup calls
+        cleanupCalls();
+      }
+      if (LOG.isDebugEnabled())
+        LOG.debug(getName() + ": closed");
+    }
+
+    /* Cleanup all calls and mark them as done */
+    private void cleanupCalls() {
+      Iterator<Entry<Integer, Call>> itor = calls.entrySet().iterator() ;
+      while (itor.hasNext()) {
+        Call c = itor.next().getValue();
+        c.setException(closeException); // local exception
+        itor.remove();
+      }
+    }
+  }
+
+  /** Call implementation used for parallel calls. */
+  private class ParallelCall extends Call {
+    private final ParallelResults results;
+    protected final int index;
+
+    public ParallelCall(Writable param, ParallelResults results, int index) {
+      super(param);
+      this.results = results;
+      this.index = index;
+    }
+
+    /** Deliver result to result collector. */
+    @Override
+    protected void callComplete() {
+      results.callComplete(this);
+    }
+  }
+
+  /** Result collector for parallel calls. */
+  private static class ParallelResults {
+    protected final Writable[] values;
+    protected int size;
+    protected int count;
+
+    public ParallelResults(int size) {
+      this.values = new Writable[size];
+      this.size = size;
+    }
+
+    /*
+     * Collect a result.
+     */
+    synchronized void callComplete(ParallelCall call) {
+      // FindBugs IS2_INCONSISTENT_SYNC
+      values[call.index] = call.value;            // store the value
+      count++;                                    // count it
+      if (count == size)                          // if all values are in
+        notify();                                 // then notify waiting caller
+    }
+  }
+
+  /**
+   * Construct an IPC client whose values are of the given {@link Writable}
+   * class.
+   * @param valueClass value class
+   * @param conf configuration
+   * @param factory socket factory
+   */
+  public HBaseClient(Class<? extends Writable> valueClass, Configuration conf,
+      SocketFactory factory) {
+    this.valueClass = valueClass;
+    this.maxIdleTime =
+      conf.getInt("hbase.ipc.client.connection.maxidletime", 10000); //10s
+    this.maxRetries = conf.getInt("hbase.ipc.client.connect.max.retries", 0);
+    this.failureSleep = conf.getInt("hbase.client.pause", 1000);
+    this.tcpNoDelay = conf.getBoolean("hbase.ipc.client.tcpnodelay", false);
+    this.tcpKeepAlive = conf.getBoolean("hbase.ipc.client.tcpkeepalive", true);
+    this.pingInterval = getPingInterval(conf);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("The ping interval is" + this.pingInterval + "ms.");
+    }
+    this.conf = conf;
+    this.socketFactory = factory;
+  }
+
+  /**
+   * Construct an IPC client with the default SocketFactory
+   * @param valueClass value class
+   * @param conf configuration
+   */
+  public HBaseClient(Class<? extends Writable> valueClass, Configuration conf) {
+    this(valueClass, conf, NetUtils.getDefaultSocketFactory(conf));
+  }
+
+  /** Return the socket factory of this client
+   *
+   * @return this client's socket factory
+   */
+  SocketFactory getSocketFactory() {
+    return socketFactory;
+  }
+
+  /** Stop all threads related to this client.  No further calls may be made
+   * using this client. */
+  public void stop() {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Stopping client");
+    }
+
+    if (!running.compareAndSet(true, false)) {
+      return;
+    }
+
+    // wake up all connections
+    synchronized (connections) {
+      for (Connection conn : connections.values()) {
+        conn.interrupt();
+      }
+    }
+
+    // wait until all connections are closed
+    while (!connections.isEmpty()) {
+      try {
+        Thread.sleep(100);
+      } catch (InterruptedException ignored) {
+      }
+    }
+  }
+
+  /** Make a call, passing <code>param</code>, to the IPC server running at
+   * <code>address</code>, returning the value.  Throws exceptions if there are
+   * network problems or if the remote code threw an exception.
+   * @param param writable parameter
+   * @param address network address
+   * @return Writable
+   * @throws IOException e
+   */
+  public Writable call(Writable param, InetSocketAddress address)
+  throws IOException {
+      return call(param, address, null, 0);
+  }
+
+  public Writable call(Writable param, InetSocketAddress addr,
+                       UserGroupInformation ticket, int rpcTimeout)
+                       throws IOException {
+    Call call = new Call(param);
+    Connection connection = getConnection(addr, ticket, rpcTimeout, call);
+    connection.sendParam(call);                 // send the parameter
+    boolean interrupted = false;
+    //noinspection SynchronizationOnLocalVariableOrMethodParameter
+    synchronized (call) {
+      while (!call.done) {
+        try {
+          call.wait();                           // wait for the result
+        } catch (InterruptedException ignored) {
+          // save the fact that we were interrupted
+          interrupted = true;
+        }
+      }
+
+      if (interrupted) {
+        // set the interrupt flag now that we are done waiting
+        Thread.currentThread().interrupt();
+      }
+
+      if (call.error != null) {
+        if (call.error instanceof RemoteException) {
+          call.error.fillInStackTrace();
+          throw call.error;
+        }
+        // local exception
+        throw wrapException(addr, call.error);
+      }
+      return call.value;
+    }
+  }
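+
+  /* Minimal call sketch (address and port are assumptions; in HBase the
+   * parameter is normally an HBaseRPC.Invocation and the value class is
+   * HbaseObjectWritable):
+   *
+   *   HBaseClient client = new HBaseClient(HbaseObjectWritable.class, conf);
+   *   Writable result = client.call(param,
+   *       new InetSocketAddress("regionserver.example", 60020));
+   *   client.stop();   // no further calls may be made after stop()
+   */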
+
+  /**
+   * Take an IOException and the address we were trying to connect to
+   * and return an IOException with the input exception as the cause.
+   * The new exception provides the stack trace of the place where
+   * the exception is thrown and some extra diagnostics information.
+   * If the exception is ConnectException or SocketTimeoutException,
+   * return a new one of the same type; Otherwise return an IOException.
+   *
+   * @param addr target address
+   * @param exception the relevant exception
+   * @return an exception to throw
+   */
+  @SuppressWarnings({"ThrowableInstanceNeverThrown"})
+  private IOException wrapException(InetSocketAddress addr,
+                                         IOException exception) {
+    if (exception instanceof ConnectException) {
+      //connection refused; include the host:port in the error
+      return (ConnectException)new ConnectException(
+           "Call to " + addr + " failed on connection exception: " + exception)
+                    .initCause(exception);
+    } else if (exception instanceof SocketTimeoutException) {
+      return (SocketTimeoutException)new SocketTimeoutException(
+           "Call to " + addr + " failed on socket timeout exception: "
+                      + exception).initCause(exception);
+    } else {
+      return (IOException)new IOException(
+           "Call to " + addr + " failed on local exception: " + exception)
+                                 .initCause(exception);
+
+    }
+  }
+
+  /** Makes a set of calls in parallel.  Each parameter is sent to the
+   * corresponding address.  When all values are available, or have timed out
+   * or errored, the collected results are returned in an array.  The array
+   * contains nulls for calls that timed out or errored.
+   * @param params writable parameters
+   * @param addresses socket addresses
+   * @return  Writable[]
+   * @throws IOException e
+   */
+  public Writable[] call(Writable[] params, InetSocketAddress[] addresses)
+    throws IOException {
+    if (addresses.length == 0) return new Writable[0];
+
+    ParallelResults results = new ParallelResults(params.length);
+    // TODO this synchronization block doesn't make any sense, we should possibly fix it
+    //noinspection SynchronizationOnLocalVariableOrMethodParameter
+    synchronized (results) {
+      for (int i = 0; i < params.length; i++) {
+        ParallelCall call = new ParallelCall(params[i], results, i);
+        try {
+          Connection connection = getConnection(addresses[i], null, 0, call);
+          connection.sendParam(call);             // send each parameter
+        } catch (IOException e) {
+          // log errors
+          LOG.info("Calling "+addresses[i]+" caught: " +
+                   e.getMessage(),e);
+          results.size--;                         //  wait for one fewer result
+        }
+      }
+      while (results.count != results.size) {
+        try {
+          results.wait();                    // wait for all results
+        } catch (InterruptedException ignored) {}
+      }
+
+      return results.values;
+    }
+  }
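+
+  /* Parallel call sketch (params and addresses are assumed to be equal-length
+   * arrays prepared by the caller):
+   *
+   *   Writable[] answers = client.call(params, addresses);
+   *   // answers[i] is null if the call to addresses[i] timed out or errored
+   */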
+
+  /* Get a connection from the pool, or create a new one and add it to the
+   * pool.  Connections to a given host/port are reused. */
+  private Connection getConnection(InetSocketAddress addr,
+                                   UserGroupInformation ticket,
+                                   int rpcTimeout,
+                                   Call call)
+                                   throws IOException {
+    if (!running.get()) {
+      // the client is stopped
+      throw new IOException("The client is stopped");
+    }
+    Connection connection;
+    /* we could avoid this allocation for each RPC by having a
+     * connectionsId object with a set() method. We would need to manage the
+     * refs for keys in the HashMap properly. For now it's ok.
+     */
+    ConnectionId remoteId = new ConnectionId(addr, ticket, rpcTimeout);
+    do {
+      synchronized (connections) {
+        connection = connections.get(remoteId);
+        if (connection == null) {
+          connection = new Connection(remoteId);
+          connections.put(remoteId, connection);
+        }
+      }
+    } while (!connection.addCall(call));
+
+    //we don't invoke the method below inside "synchronized (connections)"
+    //block above. The reason for that is if the server happens to be slow,
+    //it will take longer to establish a connection and that will slow the
+    //entire system down.
+    connection.setupIOstreams();
+    return connection;
+  }
+
+  /**
+   * This class holds the address and the user ticket. The client connections
+   * to servers are uniquely identified by <remoteAddress, ticket, rpcTimeout>.
+   */
+  private static class ConnectionId {
+    final InetSocketAddress address;
+    final UserGroupInformation ticket;
+    final private int rpcTimeout;
+
+    ConnectionId(InetSocketAddress address, UserGroupInformation ticket,
+        int rpcTimeout) {
+      this.address = address;
+      this.ticket = ticket;
+      this.rpcTimeout = rpcTimeout;
+    }
+
+    InetSocketAddress getAddress() {
+      return address;
+    }
+    UserGroupInformation getTicket() {
+      return ticket;
+    }
+
+    @Override
+    public boolean equals(Object obj) {
+     if (obj instanceof ConnectionId) {
+       ConnectionId id = (ConnectionId) obj;
+       return address.equals(id.address) && ticket == id.ticket && 
+       rpcTimeout == id.rpcTimeout;
+       // Note: ticket is a reference comparison.
+     }
+     return false;
+    }
+
+    @Override
+    public int hashCode() {
+      return address.hashCode() ^ System.identityHashCode(ticket) ^ rpcTimeout;
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java
new file mode 100644
index 0000000..4f4828b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPC.java
@@ -0,0 +1,607 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import com.google.common.base.Function;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.RetriesExhaustedException;
+import org.apache.hadoop.hbase.io.HbaseObjectWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.ipc.VersionedProtocol;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import javax.net.SocketFactory;
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Array;
+import java.lang.reflect.InvocationHandler;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.lang.reflect.Proxy;
+import java.net.ConnectException;
+import java.net.InetSocketAddress;
+import java.net.SocketTimeoutException;
+import java.util.HashMap;
+import java.util.Map;
+
+/** A simple RPC mechanism.
+ *
+ * This is a local hbase copy of the hadoop RPC so we can do things like
+ * address HADOOP-414 for hbase-only and try other hbase-specific
+ * optimizations like using our own version of ObjectWritable.  Class has been
+ * renamed to avoid confusing it w/ hadoop versions.
+ * <p>
+ *
+ *
+ * A <i>protocol</i> is a Java interface.  All parameters and return types must
+ * be one of:
+ *
+ * <ul> <li>a primitive type, <code>boolean</code>, <code>byte</code>,
+ * <code>char</code>, <code>short</code>, <code>int</code>, <code>long</code>,
+ * <code>float</code>, <code>double</code>, or <code>void</code>; or</li>
+ *
+ * <li>a {@link String}; or</li>
+ *
+ * <li>a {@link Writable}; or</li>
+ *
+ * <li>an array of the above types</li> </ul>
+ *
+ * All methods in the protocol should throw only IOException.  No field data of
+ * the protocol instance is transmitted.
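+ * <p>
+ * A minimal sketch, for illustration only, of what such a protocol could look
+ * like; the {@code PingProtocol} interface and its methods are hypothetical
+ * and not part of HBase:
+ *
+ * <pre>
+ * // Hypothetical protocol: only primitives, Strings, Writables, and arrays
+ * // of those appear in signatures, and every method throws only IOException.
+ * interface PingProtocol extends HBaseRPCProtocolVersion {
+ *   String ping(String message) throws IOException;
+ *   long[] recentLatencies(boolean verbose) throws IOException;
+ * }
+ * </pre>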
+ */
+public class HBaseRPC {
+  // Leave the logger in the hadoop ipc package but keep this class's name.
+  // Do this so that we don't pick up logging of this class's invocations when
+  // we blanket-enable DEBUG on the o.a.h.h package.
+  protected static final Log LOG =
+    LogFactory.getLog("org.apache.hadoop.ipc.HbaseRPC");
+
+  private HBaseRPC() {
+    super();
+  }                                  // no public ctor
+
+
+  /** A method invocation, including the method name and its parameters.*/
+  public static class Invocation implements Writable, Configurable {
+    private String methodName;
+    @SuppressWarnings("unchecked")
+    private Class[] parameterClasses;
+    private Object[] parameters;
+    private Configuration conf;
+
+    /** default constructor */
+    public Invocation() {
+      super();
+    }
+
+    /**
+     * @param method method to call
+     * @param parameters parameters of call
+     */
+    public Invocation(Method method, Object[] parameters) {
+      this.methodName = method.getName();
+      this.parameterClasses = method.getParameterTypes();
+      this.parameters = parameters;
+    }
+
+    /** @return The name of the method invoked. */
+    public String getMethodName() { return methodName; }
+
+    /** @return The parameter classes. */
+    @SuppressWarnings("unchecked")
+    public Class[] getParameterClasses() { return parameterClasses; }
+
+    /** @return The parameter instances. */
+    public Object[] getParameters() { return parameters; }
+
+    public void readFields(DataInput in) throws IOException {
+      methodName = in.readUTF();
+      parameters = new Object[in.readInt()];
+      parameterClasses = new Class[parameters.length];
+      HbaseObjectWritable objectWritable = new HbaseObjectWritable();
+      for (int i = 0; i < parameters.length; i++) {
+        parameters[i] = HbaseObjectWritable.readObject(in, objectWritable,
+          this.conf);
+        parameterClasses[i] = objectWritable.getDeclaredClass();
+      }
+    }
+
+    public void write(DataOutput out) throws IOException {
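+      // Wire format: UTF method name, then the parameter count, then each
+      // parameter serialized via HbaseObjectWritable (declared class plus value).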
+      out.writeUTF(this.methodName);
+      out.writeInt(parameterClasses.length);
+      for (int i = 0; i < parameterClasses.length; i++) {
+        HbaseObjectWritable.writeObject(out, parameters[i], parameterClasses[i],
+                                   conf);
+      }
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder buffer = new StringBuilder(256);
+      buffer.append(methodName);
+      buffer.append("(");
+      for (int i = 0; i < parameters.length; i++) {
+        if (i != 0)
+          buffer.append(", ");
+        buffer.append(parameters[i]);
+      }
+      buffer.append(")");
+      return buffer.toString();
+    }
+
+    public void setConf(Configuration conf) {
+      this.conf = conf;
+    }
+
+    public Configuration getConf() {
+      return this.conf;
+    }
+  }
+
+  /* Cache a client using its socket factory as the hash key */
+  static private class ClientCache {
+    private Map<SocketFactory, HBaseClient> clients =
+      new HashMap<SocketFactory, HBaseClient>();
+
+    protected ClientCache() {}
+
+    /**
+     * Construct & cache an IPC client with the user-provided SocketFactory
+     * if no cached client exists.
+     *
+     * @param conf Configuration
+     * @param factory socket factory
+     * @return an IPC client
+     */
+    protected synchronized HBaseClient getClient(Configuration conf,
+        SocketFactory factory) {
+      // Construct & cache client.  The configuration is only used for timeout,
+      // and Clients have connection pools.  So we can either (a) lose some
+      // connection pooling and leak sockets, or (b) use the same timeout for all
+      // configurations.  Since the IPC is usually intended globally, not
+      // per-job, we choose (b).
+      HBaseClient client = clients.get(factory);
+      if (client == null) {
+        // Make an hbase client instead of hadoop Client.
+        client = new HBaseClient(HbaseObjectWritable.class, conf, factory);
+        clients.put(factory, client);
+      } else {
+        client.incCount();
+      }
+      return client;
+    }
+
+    /**
+     * Construct & cache an IPC client with the default SocketFactory
+     * if no cached client exists.
+     *
+     * @param conf Configuration
+     * @return an IPC client
+     */
+    protected synchronized HBaseClient getClient(Configuration conf) {
+      return getClient(conf, SocketFactory.getDefault());
+    }
+
+    /**
+     * Stop an RPC client connection.
+     * An RPC client is closed only when its reference count becomes zero.
+     * @param client client to stop
+     */
+    protected void stopClient(HBaseClient client) {
+      synchronized (this) {
+        client.decCount();
+        if (client.isZeroReference()) {
+          clients.remove(client.getSocketFactory());
+        }
+      }
+      if (client.isZeroReference()) {
+        client.stop();
+      }
+    }
+  }
+
+  protected final static ClientCache CLIENTS = new ClientCache();
+
+  private static class Invoker implements InvocationHandler {
+    private InetSocketAddress address;
+    private UserGroupInformation ticket;
+    private HBaseClient client;
+    private boolean isClosed = false;
+    final private int rpcTimeout;
+
+    /**
+     * @param address address for invoker
+     * @param ticket ticket
+     * @param conf configuration
+     * @param factory socket factory
+     * @param rpcTimeout timeout for each RPC
+     */
+    public Invoker(InetSocketAddress address, UserGroupInformation ticket,
+                   Configuration conf, SocketFactory factory, int rpcTimeout) {
+      this.address = address;
+      this.ticket = ticket;
+      this.client = CLIENTS.getClient(conf, factory);
+      this.rpcTimeout = rpcTimeout;
+    }
+
+    public Object invoke(Object proxy, Method method, Object[] args)
+        throws Throwable {
+      final boolean logDebug = LOG.isDebugEnabled();
+      long startTime = 0;
+      if (logDebug) {
+        startTime = System.currentTimeMillis();
+      }
+      HbaseObjectWritable value = (HbaseObjectWritable)
+        client.call(new Invocation(method, args), address, ticket, rpcTimeout);
+      if (logDebug) {
+        long callTime = System.currentTimeMillis() - startTime;
+        LOG.debug("Call: " + method.getName() + " " + callTime);
+      }
+      return value.get();
+    }
+
+    /* close the IPC client that's responsible for this invoker's RPCs */
+    synchronized protected void close() {
+      if (!isClosed) {
+        isClosed = true;
+        CLIENTS.stopClient(client);
+      }
+    }
+  }
+
+  /**
+   * A version mismatch for the RPC protocol.
+   */
+  @SuppressWarnings("serial")
+  public static class VersionMismatch extends IOException {
+    private String interfaceName;
+    private long clientVersion;
+    private long serverVersion;
+
+    /**
+     * Create a version mismatch exception
+     * @param interfaceName the name of the mismatched protocol
+     * @param clientVersion the client's version of the protocol
+     * @param serverVersion the server's version of the protocol
+     */
+    public VersionMismatch(String interfaceName, long clientVersion,
+                           long serverVersion) {
+      super("Protocol " + interfaceName + " version mismatch. (client = " +
+            clientVersion + ", server = " + serverVersion + ")");
+      this.interfaceName = interfaceName;
+      this.clientVersion = clientVersion;
+      this.serverVersion = serverVersion;
+    }
+
+    /**
+     * Get the interface name
+     * @return the java class name
+     *          (e.g. org.apache.hadoop.mapred.InterTrackerProtocol)
+     */
+    public String getInterfaceName() {
+      return interfaceName;
+    }
+
+    /**
+     * @return the client's preferred version
+     */
+    public long getClientVersion() {
+      return clientVersion;
+    }
+
+    /**
+     * @return the version the server agreed to.
+     */
+    public long getServerVersion() {
+      return serverVersion;
+    }
+  }
+
+  /**
+   * @param protocol protocol interface
+   * @param clientVersion which client version we expect
+   * @param addr address of remote service
+   * @param conf configuration
+   * @param maxAttempts max attempts
+   * @param rpcTimeout timeout for each RPC
+   * @param timeout overall timeout, in milliseconds, before giving up
+   * @return proxy
+   * @throws IOException e
+   */
+  @SuppressWarnings("unchecked")
+  public static VersionedProtocol waitForProxy(Class protocol,
+                                               long clientVersion,
+                                               InetSocketAddress addr,
+                                               Configuration conf,
+                                               int maxAttempts,
+                                               int rpcTimeout,
+                                               long timeout
+                                               ) throws IOException {
+    // HBase does a limited number of reconnects, which differs from Hadoop.
+    long startTime = System.currentTimeMillis();
+    IOException ioe;
+    int reconnectAttempts = 0;
+    while (true) {
+      try {
+        return getProxy(protocol, clientVersion, addr, conf, rpcTimeout);
+      } catch (ConnectException se) {  // server has not been started
+        ioe = se;
+        if (maxAttempts >= 0 && ++reconnectAttempts >= maxAttempts) {
+          LOG.info("Server at " + addr + " could not be reached after " +
+            reconnectAttempts + " tries, giving up.");
+          throw new RetriesExhaustedException("Failed setting up proxy " +
+            protocol + " to " + addr.toString() + " after attempts=" +
+            reconnectAttempts, se);
+        }
+      } catch (SocketTimeoutException te) {  // server is busy
+        LOG.info("Problem connecting to server: " + addr);
+        ioe = te;
+      }
+      // check if timed out
+      if (System.currentTimeMillis()-timeout >= startTime) {
+        throw ioe;
+      }
+
+      // wait for retry
+      try {
+        Thread.sleep(1000);
+      } catch (InterruptedException ie) {
+        // IGNORE
+      }
+    }
+  }
+
+  /**
+   * Construct a client-side proxy object that implements the named protocol,
+   * talking to a server at the named address.
+   *
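+   * <p>
+   * A usage sketch, for illustration only: the target address and timeout are
+   * example values, and a {@link Configuration} {@code conf} is assumed to be
+   * in scope.
+   *
+   * <pre>
+   * HRegionInterface rs = (HRegionInterface) HBaseRPC.getProxy(
+   *     HRegionInterface.class, HBaseRPCProtocolVersion.versionID,
+   *     new InetSocketAddress("regionserver.example.com", 60020),
+   *     conf, NetUtils.getDefaultSocketFactory(conf), 10000);
+   * try {
+   *   // ... invoke methods on the proxy ...
+   * } finally {
+   *   HBaseRPC.stopProxy(rs);
+   * }
+   * </pre>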
+   * @param protocol interface
+   * @param clientVersion version we are expecting
+   * @param addr remote address
+   * @param conf configuration
+   * @param factory socket factory
+   * @param rpcTimeout timeout for each RPC
+   * @return proxy
+   * @throws IOException e
+   */
+  public static VersionedProtocol getProxy(Class<?> protocol,
+      long clientVersion, InetSocketAddress addr, Configuration conf,
+      SocketFactory factory, int rpcTimeout) throws IOException {
+    return getProxy(protocol, clientVersion, addr, null, conf, factory,
+        rpcTimeout);
+  }
+
+  /**
+   * Construct a client-side proxy object that implements the named protocol,
+   * talking to a server at the named address.
+   *
+   * @param protocol interface
+   * @param clientVersion version we are expecting
+   * @param addr remote address
+   * @param ticket ticket
+   * @param conf configuration
+   * @param factory socket factory
+   * @param rpcTimeout timeout for each RPC
+   * @return proxy
+   * @throws IOException e
+   */
+  public static VersionedProtocol getProxy(Class<?> protocol,
+      long clientVersion, InetSocketAddress addr, UserGroupInformation ticket,
+      Configuration conf, SocketFactory factory, int rpcTimeout)
+  throws IOException {
+    VersionedProtocol proxy =
+        (VersionedProtocol) Proxy.newProxyInstance(
+            protocol.getClassLoader(), new Class[] { protocol },
+            new Invoker(addr, ticket, conf, factory, rpcTimeout));
+    long serverVersion = proxy.getProtocolVersion(protocol.getName(),
+                                                  clientVersion);
+    if (serverVersion == clientVersion) {
+      return proxy;
+    }
+    throw new VersionMismatch(protocol.getName(), clientVersion,
+                              serverVersion);
+  }
+
+  /**
+   * Construct a client-side proxy object with the default SocketFactory
+   *
+   * @param protocol interface
+   * @param clientVersion version we are expecting
+   * @param addr remote address
+   * @param conf configuration
+   * @param rpcTimeout timeout for each RPC
+   * @return a proxy instance
+   * @throws IOException e
+   */
+  public static VersionedProtocol getProxy(Class<?> protocol,
+      long clientVersion, InetSocketAddress addr, Configuration conf,
+      int rpcTimeout)
+      throws IOException {
+
+    return getProxy(protocol, clientVersion, addr, conf, NetUtils
+        .getDefaultSocketFactory(conf), rpcTimeout);
+  }
+
+  /**
+   * Stop this proxy and release its invoker's resource
+   * @param proxy the proxy to be stopped
+   */
+  public static void stopProxy(VersionedProtocol proxy) {
+    if (proxy!=null) {
+      ((Invoker)Proxy.getInvocationHandler(proxy)).close();
+    }
+  }
+
+  /**
+   * Expert: Make multiple, parallel calls to a set of servers.
+   *
+   * @param method method to invoke
+   * @param params array of parameters
+   * @param addrs array of addresses
+   * @param conf configuration
+   * @return values
+   * @throws IOException e
+   */
+  public static Object[] call(Method method, Object[][] params,
+                              InetSocketAddress[] addrs, Configuration conf)
+    throws IOException {
+
+    Invocation[] invocations = new Invocation[params.length];
+    for (int i = 0; i < params.length; i++)
+      invocations[i] = new Invocation(method, params[i]);
+    HBaseClient client = CLIENTS.getClient(conf);
+    try {
+      Writable[] wrappedValues = client.call(invocations, addrs);
+
+      if (method.getReturnType() == Void.TYPE) {
+        return null;
+      }
+
+      Object[] values =
+        (Object[])Array.newInstance(method.getReturnType(), wrappedValues.length);
+      for (int i = 0; i < values.length; i++)
+        if (wrappedValues[i] != null)
+          values[i] = ((HbaseObjectWritable)wrappedValues[i]).get();
+
+      return values;
+    } finally {
+      CLIENTS.stopClient(client);
+    }
+  }
+
+  /**
+   * Construct a server for a protocol implementation instance listening on a
+   * port and address.
+   *
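+   * <p>
+   * A usage sketch, for illustration only: {@code impl} is a hypothetical
+   * implementation object, a {@link Configuration} {@code conf} is assumed to
+   * be in scope, and the address, port, handler counts and priority level are
+   * example values.
+   *
+   * <pre>
+   * Server server = HBaseRPC.getServer(impl,
+   *     new Class&lt;?&gt;[] { HRegionInterface.class },
+   *     "0.0.0.0", 60020, 10, 1, false, conf, 0);
+   * server.start();  // starts the listener, responder and handler threads
+   * </pre>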
+   * @param instance instance whose methods will be called
+   * @param ifaces interfaces the server exposes (used to register metrics)
+   * @param bindAddress bind address
+   * @param port port to bind to
+   * @param numHandlers number of handler threads to start
+   * @param metaHandlerCount number of handler threads for the priority call queue
+   * @param verbose verbose flag
+   * @param conf configuration
+   * @param highPriorityLevel level at or above which a call is queued as high priority
+   * @return Server
+   * @throws IOException e
+   */
+  public static Server getServer(final Object instance,
+                                 final Class<?>[] ifaces,
+                                 final String bindAddress, final int port,
+                                 final int numHandlers,
+                                 int metaHandlerCount, final boolean verbose,
+                                 Configuration conf, int highPriorityLevel)
+    throws IOException {
+    return new Server(instance, ifaces, conf, bindAddress, port, numHandlers,
+        metaHandlerCount, verbose, highPriorityLevel);
+  }
+
+  /** An RPC Server. */
+  public static class Server extends HBaseServer {
+    private Object instance;
+    private Class<?> implementation;
+    private Class<?> ifaces[];
+    private boolean verbose;
+
+    private static String classNameBase(String className) {
+      String[] names = className.split("\\.", -1);
+      if (names == null || names.length == 0) {
+        return className;
+      }
+      return names[names.length-1];
+    }
+
+    /** Construct an RPC server.
+     * @param instance the instance whose methods will be called
+     * @param ifaces the interfaces the server exposes (used to register metrics)
+     * @param conf the configuration to use
+     * @param bindAddress the address to bind to, to listen for connections
+     * @param port the port to listen for connections on
+     * @param numHandlers the number of method handler threads to run
+     * @param metaHandlerCount the number of handler threads for the priority call queue
+     * @param verbose whether each call should be logged
+     * @param highPriorityLevel level at or above which a call is queued as high priority
+     * @throws IOException e
+     */
+    public Server(Object instance, final Class<?>[] ifaces,
+                  Configuration conf, String bindAddress, int port,
+                  int numHandlers, int metaHandlerCount, boolean verbose,
+                  int highPriorityLevel) throws IOException {
+      super(bindAddress, port, Invocation.class, numHandlers, metaHandlerCount,
+          conf, classNameBase(instance.getClass().getName()), highPriorityLevel);
+      this.instance = instance;
+      this.implementation = instance.getClass();
+
+      this.verbose = verbose;
+
+      this.ifaces = ifaces;
+
+      // create metrics for the advertised interfaces this server implements.
+      this.rpcMetrics.createMetrics(this.ifaces);
+    }
+
+    @Override
+    public Writable call(Writable param, long receivedTime) throws IOException {
+      try {
+        Invocation call = (Invocation)param;
+        if(call.getMethodName() == null) {
+          throw new IOException("Could not find requested method, the usual " +
+              "cause is a version mismatch between client and server.");
+        }
+        if (verbose) log("Call: " + call);
+        Method method =
+          implementation.getMethod(call.getMethodName(),
+                                   call.getParameterClasses());
+
+        long startTime = System.currentTimeMillis();
+        Object value = method.invoke(instance, call.getParameters());
+        int processingTime = (int) (System.currentTimeMillis() - startTime);
+        int qTime = (int) (startTime-receivedTime);
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Served: " + call.getMethodName() +
+            " queueTime= " + qTime +
+            " procesingTime= " + processingTime);
+        }
+        rpcMetrics.rpcQueueTime.inc(qTime);
+        rpcMetrics.rpcProcessingTime.inc(processingTime);
+        rpcMetrics.inc(call.getMethodName(), processingTime);
+        if (verbose) log("Return: "+value);
+
+        return new HbaseObjectWritable(method.getReturnType(), value);
+
+      } catch (InvocationTargetException e) {
+        Throwable target = e.getTargetException();
+        if (target instanceof IOException) {
+          throw (IOException)target;
+        }
+        IOException ioe = new IOException(target.toString());
+        ioe.setStackTrace(target.getStackTrace());
+        throw ioe;
+      } catch (Throwable e) {
+        IOException ioe = new IOException(e.toString());
+        ioe.setStackTrace(e.getStackTrace());
+        throw ioe;
+      }
+    }
+  }
+
+  protected static void log(String value) {
+    String v = value;
+    if (v != null && v.length() > 55)
+      v = v.substring(0, 55)+"...";
+    LOG.info(v);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java
new file mode 100644
index 0000000..ad790b5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCErrorHandler.java
@@ -0,0 +1,33 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+/**
+ * An interface for calling out of RPC for error conditions.
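+ * <p>
+ * A minimal implementation sketch, for illustration only (the {@code LOG}
+ * logger is assumed to be in scope):
+ *
+ * <pre>
+ * HBaseRPCErrorHandler handler = new HBaseRPCErrorHandler() {
+ *   public boolean checkOOME(final Throwable e) {
+ *     LOG.fatal("Out of memory in the RPC layer", e);
+ *     return true;  // ask the RPC server to shut itself down
+ *   }
+ * };
+ * </pre>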
+ */
+public interface HBaseRPCErrorHandler {
+  /**
+   * Take action in the event of an OutOfMemoryError.
+   * @param e the throwable
+   * @return true if the server should be shut down
+   */
+  public boolean checkOOME(final Throwable e);
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCProtocolVersion.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCProtocolVersion.java
new file mode 100644
index 0000000..c97b967
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCProtocolVersion.java
@@ -0,0 +1,86 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.hadoop.ipc.VersionedProtocol;
+
+/**
+ * There is one version id for all the RPC interfaces. If any interface
+ * is changed, the versionID must be changed here.
+ */
+public interface HBaseRPCProtocolVersion extends VersionedProtocol {
+  /**
+   * Interface version.
+   *
+   * HMasterInterface version history:
+   * <ul>
+   * <li>Version was incremented to 2 when we brought the hadoop RPC local to
+   * hbase HADOOP-2495</li>
+   * <li>Version was incremented to 3 when we changed the RPC to send codes
+   * instead of actual class names (HADOOP-2519).</li>
+   * <li>Version 4 when we moved to all byte arrays (HBASE-42).</li>
+   * <li>Version 5  HBASE-576.</li>
+   * <li>Version 6  modifyTable.</li>
+   * </ul>
+   * <p>HMasterRegionInterface version history:
+   * <ul>
+   * <li>Version 2 was when the regionServerStartup was changed to return a
+   * MapWritable instead of a HbaseMapWritable as part of HBASE-82 changes.</li>
+   * <li>Version 3 was when HMsg was refactored so it could carry optional
+   * messages (HBASE-504).</li>
+   * <li>HBASE-576 we moved this to 4.</li>
+   * </ul>
+   * <p>HRegionInterface version history:
+   * <ul>
+   * <li>Upped to 5 when we added scanner caching</li>
+   * <li>HBASE-576, we moved this to 6.</li>
+   * </ul>
+   * <p>TransactionalRegionInterface version history:
+   * <ul>
+   * <li>Moved to 2 for hbase-576.</li>
+   * </ul>
+   * <p>Unified RPC version number history:
+   * <ul>
+   * <li>Version 10: initial version (had to be &gt; all other RPC versions).</li>
+   * <li>Version 11: Changed getClosestRowBefore signature.</li>
+   * <li>Version 12: HServerLoad extensions (HBASE-1018).</li>
+   * <li>Version 13: HBASE-847</li>
+   * <li>Version 14: HBASE-900</li>
+   * <li>Version 15: HRegionInterface.exists</li>
+   * <li>Version 16: Removed HMasterRegionInterface.getRootRegionLocation and
+   * HMasterInterface.findRootRegion. We use ZooKeeper to store root region
+   * location instead.</li>
+   * <li>Version 17: Added incrementColumnValue.</li>
+   * <li>Version 18: HBASE-1302.</li>
+   * <li>Version 19: Added getClusterStatus().</li>
+   * <li>Version 20: Backed Transaction HBase out of HBase core.</li>
+   * <li>Version 21: HBASE-1665.</li>
+   * <li>Version 22: HBASE-2209. Added List support to RPC</li>
+   * <li>Version 23: HBASE-2066, multi-put.</li>
+   * <li>Version 24: HBASE-2473, create table with regions.</li>
+   * <li>Version 25: Added openRegion and Stoppable/Abortable to API.</li>
+   * <li>Version 26: New master and Increment, 0.90 version bump.</li>
+   * <li>Version 27: HBASE-3168, Added serverCurrentTime to regionServerStartup
+   * in HMasterRegionInterface.</li>
+   * </ul>
+   */
+  public static final long versionID = 27L;
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCStatistics.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCStatistics.java
new file mode 100644
index 0000000..c9b0257
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRPCStatistics.java
@@ -0,0 +1,52 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.hadoop.metrics.util.MBeanUtil;
+import org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+import javax.management.ObjectName;
+
+/**
+ * Exports HBase RPC statistics recorded in {@link HBaseRpcMetrics} as an MBean
+ * for JMX monitoring.
+ */
+public class HBaseRPCStatistics extends MetricsDynamicMBeanBase {
+  private final ObjectName mbeanName;
+
+  @SuppressWarnings({"UnusedDeclaration"})
+  public HBaseRPCStatistics(MetricsRegistry registry,
+      String hostName, String port) {
+    super(registry, "HBaseRPCStatistics");
+
+    String name = String.format("RPCStatistics-%s",
+        (port != null ? port : "unknown"));
+
+    mbeanName = MBeanUtil.registerMBean("HBase", name, this);
+  }
+
+  public void shutdown() {
+    if (mbeanName != null)
+      MBeanUtil.unregisterMBean(mbeanName);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
new file mode 100644
index 0000000..19dbf2b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
@@ -0,0 +1,139 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;
+
+import java.lang.reflect.Method;
+
+/**
+ *
+ * This class is for maintaining  the various RPC statistics
+ * and publishing them through the metrics interfaces.
+ * This also registers the JMX MBean for RPC.
+ * <p>
+ * This class has a number of metrics variables that are publicly accessible;
+ * these variables (objects) have methods to update their values;
+ * for example:
+ *  <p> {@link #rpcQueueTime}.inc(time)
+ *
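+ * <p>
+ * A usage sketch, for illustration only; the host, port and timing values are
+ * examples, and in practice the RPC server constructs this object itself:
+ *
+ * <pre>
+ * HBaseRpcMetrics metrics = new HBaseRpcMetrics("rs.example.com", "60020");
+ * metrics.rpcQueueTime.inc(queueTimeMs);
+ * metrics.rpcProcessingTime.inc(processingTimeMs);
+ * metrics.inc("get", processingTimeMs);  // per-method rate; warns if unknown
+ * </pre>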
+ */
+public class HBaseRpcMetrics implements Updater {
+  private MetricsRecord metricsRecord;
+  private static Log LOG = LogFactory.getLog(HBaseRpcMetrics.class);
+  private final HBaseRPCStatistics rpcStatistics;
+
+  public HBaseRpcMetrics(String hostName, String port) {
+    MetricsContext context = MetricsUtil.getContext("rpc");
+    metricsRecord = MetricsUtil.createRecord(context, "metrics");
+
+    metricsRecord.setTag("port", port);
+
+    LOG.info("Initializing RPC Metrics with hostName="
+        + hostName + ", port=" + port);
+
+    context.registerUpdater(this);
+
+    initMethods(HMasterInterface.class);
+    initMethods(HMasterRegionInterface.class);
+    initMethods(HRegionInterface.class);
+    rpcStatistics = new HBaseRPCStatistics(this.registry, hostName, port);
+  }
+
+
+  /**
+   * The metrics variables are public:
+   *  - they can be set directly by calling their set/inc methods
+   *  - they can also be read directly, e.g. JMX does this.
+   */
+  public final MetricsRegistry registry = new MetricsRegistry();
+
+  public MetricsTimeVaryingRate rpcQueueTime = new MetricsTimeVaryingRate("RpcQueueTime", registry);
+  public MetricsTimeVaryingRate rpcProcessingTime = new MetricsTimeVaryingRate("RpcProcessingTime", registry);
+
+  //public Map <String, MetricsTimeVaryingRate> metricsList = Collections.synchronizedMap(new HashMap<String, MetricsTimeVaryingRate>());
+
+  private void initMethods(Class<? extends HBaseRPCProtocolVersion> protocol) {
+    for (Method m : protocol.getDeclaredMethods()) {
+      if (get(m.getName()) == null)
+        create(m.getName());
+    }
+  }
+
+  private MetricsTimeVaryingRate get(String key) {
+    return (MetricsTimeVaryingRate) registry.get(key);
+  }
+  private MetricsTimeVaryingRate create(String key) {
+    return new MetricsTimeVaryingRate(key, this.registry);
+  }
+
+  public void inc(String name, int amt) {
+    MetricsTimeVaryingRate m = get(name);
+    if (m == null) {
+      LOG.warn("Got inc() request for method that doesnt exist: " +
+      name);
+      return; // ignore methods that dont exist.
+    }
+    m.inc(amt);
+  }
+
+  public void createMetrics(Class<?> []ifaces) {
+    for (Class<?> iface : ifaces) {
+      Method[] methods = iface.getMethods();
+      for (Method method : methods) {
+        if (get(method.getName()) == null)
+          create(method.getName());
+      }
+    }
+  }
+
+  /**
+   * Push the metrics to the monitoring subsystem on a doUpdates() call.
+   * @param context ctx
+   */
+  public void doUpdates(MetricsContext context) {
+    rpcQueueTime.pushMetric(metricsRecord);
+    rpcProcessingTime.pushMetric(metricsRecord);
+
+    synchronized (registry) {
+      // Iterate through the registry to propagate the different rpc metrics.
+
+      for (String metricName : registry.getKeyList() ) {
+        MetricsTimeVaryingRate value = (MetricsTimeVaryingRate) registry.get(metricName);
+
+        value.pushMetric(metricsRecord);
+      }
+    }
+    metricsRecord.update();
+  }
+
+  public void shutdown() {
+    if (rpcStatistics != null)
+      rpcStatistics.shutdown();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
new file mode 100644
index 0000000..867a059
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
@@ -0,0 +1,1391 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.BindException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.ServerSocket;
+import java.net.Socket;
+import java.net.SocketException;
+import java.net.UnknownHostException;
+import java.nio.ByteBuffer;
+import java.nio.channels.CancelledKeyException;
+import java.nio.channels.ClosedChannelException;
+import java.nio.channels.ReadableByteChannel;
+import java.nio.channels.SelectionKey;
+import java.nio.channels.Selector;
+import java.nio.channels.ServerSocketChannel;
+import java.nio.channels.SocketChannel;
+import java.nio.channels.WritableByteChannel;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.io.WritableWithSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.ObjectWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
+
+import com.google.common.base.Function;
+
+/** An abstract IPC service.  IPC calls take a single {@link Writable} as a
+ * parameter, and return a {@link Writable} as their value.  A service runs on
+ * a port and is defined by a parameter class and a value class.
+ *
+ *
+ * <p>Copied local so can fix HBASE-900.
+ *
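+ * <p>A minimal subclass sketch, for illustration only; {@code EchoServer},
+ * its bind address, port, handler counts and parameter class are hypothetical
+ * example choices:
+ *
+ * <pre>
+ * public class EchoServer extends HBaseServer {
+ *   public EchoServer(Configuration conf) throws IOException {
+ *     super("0.0.0.0", 60020, BytesWritable.class, 10, 1, conf, "EchoServer", 0);
+ *   }
+ *   public Writable call(Writable param, long receivedTime) throws IOException {
+ *     return param;  // echo the deserialized request back as the response
+ *   }
+ * }
+ * </pre>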
+ * @see HBaseClient
+ */
+public abstract class HBaseServer {
+
+  /**
+   * The first four bytes of Hadoop RPC connections
+   */
+  public static final ByteBuffer HEADER = ByteBuffer.wrap("hrpc".getBytes());
+
+  // 1 : Introduce ping and server does not throw away RPCs
+  // 3 : RPC was refactored in 0.19
+  public static final byte CURRENT_VERSION = 3;
+
+  /**
+   * How many calls/handler are allowed in the queue.
+   */
+  private static final int MAX_QUEUE_SIZE_PER_HANDLER = 100;
+
+  private static final String WARN_RESPONSE_SIZE =
+      "hbase.ipc.warn.response.size";
+
+  /** Default value for above param */
+  private static final int DEFAULT_WARN_RESPONSE_SIZE = 100 * 1024 * 1024;
+
+  private final int warnResponseSize;
+
+  public static final Log LOG =
+    LogFactory.getLog("org.apache.hadoop.ipc.HBaseServer");
+
+  protected static final ThreadLocal<HBaseServer> SERVER =
+    new ThreadLocal<HBaseServer>();
+  private volatile boolean started = false;
+
+  /** Returns the server instance the current call is executing under, or null.
+   * May be called under {@link #call(Writable, long)} implementations, and
+   * under {@link Writable} methods of parameters and return values.
+   * Permits applications to access
+   * the server context.
+   * @return HBaseServer
+   */
+  public static HBaseServer get() {
+    return SERVER.get();
+  }
+
+  /** This is set to the Call object before a Handler invokes an RPC and reset
+   * after the call returns.
+   */
+  protected static final ThreadLocal<Call> CurCall = new ThreadLocal<Call>();
+
+  /** Returns the remote side IP address when invoked inside an RPC.
+   *  Returns null in case of an error.
+   *  @return InetAddress
+   */
+  public static InetAddress getRemoteIp() {
+    Call call = CurCall.get();
+    if (call != null) {
+      return call.connection.socket.getInetAddress();
+    }
+    return null;
+  }
+  /** Returns remote address as a string when invoked inside an RPC.
+   *  Returns null in case of an error.
+   *  @return String
+   */
+  public static String getRemoteAddress() {
+    InetAddress addr = getRemoteIp();
+    return (addr == null) ? null : addr.getHostAddress();
+  }
+
+  protected String bindAddress;
+  protected int port;                             // port we listen on
+  private int handlerCount;                       // number of handler threads
+  private int priorityHandlerCount;
+  private int readThreads;                        // number of read threads
+  protected Class<? extends Writable> paramClass; // class of call parameters
+  protected int maxIdleTime;                      // the maximum idle time after
+                                                  // which a client may be
+                                                  // disconnected
+  protected int thresholdIdleConnections;         // the number of idle
+                                                  // connections after which we
+                                                  // will start cleaning up idle
+                                                  // connections
+  int maxConnectionsToNuke;                       // the max number of
+                                                  // connections to nuke
+                                                  // during a cleanup
+
+  protected HBaseRpcMetrics  rpcMetrics;
+
+  protected Configuration conf;
+
+  private int maxQueueSize;
+  protected int socketSendBufferSize;
+  protected final boolean tcpNoDelay;   // if T then disable Nagle's Algorithm
+  protected final boolean tcpKeepAlive; // if T then use keepalives
+
+  volatile protected boolean running = true;         // true while server runs
+  protected BlockingQueue<Call> callQueue; // queued calls
+  protected BlockingQueue<Call> priorityCallQueue;
+
+  private int highPriorityLevel;  // what level a high priority call is at
+
+  protected final List<Connection> connectionList =
+    Collections.synchronizedList(new LinkedList<Connection>());
+  // maintain a list of client connections
+  private Listener listener = null;
+  protected Responder responder = null;
+  protected int numConnections = 0;
+  private Handler[] handlers = null;
+  private Handler[] priorityHandlers = null;
+  protected HBaseRPCErrorHandler errorHandler = null;
+
+  /**
+   * A convenience method to bind to a given address and report
+   * better exceptions if the address is not a valid host.
+   * @param socket the socket to bind
+   * @param address the address to bind to
+   * @param backlog the number of connections allowed in the queue
+   * @throws BindException if the address can't be bound
+   * @throws UnknownHostException if the address isn't a valid host name
+   * @throws IOException other random errors from bind
+   */
+  public static void bind(ServerSocket socket, InetSocketAddress address,
+                          int backlog) throws IOException {
+    try {
+      socket.bind(address, backlog);
+    } catch (BindException e) {
+      BindException bindException =
+        new BindException("Problem binding to " + address + " : " +
+            e.getMessage());
+      bindException.initCause(e);
+      throw bindException;
+    } catch (SocketException e) {
+      // If they try to bind to a different host's address, give a better
+      // error message.
+      if ("Unresolved address".equals(e.getMessage())) {
+        throw new UnknownHostException("Invalid hostname for server: " +
+                                       address.getHostName());
+      }
+      throw e;
+    }
+  }
+
+  /** A call queued for handling. */
+  private static class Call {
+    protected int id;                             // the client's call id
+    protected Writable param;                     // the parameter passed
+    protected Connection connection;              // connection to client
+    protected long timestamp;      // the time received when response is null
+                                   // the time served when response is not null
+    protected ByteBuffer response;                // the response for this call
+
+    public Call(int id, Writable param, Connection connection) {
+      this.id = id;
+      this.param = param;
+      this.connection = connection;
+      this.timestamp = System.currentTimeMillis();
+      this.response = null;
+    }
+
+    @Override
+    public String toString() {
+      return param.toString() + " from " + connection.toString();
+    }
+
+    public void setResponse(ByteBuffer response) {
+      this.response = response;
+    }
+  }
+
+  /** Listens on the socket. Creates jobs for the handler threads. */
+  private class Listener extends Thread {
+
+    private ServerSocketChannel acceptChannel = null; //the accept channel
+    private Selector selector = null; //the selector that we use for the server
+    private Reader[] readers = null;
+    private int currentReader = 0;
+    private InetSocketAddress address; //the address we bind at
+    private Random rand = new Random();
+    private long lastCleanupRunTime = 0; //the last time a cleanup of
+                                         //idle connections ran
+    private long cleanupInterval = 10000; //the minimum interval between
+                                          //two cleanup runs
+    private int backlogLength = conf.getInt("ipc.server.listen.queue.size", 128);
+
+    private ExecutorService readPool;
+
+    public Listener() throws IOException {
+      address = new InetSocketAddress(bindAddress, port);
+      // Create a new server socket and set to non blocking mode
+      acceptChannel = ServerSocketChannel.open();
+      acceptChannel.configureBlocking(false);
+
+      // Bind the server socket to the local host and port
+      bind(acceptChannel.socket(), address, backlogLength);
+      port = acceptChannel.socket().getLocalPort(); //Could be an ephemeral port
+      // create a selector;
+      selector= Selector.open();
+
+      readers = new Reader[readThreads];
+      readPool = Executors.newFixedThreadPool(readThreads);
+      for (int i = 0; i < readThreads; ++i) {
+        Selector readSelector = Selector.open();
+        Reader reader = new Reader(readSelector);
+        readers[i] = reader;
+        readPool.execute(reader);
+      }
+
+      // Register accepts on the server socket with the selector.
+      acceptChannel.register(selector, SelectionKey.OP_ACCEPT);
+      this.setName("IPC Server listener on " + port);
+      this.setDaemon(true);
+    }
+
+
+    private class Reader implements Runnable {
+      private volatile boolean adding = false;
+      private Selector readSelector = null;
+
+      Reader(Selector readSelector) {
+        this.readSelector = readSelector;
+      }
+      public void run() {
+        synchronized(this) {
+          while (running) {
+            SelectionKey key = null;
+            try {
+              readSelector.select();
+              while (adding) {
+                this.wait(1000);
+              }
+
+              Iterator<SelectionKey> iter = readSelector.selectedKeys().iterator();
+              while (iter.hasNext()) {
+                key = iter.next();
+                iter.remove();
+                if (key.isValid()) {
+                  if (key.isReadable()) {
+                    doRead(key);
+                  }
+                }
+                key = null;
+              }
+            } catch (InterruptedException e) {
+              if (running) {                     // unexpected -- log it
+                LOG.info(getName() + " caught: " +
+                    StringUtils.stringifyException(e));
+              }
+            } catch (IOException ex) {
+               LOG.error("Error in Reader", ex);
+            }
+          }
+        }
+      }
+
+      /**
+       * This puts the reader into the state that waits for the new channel
+       * to be registered with readSelector. If the reader was waiting in
+       * select(), the thread will be woken up; otherwise, whenever select()
+       * is called it will return even if there is nothing to read, and the
+       * reader will wait in while(adding) until finishAdd() is called.
+       */
+      public void startAdd() {
+        adding = true;
+        readSelector.wakeup();
+      }
+
+      public synchronized SelectionKey registerChannel(SocketChannel channel)
+        throws IOException {
+        return channel.register(readSelector, SelectionKey.OP_READ);
+      }
+
+      public synchronized void finishAdd() {
+        adding = false;
+        this.notify();
+      }
+    }
+
+    /** Cleanup connections from connectionList. Choose a random range
+     * to scan and also have a limit on the number of connections
+     * that will be cleaned up per run. The criterion for cleanup is the time
+     * for which the connection was idle. If 'force' is true then all
+     * connections will be looked at for the cleanup.
+     * @param force all connections will be looked at for cleanup
+     */
+    private void cleanupConnections(boolean force) {
+      if (force || numConnections > thresholdIdleConnections) {
+        long currentTime = System.currentTimeMillis();
+        if (!force && (currentTime - lastCleanupRunTime) < cleanupInterval) {
+          return;
+        }
+        int start = 0;
+        int end = numConnections - 1;
+        if (!force) {
+          start = rand.nextInt() % numConnections;
+          end = rand.nextInt() % numConnections;
+          int temp;
+          if (end < start) {
+            temp = start;
+            start = end;
+            end = temp;
+          }
+        }
+        int i = start;
+        int numNuked = 0;
+        while (i <= end) {
+          Connection c;
+          synchronized (connectionList) {
+            try {
+              c = connectionList.get(i);
+            } catch (Exception e) {return;}
+          }
+          if (c.timedOut(currentTime)) {
+            if (LOG.isDebugEnabled())
+              LOG.debug(getName() + ": disconnecting client " + c.getHostAddress());
+            closeConnection(c);
+            numNuked++;
+            end--;
+            //noinspection UnusedAssignment
+            c = null;
+            if (!force && numNuked == maxConnectionsToNuke) break;
+          }
+          else i++;
+        }
+        lastCleanupRunTime = System.currentTimeMillis();
+      }
+    }
+
+    @Override
+    public void run() {
+      LOG.info(getName() + ": starting");
+      SERVER.set(HBaseServer.this);
+
+      while (running) {
+        SelectionKey key = null;
+        try {
+          selector.select(); // FindBugs IS2_INCONSISTENT_SYNC
+          Iterator<SelectionKey> iter = selector.selectedKeys().iterator();
+          while (iter.hasNext()) {
+            key = iter.next();
+            iter.remove();
+            try {
+              if (key.isValid()) {
+                if (key.isAcceptable())
+                  doAccept(key);
+              }
+            } catch (IOException ignored) {
+            }
+            key = null;
+          }
+        } catch (OutOfMemoryError e) {
+          if (errorHandler != null) {
+            if (errorHandler.checkOOME(e)) {
+              LOG.info(getName() + ": exiting on OOME");
+              closeCurrentConnection(key);
+              cleanupConnections(true);
+              return;
+            }
+          } else {
+            // we can run out of memory if we have too many threads
+            // log the event and sleep for a minute and give
+            // some thread(s) a chance to finish
+            LOG.warn("Out of Memory in server select", e);
+            closeCurrentConnection(key);
+            cleanupConnections(true);
+            try { Thread.sleep(60000); } catch (Exception ignored) {}
+          }
+        } catch (Exception e) {
+          closeCurrentConnection(key);
+        }
+        cleanupConnections(false);
+      }
+      LOG.info("Stopping " + this.getName());
+
+      synchronized (this) {
+        try {
+          acceptChannel.close();
+          selector.close();
+        } catch (IOException ignored) { }
+
+        selector= null;
+        acceptChannel= null;
+
+        // clean up all connections
+        while (!connectionList.isEmpty()) {
+          closeConnection(connectionList.remove(0));
+        }
+      }
+    }
+
+    private void closeCurrentConnection(SelectionKey key) {
+      if (key != null) {
+        Connection c = (Connection)key.attachment();
+        if (c != null) {
+          if (LOG.isDebugEnabled())
+            LOG.debug(getName() + ": disconnecting client " + c.getHostAddress());
+          closeConnection(c);
+        }
+      }
+    }
+
+    InetSocketAddress getAddress() {
+      return (InetSocketAddress)acceptChannel.socket().getLocalSocketAddress();
+    }
+
+    void doAccept(SelectionKey key) throws IOException, OutOfMemoryError {
+      Connection c;
+      ServerSocketChannel server = (ServerSocketChannel) key.channel();
+
+      SocketChannel channel;
+      while ((channel = server.accept()) != null) {
+        channel.configureBlocking(false);
+        channel.socket().setTcpNoDelay(tcpNoDelay);
+        channel.socket().setKeepAlive(tcpKeepAlive);
+
+        Reader reader = getReader();
+        try {
+          reader.startAdd();
+          SelectionKey readKey = reader.registerChannel(channel);
+          c = new Connection(channel, System.currentTimeMillis());
+          readKey.attach(c);
+          synchronized (connectionList) {
+            connectionList.add(numConnections, c);
+            numConnections++;
+          }
+          if (LOG.isDebugEnabled())
+            LOG.debug("Server connection from " + c.toString() +
+                "; # active connections: " + numConnections +
+                "; # queued calls: " + callQueue.size());
+        } finally {
+          reader.finishAdd();
+        }
+      }
+    }
+
+    void doRead(SelectionKey key) throws InterruptedException {
+      int count = 0;
+      Connection c = (Connection)key.attachment();
+      if (c == null) {
+        return;
+      }
+      c.setLastContact(System.currentTimeMillis());
+
+      try {
+        count = c.readAndProcess();
+      } catch (InterruptedException ieo) {
+        throw ieo;
+      } catch (Exception e) {
+        LOG.debug(getName() + ": readAndProcess threw exception " + e + ". Count of bytes read: " + count, e);
+        count = -1; //so that the (count < 0) block is executed
+      }
+      if (count < 0) {
+        if (LOG.isDebugEnabled())
+          LOG.debug(getName() + ": disconnecting client " +
+                    c.getHostAddress() + ". Number of active connections: "+
+                    numConnections);
+        closeConnection(c);
+        // c = null;
+      }
+      else {
+        c.setLastContact(System.currentTimeMillis());
+      }
+    }
+
+    synchronized void doStop() {
+      if (selector != null) {
+        selector.wakeup();
+        Thread.yield();
+      }
+      if (acceptChannel != null) {
+        try {
+          acceptChannel.socket().close();
+        } catch (IOException e) {
+          LOG.info(getName() + ": Exception in closing listener socket. " + e);
+        }
+      }
+      readPool.shutdownNow();
+    }
+
+    // The method that will return the next reader to work with
+    // Simplistic implementation of round robin for now
+    Reader getReader() {
+      currentReader = (currentReader + 1) % readers.length;
+      return readers[currentReader];
+    }
+  }
+
+  // Sends responses of RPC back to clients.
+  private class Responder extends Thread {
+    private Selector writeSelector;
+    private int pending;         // connections waiting to register
+
+    final static int PURGE_INTERVAL = 900000; // 15mins
+
+    Responder() throws IOException {
+      this.setName("IPC Server Responder");
+      this.setDaemon(true);
+      writeSelector = Selector.open(); // create a selector
+      pending = 0;
+    }
+
+    @Override
+    public void run() {
+      LOG.info(getName() + ": starting");
+      SERVER.set(HBaseServer.this);
+      long lastPurgeTime = 0;   // last check for old calls.
+
+      while (running) {
+        try {
+          waitPending();     // If a channel is being registered, wait.
+          writeSelector.select(PURGE_INTERVAL);
+          Iterator<SelectionKey> iter = writeSelector.selectedKeys().iterator();
+          while (iter.hasNext()) {
+            SelectionKey key = iter.next();
+            iter.remove();
+            try {
+              if (key.isValid() && key.isWritable()) {
+                  doAsyncWrite(key);
+              }
+            } catch (IOException e) {
+              LOG.info(getName() + ": doAsyncWrite threw exception " + e);
+            }
+          }
+          long now = System.currentTimeMillis();
+          if (now < lastPurgeTime + PURGE_INTERVAL) {
+            continue;
+          }
+          lastPurgeTime = now;
+          //
+          // If there were some calls that have not been sent out for a
+          // long time, discard them.
+          //
+          LOG.debug("Checking for old call responses.");
+          ArrayList<Call> calls;
+
+          // get the list of channels from list of keys.
+          synchronized (writeSelector.keys()) {
+            calls = new ArrayList<Call>(writeSelector.keys().size());
+            iter = writeSelector.keys().iterator();
+            while (iter.hasNext()) {
+              SelectionKey key = iter.next();
+              Call call = (Call)key.attachment();
+              if (call != null && key.channel() == call.connection.channel) {
+                calls.add(call);
+              }
+            }
+          }
+
+          for(Call call : calls) {
+            doPurge(call, now);
+          }
+        } catch (OutOfMemoryError e) {
+          if (errorHandler != null) {
+            if (errorHandler.checkOOME(e)) {
+              LOG.info(getName() + ": exiting on OOME");
+              return;
+            }
+          } else {
+            //
+            // we can run out of memory if we have too many threads
+            // log the event and sleep for a minute and give
+            // some thread(s) a chance to finish
+            //
+            LOG.warn("Out of Memory in server select", e);
+            try { Thread.sleep(60000); } catch (Exception ignored) {}
+          }
+        } catch (Exception e) {
+          LOG.warn("Exception in Responder " +
+                   StringUtils.stringifyException(e));
+        }
+      }
+      LOG.info("Stopping " + this.getName());
+    }
+
+    private void doAsyncWrite(SelectionKey key) throws IOException {
+      Call call = (Call)key.attachment();
+      if (call == null) {
+        return;
+      }
+      if (key.channel() != call.connection.channel) {
+        throw new IOException("doAsyncWrite: bad channel");
+      }
+
+      synchronized(call.connection.responseQueue) {
+        if (processResponse(call.connection.responseQueue, false)) {
+          try {
+            key.interestOps(0);
+          } catch (CancelledKeyException e) {
+            /* The Listener/reader might have closed the socket.
+             * We don't explicitly cancel the key, so not sure if this will
+             * ever fire.
+             * This warning could be removed.
+             */
+            LOG.warn("Exception while changing ops : " + e);
+          }
+        }
+      }
+    }
+
+    //
+    // Remove calls that have been pending in the responseQueue
+    // for a long time.
+    //
+    private void doPurge(Call call, long now) {
+      synchronized (call.connection.responseQueue) {
+        Iterator<Call> iter = call.connection.responseQueue.listIterator(0);
+        while (iter.hasNext()) {
+          Call nextCall = iter.next();
+          if (now > nextCall.timestamp + PURGE_INTERVAL) {
+            closeConnection(nextCall.connection);
+            break;
+          }
+        }
+      }
+    }
+
+    // Processes one response. Returns true if there is no more pending
+    // data for this channel.
+    //
+    @SuppressWarnings({"ConstantConditions"})
+    private boolean processResponse(final LinkedList<Call> responseQueue,
+                                    boolean inHandler) throws IOException {
+      boolean error = true;
+      boolean done = false;       // false means there is more data for this channel.
+      int numElements;
+      Call call = null;
+      try {
+        //noinspection SynchronizationOnLocalVariableOrMethodParameter
+        synchronized (responseQueue) {
+          //
+          // If there are no items for this channel, then we are done
+          //
+          numElements = responseQueue.size();
+          if (numElements == 0) {
+            error = false;
+            return true;              // no more data for this channel.
+          }
+          //
+          // Extract the first call
+          //
+          call = responseQueue.removeFirst();
+          SocketChannel channel = call.connection.channel;
+          if (LOG.isDebugEnabled()) {
+            LOG.debug(getName() + ": responding to #" + call.id + " from " +
+                      call.connection);
+          }
+          //
+          // Send as much data as we can in the non-blocking fashion
+          //
+          int numBytes = channelWrite(channel, call.response);
+          if (numBytes < 0) {
+            return true;
+          }
+          if (!call.response.hasRemaining()) {
+            call.connection.decRpcCount();
+            //noinspection RedundantIfStatement
+            if (numElements == 1) {    // last call fully processed.
+              done = true;             // no more data for this channel.
+            } else {
+              done = false;            // more calls pending to be sent.
+            }
+            if (LOG.isDebugEnabled()) {
+              LOG.debug(getName() + ": responding to #" + call.id + " from " +
+                        call.connection + " Wrote " + numBytes + " bytes.");
+            }
+          } else {
+            //
+            // If we were unable to write the entire response out, then
+            // insert in Selector queue.
+            //
+            call.connection.responseQueue.addFirst(call);
+
+            if (inHandler) {
+              // set the serve time when the response has to be sent later
+              call.timestamp = System.currentTimeMillis();
+
+              incPending();
+              try {
+                // Wakeup the thread blocked on select, only then can the call
+                // to channel.register() complete.
+                writeSelector.wakeup();
+                channel.register(writeSelector, SelectionKey.OP_WRITE, call);
+              } catch (ClosedChannelException e) {
+                // It's OK; the channel might be closed elsewhere.
+                done = true;
+              } finally {
+                decPending();
+              }
+            }
+            if (LOG.isDebugEnabled()) {
+              LOG.debug(getName() + ": responding to #" + call.id + " from " +
+                        call.connection + " Wrote partial " + numBytes +
+                        " bytes.");
+            }
+          }
+          error = false;              // everything went off well
+        }
+      } finally {
+        if (error && call != null) {
+          LOG.warn(getName()+", call " + call + ": output error");
+          done = true;               // error. no more data for this channel.
+          closeConnection(call.connection);
+        }
+      }
+      return done;
+    }
+
+    //
+    // Enqueue a response from the application.
+    //
+    void doRespond(Call call) throws IOException {
+      synchronized (call.connection.responseQueue) {
+        call.connection.responseQueue.addLast(call);
+        if (call.connection.responseQueue.size() == 1) {
+          processResponse(call.connection.responseQueue, true);
+        }
+      }
+    }
+
+    private synchronized void incPending() {   // call waiting to be enqueued.
+      pending++;
+    }
+
+    private synchronized void decPending() { // call done enqueueing.
+      pending--;
+      notify();
+    }
+
+    private synchronized void waitPending() throws InterruptedException {
+      while (pending > 0) {
+        wait();
+      }
+    }
+  }
+
+  /** Reads calls from a connection and queues them for handling. */
+  private class Connection {
+    private boolean versionRead = false; //if initial signature and
+                                         //version are read
+    private boolean headerRead = false;  //if the connection header that
+                                         //follows version is read.
+    protected SocketChannel channel;
+    private ByteBuffer data;
+    private ByteBuffer dataLengthBuffer;
+    protected final LinkedList<Call> responseQueue;
+    private volatile int rpcCount = 0; // number of outstanding rpcs
+    private long lastContact;
+    private int dataLength;
+    protected Socket socket;
+    // Cache the remote host & port info so that even if the socket is
+    // disconnected, we can say where it used to connect to.
+    private String hostAddress;
+    private int remotePort;
+    protected UserGroupInformation ticket = null;
+
+    public Connection(SocketChannel channel, long lastContact) {
+      this.channel = channel;
+      this.lastContact = lastContact;
+      this.data = null;
+      this.dataLengthBuffer = ByteBuffer.allocate(4);
+      this.socket = channel.socket();
+      InetAddress addr = socket.getInetAddress();
+      if (addr == null) {
+        this.hostAddress = "*Unknown*";
+      } else {
+        this.hostAddress = addr.getHostAddress();
+      }
+      this.remotePort = socket.getPort();
+      this.responseQueue = new LinkedList<Call>();
+      if (socketSendBufferSize != 0) {
+        try {
+          socket.setSendBufferSize(socketSendBufferSize);
+        } catch (IOException e) {
+          LOG.warn("Connection: unable to set socket send buffer size to " +
+                   socketSendBufferSize);
+        }
+      }
+    }
+
+    @Override
+    public String toString() {
+      return getHostAddress() + ":" + remotePort;
+    }
+
+    public String getHostAddress() {
+      return hostAddress;
+    }
+
+    public void setLastContact(long lastContact) {
+      this.lastContact = lastContact;
+    }
+
+    public long getLastContact() {
+      return lastContact;
+    }
+
+    /* Return true if the connection has no outstanding rpc */
+    private boolean isIdle() {
+      return rpcCount == 0;
+    }
+
+    /* Decrement the outstanding RPC count */
+    protected void decRpcCount() {
+      rpcCount--;
+    }
+
+    /* Increment the outstanding RPC count */
+    private void incRpcCount() {
+      rpcCount++;
+    }
+
+    protected boolean timedOut(long currentTime) {
+      return isIdle() && currentTime - lastContact > maxIdleTime;
+    }
+
+    public int readAndProcess() throws IOException, InterruptedException {
+      while (true) {
+        /* Read at most one RPC. If the header is not read completely yet
+         * then iterate until we read the first RPC or until there is no data left.
+         */
+        int count;
+        if (dataLengthBuffer.remaining() > 0) {
+          count = channelRead(channel, dataLengthBuffer);
+          if (count < 0 || dataLengthBuffer.remaining() > 0)
+            return count;
+        }
+
+        if (!versionRead) {
+          //Every connection is expected to send the header.
+          ByteBuffer versionBuffer = ByteBuffer.allocate(1);
+          count = channelRead(channel, versionBuffer);
+          if (count <= 0) {
+            return count;
+          }
+          int version = versionBuffer.get(0);
+
+          dataLengthBuffer.flip();
+          if (!HEADER.equals(dataLengthBuffer) || version != CURRENT_VERSION) {
+            //Warning is ok since this is not supposed to happen.
+            LOG.warn("Incorrect header or version mismatch from " +
+                     hostAddress + ":" + remotePort +
+                     " got version " + version +
+                     " expected version " + CURRENT_VERSION);
+            return -1;
+          }
+          dataLengthBuffer.clear();
+          versionRead = true;
+          continue;
+        }
+
+        if (data == null) {
+          dataLengthBuffer.flip();
+          dataLength = dataLengthBuffer.getInt();
+
+          if (dataLength == HBaseClient.PING_CALL_ID) {
+            dataLengthBuffer.clear();
+            return 0;  //ping message
+          }
+          data = ByteBuffer.allocate(dataLength);
+          incRpcCount();  // Increment the rpc count
+        }
+
+        count = channelRead(channel, data);
+
+        if (data.remaining() == 0) {
+          dataLengthBuffer.clear();
+          data.flip();
+          if (headerRead) {
+            processData();
+            data = null;
+            return count;
+          }
+          processHeader();
+          headerRead = true;
+          data = null;
+          continue;
+        }
+        return count;
+      }
+    }
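+
+    /* Per-connection wire format handled above (sketch; HEADER and
+     * CURRENT_VERSION are defined elsewhere in this class):
+     *
+     *   [4-byte HEADER magic][1-byte version]            once per connection
+     *   [4-byte length][connection header, i.e. ticket]  once per connection
+     *   [4-byte length][call id (int)][param Writable]   repeated per call
+     *
+     * A length equal to HBaseClient.PING_CALL_ID marks a ping and has no body.
+     */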
+
+    // Reads the header that follows the version.
+    private void processHeader() throws IOException {
+      /* In the current version, it is just a ticket.
+       * Later we could introduce a "ConnectionHeader" class.
+       */
+      DataInputStream in =
+        new DataInputStream(new ByteArrayInputStream(data.array()));
+      ticket = (UserGroupInformation) ObjectWritable.readObject(in, conf);
+    }
+
+    private void processData() throws  IOException, InterruptedException {
+      DataInputStream dis =
+        new DataInputStream(new ByteArrayInputStream(data.array()));
+      int id = dis.readInt();                    // try to read an id
+
+      if (LOG.isDebugEnabled())
+        LOG.debug(" got #" + id);
+
+      Writable param = ReflectionUtils.newInstance(paramClass, conf);           // read param
+      param.readFields(dis);
+
+      Call call = new Call(id, param, this);
+
+      if (priorityCallQueue != null && getQosLevel(param) > highPriorityLevel) {
+        priorityCallQueue.put(call);
+      } else {
+        callQueue.put(call);              // queue the call; maybe blocked here
+      }
+    }
+
+    protected synchronized void close() {
+      data = null;
+      dataLengthBuffer = null;
+      if (!channel.isOpen())
+        return;
+      try {socket.shutdownOutput();} catch(Exception ignored) {} // FindBugs DE_MIGHT_IGNORE
+      if (channel.isOpen()) {
+        try {channel.close();} catch(Exception ignored) {}
+      }
+      try {socket.close();} catch(Exception ignored) {}
+    }
+  }
+
+  /** Handles queued calls. */
+  private class Handler extends Thread {
+    private final BlockingQueue<Call> myCallQueue;
+    static final int BUFFER_INITIAL_SIZE = 1024;
+
+    public Handler(final BlockingQueue<Call> cq, int instanceNumber) {
+      this.myCallQueue = cq;
+      this.setDaemon(true);
+
+      String threadName = "IPC Server handler " + instanceNumber + " on " + port;
+      if (cq == priorityCallQueue) {
+        // this is just an amazing hack, but it works.
+        threadName = "PRI " + threadName;
+      }
+      this.setName(threadName);
+    }
+
+    @Override
+    public void run() {
+      LOG.info(getName() + ": starting");
+      SERVER.set(HBaseServer.this);
+      while (running) {
+        try {
+          Call call = myCallQueue.take(); // pop the queue; maybe blocked here
+
+          if (LOG.isDebugEnabled())
+            LOG.debug(getName() + ": has #" + call.id + " from " +
+                      call.connection);
+
+          String errorClass = null;
+          String error = null;
+          Writable value = null;
+
+          CurCall.set(call);
+          try {
+            if (!started)
+              throw new ServerNotRunningException("Server is not running yet");
+            value = call(call.param, call.timestamp);             // make the call
+          } catch (Throwable e) {
+            LOG.debug(getName()+", call "+call+": error: " + e, e);
+            errorClass = e.getClass().getName();
+            error = StringUtils.stringifyException(e);
+          }
+          CurCall.set(null);
+
+          int size = BUFFER_INITIAL_SIZE;
+          if (value instanceof WritableWithSize) {
+            // get the size hint.
+            WritableWithSize ohint = (WritableWithSize)value;
+            long hint = ohint.getWritableSize() + Bytes.SIZEOF_BYTE + Bytes.SIZEOF_INT;
+            if (hint > 0) {
+              if (hint > Integer.MAX_VALUE) {
+                // the size hint does not fit in an int, so we cannot
+                // allocate a response buffer that large
+                IOException ioe =
+                    new IOException("Result buffer size too large: " + hint);
+                errorClass = ioe.getClass().getName();
+                error = StringUtils.stringifyException(ioe);
+              } else {
+                size = (int)hint;
+              }
+            }
+          }
+          ByteBufferOutputStream buf = new ByteBufferOutputStream(size);
+          DataOutputStream out = new DataOutputStream(buf);
+          out.writeInt(call.id);                // write call id
+          out.writeBoolean(error != null);      // write error flag
+
+          if (error == null) {
+            value.write(out);
+          } else {
+            WritableUtils.writeString(out, errorClass);
+            WritableUtils.writeString(out, error);
+          }
+
+          if (buf.size() > warnResponseSize) {
+            LOG.warn(getName()+", responseTooLarge for: "+call+": Size: "
+                     + StringUtils.humanReadableInt(buf.size()));
+          }
+
+
+          call.setResponse(buf.getByteBuffer());
+          responder.doRespond(call);
+        } catch (InterruptedException e) {
+          if (running) {                          // unexpected -- log it
+            LOG.info(getName() + " caught: " +
+                     StringUtils.stringifyException(e));
+          }
+        } catch (OutOfMemoryError e) {
+          if (errorHandler != null) {
+            if (errorHandler.checkOOME(e)) {
+              LOG.info(getName() + ": exiting on OOME");
+              return;
+            }
+          } else {
+            // rethrow if no handler
+            throw e;
+          }
+        } catch (Exception e) {
+          LOG.warn(getName() + " caught: " +
+                   StringUtils.stringifyException(e));
+        }
+      }
+      LOG.info(getName() + ": exiting");
+    }
+
+  }
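+
+  /* Response framing produced by Handler.run() above, shown as a client-side
+   * decoding sketch (illustrative only; a real client must also cope with
+   * partial reads and knows the concrete Writable type from the protocol):
+   *
+   *   DataInputStream in = ...;               // bytes from the server
+   *   int id = in.readInt();                  // matches the Call id
+   *   boolean isError = in.readBoolean();     // error flag
+   *   if (isError) {
+   *     String errorClass = WritableUtils.readString(in);
+   *     String trace = WritableUtils.readString(in);
+   *   } else {
+   *     value.readFields(in);                 // protocol-specific Writable
+   *   }
+   */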
+
+  /**
+   * Gets the QOS level for this call.  If it is higher than the highPriorityLevel and there
+   * are priorityHandlers available, it will be processed in its own thread set.
+   *
+   * @param param the call parameter used to determine the priority
+   * @return priority, higher is better
+   */
+  private Function<Writable,Integer> qosFunction = null;
+  public void setQosFunction(Function<Writable, Integer> newFunc) {
+    qosFunction = newFunc;
+  }
+
+  protected int getQosLevel(Writable param) {
+    if (qosFunction == null) {
+      return 0;
+    }
+
+    Integer res = qosFunction.apply(param);
+    if (res == null) {
+      return 0;
+    }
+    return res;
+  }
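+
+  /* Usage sketch (illustrative only): a concrete server could install a QoS
+   * function so that certain requests land on the priority handler pool.
+   * This assumes the Function type above is Guava's
+   * com.google.common.base.Function; isMetaRequest and HIGH_QOS are
+   * hypothetical names used only for this example.
+   *
+   *   server.setQosFunction(new Function<Writable, Integer>() {
+   *     public Integer apply(Writable param) {
+   *       // anything above highPriorityLevel goes to priorityCallQueue
+   *       return isMetaRequest(param) ? HIGH_QOS : 0;
+   *     }
+   *   });
+   */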
+
+  /* Constructs a server listening on the named port and address.  Parameters passed must
+   * be of the named class.  The <code>handlerCount</code> determines
+   * the number of handler threads that will be used to process calls.
+   *
+   */
+  protected HBaseServer(String bindAddress, int port,
+                        Class<? extends Writable> paramClass, int handlerCount,
+                        int priorityHandlerCount, Configuration conf, String serverName,
+                        int highPriorityLevel)
+    throws IOException {
+    this.bindAddress = bindAddress;
+    this.conf = conf;
+    this.port = port;
+    this.paramClass = paramClass;
+    this.handlerCount = handlerCount;
+    this.priorityHandlerCount = priorityHandlerCount;
+    this.socketSendBufferSize = 0;
+    this.maxQueueSize = handlerCount * MAX_QUEUE_SIZE_PER_HANDLER;
+    this.readThreads = conf.getInt(
+        "ipc.server.read.threadpool.size", 10);
+    this.callQueue  = new LinkedBlockingQueue<Call>(maxQueueSize);
+    if (priorityHandlerCount > 0) {
+      this.priorityCallQueue = new LinkedBlockingQueue<Call>(maxQueueSize); // TODO hack on size
+    } else {
+      this.priorityCallQueue = null;
+    }
+    this.highPriorityLevel = highPriorityLevel;
+    this.maxIdleTime = 2*conf.getInt("ipc.client.connection.maxidletime", 1000);
+    this.maxConnectionsToNuke = conf.getInt("ipc.client.kill.max", 10);
+    this.thresholdIdleConnections = conf.getInt("ipc.client.idlethreshold", 4000);
+
+    // Start the listener here and let it bind to the port
+    listener = new Listener();
+    this.port = listener.getAddress().getPort();
+    this.rpcMetrics = new HBaseRpcMetrics(serverName,
+                          Integer.toString(this.port));
+    this.tcpNoDelay = conf.getBoolean("ipc.server.tcpnodelay", false);
+    this.tcpKeepAlive = conf.getBoolean("ipc.server.tcpkeepalive", true);
+
+    this.warnResponseSize = conf.getInt(WARN_RESPONSE_SIZE,
+                                        DEFAULT_WARN_RESPONSE_SIZE);
+
+
+    // Create the responder here
+    responder = new Responder();
+  }
+
+  protected void closeConnection(Connection connection) {
+    synchronized (connectionList) {
+      if (connectionList.remove(connection))
+        numConnections--;
+    }
+    connection.close();
+  }
+
+  /** Sets the socket buffer size used for responding to RPCs.
+   * @param size send size
+   */
+  public void setSocketSendBufSize(int size) { this.socketSendBufferSize = size; }
+
+  /** Starts the service.  Must be called before any calls will be handled. */
+  public void start() {
+    startThreads();
+    openServer();
+  }
+
+  /**
+   * Open a previously started server.
+   */
+  public void openServer() {
+    started = true;
+  }
+
+  /**
+   * Starts the service threads but does not allow requests to be responded to yet.
+   * Clients will get a {@link ServerNotRunningException} instead.
+   */
+  public synchronized void startThreads() {
+    responder.start();
+    listener.start();
+    handlers = new Handler[handlerCount];
+
+    for (int i = 0; i < handlerCount; i++) {
+      handlers[i] = new Handler(callQueue, i);
+      handlers[i].start();
+    }
+
+    if (priorityHandlerCount > 0) {
+      priorityHandlers = new Handler[priorityHandlerCount];
+      for (int i = 0 ; i < priorityHandlerCount; i++) {
+        priorityHandlers[i] = new Handler(priorityCallQueue, i);
+        priorityHandlers[i].start();
+      }
+    }
+  }
+
+  /** Stops the service.  No new calls will be handled after this is called. */
+  public synchronized void stop() {
+    LOG.info("Stopping server on " + port);
+    running = false;
+    if (handlers != null) {
+      for (Handler handler : handlers) {
+        if (handler != null) {
+          handler.interrupt();
+        }
+      }
+    }
+    if (priorityHandlers != null) {
+      for (Handler handler : priorityHandlers) {
+        if (handler != null) {
+          handler.interrupt();
+        }
+      }
+    }
+    listener.interrupt();
+    listener.doStop();
+    responder.interrupt();
+    notifyAll();
+    if (this.rpcMetrics != null) {
+      this.rpcMetrics.shutdown();
+    }
+  }
+
+  /** Wait for the server to be stopped.
+   * Does not wait for all subthreads to finish.
+   *  See {@link #stop()}.
+   * @throws InterruptedException e
+   */
+  public synchronized void join() throws InterruptedException {
+    while (running) {
+      wait();
+    }
+  }
+
+  /**
+   * Return the socket (ip+port) on which the RPC server is listening.
+   * @return the socket (ip+port) on which the RPC server is listening.
+   */
+  public synchronized InetSocketAddress getListenerAddress() {
+    return listener.getAddress();
+  }
+
+  /** Called for each call.
+   * @param param writable parameter
+   * @param receiveTime time
+   * @return Writable
+   * @throws IOException e
+   */
+  public abstract Writable call(Writable param, long receiveTime)
+                                                throws IOException;
+
+  /**
+   * The number of open RPC connections
+   * @return the number of open rpc connections
+   */
+  public int getNumOpenConnections() {
+    return numConnections;
+  }
+
+  /**
+   * The number of rpc calls in the queue.
+   * @return The number of rpc calls in the queue.
+   */
+  public int getCallQueueLen() {
+    return callQueue.size();
+  }
+
+  /**
+   * Set the handler for calling out of RPC for error conditions.
+   * @param handler the handler implementation
+   */
+  public void setErrorHandler(HBaseRPCErrorHandler handler) {
+    this.errorHandler = handler;
+  }
+
+  /**
+   * When the read or write buffer size is larger than this limit, i/o will be
+   * done in chunks of this size. Most RPC requests and responses would be
+   * smaller.
+   */
+  private static int NIO_BUFFER_LIMIT = 8*1024; //should not be more than 64KB.
+
+  /**
+   * This is a wrapper around {@link WritableByteChannel#write(ByteBuffer)}.
+   * If the amount of data is large, it writes to the channel in smaller chunks.
+   * This prevents the JDK from creating many direct buffers as the size of
+   * the buffer increases. It also minimizes the extra copies in the NIO layer
+   * that result from the multiple write operations required to write a large
+   * buffer.
+   *
+   * @param channel writable byte channel to write to
+   * @param buffer buffer to write
+   * @return number of bytes written
+   * @throws java.io.IOException e
+   * @see WritableByteChannel#write(ByteBuffer)
+   */
+  protected static int channelWrite(WritableByteChannel channel,
+                                    ByteBuffer buffer) throws IOException {
+    return (buffer.remaining() <= NIO_BUFFER_LIMIT) ?
+           channel.write(buffer) : channelIO(null, channel, buffer);
+  }
+
+  /**
+   * This is a wrapper around {@link ReadableByteChannel#read(ByteBuffer)}.
+   * If the amount of data is large, it reads from the channel in smaller chunks.
+   * This prevents the JDK from creating many direct buffers as the size of the
+   * ByteBuffer increases. There should not be any performance degradation.
+   *
+   * @param channel readable byte channel to read from
+   * @param buffer buffer to read into
+   * @return number of bytes read
+   * @throws java.io.IOException e
+   * @see ReadableByteChannel#read(ByteBuffer)
+   */
+  protected static int channelRead(ReadableByteChannel channel,
+                                   ByteBuffer buffer) throws IOException {
+    return (buffer.remaining() <= NIO_BUFFER_LIMIT) ?
+           channel.read(buffer) : channelIO(channel, null, buffer);
+  }
+
+  /**
+   * Helper for {@link #channelRead(ReadableByteChannel, ByteBuffer)}
+   * and {@link #channelWrite(WritableByteChannel, ByteBuffer)}. Only
+   * one of readCh or writeCh should be non-null.
+   *
+   * @param readCh read channel
+   * @param writeCh write channel
+   * @param buf buffer to read or write into/out of
+   * @return number of bytes read or written
+   * @throws java.io.IOException e
+   * @see #channelRead(ReadableByteChannel, ByteBuffer)
+   * @see #channelWrite(WritableByteChannel, ByteBuffer)
+   */
+  private static int channelIO(ReadableByteChannel readCh,
+                               WritableByteChannel writeCh,
+                               ByteBuffer buf) throws IOException {
+
+    int originalLimit = buf.limit();
+    int initialRemaining = buf.remaining();
+    int ret = 0;
+
+    while (buf.remaining() > 0) {
+      try {
+        int ioSize = Math.min(buf.remaining(), NIO_BUFFER_LIMIT);
+        buf.limit(buf.position() + ioSize);
+
+        ret = (readCh == null) ? writeCh.write(buf) : readCh.read(buf);
+
+        if (ret < ioSize) {
+          break;
+        }
+
+      } finally {
+        buf.limit(originalLimit);
+      }
+    }
+
+    int nBytes = initialRemaining - buf.remaining();
+    return (nBytes > 0) ? nBytes : ret;
+  }
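+
+  /* Usage sketch (illustrative only): a buffer bigger than NIO_BUFFER_LIMIT
+   * is written in NIO_BUFFER_LIMIT-sized slices by channelIO, which
+   * temporarily caps the buffer limit for each write:
+   *
+   *   ByteBuffer response = ByteBuffer.allocate(64 * 1024);
+   *   // ... fill the buffer, then response.flip() ...
+   *   int written = channelWrite(socketChannel, response);
+   *   // if response.hasRemaining() afterwards, the channel would have
+   *   // blocked; the Responder re-queues the call and registers for
+   *   // OP_WRITE in that case (see processResponse above).
+   */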
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HMasterInterface.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HMasterInterface.java
new file mode 100644
index 0000000..a4f09f3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HMasterInterface.java
@@ -0,0 +1,190 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.UnknownRegionException;
+
+/**
+ * Clients interact with the HMasterInterface to gain access to meta-level
+ * HBase functionality, like finding an HRegionServer and creating/destroying
+ * tables.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ *
+ */
+public interface HMasterInterface extends HBaseRPCProtocolVersion {
+
+  /** @return true if master is available */
+  public boolean isMasterRunning();
+
+  // Admin tools would use these cmds
+
+  /**
+   * Creates a new table.  If splitKeys are specified, then the table will be
+   * created with an initial set of multiple regions.  If splitKeys is null,
+   * the table will be created with a single region.
+   * @param desc table descriptor
+   * @param splitKeys keys used to pre-split the table into multiple regions; null for a single region
+   * @throws IOException
+   */
+  public void createTable(HTableDescriptor desc, byte [][] splitKeys)
+  throws IOException;
+
+  /**
+   * Deletes a table
+   * @param tableName table to delete
+   * @throws IOException e
+   */
+  public void deleteTable(final byte [] tableName) throws IOException;
+
+  /**
+   * Adds a column to the specified table
+   * @param tableName table to modify
+   * @param column column descriptor
+   * @throws IOException e
+   */
+  public void addColumn(final byte [] tableName, HColumnDescriptor column)
+  throws IOException;
+
+  /**
+   * Modifies an existing column on the specified table
+   * @param tableName table name
+   * @param descriptor new column descriptor
+   * @throws IOException e
+   */
+  public void modifyColumn(final byte [] tableName, HColumnDescriptor descriptor)
+  throws IOException;
+
+
+  /**
+   * Deletes a column from the specified table. Table must be disabled.
+   * @param tableName table to alter
+   * @param columnName column family to remove
+   * @throws IOException e
+   */
+  public void deleteColumn(final byte [] tableName, final byte [] columnName)
+  throws IOException;
+
+  /**
+   * Puts the table on-line (only needed if table has been previously taken offline)
+   * @param tableName table to enable
+   * @throws IOException e
+   */
+  public void enableTable(final byte [] tableName) throws IOException;
+
+  /**
+   * Take table offline
+   *
+   * @param tableName table to take offline
+   * @throws IOException e
+   */
+  public void disableTable(final byte [] tableName) throws IOException;
+
+  /**
+   * Modify a table's metadata
+   *
+   * @param tableName table to modify
+   * @param htd new descriptor for table
+   * @throws IOException e
+   */
+  public void modifyTable(byte[] tableName, HTableDescriptor htd)
+  throws IOException;
+
+  /**
+   * Shutdown an HBase cluster.
+   * @throws IOException e
+   */
+  public void shutdown() throws IOException;
+
+  /**
+   * Stop HBase Master only.
+   * Does not shutdown the cluster.
+   * @throws IOException e
+   */
+  public void stopMaster() throws IOException;
+
+  /**
+   * Return cluster status.
+   * @return status object
+   */
+  public ClusterStatus getClusterStatus();
+
+
+  /**
+   * Move the region <code>r</code> to <code>dest</code>.
+   * @param encodedRegionName The encoded region name; i.e. the hash that makes
+   * up the region name suffix: e.g. if regionname is
+   * <code>TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.</code>,
+   * then the encoded region name is: <code>527db22f95c8a9e0116f0cc13c680396</code>.
+   * @param destServerName The servername of the destination regionserver.  If
+   * passed the empty byte array we'll assign to a random server.  A server name
+   * is made of host, port and startcode.  Here is an example:
+   * <code> host187.example.com,60020,1289493121758</code>.
+   * @throws UnknownRegionException Thrown if we can't find a region named
+   * <code>encodedRegionName</code>
+   */
+  public void move(final byte [] encodedRegionName, final byte [] destServerName)
+  throws UnknownRegionException;
+
+  /**
+   * Assign a region to a server chosen at random.
+   * @param regionName Region to assign.  Will use an existing RegionPlan if one
+   * is found.
+   * @param force If true, will force the assignment.
+   * @throws IOException
+   */
+  public void assign(final byte [] regionName, final boolean force)
+  throws IOException;
+
+  /**
+   * Unassign a region from current hosting regionserver.  Region will then be
+   * assigned to a regionserver chosen at random.  Region could be reassigned
+   * back to the same server.  Use {@link #move(byte[], byte[])} if you want
+   * to control the region movement.
+   * @param regionName Region to unassign. Will clear any existing RegionPlan
+   * if one is found.
+   * @param force If true, force unassign (Will remove region from
+   * regions-in-transition too if present).
+   * @throws IOException
+   */
+  public void unassign(final byte [] regionName, final boolean force)
+  throws IOException;
+
+  /**
+   * Run the balancer.  If there are regions to move, it will go ahead and do
+   * the reassignments.  It may be unable to run for various reasons; check the
+   * logs.
+   * @return True if balancer ran, false otherwise.
+   */
+  public boolean balance();
+
+  /**
+   * Turn the load balancer on or off.
+   * @param b If true, enable balancer. If false, disable balancer.
+   * @return Previous balancer value
+   */
+  public boolean balanceSwitch(final boolean b);
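+
+  /* Typical path to these calls (sketch): clients normally go through
+   * HBaseAdmin, which obtains an RPC proxy for this interface from the
+   * master.  Assuming the usual 0.90 client API; "mytable" and "cf" are
+   * placeholders:
+   *
+   *   Configuration conf = HBaseConfiguration.create();
+   *   HBaseAdmin admin = new HBaseAdmin(conf);
+   *   HTableDescriptor desc = new HTableDescriptor("mytable");
+   *   desc.addFamily(new HColumnDescriptor("cf"));
+   *   admin.createTable(desc);         // createTable() above
+   *   admin.disableTable("mytable");   // disableTable() above
+   *   admin.deleteTable("mytable");    // deleteTable() above
+   */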
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HMasterRegionInterface.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HMasterRegionInterface.java
new file mode 100644
index 0000000..660c475
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HMasterRegionInterface.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.io.MapWritable;
+
+import java.io.IOException;
+
+/**
+ * HRegionServers interact with the HMasterRegionInterface to report on local
+ * goings-on and to obtain data-handling instructions from the HMaster.
+ * <p>Changes here need to be reflected in HbaseObjectWritable and HbaseRPC#Invoker.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ *
+ */
+public interface HMasterRegionInterface extends HBaseRPCProtocolVersion {
+
+  /**
+   * Called when a region server first starts
+   * @param info server info
+   * @param serverCurrentTime The current time of the region server in ms
+   * @throws IOException e
+   * @return Configuration for the regionserver to use: e.g. filesystem,
+   * hbase rootdir, etc.
+   */
+  public MapWritable regionServerStartup(HServerInfo info,
+    long serverCurrentTime) throws IOException;
+
+  /**
+   * Called to renew lease, tell master what the region server is doing and to
+   * receive new instructions from the master
+   *
+   * @param info server's address and start code
+   * @param msgs things the region server wants to tell the master
+   * @param mostLoadedRegions Array of HRegionInfos that should contain the
+   * reporting server's most loaded regions. These are candidates for being
+   * rebalanced.
+   * @return instructions from the master to the region server
+   * @throws IOException e
+   */
+  public HMsg[] regionServerReport(HServerInfo info, HMsg msgs[],
+    HRegionInfo mostLoadedRegions[])
+  throws IOException;
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
new file mode 100644
index 0000000..5da41be
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
@@ -0,0 +1,394 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.util.List;
+import java.util.NavigableSet;
+
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.MultiAction;
+import org.apache.hadoop.hbase.client.MultiPut;
+import org.apache.hadoop.hbase.client.MultiPutResponse;
+import org.apache.hadoop.hbase.client.MultiResponse;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.ipc.RemoteException;
+
+/**
+ * Clients interact with HRegionServers using a handle to the HRegionInterface.
+ *
+ * <p>NOTE: if you change the interface, you must change the RPC version
+ * number in HBaseRPCProtocolVersion
+ */
+public interface HRegionInterface extends HBaseRPCProtocolVersion, Stoppable, Abortable {
+  /**
+   * Get metainfo about an HRegion
+   *
+   * @param regionName name of the region
+   * @return HRegionInfo object for region
+   * @throws NotServingRegionException
+   * @throws ConnectException
+   * @throws IOException This can manifest as an Hadoop ipc {@link RemoteException}
+   */
+  public HRegionInfo getRegionInfo(final byte [] regionName)
+  throws NotServingRegionException, ConnectException, IOException;
+
+  /**
+   * Return all the data for the row that matches <i>row</i> exactly,
+ * or the one that immediately precedes it.
+   *
+   * @param regionName region name
+   * @param row row key
+   * @param family Column family to look for row in.
+   * @return map of values
+   * @throws IOException e
+   */
+  public Result getClosestRowBefore(final byte [] regionName,
+    final byte [] row, final byte [] family)
+  throws IOException;
+
+  /**
+   * Perform Get operation.
+   * @param regionName name of region to get from
+   * @param get Get operation
+   * @return Result
+   * @throws IOException e
+   */
+  public Result get(byte [] regionName, Get get) throws IOException;
+
+  /**
+   * Perform exists operation.
+   * @param regionName name of region to get from
+   * @param get Get operation describing cell to test
+   * @return true if exists
+   * @throws IOException e
+   */
+  public boolean exists(byte [] regionName, Get get) throws IOException;
+
+  /**
+   * Put data into the specified region
+   * @param regionName region name
+   * @param put the data to be put
+   * @throws IOException e
+   */
+  public void put(final byte [] regionName, final Put put)
+  throws IOException;
+
+  /**
+   * Put an array of puts into the specified region
+   *
+   * @param regionName region name
+   * @param puts List of puts to execute
+   * @return The number of processed Puts.  Returns -1 if all Puts were
+   * processed successfully.
+   * @throws IOException e
+   */
+  public int put(final byte[] regionName, final List<Put> puts)
+  throws IOException;
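+
+  /* Client-side sketch (illustrative; "mytable", "cf" and "q" are
+   * placeholders): reads and writes normally go through HTable, which
+   * locates the region and invokes get()/put() above:
+   *
+   *   HTable table = new HTable(conf, "mytable");
+   *   Put p = new Put(Bytes.toBytes("row1"));
+   *   p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+   *   table.put(p);
+   *   Result r = table.get(new Get(Bytes.toBytes("row1")));
+   */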
+
+  /**
+   * Deletes all the KeyValues that match those found in the Delete object,
+   * if their timestamp is less than or equal to the one in the Delete. A delete
+   * with a specific timestamp only deletes that specific KeyValue.
+   * @param regionName region name
+   * @param delete delete object
+   * @throws IOException e
+   */
+  public void delete(final byte[] regionName, final Delete delete)
+  throws IOException;
+
+  /**
+   * Put an array of deletes into the specified region
+   *
+   * @param regionName region name
+   * @param deletes delete List to execute
+   * @return The number of processed deletes.  Returns -1 if all Deletes were
+   * processed successfully.
+   * @throws IOException e
+   */
+  public int delete(final byte[] regionName, final List<Delete> deletes)
+  throws IOException;
+
+  /**
+   * Atomically checks if a row/family/qualifier value matches the expected value.
+   * If it does, it adds the put. If the passed expected value is null, then the
+   * check is for non-existence of the row/column.
+   *
+   * @param regionName region name
+   * @param row row to check
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param value the expected value
+   * @param put data to put if check succeeds
+   * @throws IOException e
+   * @return true if the new put was executed, false otherwise
+   */
+  public boolean checkAndPut(final byte[] regionName, final byte [] row,
+      final byte [] family, final byte [] qualifier, final byte [] value,
+      final Put put)
+  throws IOException;
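+
+  /* Client-side sketch (illustrative; table, "cf" and "q" are placeholders):
+   * HTable#checkAndPut drives this call.
+   *
+   *   Put p = new Put(Bytes.toBytes("row1"));
+   *   p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("new"));
+   *   boolean applied = table.checkAndPut(Bytes.toBytes("row1"),
+   *       Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("old"), p);
+   */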
+
+
+  /**
+   * Atomically checks if a row/family/qualifier value matches the expected value.
+   * If it does, it adds the delete. If the passed expected value is null, then the
+   * check is for non-existence of the row/column.
+   *
+   * @param regionName region name
+   * @param row row to check
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param value the expected value
+   * @param delete data to delete if check succeeds
+   * @throws IOException e
+   * @return true if the new delete was executed, false otherwise
+   */
+  public boolean checkAndDelete(final byte[] regionName, final byte [] row,
+      final byte [] family, final byte [] qualifier, final byte [] value,
+      final Delete delete)
+  throws IOException;
+
+  /**
+   * Atomically increments a column value. If the column value isn't long-like,
+   * this could throw an exception.
+   *
+   * @param regionName region name
+   * @param row row to check
+   * @param family column family
+   * @param qualifier column qualifier
+   * @param amount long amount to increment
+   * @param writeToWAL whether to write the increment to the WAL
+   * @return new incremented column value
+   * @throws IOException e
+   */
+  public long incrementColumnValue(byte [] regionName, byte [] row,
+      byte [] family, byte [] qualifier, long amount, boolean writeToWAL)
+  throws IOException;
+
+  /**
+   * Increments one or more columns values in a row.  Returns the
+   * updated keys after the increment.
+   * <p>
+   * This operation does not appear atomic to readers.  Increments are done
+   * under a row lock but readers do not take row locks.
+   * @param regionName region name
+   * @param increment increment operation
+   * @return incremented cells
+   */
+  public Result increment(byte[] regionName, Increment increment)
+  throws IOException;
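+
+  /* Client-side sketch (illustrative): counters are normally bumped through
+   * HTable, which routes to incrementColumnValue() above; "counters", "cf"
+   * and "hits" are placeholders:
+   *
+   *   HTable table = new HTable(conf, "counters");
+   *   long n = table.incrementColumnValue(Bytes.toBytes("row1"),
+   *       Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);
+   */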
+
+  //
+  // remote scanner interface
+  //
+
+  /**
+   * Opens a remote scanner configured by the passed Scan.
+   *
+   * @param regionName name of region to scan
+   * @param scan configured scan object
+   * @return scannerId scanner identifier used in other calls
+   * @throws IOException e
+   */
+  public long openScanner(final byte [] regionName, final Scan scan)
+  throws IOException;
+
+  /**
+   * Get the next set of values
+   * @param scannerId clientId passed to openScanner
+   * @return map of values; returns null if no results.
+   * @throws IOException e
+   */
+  public Result next(long scannerId) throws IOException;
+
+  /**
+   * Get the next set of values
+   * @param scannerId clientId passed to openScanner
+   * @param numberOfRows the number of rows to fetch
+   * @return Array of Results (map of values); array is empty if done with this
+   * region and null if we are NOT to go to the next region (happens when a
+   * filter rules that the scan is done).
+   * @throws IOException e
+   */
+  public Result [] next(long scannerId, int numberOfRows) throws IOException;
+
+  /**
+   * Close a scanner
+   *
+   * @param scannerId the scanner id returned by openScanner
+   * @throws IOException e
+   */
+  public void close(long scannerId) throws IOException;
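+
+  /* Client-side view of the scanner protocol (sketch; HTable drives
+   * openScanner/next/close for you, "mytable" and "cf" are placeholders):
+   *
+   *   HTable table = new HTable(conf, "mytable");
+   *   Scan scan = new Scan();
+   *   scan.addFamily(Bytes.toBytes("cf"));
+   *   ResultScanner scanner = table.getScanner(scan);  // openScanner()
+   *   try {
+   *     for (Result r : scanner) {                     // next(scannerId, n)
+   *       // process r
+   *     }
+   *   } finally {
+   *     scanner.close();                               // close(scannerId)
+   *   }
+   */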
+
+  /**
+   * Opens a remote row lock.
+   *
+   * @param regionName name of region
+   * @param row row to lock
+   * @return lockId lock identifier
+   * @throws IOException e
+   */
+  public long lockRow(final byte [] regionName, final byte [] row)
+  throws IOException;
+
+  /**
+   * Releases a remote row lock.
+   *
+   * @param regionName region name
+   * @param lockId the lock id returned by lockRow
+   * @throws IOException e
+   */
+  public void unlockRow(final byte [] regionName, final long lockId)
+  throws IOException;
+
+
+  /**
+   * @return All regions online on this region server
+   */
+  public List<HRegionInfo> getOnlineRegions();
+
+  /**
+   * Method used when a master is taking the place of another failed one.
+   * @return This region server's HServerInfo
+   * @throws IOException e
+   */
+  public HServerInfo getHServerInfo() throws IOException;
+
+  /**
+   * Method used for doing multiple actions(Deletes, Gets and Puts) in one call
+   * @param multi
+   * @return MultiResult
+   * @throws IOException
+   */
+  public MultiResponse multi(MultiAction multi) throws IOException;
+
+  /**
+   * Multi put for putting multiple regions worth of puts at once.
+   *
+   * @param puts the request
+   * @return the reply
+   * @throws IOException e
+   */
+  public MultiPutResponse multiPut(MultiPut puts) throws IOException;
+
+  /**
+   * Bulk load an HFile into an open region
+   */
+  public void bulkLoadHFile(String hfilePath, byte[] regionName, byte[] familyName)
+  throws IOException;
+
+  // Master methods
+
+  /**
+   * Opens the specified region.
+   * @param region region to open
+   * @throws IOException
+   */
+  public void openRegion(final HRegionInfo region) throws IOException;
+
+  /**
+   * Opens the specified regions.
+   * @param regions regions to open
+   * @throws IOException
+   */
+  public void openRegions(final List<HRegionInfo> regions) throws IOException;
+
+  /**
+   * Closes the specified region.
+   * @param region region to close
+   * @return true if closing region, false if not
+   * @throws IOException
+   */
+  public boolean closeRegion(final HRegionInfo region)
+  throws IOException;
+
+  /**
+   * Closes the specified region and will use or not use ZK during the close
+   * according to the specified flag.
+   * @param region region to close
+   * @param zk true if transitions should be done in ZK, false if not
+   * @return true if closing region, false if not
+   * @throws IOException
+   */
+  public boolean closeRegion(final HRegionInfo region, final boolean zk)
+  throws IOException;
+
+  // Region administrative methods
+
+  /**
+   * Flushes the MemStore of the specified region.
+   * <p>
+   * This method is synchronous.
+   * @param regionInfo region to flush
+   * @throws NotServingRegionException
+   * @throws IOException
+   */
+  void flushRegion(HRegionInfo regionInfo)
+  throws NotServingRegionException, IOException;
+
+  /**
+   * Splits the specified region.
+   * <p>
+   * This method currently flushes the region and then forces a compaction which
+   * will then trigger a split.  The flush is done synchronously but the
+   * compaction is asynchronous.
+   * @param regionInfo region to split
+   * @throws NotServingRegionException
+   * @throws IOException
+   */
+  void splitRegion(HRegionInfo regionInfo)
+  throws NotServingRegionException, IOException;
+
+  /**
+   * Compacts the specified region.  Performs a major compaction if specified.
+   * <p>
+   * This method is asynchronous.
+   * @param regionInfo region to compact
+   * @param major true to force major compaction
+   * @throws NotServingRegionException
+   * @throws IOException
+   */
+  void compactRegion(HRegionInfo regionInfo, boolean major)
+  throws NotServingRegionException, IOException;
+
+  /**
+   * Replicates the given entries. The guarantee is that the given entries
+   * will be durable on the slave cluster if this method returns without
+   * any exception.
+   * hbase.replication has to be set to true for this to work.
+   *
+   * @param entries entries to replicate
+   * @throws IOException
+   */
+  public void replicateLogEntries(HLog.Entry[] entries) throws IOException;
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/ipc/ServerNotRunningException.java b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/ServerNotRunningException.java
new file mode 100644
index 0000000..2611286
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/ipc/ServerNotRunningException.java
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.ipc;
+
+import java.io.IOException;
+
+public class ServerNotRunningException extends IOException {
+  public ServerNotRunningException(String s) {
+    super(s);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/Driver.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/Driver.java
new file mode 100644
index 0000000..dcc40b1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/Driver.java
@@ -0,0 +1,40 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.hadoop.util.ProgramDriver;
+
+/**
+ * Driver for hbase mapreduce jobs. Select which job to run by passing
+ * its name to this main class.
+ */
+@Deprecated
+public class Driver {
+  /**
+   * @param args
+   * @throws Throwable
+   */
+  public static void main(String[] args) throws Throwable {
+    ProgramDriver pgd = new ProgramDriver();
+    pgd.addClass(RowCounter.NAME, RowCounter.class,
+      "Count rows in HBase table");
+    pgd.driver(args);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java
new file mode 100644
index 0000000..c368140
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java
@@ -0,0 +1,162 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+
+/**
+ * Extract grouping columns from input record
+ */
+@Deprecated
+public class GroupingTableMap
+extends MapReduceBase
+implements TableMap<ImmutableBytesWritable,Result> {
+
+  /**
+   * JobConf parameter to specify the columns used to produce the key passed to
+   * collect from the map phase
+   */
+  public static final String GROUP_COLUMNS =
+    "hbase.mapred.groupingtablemap.columns";
+
+  protected byte [][] columns;
+
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up the
+   * JobConf.
+   *
+   * @param table table to be processed
+   * @param columns space separated list of columns to fetch
+   * @param groupColumns space separated list of columns used to form the key
+   * used in collect
+   * @param mapper map class
+   * @param job job configuration object
+   */
+  @SuppressWarnings("unchecked")
+  public static void initJob(String table, String columns, String groupColumns,
+    Class<? extends TableMap> mapper, JobConf job) {
+
+    TableMapReduceUtil.initTableMapJob(table, columns, mapper,
+        ImmutableBytesWritable.class, Result.class, job);
+    job.set(GROUP_COLUMNS, groupColumns);
+  }
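+
+  /* Job setup sketch (illustrative; "srctable", MyJob and the column names
+   * are placeholders):
+   *
+   *   JobConf job = new JobConf(conf, MyJob.class);
+   *   GroupingTableMap.initJob("srctable", "cf:a cf:b", "cf:a cf:b",
+   *       GroupingTableMap.class, job);
+   *   // map() then emits a key built by concatenating the values of
+   *   // cf:a and cf:b for each row that contains both columns.
+   */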
+
+  @Override
+  public void configure(JobConf job) {
+    super.configure(job);
+    String[] cols = job.get(GROUP_COLUMNS, "").split(" ");
+    columns = new byte[cols.length][];
+    for(int i = 0; i < cols.length; i++) {
+      columns[i] = Bytes.toBytes(cols[i]);
+    }
+  }
+
+  /**
+   * Extract the grouping columns from value to construct a new key.
+   *
+   * Pass the new key and value to reduce.
+   * If any of the grouping columns are not found in the value, the record is skipped.
+   * @param key
+   * @param value
+   * @param output
+   * @param reporter
+   * @throws IOException
+   */
+  public void map(ImmutableBytesWritable key, Result value,
+      OutputCollector<ImmutableBytesWritable,Result> output,
+      Reporter reporter) throws IOException {
+
+    byte[][] keyVals = extractKeyValues(value);
+    if(keyVals != null) {
+      ImmutableBytesWritable tKey = createGroupKey(keyVals);
+      output.collect(tKey, value);
+    }
+  }
+
+  /**
+   * Extract column values from the current record. This method returns
+   * null if any of the columns are not found.
+   *
+   * Override this method if you want to deal with nulls differently.
+   *
+   * @param r
+   * @return array of byte values
+   */
+  protected byte[][] extractKeyValues(Result r) {
+    byte[][] keyVals = null;
+    ArrayList<byte[]> foundList = new ArrayList<byte[]>();
+    int numCols = columns.length;
+    if (numCols > 0) {
+      for (KeyValue value: r.list()) {
+        byte [] column = KeyValue.makeColumn(value.getFamily(),
+            value.getQualifier());
+        for (int i = 0; i < numCols; i++) {
+          if (Bytes.equals(column, columns[i])) {
+            foundList.add(value.getValue());
+            break;
+          }
+        }
+      }
+      if(foundList.size() == numCols) {
+        keyVals = foundList.toArray(new byte[numCols][]);
+      }
+    }
+    return keyVals;
+  }
+
+  /**
+   * Create a key by concatenating multiple column values.
+   * Override this function in order to produce different types of keys.
+   *
+   * @param vals
+   * @return key generated by concatenating multiple column values
+   */
+  protected ImmutableBytesWritable createGroupKey(byte[][] vals) {
+    if(vals == null) {
+      return null;
+    }
+    StringBuilder sb = new StringBuilder();
+    for(int i = 0; i < vals.length; i++) {
+      if(i > 0) {
+        sb.append(" ");
+      }
+      try {
+        sb.append(new String(vals[i], HConstants.UTF8_ENCODING));
+      } catch (UnsupportedEncodingException e) {
+        throw new RuntimeException(e);
+      }
+    }
+    return new ImmutableBytesWritable(Bytes.toBytes(sb.toString()));
+  }
+}
\ No newline at end of file
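Editor's note: a minimal driver sketch (not part of the patch) showing how GroupingTableMap.initJob is typically wired up. The table name "access_log", the column list, and the output path are hypothetical; the default IdentityReducer simply passes the grouped (key, Result) pairs through to a SequenceFile.

// Editor's sketch: hypothetical driver for GroupingTableMap; names are illustrative only.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.GroupingTableMap;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class GroupingDumpDriver {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(HBaseConfiguration.create(), GroupingDumpDriver.class);
    job.setJobName("grouping-dump");
    // Scan columns and grouping columns are both space-separated family:qualifier lists.
    GroupingTableMap.initJob("access_log", "info:user info:action",
        "info:user info:action", GroupingTableMap.class, job);
    // The default IdentityReducer passes the grouped (key, Result) pairs straight through.
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(Result.class);
    job.setOutputFormat(SequenceFileOutputFormat.class);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/grouping-dump"));
    JobClient.runJob(job);
  }
}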
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java
new file mode 100644
index 0000000..b58c5c7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/HRegionPartitioner.java
@@ -0,0 +1,91 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.Partitioner;
+
+
+/**
+ * This is used to partition the output keys into groups of keys.
+ * Keys are grouped according to the regions that currently exist
+ * so that each reducer fills a single region and the load is distributed.
+ *
+ * @param <K2>
+ * @param <V2>
+ */
+@Deprecated
+public class HRegionPartitioner<K2,V2>
+implements Partitioner<ImmutableBytesWritable, V2> {
+  private final Log LOG = LogFactory.getLog(HRegionPartitioner.class);
+  private HTable table;
+  private byte[][] startKeys;
+
+  public void configure(JobConf job) {
+    try {
+      this.table = new HTable(HBaseConfiguration.create(job),
+        job.get(TableOutputFormat.OUTPUT_TABLE));
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+
+    try {
+      this.startKeys = this.table.getStartKeys();
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+  }
+
+  public int getPartition(ImmutableBytesWritable key,
+      V2 value, int numPartitions) {
+    byte[] region = null;
+    // Only one region: all keys go to partition 0.
+    if (this.startKeys.length == 1){
+      return 0;
+    }
+    try {
+      // Not sure if this is cached after a split so we could have problems
+      // here if a region splits while mapping
+      region = table.getRegionLocation(key.get()).getRegionInfo().getStartKey();
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+    for (int i = 0; i < this.startKeys.length; i++){
+      if (Bytes.compareTo(region, this.startKeys[i]) == 0 ){
+        if (i >= numPartitions-1){
+          // Cover the case where there are fewer reducers than regions.
+          return (Integer.toString(i).hashCode()
+              & Integer.MAX_VALUE) % numPartitions;
+        }
+        return i;
+      }
+    }
+    // If no matching start key was found above, fall back to partition 0.
+    return 0;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/IdentityTableMap.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/IdentityTableMap.java
new file mode 100644
index 0000000..0f67a9e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/IdentityTableMap.java
@@ -0,0 +1,76 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * Pass the given key and record as-is to reduce
+ */
+@Deprecated
+public class IdentityTableMap
+extends MapReduceBase
+implements TableMap<ImmutableBytesWritable, Result> {
+
+  /** constructor */
+  public IdentityTableMap() {
+    super();
+  }
+
+  /**
+   * Use this before submitting a TableMap job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table table name
+   * @param columns columns to scan
+   * @param mapper mapper class
+   * @param job job configuration
+   */
+  @SuppressWarnings("unchecked")
+  public static void initJob(String table, String columns,
+    Class<? extends TableMap> mapper, JobConf job) {
+    TableMapReduceUtil.initTableMapJob(table, columns, mapper,
+      ImmutableBytesWritable.class,
+      Result.class, job);
+  }
+
+  /**
+   * Pass the key, value to reduce
+   * @param key
+   * @param value
+   * @param output
+   * @param reporter
+   * @throws IOException
+   */
+  public void map(ImmutableBytesWritable key, Result value,
+      OutputCollector<ImmutableBytesWritable,Result> output,
+      Reporter reporter) throws IOException {
+
+    // pass the key and value through unchanged
+    output.collect(key, value);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/IdentityTableReduce.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/IdentityTableReduce.java
new file mode 100644
index 0000000..be0a6bd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/IdentityTableReduce.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * Write each (key, record) pair to the table
+ */
+@Deprecated
+public class IdentityTableReduce
+extends MapReduceBase
+implements TableReduce<ImmutableBytesWritable, Put> {
+  @SuppressWarnings("unused")
+  private static final Log LOG =
+    LogFactory.getLog(IdentityTableReduce.class.getName());
+
+  /**
+   * No aggregation, output pairs of (key, record)
+   * @param key
+   * @param values
+   * @param output
+   * @param reporter
+   * @throws IOException
+   */
+  public void reduce(ImmutableBytesWritable key, Iterator<Put> values,
+      OutputCollector<ImmutableBytesWritable, Put> output,
+      Reporter reporter)
+      throws IOException {
+
+    while(values.hasNext()) {
+      output.collect(key, values.next());
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java
new file mode 100644
index 0000000..e43684b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java
@@ -0,0 +1,136 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.lib.IdentityReducer;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * A job with a map to count rows.
+ * Map outputs table rows IF the input row has columns that have content.
+ * Uses an {@link IdentityReducer}
+ */
+@Deprecated
+public class RowCounter extends Configured implements Tool {
+  // Name of this 'program'
+  static final String NAME = "rowcounter";
+
+  /**
+   * Mapper that runs the count.
+   */
+  static class RowCounterMapper
+  implements TableMap<ImmutableBytesWritable, Result> {
+    private static enum Counters {ROWS}
+
+    public void map(ImmutableBytesWritable row, Result values,
+        OutputCollector<ImmutableBytesWritable, Result> output,
+        Reporter reporter)
+    throws IOException {
+      boolean content = false;
+
+      for (KeyValue value: values.list()) {
+        if (value.getValue().length > 0) {
+          content = true;
+          break;
+        }
+      }
+      if (!content) {
+        // Don't count rows that are all empty values.
+        return;
+      }
+      // Give out same value every time.  We're only interested in the row/key
+      reporter.incrCounter(Counters.ROWS, 1);
+    }
+
+    public void configure(JobConf jc) {
+      // Nothing to do.
+    }
+
+    public void close() throws IOException {
+      // Nothing to do.
+    }
+  }
+
+  /**
+   * @param args
+   * @return the JobConf
+   * @throws IOException
+   */
+  public JobConf createSubmittableJob(String[] args) throws IOException {
+    JobConf c = new JobConf(getConf(), getClass());
+    c.setJobName(NAME);
+    // Columns are space delimited
+    StringBuilder sb = new StringBuilder();
+    final int columnoffset = 2;
+    for (int i = columnoffset; i < args.length; i++) {
+      if (i > columnoffset) {
+        sb.append(" ");
+      }
+      sb.append(args[i]);
+    }
+    // Second argument is the table name.
+    TableMapReduceUtil.initTableMapJob(args[1], sb.toString(),
+      RowCounterMapper.class, ImmutableBytesWritable.class, Result.class, c);
+    c.setNumReduceTasks(0);
+    // First arg is the output directory.
+    FileOutputFormat.setOutputPath(c, new Path(args[0]));
+    return c;
+  }
+
+  static int printUsage() {
+    System.out.println(NAME +
+      " <outputdir> <tablename> <column1> [<column2>...]");
+    return -1;
+  }
+
+  public int run(final String[] args) throws Exception {
+    // Make sure there are at least 3 parameters
+    if (args.length < 3) {
+      System.err.println("ERROR: Wrong number of parameters: " + args.length);
+      return printUsage();
+    }
+    JobClient.runJob(createSubmittableJob(args));
+    return 0;
+  }
+
+  /**
+   * @param args
+   * @throws Exception
+   */
+  public static void main(String[] args) throws Exception {
+    int errCode = ToolRunner.run(HBaseConfiguration.create(), new RowCounter(), args);
+    System.exit(errCode);
+  }
+}
\ No newline at end of file
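Editor's note: a usage sketch (not part of the patch) for driving RowCounter programmatically via ToolRunner; the output directory, table name, and column are hypothetical. Note that the row count lands in the job's ROWS counter rather than in the output files, since the mapper only increments the counter.

// Editor's sketch: invoking the deprecated mapred RowCounter from code.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapred.RowCounter;
import org.apache.hadoop.util.ToolRunner;

public class RowCounterLauncher {
  public static void main(String[] args) throws Exception {
    // Args: <outputdir> <tablename> <column1> [<column2>...], all illustrative here.
    int rc = ToolRunner.run(HBaseConfiguration.create(), new RowCounter(),
        new String[] {"/tmp/rowcounter-out", "access_log", "info:user"});
    System.exit(rc);
  }
}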
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
new file mode 100644
index 0000000..395a626
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormat.java
@@ -0,0 +1,83 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.JobConfigurable;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Convert HBase tabular data into a format that is consumable by Map/Reduce.
+ */
+@Deprecated
+public class TableInputFormat extends TableInputFormatBase implements
+    JobConfigurable {
+  private final Log LOG = LogFactory.getLog(TableInputFormat.class);
+
+  /**
+   * space delimited list of columns
+   */
+  public static final String COLUMN_LIST = "hbase.mapred.tablecolumns";
+
+  public void configure(JobConf job) {
+    Path[] tableNames = FileInputFormat.getInputPaths(job);
+    String colArg = job.get(COLUMN_LIST);
+    String[] colNames = colArg.split(" ");
+    byte [][] m_cols = new byte[colNames.length][];
+    for (int i = 0; i < m_cols.length; i++) {
+      m_cols[i] = Bytes.toBytes(colNames[i]);
+    }
+    setInputColumns(m_cols);
+    try {
+      setHTable(new HTable(HBaseConfiguration.create(job), tableNames[0].getName()));
+    } catch (Exception e) {
+      LOG.error(StringUtils.stringifyException(e));
+    }
+  }
+
+  public void validateInput(JobConf job) throws IOException {
+    // expecting exactly one path
+    Path [] tableNames = FileInputFormat.getInputPaths(job);
+    if (tableNames == null || tableNames.length > 1) {
+      throw new IOException("expecting one table name");
+    }
+
+    // connected to table?
+    if (getHTable() == null) {
+      throw new IOException("could not connect to table '" +
+        tableNames[0].getName() + "'");
+    }
+
+    // expecting at least one column
+    String colArg = job.get(COLUMN_LIST);
+    if (colArg == null || colArg.length() == 0) {
+      throw new IOException("expecting at least one column");
+    }
+  }
+}
\ No newline at end of file
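Editor's note: a rough sketch (not part of the patch) of wiring TableInputFormat up by hand rather than through TableMapReduceUtil; the table name and columns are hypothetical. It mirrors what the configure() method above expects: the table name arrives as the "input path" and the scan columns via COLUMN_LIST.

// Editor's sketch: manual input-side configuration, equivalent to the input
// settings made by TableMapReduceUtil.initTableMapJob.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapred.TableInputFormat;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class ManualInputSetup {
  public static JobConf configure() {
    JobConf job = new JobConf(HBaseConfiguration.create(), ManualInputSetup.class);
    job.setInputFormat(TableInputFormat.class);
    // The "input path" carries the table name; TableInputFormat.configure reads it.
    FileInputFormat.addInputPaths(job, "access_log");
    // Space-delimited family:qualifier columns to include in each Result.
    job.set(TableInputFormat.COLUMN_LIST, "info:user info:action");
    return job;
  }
}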
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
new file mode 100644
index 0000000..b862eea
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java
@@ -0,0 +1,189 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.mapred.InputFormat;
+import org.apache.hadoop.mapred.InputSplit;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.RecordReader;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * A base for {@link TableInputFormat}s. Receives an {@link HTable}, a
+ * byte[][] of input columns, and optionally a {@link Filter}.
+ * Subclasses may use other TableRecordReader implementations.
+ * <p>
+ * An example of a subclass:
+ * <pre>
+ *   class ExampleTIF extends TableInputFormatBase implements JobConfigurable {
+ *
+ *     public void configure(JobConf job) {
+ *       HTable exampleTable = new HTable(HBaseConfiguration.create(job),
+ *         Bytes.toBytes("exampleTable"));
+ *       // mandatory
+ *       setHTable(exampleTable);
+ *       byte [][] inputColumns = new byte [][] { Bytes.toBytes("columnA"),
+ *         Bytes.toBytes("columnB") };
+ *       // mandatory
+ *       setInputColumns(inputColumns);
+ *       Filter exampleFilter = new RowFilter(CompareOp.EQUAL,
+ *         new RegexStringComparator("keyPrefix.*"));
+ *       // optional
+ *       setRowFilter(exampleFilter);
+ *     }
+ *
+ *     public void validateInput(JobConf job) throws IOException {
+ *     }
+ *  }
+ * </pre>
+ */
+
+@Deprecated
+public abstract class TableInputFormatBase
+implements InputFormat<ImmutableBytesWritable, Result> {
+  final Log LOG = LogFactory.getLog(TableInputFormatBase.class);
+  private byte [][] inputColumns;
+  private HTable table;
+  private TableRecordReader tableRecordReader;
+  private Filter rowFilter;
+
+  /**
+   * Builds a TableRecordReader. If no TableRecordReader was provided, uses
+   * the default.
+   *
+   * @see org.apache.hadoop.mapred.InputFormat#getRecordReader(InputSplit,
+   *      JobConf, Reporter)
+   */
+  public RecordReader<ImmutableBytesWritable, Result> getRecordReader(
+      InputSplit split, JobConf job, Reporter reporter)
+  throws IOException {
+    TableSplit tSplit = (TableSplit) split;
+    TableRecordReader trr = this.tableRecordReader;
+    // if no table record reader was provided use default
+    if (trr == null) {
+      trr = new TableRecordReader();
+    }
+    trr.setStartRow(tSplit.getStartRow());
+    trr.setEndRow(tSplit.getEndRow());
+    trr.setHTable(this.table);
+    trr.setInputColumns(this.inputColumns);
+    trr.setRowFilter(this.rowFilter);
+    trr.init();
+    return trr;
+  }
+
+  /**
+   * Calculates the splits that will serve as input for the map tasks.
+   * <p>
+   * The number of splits created equals the smaller of numSplits and the
+   * number of {@link HRegion}s in the table. If the number of splits is
+   * smaller than the number of {@link HRegion}s then a split spans
+   * multiple {@link HRegion}s, grouped as evenly as possible. If the splits
+   * are uneven, the bigger splits are placed first in the
+   * {@link InputSplit} array.
+   *
+   * @param job the map task {@link JobConf}
+   * @param numSplits a hint to calculate the number of splits (mapred.map.tasks).
+   *
+   * @return the input splits
+   *
+   * @see org.apache.hadoop.mapred.InputFormat#getSplits(org.apache.hadoop.mapred.JobConf, int)
+   */
+  public InputSplit[] getSplits(JobConf job, int numSplits) throws IOException {
+    if (this.table == null) {
+      throw new IOException("No table was provided");
+    }
+    byte [][] startKeys = this.table.getStartKeys();
+    if (startKeys == null || startKeys.length == 0) {
+      throw new IOException("Expecting at least one region");
+    }
+    if (this.inputColumns == null || this.inputColumns.length == 0) {
+      throw new IOException("Expecting at least one column");
+    }
+    int realNumSplits = numSplits > startKeys.length? startKeys.length:
+      numSplits;
+    InputSplit[] splits = new InputSplit[realNumSplits];
+    int middle = startKeys.length / realNumSplits;
+    int startPos = 0;
+    for (int i = 0; i < realNumSplits; i++) {
+      int lastPos = startPos + middle;
+      lastPos = startKeys.length % realNumSplits > i ? lastPos + 1 : lastPos;
+      String regionLocation = table.getRegionLocation(startKeys[startPos]).
+        getServerAddress().getHostname();
+      splits[i] = new TableSplit(this.table.getTableName(),
+        startKeys[startPos], ((i + 1) < realNumSplits) ? startKeys[lastPos]:
+          HConstants.EMPTY_START_ROW, regionLocation);
+      LOG.info("split: " + i + "->" + splits[i]);
+      startPos = lastPos;
+    }
+    return splits;
+  }
+
+  /**
+   * @param inputColumns the columns to be placed in the {@link Result} passed to the map task.
+   */
+  protected void setInputColumns(byte [][] inputColumns) {
+    this.inputColumns = inputColumns;
+  }
+
+  /**
+   * Allows subclasses to get the {@link HTable}.
+   */
+  protected HTable getHTable() {
+    return this.table;
+  }
+
+  /**
+   * Allows subclasses to set the {@link HTable}.
+   *
+   * @param table to get the data from
+   */
+  protected void setHTable(HTable table) {
+    this.table = table;
+  }
+
+  /**
+   * Allows subclasses to set the {@link TableRecordReader}.
+   *
+   * @param tableRecordReader
+   *                to provide other {@link TableRecordReader} implementations.
+   */
+  protected void setTableRecordReader(TableRecordReader tableRecordReader) {
+    this.tableRecordReader = tableRecordReader;
+  }
+
+  /**
+   * Allows subclasses to set the {@link Filter} to be used.
+   *
+   * @param rowFilter
+   */
+  protected void setRowFilter(Filter rowFilter) {
+    this.rowFilter = rowFilter;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableMap.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableMap.java
new file mode 100644
index 0000000..597f3ef
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableMap.java
@@ -0,0 +1,39 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.Mapper;
+
+/**
+ * Base interface for a map over an HBase table: implementations receive a row
+ * key and its {@link Result} and emit (key, value) pairs to reduce.
+ *
+ * @param <K> WritableComparable key class
+ * @param <V> Writable value class
+ */
+@Deprecated
+public interface TableMap<K extends WritableComparable<? super K>, V extends Writable>
+extends Mapper<ImmutableBytesWritable, Result, K, V> {
+
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
new file mode 100644
index 0000000..db07ed1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableMapReduceUtil.java
@@ -0,0 +1,255 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.InputFormat;
+import org.apache.hadoop.mapred.OutputFormat;
+import org.apache.hadoop.mapred.TextInputFormat;
+import org.apache.hadoop.mapred.TextOutputFormat;
+
+/**
+ * Utility for {@link TableMap} and {@link TableReduce}
+ */
+@Deprecated
+@SuppressWarnings("unchecked")
+public class TableMapReduceUtil {
+
+  /**
+   * Use this before submitting a TableMap job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The table name to read from.
+   * @param columns  The columns to scan.
+   * @param mapper  The mapper class to use.
+   * @param outputKeyClass  The class of the output key.
+   * @param outputValueClass  The class of the output value.
+   * @param job  The current job configuration to adjust.
+   */
+  public static void initTableMapJob(String table, String columns,
+    Class<? extends TableMap> mapper,
+    Class<? extends WritableComparable> outputKeyClass,
+    Class<? extends Writable> outputValueClass, JobConf job) {
+    initTableMapJob(table, columns, mapper, outputKeyClass, outputValueClass, job, true);
+  }
+
+  /**
+   * Use this before submitting a TableMap job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The table name to read from.
+   * @param columns  The columns to scan.
+   * @param mapper  The mapper class to use.
+   * @param outputKeyClass  The class of the output key.
+   * @param outputValueClass  The class of the output value.
+   * @param job  The current job configuration to adjust.
+   * @param addDependencyJars upload HBase jars and jars for any of the configured
+   *           job classes via the distributed cache (tmpjars).
+   */
+  public static void initTableMapJob(String table, String columns,
+    Class<? extends TableMap> mapper,
+    Class<? extends WritableComparable> outputKeyClass,
+    Class<? extends Writable> outputValueClass, JobConf job, boolean addDependencyJars) {
+
+    job.setInputFormat(TableInputFormat.class);
+    job.setMapOutputValueClass(outputValueClass);
+    job.setMapOutputKeyClass(outputKeyClass);
+    job.setMapperClass(mapper);
+    FileInputFormat.addInputPaths(job, table);
+    job.set(TableInputFormat.COLUMN_LIST, columns);
+    if (addDependencyJars) {
+      try {
+        addDependencyJars(job);
+      } catch (IOException e) {
+        e.printStackTrace();
+      }
+    }
+  }
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job configuration to adjust.
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReduceJob(String table,
+    Class<? extends TableReduce> reducer, JobConf job)
+  throws IOException {
+    initTableReduceJob(table, reducer, job, null);
+  }
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job configuration to adjust.
+   * @param partitioner  Partitioner to use. Pass <code>null</code> to use
+   * default partitioner.
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReduceJob(String table,
+    Class<? extends TableReduce> reducer, JobConf job, Class partitioner)
+  throws IOException {
+    initTableReduceJob(table, reducer, job, partitioner, true);
+  }
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job configuration to adjust.
+   * @param partitioner  Partitioner to use. Pass <code>null</code> to use
+   * default partitioner.
+   * @param addDependencyJars upload HBase jars and jars for any of the configured
+   *           job classes via the distributed cache (tmpjars).
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReduceJob(String table,
+    Class<? extends TableReduce> reducer, JobConf job, Class partitioner,
+    boolean addDependencyJars) throws IOException {
+    job.setOutputFormat(TableOutputFormat.class);
+    job.setReducerClass(reducer);
+    job.set(TableOutputFormat.OUTPUT_TABLE, table);
+    job.setOutputKeyClass(ImmutableBytesWritable.class);
+    job.setOutputValueClass(Put.class);
+    if (partitioner == HRegionPartitioner.class) {
+      job.setPartitionerClass(HRegionPartitioner.class);
+      HTable outputTable = new HTable(HBaseConfiguration.create(job), table);
+      int regions = outputTable.getRegionsInfo().size();
+      if (job.getNumReduceTasks() > regions) {
+        job.setNumReduceTasks(outputTable.getRegionsInfo().size());
+      }
+    } else if (partitioner != null) {
+      job.setPartitionerClass(partitioner);
+    }
+    if (addDependencyJars) {
+      addDependencyJars(job);
+    }
+  }
+
+  /**
+   * Ensures that the given number of reduce tasks for the given job
+   * configuration does not exceed the number of regions for the given table.
+   *
+   * @param table  The table to get the region count for.
+   * @param job  The current job configuration to adjust.
+   * @throws IOException When retrieving the table details fails.
+   */
+  public static void limitNumReduceTasks(String table, JobConf job)
+  throws IOException {
+    HTable outputTable = new HTable(HBaseConfiguration.create(job), table);
+    int regions = outputTable.getRegionsInfo().size();
+    if (job.getNumReduceTasks() > regions)
+      job.setNumReduceTasks(regions);
+  }
+
+  /**
+   * Ensures that the given number of map tasks for the given job
+   * configuration does not exceed the number of regions for the given table.
+   *
+   * @param table  The table to get the region count for.
+   * @param job  The current job configuration to adjust.
+   * @throws IOException When retrieving the table details fails.
+   */
+  public static void limitNumMapTasks(String table, JobConf job)
+  throws IOException {
+    HTable outputTable = new HTable(HBaseConfiguration.create(job), table);
+    int regions = outputTable.getRegionsInfo().size();
+    if (job.getNumMapTasks() > regions)
+      job.setNumMapTasks(regions);
+  }
+
+  /**
+   * Sets the number of reduce tasks for the given job configuration to the
+   * number of regions the given table has.
+   *
+   * @param table  The table to get the region count for.
+   * @param job  The current job configuration to adjust.
+   * @throws IOException When retrieving the table details fails.
+   */
+  public static void setNumReduceTasks(String table, JobConf job)
+  throws IOException {
+    HTable outputTable = new HTable(HBaseConfiguration.create(job), table);
+    int regions = outputTable.getRegionsInfo().size();
+    job.setNumReduceTasks(regions);
+  }
+
+  /**
+   * Sets the number of map tasks for the given job configuration to the
+   * number of regions the given table has.
+   *
+   * @param table  The table to get the region count for.
+   * @param job  The current job configuration to adjust.
+   * @throws IOException When retrieving the table details fails.
+   */
+  public static void setNumMapTasks(String table, JobConf job)
+  throws IOException {
+    HTable outputTable = new HTable(HBaseConfiguration.create(job), table);
+    int regions = outputTable.getRegionsInfo().size();
+    job.setNumMapTasks(regions);
+  }
+
+  /**
+   * Sets the number of rows to return and cache with each scanner iteration.
+   * Higher caching values will enable faster mapreduce jobs at the expense of
+   * requiring more heap to contain the cached rows.
+   *
+   * @param job The current job configuration to adjust.
+   * @param batchSize The number of rows to return in batch with each scanner
+   * iteration.
+   */
+  public static void setScannerCaching(JobConf job, int batchSize) {
+    job.setInt("hbase.client.scanner.caching", batchSize);
+  }
+
+  /**
+   * @see org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil#addDependencyJars(Job)
+   */
+  public static void addDependencyJars(JobConf job) throws IOException {
+    org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.addDependencyJars(
+      job,
+      org.apache.zookeeper.ZooKeeper.class,
+      com.google.common.base.Function.class,
+      job.getMapOutputKeyClass(),
+      job.getMapOutputValueClass(),
+      job.getOutputKeyClass(),
+      job.getOutputValueClass(),
+      job.getPartitionerClass(),
+      job.getClass("mapred.input.format.class", TextInputFormat.class, InputFormat.class),
+      job.getClass("mapred.output.format.class", TextOutputFormat.class, OutputFormat.class),
+      job.getCombinerClass());
+  }
+}
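Editor's note: an end-to-end sketch (not part of the patch) that pulls these utilities together: a mapper re-emits each scanned row as a Put, IdentityTableReduce writes the Puts to a second table, and HRegionPartitioner keeps each reducer on a single target region. Table names "source" and "target" and the column list are hypothetical.

// Editor's sketch: a copy-table style job built on the deprecated mapred API.
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.HRegionPartitioner;
import org.apache.hadoop.hbase.mapred.IdentityTableReduce;
import org.apache.hadoop.hbase.mapred.TableMap;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CopyTableSketch {

  /** Re-emits every scanned row as a Put keyed by its row key. */
  public static class ToPutMapper extends MapReduceBase
      implements TableMap<ImmutableBytesWritable, Put> {
    public void map(ImmutableBytesWritable row, Result values,
        OutputCollector<ImmutableBytesWritable, Put> output, Reporter reporter)
        throws IOException {
      Put put = new Put(row.get());
      for (KeyValue kv : values.list()) {
        put.add(kv);  // keeps the original family, qualifier, timestamp and value
      }
      output.collect(row, put);
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(HBaseConfiguration.create(), CopyTableSketch.class);
    job.setJobName("copy-table-sketch");
    TableMapReduceUtil.initTableMapJob("source", "info:user info:action",
        ToPutMapper.class, ImmutableBytesWritable.class, Put.class, job);
    // HRegionPartitioner routes each key to the reducer owning the target region,
    // and initTableReduceJob caps the reducer count at the region count.
    TableMapReduceUtil.initTableReduceJob("target", IdentityTableReduce.class,
        job, HRegionPartitioner.class);
    JobClient.runJob(job);
  }
}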
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java
new file mode 100644
index 0000000..80284bb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java
@@ -0,0 +1,106 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.FileAlreadyExistsException;
+import org.apache.hadoop.mapred.InvalidJobConfException;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.RecordWriter;
+import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.util.Progressable;
+
+/**
+ * Convert Map/Reduce output and write it to an HBase table
+ */
+@Deprecated
+public class TableOutputFormat extends
+FileOutputFormat<ImmutableBytesWritable, Put> {
+
+  /** JobConf parameter that specifies the output table */
+  public static final String OUTPUT_TABLE = "hbase.mapred.outputtable";
+  private final Log LOG = LogFactory.getLog(TableOutputFormat.class);
+
+  /**
+   * Writes the reduce output (an {@link ImmutableBytesWritable} key and a
+   * {@link Put} value) to an HBase table.
+   */
+  protected static class TableRecordWriter
+    implements RecordWriter<ImmutableBytesWritable, Put> {
+    private HTable m_table;
+
+    /**
+     * Instantiate a TableRecordWriter with the {@link HTable} to write to.
+     *
+     * @param table
+     */
+    public TableRecordWriter(HTable table) {
+      m_table = table;
+    }
+
+    public void close(Reporter reporter)
+      throws IOException {
+      m_table.flushCommits();
+    }
+
+    public void write(ImmutableBytesWritable key,
+        Put value) throws IOException {
+      m_table.put(new Put(value));
+    }
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public RecordWriter getRecordWriter(FileSystem ignored,
+      JobConf job, String name, Progressable progress) throws IOException {
+
+    // Look up the target table named in the job configuration.
+
+    String tableName = job.get(OUTPUT_TABLE);
+    HTable table = null;
+    try {
+      table = new HTable(HBaseConfiguration.create(job), tableName);
+    } catch(IOException e) {
+      LOG.error(e);
+      throw e;
+    }
+    table.setAutoFlush(false);
+    return new TableRecordWriter(table);
+  }
+
+  @Override
+  public void checkOutputSpecs(FileSystem ignored, JobConf job)
+  throws FileAlreadyExistsException, InvalidJobConfException, IOException {
+
+    String tableName = job.get(OUTPUT_TABLE);
+    if(tableName == null) {
+      throw new IOException("Must specify table name");
+    }
+  }
+}
\ No newline at end of file
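Editor's note: a short sketch (not part of the patch) of the output-side settings that TableOutputFormat's checkOutputSpecs and getRecordWriter above actually consult, as typically used by a map-only job that emits Puts directly. The table name "target" is hypothetical.

// Editor's sketch: minimal output-side wiring for a map-only job writing Puts.
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableOutputFormat;
import org.apache.hadoop.mapred.JobConf;

public class ManualOutputSetup {
  public static void configure(JobConf job) {
    job.setOutputFormat(TableOutputFormat.class);
    job.set(TableOutputFormat.OUTPUT_TABLE, "target");   // checked by checkOutputSpecs
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(Put.class);
    job.setNumReduceTasks(0);  // mappers emit (ImmutableBytesWritable, Put) directly
  }
}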
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReader.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReader.java
new file mode 100644
index 0000000..7133860
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReader.java
@@ -0,0 +1,138 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapred.RecordReader;
+
+
+/**
+ * Iterate over HBase table data, returning (ImmutableBytesWritable, Result) pairs
+ */
+public class TableRecordReader
+implements RecordReader<ImmutableBytesWritable, Result> {
+
+  private TableRecordReaderImpl recordReaderImpl = new TableRecordReaderImpl();
+
+  /**
+   * Restart from survivable exceptions by creating a new scanner.
+   *
+   * @param firstRow
+   * @throws IOException
+   */
+  public void restart(byte[] firstRow) throws IOException {
+    this.recordReaderImpl.restart(firstRow);
+  }
+
+  /**
+   * Build the scanner. Not done in constructor to allow for extension.
+   *
+   * @throws IOException
+   */
+  public void init() throws IOException {
+    this.recordReaderImpl.restart(this.recordReaderImpl.getStartRow());
+  }
+
+  /**
+   * @param htable the {@link HTable} to scan.
+   */
+  public void setHTable(HTable htable) {
+    this.recordReaderImpl.setHTable(htable);
+  }
+
+  /**
+   * @param inputColumns the columns to be placed in {@link Result}.
+   */
+  public void setInputColumns(final byte [][] inputColumns) {
+    this.recordReaderImpl.setInputColumns(inputColumns);
+  }
+
+  /**
+   * @param startRow the first row in the split
+   */
+  public void setStartRow(final byte [] startRow) {
+    this.recordReaderImpl.setStartRow(startRow);
+  }
+
+  /**
+   *
+   * @param endRow the last row in the split
+   */
+  public void setEndRow(final byte [] endRow) {
+    this.recordReaderImpl.setEndRow(endRow);
+  }
+
+  /**
+   * @param rowFilter the {@link Filter} to be used.
+   */
+  public void setRowFilter(Filter rowFilter) {
+    this.recordReaderImpl.setRowFilter(rowFilter);
+  }
+
+  public void close() {
+    this.recordReaderImpl.close();
+  }
+
+  /**
+   * @return ImmutableBytesWritable
+   *
+   * @see org.apache.hadoop.mapred.RecordReader#createKey()
+   */
+  public ImmutableBytesWritable createKey() {
+    return this.recordReaderImpl.createKey();
+  }
+
+  /**
+   * @return an empty Result
+   *
+   * @see org.apache.hadoop.mapred.RecordReader#createValue()
+   */
+  public Result createValue() {
+    return this.recordReaderImpl.createValue();
+  }
+
+  public long getPos() {
+
+    // This should be the ordinal tuple in the range;
+    // not clear how to calculate...
+    return this.recordReaderImpl.getPos();
+  }
+
+  public float getProgress() {
+    // Depends on the total number of tuples and getPos
+    return this.recordReaderImpl.getPos();
+  }
+
+  /**
+   * @param key ImmutableBytesWritable as input key.
+   * @param value Result as input value
+   * @return true if there was more data
+   * @throws IOException
+   */
+  public boolean next(ImmutableBytesWritable key, Result value)
+  throws IOException {
+    return this.recordReaderImpl.next(key, value);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
new file mode 100644
index 0000000..30174e2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java
@@ -0,0 +1,193 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+import org.apache.hadoop.util.StringUtils;
+
+
+/**
+ * Iterate over HBase table data, returning (ImmutableBytesWritable, Result) pairs
+ */
+public class TableRecordReaderImpl {
+  static final Log LOG = LogFactory.getLog(TableRecordReaderImpl.class);
+
+  private byte [] startRow;
+  private byte [] endRow;
+  private byte [] lastRow;
+  private Filter trrRowFilter;
+  private ResultScanner scanner;
+  private HTable htable;
+  private byte [][] trrInputColumns;
+
+  /**
+   * Restart from survivable exceptions by creating a new scanner.
+   *
+   * @param firstRow
+   * @throws IOException
+   */
+  public void restart(byte[] firstRow) throws IOException {
+    if ((endRow != null) && (endRow.length > 0)) {
+      if (trrRowFilter != null) {
+        Scan scan = new Scan(firstRow, endRow);
+        scan.addColumns(trrInputColumns);
+        scan.setFilter(trrRowFilter);
+        scan.setCacheBlocks(false);
+        this.scanner = this.htable.getScanner(scan);
+      } else {
+        LOG.debug("TIFB.restart, firstRow: " +
+            Bytes.toStringBinary(firstRow) + ", endRow: " +
+            Bytes.toStringBinary(endRow));
+        Scan scan = new Scan(firstRow, endRow);
+        scan.addColumns(trrInputColumns);
+        this.scanner = this.htable.getScanner(scan);
+      }
+    } else {
+      LOG.debug("TIFB.restart, firstRow: " +
+          Bytes.toStringBinary(firstRow) + ", no endRow");
+
+      Scan scan = new Scan(firstRow);
+      scan.addColumns(trrInputColumns);
+//      scan.setFilter(trrRowFilter);
+      this.scanner = this.htable.getScanner(scan);
+    }
+  }
+
+  /**
+   * Build the scanner. Not done in constructor to allow for extension.
+   *
+   * @throws IOException
+   */
+  public void init() throws IOException {
+    restart(startRow);
+  }
+
+  byte[] getStartRow() {
+    return this.startRow;
+  }
+  /**
+   * @param htable the {@link HTable} to scan.
+   */
+  public void setHTable(HTable htable) {
+    this.htable = htable;
+  }
+
+  /**
+   * @param inputColumns the columns to be placed in {@link Result}.
+   */
+  public void setInputColumns(final byte [][] inputColumns) {
+    this.trrInputColumns = inputColumns;
+  }
+
+  /**
+   * @param startRow the first row in the split
+   */
+  public void setStartRow(final byte [] startRow) {
+    this.startRow = startRow;
+  }
+
+  /**
+   *
+   * @param endRow the last row in the split
+   */
+  public void setEndRow(final byte [] endRow) {
+    this.endRow = endRow;
+  }
+
+  /**
+   * @param rowFilter the {@link Filter} to be used.
+   */
+  public void setRowFilter(Filter rowFilter) {
+    this.trrRowFilter = rowFilter;
+  }
+
+  public void close() {
+    this.scanner.close();
+  }
+
+  /**
+   * @return ImmutableBytesWritable
+   *
+   * @see org.apache.hadoop.mapred.RecordReader#createKey()
+   */
+  public ImmutableBytesWritable createKey() {
+    return new ImmutableBytesWritable();
+  }
+
+  /**
+   * @return an empty Result
+   *
+   * @see org.apache.hadoop.mapred.RecordReader#createValue()
+   */
+  public Result createValue() {
+    return new Result();
+  }
+
+  public long getPos() {
+    // This should be the ordinal tuple in the range;
+    // not clear how to calculate...
+    return 0;
+  }
+
+  public float getProgress() {
+    // Depends on the total number of tuples and getPos
+    return 0;
+  }
+
+  /**
+   * @param key ImmutableBytesWritable as input key.
+   * @param value Result as input value
+   * @return true if there was more data
+   * @throws IOException
+   */
+  public boolean next(ImmutableBytesWritable key, Result value)
+  throws IOException {
+    Result result;
+    try {
+      result = this.scanner.next();
+    } catch (UnknownScannerException e) {
+      LOG.debug("recovered from " + StringUtils.stringifyException(e));
+      restart(lastRow);
+      this.scanner.next();    // skip presumed already mapped row
+      result = this.scanner.next();
+    }
+
+    if (result != null && result.size() > 0) {
+      key.set(result.getRow());
+      lastRow = key.get();
+      Writables.copyWritable(result, value);
+      return true;
+    }
+    return false;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableReduce.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableReduce.java
new file mode 100644
index 0000000..155ce82
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableReduce.java
@@ -0,0 +1,39 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapred.Reducer;
+
+/**
+ * Writes reduce output to an HBase table, sorted by the reduce input key
+ *
+ * @param <K> key class
+ * @param <V> value class
+ */
+@Deprecated
+@SuppressWarnings("unchecked")
+public interface TableReduce<K extends WritableComparable, V extends Writable>
+extends Reducer<K, V, ImmutableBytesWritable, Put> {
+
+}
\ No newline at end of file
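Editor's note: since TableReduce is just a typed Reducer, a custom implementation is straightforward; below is a hedged sketch (not part of the patch) that sums IntWritable values per key into a single Put. The family "totals" and qualifier "sum" are made up for the example; it would be registered with TableMapReduceUtil.initTableReduceJob as usual.

// Editor's sketch: a custom TableReduce that aggregates per-key integer counts.
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableReduce;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SumTableReduce extends MapReduceBase
    implements TableReduce<ImmutableBytesWritable, IntWritable> {
  public void reduce(ImmutableBytesWritable key, Iterator<IntWritable> values,
      OutputCollector<ImmutableBytesWritable, Put> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    // One Put per reduce key, carrying the aggregated total.
    Put put = new Put(key.get());
    put.add(Bytes.toBytes("totals"), Bytes.toBytes("sum"), Bytes.toBytes(sum));
    output.collect(key, put);
  }
}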
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableSplit.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableSplit.java
new file mode 100644
index 0000000..5956ee8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/TableSplit.java
@@ -0,0 +1,113 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.InputSplit;
+
+/**
+ * A table split corresponds to a key range [low, high)
+ */
+@Deprecated
+public class TableSplit implements InputSplit, Comparable<TableSplit> {
+  private byte [] m_tableName;
+  private byte [] m_startRow;
+  private byte [] m_endRow;
+  private String m_regionLocation;
+
+  /** default constructor */
+  public TableSplit() {
+    this(HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY,
+      HConstants.EMPTY_BYTE_ARRAY, "");
+  }
+
+  /**
+   * Constructor
+   * @param tableName
+   * @param startRow
+   * @param endRow
+   * @param location
+   */
+  public TableSplit(byte [] tableName, byte [] startRow, byte [] endRow,
+      final String location) {
+    this.m_tableName = tableName;
+    this.m_startRow = startRow;
+    this.m_endRow = endRow;
+    this.m_regionLocation = location;
+  }
+
+  /** @return table name */
+  public byte [] getTableName() {
+    return this.m_tableName;
+  }
+
+  /** @return starting row key */
+  public byte [] getStartRow() {
+    return this.m_startRow;
+  }
+
+  /** @return end row key */
+  public byte [] getEndRow() {
+    return this.m_endRow;
+  }
+
+  /** @return the region's hostname */
+  public String getRegionLocation() {
+    return this.m_regionLocation;
+  }
+
+  public String[] getLocations() {
+    return new String[] {this.m_regionLocation};
+  }
+
+  public long getLength() {
+    // Not clear how to obtain this... seems to be used only for sorting splits
+    return 0;
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.m_tableName = Bytes.readByteArray(in);
+    this.m_startRow = Bytes.readByteArray(in);
+    this.m_endRow = Bytes.readByteArray(in);
+    this.m_regionLocation = Bytes.toString(Bytes.readByteArray(in));
+  }
+
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, this.m_tableName);
+    Bytes.writeByteArray(out, this.m_startRow);
+    Bytes.writeByteArray(out, this.m_endRow);
+    Bytes.writeByteArray(out, Bytes.toBytes(this.m_regionLocation));
+  }
+
+  @Override
+  public String toString() {
+    return m_regionLocation + ":" +
+      Bytes.toStringBinary(m_startRow) + "," + Bytes.toStringBinary(m_endRow);
+  }
+
+  public int compareTo(TableSplit o) {
+    return Bytes.compareTo(getStartRow(), o.getStartRow());
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapred/package-info.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/package-info.java
new file mode 100644
index 0000000..cc5228a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapred/package-info.java
@@ -0,0 +1,124 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+Provides HBase <a href="http://wiki.apache.org/hadoop/HadoopMapReduce">MapReduce</a>
+Input/OutputFormats, a table indexing MapReduce job, and utility methods.
+
+<h2>Table of Contents</h2>
+<ul>
+<li><a href="#classpath">HBase, MapReduce and the CLASSPATH</a></li>
+<li><a href="#sink">HBase as MapReduce job data source and sink</a></li>
+<li><a href="#examples">Example Code</a></li>
+</ul>
+
+<h2><a name="classpath">HBase, MapReduce and the CLASSPATH</a></h2>
+
+<p>MapReduce jobs deployed to a MapReduce cluster do not by default have access
+to the HBase configuration under <code>$HBASE_CONF_DIR</code> nor to HBase classes.
+You could add <code>hbase-site.xml</code> to <code>$HADOOP_HOME/conf</code>, add
+<code>hbase-X.X.X.jar</code> to <code>$HADOOP_HOME/lib</code>, and copy these
+changes across your cluster, but the cleanest means of adding hbase configuration
+and classes to the cluster <code>CLASSPATH</code> is to uncomment
+<code>HADOOP_CLASSPATH</code> in <code>$HADOOP_HOME/conf/hadoop-env.sh</code>
+and add the hbase dependencies there.  For example, here is how you would amend
+<code>hadoop-env.sh</code>, adding the
+built hbase jar, zookeeper (needed by the hbase client), the hbase conf, and the
+<code>PerformanceEvaluation</code> class from the built hbase test jar to the
+hadoop <code>CLASSPATH</code>:
+
+<blockquote><pre># Extra Java CLASSPATH elements. Optional.
+# export HADOOP_CLASSPATH=
+export HADOOP_CLASSPATH=$HBASE_HOME/build/hbase-X.X.X.jar:$HBASE_HOME/build/hbase-X.X.X-test.jar:$HBASE_HOME/conf:${HBASE_HOME}/lib/zookeeper-X.X.X.jar</pre></blockquote>
+
+<p>Expand <code>$HBASE_HOME</code> in the above appropriately to suit your
+local environment.</p>
+
+<p>After copying the above change around your cluster (and restarting), this is
+how you would run the PerformanceEvaluation MR job to put up 4 clients (Presumes
+a ready mapreduce cluster):
+
+<blockquote><pre>$HADOOP_HOME/bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 4</pre></blockquote>
+
+The PerformanceEvaluation class will be found on the CLASSPATH because you
+added the hbase test jar to <code>HADOOP_CLASSPATH</code> above.
+</p>
+
+<p>Another possibility, if for example you do not have access to hadoop-env.sh or
+are unable to restart the hadoop cluster, is bundling the hbase jar into a mapreduce
+job jar, adding it and its dependencies under the job jar <code>lib/</code>
+directory and the hbase conf under a job jar <code>conf/</code> directory.
+</p>
+
+<h2><a name="sink">HBase as MapReduce job data source and sink</a></h2>
+
+<p>HBase can be used as a data source, {@link org.apache.hadoop.hbase.mapred.TableInputFormat TableInputFormat},
+and data sink, {@link org.apache.hadoop.hbase.mapred.TableOutputFormat TableOutputFormat}, for MapReduce jobs.
+Writing MapReduce jobs that read or write HBase, you'll probably want to subclass
+{@link org.apache.hadoop.hbase.mapred.TableMap TableMap} and/or
+{@link org.apache.hadoop.hbase.mapred.TableReduce TableReduce}.  See the do-nothing
+pass-through classes {@link org.apache.hadoop.hbase.mapred.IdentityTableMap IdentityTableMap} and
+{@link org.apache.hadoop.hbase.mapred.IdentityTableReduce IdentityTableReduce} for basic usage.  For a more
+involved example, see <code>BuildTableIndex</code>
+or review the <code>org.apache.hadoop.hbase.mapred.TestTableMapReduce</code> unit test.
+</p>
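+
+<p>As a rough sketch of the wiring (the class, table, and column family names here
+are illustrative, and the exact <code>TableMapReduceUtil</code> overloads in this
+package should be checked before use), a job that maps a source table into a sink
+table might be set up like so:
+
+<blockquote><pre>JobConf jobConf = new JobConf(HBaseConfiguration.create(), MyJob.class);
+jobConf.setJobName("copy-sourcetable");
+// MyMap would be a TableMap&lt;ImmutableBytesWritable, Put&gt; subclass that turns
+// each Result read from 'sourcetable' into a Put for 'sinktable'.
+TableMapReduceUtil.initTableMapJob("sourcetable", "myfamily:", MyMap.class,
+    ImmutableBytesWritable.class, Put.class, jobConf);
+TableMapReduceUtil.initTableReduceJob("sinktable", IdentityTableReduce.class, jobConf);
+JobClient.runJob(jobConf);</pre></blockquote>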
+
+<p>Running mapreduce jobs that have hbase as source or sink, you'll need to
+specify source/sink table and column names in your configuration.</p>
+
+<p>Reading from hbase, the TableInputFormat asks hbase for the list of
+regions and makes a map per region, or <code>mapred.map.tasks</code> maps,
+whichever is smaller (if your job only has two maps, raise
+<code>mapred.map.tasks</code> to a number greater than the number of regions).
+Maps will run on the adjacent TaskTracker if you are running a TaskTracker
+and RegionServer per node.
+Writing, it may make sense to avoid the reduce step and write yourself back into
+hbase from inside your map. You'd do this when your job does not need the sort
+and collation that mapreduce does on the map emitted data; on insert,
+hbase 'sorts' so there is no point double-sorting (and shuffling data around
+your mapreduce cluster) unless you need to. If you do not need the reduce,
+you might just have your map emit counts of records processed so the
+framework's report at the end of your job has meaning, or set the number of
+reduces to zero and use TableOutputFormat. See the example code
+below. If running the reduce step makes sense in your case, it's usually better
+to have lots of reducers so the load is spread across the hbase cluster.</p>
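+
+<p>For the zero-reduce case just described, the relevant wiring (building on the
+sketch above; the table name is again illustrative) might look like:
+
+<blockquote><pre>// IdentityTableReduce is named here only so initTableReduceJob can configure
+// TableOutputFormat for the sink table; with zero reduces no reduce actually runs
+// and the map-emitted Puts go straight to the table.
+TableMapReduceUtil.initTableReduceJob("mytable", IdentityTableReduce.class, jobConf);
+jobConf.setNumReduceTasks(0);</pre></blockquote>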
+
+<p>There is also a new hbase partitioner that will run as many reducers as
+currently existing regions.  The
+{@link org.apache.hadoop.hbase.mapred.HRegionPartitioner} is suitable
+when your table is large and your upload is not such that it will greatly
+alter the number of existing regions when done; otherwise, use the default
+partitioner.
+</p>
+
+<h2><a name="examples">Example Code</a></h2>
+<h3>Sample Row Counter</h3>
+<p>See {@link org.apache.hadoop.hbase.mapred.RowCounter}.  You should be able to run
+it by doing: <code>% ./bin/hadoop jar hbase-X.X.X.jar</code>.  This will invoke
+the hbase MapReduce Driver class.  Select 'rowcounter' from the choice of jobs
+offered. You may need to add the hbase conf directory to <code>$HADOOP_HOME/conf/hadoop-env.sh#HADOOP_CLASSPATH</code>
+so the rowcounter gets pointed at the right hbase cluster (or, build a new jar
+with an appropriate hbase-site.xml built into your job jar).
+</p>
+<h3>PerformanceEvaluation</h3>
+<p>See org.apache.hadoop.hbase.PerformanceEvaluation from hbase src/test.  It runs
+a mapreduce job to run concurrent clients reading and writing hbase.
+</p>
+
+*/
+package org.apache.hadoop.hbase.mapred;
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
new file mode 100644
index 0000000..339651f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
@@ -0,0 +1,204 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.mapreduce.Job;
+
+import java.io.IOException;
+
+/**
+ * Tool used to copy a table to another one, which can be on a different cluster.
+ * It is also configurable with a start and end time, as well as a specification
+ * of the region server implementation if different from the local cluster.
+ */
+public class CopyTable {
+
+  final static String NAME = "copytable";
+  static String rsClass = null;
+  static String rsImpl = null;
+  static long startTime = 0;
+  static long endTime = 0;
+  static String tableName = null;
+  static String newTableName = null;
+  static String peerAddress = null;
+  static String families = null;
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param conf  The current configuration.
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   */
+  public static Job createSubmittableJob(Configuration conf, String[] args)
+  throws IOException {
+    if (!doCommandLine(args)) {
+      return null;
+    }
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJarByClass(CopyTable.class);
+    Scan scan = new Scan();
+    if (startTime != 0) {
+      scan.setTimeRange(startTime,
+          endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
+    }
+    if(families != null) {
+      String[] fams = families.split(",");
+      for(String fam : fams) {
+        scan.addFamily(Bytes.toBytes(fam));
+      }
+    }
+    TableMapReduceUtil.initTableMapperJob(tableName, scan,
+        Import.Importer.class, null, null, job);
+    TableMapReduceUtil.initTableReducerJob(
+        newTableName == null ? tableName : newTableName, null, job,
+        null, peerAddress, rsClass, rsImpl);
+    job.setNumReduceTasks(0);
+    return job;
+  }
+
+  /*
+   * @param errorMsg Error message.  Can be null.
+   */
+  private static void printUsage(final String errorMsg) {
+    if (errorMsg != null && errorMsg.length() > 0) {
+      System.err.println("ERROR: " + errorMsg);
+    }
+    System.err.println("Usage: CopyTable [--rs.class=CLASS] " +
+        "[--rs.impl=IMPL] [--starttime=X] [--endtime=Y] " +
+        "[--new.name=NEW] [--peer.adr=ADR] <tablename>");
+    System.err.println();
+    System.err.println("Options:");
+    System.err.println(" rs.class     hbase.regionserver.class of the peer cluster");
+    System.err.println("              specify if different from current cluster");
+    System.err.println(" rs.impl      hbase.regionserver.impl of the peer cluster");
+    System.err.println(" starttime    beginning of the time range");
+    System.err.println("              without endtime means from starttime to forever");
+    System.err.println(" endtime      end of the time range");
+    System.err.println(" new.name     new table's name");
+    System.err.println(" peer.adr     Address of the peer cluster given in the format");
+    System.err.println("              hbase.zookeeer.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent");
+    System.err.println(" families     comma-seperated list of families to copy");
+    System.err.println();
+    System.err.println("Args:");
+    System.err.println(" tablename    Name of the table to copy");
+    System.err.println();
+    System.err.println("Examples:");
+    System.err.println(" To copy 'TestTable' to a cluster that uses replication for a 1 hour window:");
+    System.err.println(" $ bin/hbase " +
+        "org.apache.hadoop.hbase.mapreduce.CopyTable --rs.class=org.apache.hadoop.hbase.ipc.ReplicationRegionInterface " +
+        "--rs.impl=org.apache.hadoop.hbase.regionserver.replication.ReplicationRegionServer --starttime=1265875194289 --endtime=1265878794289 " +
+        "--peer.adr=server1,server2,server3:2181:/hbase TestTable ");
+  }
+
+  private static boolean doCommandLine(final String[] args) {
+    // Process command-line args. TODO: Better cmd-line processing
+    // (but hopefully something not as painful as cli options).
+    if (args.length < 1) {
+      printUsage(null);
+      return false;
+    }
+    try {
+      for (int i = 0; i < args.length; i++) {
+        String cmd = args[i];
+        if (cmd.equals("-h") || cmd.startsWith("--h")) {
+          printUsage(null);
+          return false;
+        }
+
+        final String rsClassArgKey = "--rs.class=";
+        if (cmd.startsWith(rsClassArgKey)) {
+          rsClass = cmd.substring(rsClassArgKey.length());
+          continue;
+        }
+
+        final String rsImplArgKey = "--rs.impl=";
+        if (cmd.startsWith(rsImplArgKey)) {
+          rsImpl = cmd.substring(rsImplArgKey.length());
+          continue;
+        }
+
+        final String startTimeArgKey = "--starttime=";
+        if (cmd.startsWith(startTimeArgKey)) {
+          startTime = Long.parseLong(cmd.substring(startTimeArgKey.length()));
+          continue;
+        }
+
+        final String endTimeArgKey = "--endtime=";
+        if (cmd.startsWith(endTimeArgKey)) {
+          endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
+          continue;
+        }
+
+        final String newNameArgKey = "--new.name=";
+        if (cmd.startsWith(newNameArgKey)) {
+          newTableName = cmd.substring(newNameArgKey.length());
+          continue;
+        }
+
+        final String peerAdrArgKey = "--peer.adr=";
+        if (cmd.startsWith(peerAdrArgKey)) {
+          peerAddress = cmd.substring(peerAdrArgKey.length());
+          continue;
+        }
+
+        final String familiesArgKey = "--families=";
+        if (cmd.startsWith(familiesArgKey)) {
+          families = cmd.substring(familiesArgKey.length());
+          continue;
+        }
+
+        if (i == args.length-1) {
+          tableName = cmd;
+        }
+      }
+      if (newTableName == null && peerAddress == null) {
+        printUsage("At least a new table name or a " +
+            "peer address must be specified");
+        return false;
+      }
+    } catch (Exception e) {
+      e.printStackTrace();
+      printUsage("Can't start because " + e.getMessage());
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * Main entry point.
+   *
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    Job job = createSubmittableJob(conf, args);
+    if (job != null) {
+      System.exit(job.waitForCompletion(true) ? 0 : 1);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java
new file mode 100644
index 0000000..5e00e10
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Driver.java
@@ -0,0 +1,51 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication;
+import org.apache.hadoop.util.ProgramDriver;
+
+/**
+ * Driver for hbase mapreduce jobs. Select which to run by passing
+ * name of job to this main.
+ */
+public class Driver {
+  /**
+   * @param args
+   * @throws Throwable
+   */
+  public static void main(String[] args) throws Throwable {
+    ProgramDriver pgd = new ProgramDriver();
+    pgd.addClass(RowCounter.NAME, RowCounter.class,
+      "Count rows in HBase table");
+    pgd.addClass(Export.NAME, Export.class, "Write table data to HDFS.");
+    pgd.addClass(Import.NAME, Import.class, "Import data written by Export.");
+    pgd.addClass(ImportTsv.NAME, ImportTsv.class, "Import data in TSV format.");
+    pgd.addClass(LoadIncrementalHFiles.NAME, LoadIncrementalHFiles.class,
+                 "Complete a bulk data load.");
+    pgd.addClass(CopyTable.NAME, CopyTable.class,
+        "Export a table from local cluster to peer cluster");
+    pgd.addClass(VerifyReplication.NAME, VerifyReplication.class, "Compare" +
+        " the data from tables in two different clusters. WARNING: It" +
+        " doesn't work for incrementColumnValues'd cells since the" +
+        " timestamp is changed after being appended to the log.");
+    pgd.driver(args);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java
new file mode 100644
index 0000000..a42a125
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java
@@ -0,0 +1,147 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
+import org.apache.hadoop.util.GenericOptionsParser;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Export an HBase table.
+ * Writes content to sequence files up in HDFS.  Use {@link Import} to read it
+ * back in again.
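+ *
+ * <p>A rough programmatic sketch (the table name and output path are illustrative;
+ * the tool is normally driven from the command line through {@link #main(String[])}):
+ * <pre>
+ * Configuration conf = HBaseConfiguration.create();
+ * Job job = Export.createSubmittableJob(conf,
+ *     new String[] { "mytable", "/export/mytable" });
+ * job.waitForCompletion(true);
+ * </pre>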
+ */
+public class Export {
+  private static final Log LOG = LogFactory.getLog(Export.class);
+  final static String NAME = "export";
+
+  /**
+   * Mapper.
+   */
+  static class Exporter
+  extends TableMapper<ImmutableBytesWritable, Result> {
+    /**
+     * @param row  The current table row key.
+     * @param value  The columns.
+     * @param context  The current context.
+     * @throws IOException When something is broken with the data.
+     * @see org.apache.hadoop.mapreduce.Mapper#map(KEYIN, VALUEIN,
+     *   org.apache.hadoop.mapreduce.Mapper.Context)
+     */
+    @Override
+    public void map(ImmutableBytesWritable row, Result value,
+      Context context)
+    throws IOException {
+      try {
+        context.write(row, value);
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+    }
+  }
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param conf  The current configuration.
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   */
+  public static Job createSubmittableJob(Configuration conf, String[] args)
+  throws IOException {
+    String tableName = args[0];
+    Path outputDir = new Path(args[1]);
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJobName(NAME + "_" + tableName);
+    job.setJarByClass(Exporter.class);
+    // TODO: Allow passing filter and subset of rows/columns.
+    Scan s = new Scan();
+    // Optional arguments.
+    int versions = args.length > 2? Integer.parseInt(args[2]): 1;
+    s.setMaxVersions(versions);
+    long startTime = args.length > 3? Long.parseLong(args[3]): 0L;
+    long endTime = args.length > 4? Long.parseLong(args[4]): Long.MAX_VALUE;
+    s.setTimeRange(startTime, endTime);
+    s.setCacheBlocks(false);
+    if (conf.get(TableInputFormat.SCAN_COLUMN_FAMILY) != null) {
+      s.addFamily(Bytes.toBytes(conf.get(TableInputFormat.SCAN_COLUMN_FAMILY)));
+    }
+    LOG.info("verisons=" + versions + ", starttime=" + startTime +
+      ", endtime=" + endTime);
+    TableMapReduceUtil.initTableMapperJob(tableName, s, Exporter.class, null,
+      null, job);
+    // No reducers.  Just write straight to output files.
+    job.setNumReduceTasks(0);
+    job.setOutputFormatClass(SequenceFileOutputFormat.class);
+    job.setOutputKeyClass(ImmutableBytesWritable.class);
+    job.setOutputValueClass(Result.class);
+    FileOutputFormat.setOutputPath(job, outputDir);
+    return job;
+  }
+
+  /*
+   * @param errorMsg Error message.  Can be null.
+   */
+  private static void usage(final String errorMsg) {
+    if (errorMsg != null && errorMsg.length() > 0) {
+      System.err.println("ERROR: " + errorMsg);
+    }
+    System.err.println("Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> " +
+      "[<starttime> [<endtime>]]]\n");
+    System.err.println("  Note: -D properties will be applied to the conf used. ");
+    System.err.println("  For example: ");
+    System.err.println("   -D mapred.output.compress=true");
+    System.err.println("   -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec");
+    System.err.println("   -D mapred.output.compression.type=BLOCK");
+    System.err.println("  Additionally, the following SCAN properties can be specified");
+    System.err.println("  to control/limit what is exported..");
+    System.err.println("   -D " + TableInputFormat.SCAN_COLUMN_FAMILY + "=<familyName>");
+  }
+
+  /**
+   * Main entry point.
+   *
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
+    if (otherArgs.length < 2) {
+      usage("Wrong number of arguments: " + otherArgs.length);
+      System.exit(-1);
+    }
+    Job job = createSubmittableJob(conf, otherArgs);
+    System.exit(job.waitForCompletion(true)? 0 : 1);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/GroupingTableMapper.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/GroupingTableMapper.java
new file mode 100644
index 0000000..c38337b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/GroupingTableMapper.java
@@ -0,0 +1,180 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Job;
+
+/**
+ * Extract grouping columns from input record.
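+ *
+ * <p>A minimal setup sketch (the table, scan, and column names are illustrative):
+ * <pre>
+ * Scan scan = new Scan();
+ * GroupingTableMapper.initJob("mytable", scan, "info:a info:b",
+ *     GroupingTableMapper.class, job);
+ * </pre>
+ * Records missing any of the grouping columns are skipped.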
+ */
+public class GroupingTableMapper
+extends TableMapper<ImmutableBytesWritable,Result> implements Configurable {
+
+  /**
+   * JobConf parameter to specify the columns used to produce the key passed to
+   * collect from the map phase.
+   */
+  public static final String GROUP_COLUMNS =
+    "hbase.mapred.groupingtablemap.columns";
+
+  /** The grouping columns. */
+  protected byte [][] columns;
+  /** The current configuration. */
+  private Configuration conf = null;
+
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up
+   * the job.
+   *
+   * @param table The table to be processed.
+   * @param scan  The scan with the columns etc.
+   * @param groupColumns  A space separated list of columns used to form the
+   * key used in collect.
+   * @param mapper  The mapper class.
+   * @param job  The current job.
+   * @throws IOException When setting up the job fails.
+   */
+  @SuppressWarnings("unchecked")
+  public static void initJob(String table, Scan scan, String groupColumns,
+    Class<? extends TableMapper> mapper, Job job) throws IOException {
+    TableMapReduceUtil.initTableMapperJob(table, scan, mapper,
+        ImmutableBytesWritable.class, Result.class, job);
+    job.getConfiguration().set(GROUP_COLUMNS, groupColumns);
+  }
+
+  /**
+   * Extract the grouping columns from value to construct a new key. Pass the
+   * new key and value to reduce. If any of the grouping columns are not found
+   * in the value, the record is skipped.
+   *
+   * @param key  The current key.
+   * @param value  The current value.
+   * @param context  The current context.
+   * @throws IOException When writing the record fails.
+   * @throws InterruptedException When the job is aborted.
+   */
+  @Override
+  public void map(ImmutableBytesWritable key, Result value, Context context)
+  throws IOException, InterruptedException {
+    byte[][] keyVals = extractKeyValues(value);
+    if(keyVals != null) {
+      ImmutableBytesWritable tKey = createGroupKey(keyVals);
+      context.write(tKey, value);
+    }
+  }
+
+  /**
+   * Extract column values from the current record. This method returns
+   * null if any of the columns are not found.
+   * <p>
+   * Override this method if you want to deal with nulls differently.
+   *
+   * @param r  The current values.
+   * @return Array of byte values.
+   */
+  protected byte[][] extractKeyValues(Result r) {
+    byte[][] keyVals = null;
+    ArrayList<byte[]> foundList = new ArrayList<byte[]>();
+    int numCols = columns.length;
+    if (numCols > 0) {
+      for (KeyValue value: r.list()) {
+        byte [] column = KeyValue.makeColumn(value.getFamily(),
+            value.getQualifier());
+        for (int i = 0; i < numCols; i++) {
+          if (Bytes.equals(column, columns[i])) {
+            foundList.add(value.getValue());
+            break;
+          }
+        }
+      }
+      if(foundList.size() == numCols) {
+        keyVals = foundList.toArray(new byte[numCols][]);
+      }
+    }
+    return keyVals;
+  }
+
+  /**
+   * Create a key by concatenating multiple column values.
+   * <p>
+   * Override this function in order to produce different types of keys.
+   *
+   * @param vals  The current key/values.
+   * @return A key generated by concatenating multiple column values.
+   */
+  protected ImmutableBytesWritable createGroupKey(byte[][] vals) {
+    if(vals == null) {
+      return null;
+    }
+    StringBuilder sb =  new StringBuilder();
+    for(int i = 0; i < vals.length; i++) {
+      if(i > 0) {
+        sb.append(" ");
+      }
+      try {
+        sb.append(new String(vals[i], HConstants.UTF8_ENCODING));
+      } catch (UnsupportedEncodingException e) {
+        throw new RuntimeException(e);
+      }
+    }
+    return new ImmutableBytesWritable(Bytes.toBytes(sb.toString()));
+  }
+
+  /**
+   * Returns the current configuration.
+   *
+   * @return The current configuration.
+   * @see org.apache.hadoop.conf.Configurable#getConf()
+   */
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  /**
+   * Sets the configuration. This is used to set up the grouping details.
+   *
+   * @param configuration  The configuration to set.
+   * @see org.apache.hadoop.conf.Configurable#setConf(
+   *   org.apache.hadoop.conf.Configuration)
+   */
+  @Override
+  public void setConf(Configuration configuration) {
+    this.conf = configuration;
+    String[] cols = conf.get(GROUP_COLUMNS, "").split(" ");
+    columns = new byte[cols.length][];
+    for(int i = 0; i < cols.length; i++) {
+      columns[i] = Bytes.toBytes(cols[i]);
+    }
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
new file mode 100644
index 0000000..8ccdf4d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
@@ -0,0 +1,275 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.filecache.DistributedCache;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Writes HFiles. Passed KeyValues must arrive in order.
+ * Currently, it can only write files to a single column family at a
+ * time.  Multiple column families would require coordinating keys across families.
+ * Writes current time as the sequence id for the file. Sets the major compacted
+ * attribute on created hfiles.
+ * @see KeyValueSortReducer
+ */
+public class HFileOutputFormat extends FileOutputFormat<ImmutableBytesWritable, KeyValue> {
+  static Log LOG = LogFactory.getLog(HFileOutputFormat.class);
+  
+  public RecordWriter<ImmutableBytesWritable, KeyValue> getRecordWriter(final TaskAttemptContext context)
+  throws IOException, InterruptedException {
+    // Get the path of the temporary output file
+    final Path outputPath = FileOutputFormat.getOutputPath(context);
+    final Path outputdir = new FileOutputCommitter(outputPath, context).getWorkPath();
+    Configuration conf = context.getConfiguration();
+    final FileSystem fs = outputdir.getFileSystem(conf);
+    // These configs. are from hbase-*.xml
+    final long maxsize = conf.getLong("hbase.hregion.max.filesize", 268435456);
+    final int blocksize =
+      conf.getInt("hbase.mapreduce.hfileoutputformat.blocksize", 65536);
+    // Invented config.  Add to hbase-*.xml if other than default compression.
+    final String compression = conf.get("hfile.compression",
+      Compression.Algorithm.NONE.getName());
+
+    return new RecordWriter<ImmutableBytesWritable, KeyValue>() {
+      // Map of families to writers and how much has been output on the writer.
+      private final Map<byte [], WriterLength> writers =
+        new TreeMap<byte [], WriterLength>(Bytes.BYTES_COMPARATOR);
+      private byte [] previousRow = HConstants.EMPTY_BYTE_ARRAY;
+      private final byte [] now = Bytes.toBytes(System.currentTimeMillis());
+
+      public void write(ImmutableBytesWritable row, KeyValue kv)
+      throws IOException {
+        long length = kv.getLength();
+        byte [] family = kv.getFamily();
+        WriterLength wl = this.writers.get(family);
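+        // Open a writer on first use of a family; roll to a new hfile once maxsize
+        // is exceeded, but only when the incoming row differs from the last row
+        // written, so a single row never spans two files.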
+        if (wl == null || ((length + wl.written) >= maxsize) &&
+            Bytes.compareTo(this.previousRow, 0, this.previousRow.length,
+              kv.getBuffer(), kv.getRowOffset(), kv.getRowLength()) != 0) {
+          // Get a new writer.
+          Path basedir = new Path(outputdir, Bytes.toString(family));
+          if (wl == null) {
+            wl = new WriterLength();
+            this.writers.put(family, wl);
+            if (this.writers.size() > 1) throw new IOException("One family only");
+            // If wl == null, first file in family.  Ensure family dir exists.
+            if (!fs.exists(basedir)) fs.mkdirs(basedir);
+          }
+          wl.writer = getNewWriter(wl.writer, basedir);
+          LOG.info("Writer=" + wl.writer.getPath() +
+            ((wl.written == 0)? "": ", wrote=" + wl.written));
+          wl.written = 0;
+        }
+        kv.updateLatestStamp(this.now);
+        wl.writer.append(kv);
+        wl.written += length;
+        // Copy the row so we know when the row transitions.
+        this.previousRow = kv.getRow();
+      }
+
+      /* Create a new HFile.Writer. Close current if there is one.
+       * @param writer
+       * @param familydir
+       * @return A new HFile.Writer.
+       * @throws IOException
+       */
+      private HFile.Writer getNewWriter(final HFile.Writer writer,
+          final Path familydir)
+      throws IOException {
+        close(writer);
+        return new HFile.Writer(fs,  StoreFile.getUniqueFile(fs, familydir),
+          blocksize, compression, KeyValue.KEY_COMPARATOR);
+      }
+
+      private void close(final HFile.Writer w) throws IOException {
+        if (w != null) {
+          w.appendFileInfo(StoreFile.BULKLOAD_TIME_KEY,
+              Bytes.toBytes(System.currentTimeMillis()));
+          w.appendFileInfo(StoreFile.BULKLOAD_TASK_KEY,
+              Bytes.toBytes(context.getTaskAttemptID().toString()));
+          w.appendFileInfo(StoreFile.MAJOR_COMPACTION_KEY, 
+              Bytes.toBytes(true));
+          w.close();
+        }
+      }
+
+      public void close(TaskAttemptContext c)
+      throws IOException, InterruptedException {
+        for (Map.Entry<byte [], WriterLength> e: this.writers.entrySet()) {
+          close(e.getValue().writer);
+        }
+      }
+    };
+  }
+
+  /*
+   * Data structure to hold a Writer and amount of data written on it.
+   */
+  static class WriterLength {
+    long written = 0;
+    HFile.Writer writer = null;
+  }
+
+  /**
+   * Return the start keys of all of the regions in this table,
+   * as a list of ImmutableBytesWritable.
+   */
+  private static List<ImmutableBytesWritable> getRegionStartKeys(HTable table)
+  throws IOException {
+    byte[][] byteKeys = table.getStartKeys();
+    ArrayList<ImmutableBytesWritable> ret =
+      new ArrayList<ImmutableBytesWritable>(byteKeys.length);
+    for (byte[] byteKey : byteKeys) {
+      ret.add(new ImmutableBytesWritable(byteKey));
+    }
+    return ret;
+  }
+
+  /**
+   * Write out a SequenceFile that can be read by TotalOrderPartitioner
+   * that contains the split points in startKeys.
+   * @param partitionsPath output path for SequenceFile
+   * @param startKeys the region start keys
+   */
+  private static void writePartitions(Configuration conf, Path partitionsPath,
+      List<ImmutableBytesWritable> startKeys) throws IOException {
+    if (startKeys.isEmpty()) {
+      throw new IllegalArgumentException("No regions passed");
+    }
+
+    // We're generating a list of split points, and we don't ever
+    // have keys < the first region (which has an empty start key)
+    // so we need to remove it. Otherwise we would end up with an
+    // empty reducer with index 0
+    TreeSet<ImmutableBytesWritable> sorted =
+      new TreeSet<ImmutableBytesWritable>(startKeys);
+
+    ImmutableBytesWritable first = sorted.first();
+    if (!first.equals(HConstants.EMPTY_BYTE_ARRAY)) {
+      throw new IllegalArgumentException(
+          "First region of table should have empty start key. Instead has: "
+          + Bytes.toStringBinary(first.get()));
+    }
+    sorted.remove(first);
+    
+    // Write the actual file
+    FileSystem fs = partitionsPath.getFileSystem(conf);
+    SequenceFile.Writer writer = SequenceFile.createWriter(fs, 
+        conf, partitionsPath, ImmutableBytesWritable.class, NullWritable.class);
+    
+    try {
+      for (ImmutableBytesWritable startKey : sorted) {
+        writer.append(startKey, NullWritable.get());
+      }
+    } finally {
+      writer.close();
+    }
+  }
+  
+  /**
+   * Configure a MapReduce Job to perform an incremental load into the given
+   * table. This
+   * <ul>
+   *   <li>Inspects the table to configure a total order partitioner</li>
+   *   <li>Uploads the partitions file to the cluster and adds it to the DistributedCache</li>
+   *   <li>Sets the number of reduce tasks to match the current number of regions</li>
+   *   <li>Sets the output key/value class to match HFileOutputFormat's requirements</li>
+   *   <li>Sets the reducer up to perform the appropriate sorting (either KeyValueSortReducer or
+   *     PutSortReducer)</li>
+   * </ul> 
+   * The user should be sure to set the map output value class to either KeyValue or Put before
+   * running this function.
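+   *
+   * <p>A rough usage sketch (the mapper class, table name, and paths are
+   * illustrative):
+   * <pre>
+   * Job job = new Job(conf, "bulkload");
+   * job.setMapperClass(MyPutEmittingMapper.class);
+   * job.setMapOutputKeyClass(ImmutableBytesWritable.class);
+   * job.setMapOutputValueClass(Put.class);
+   * FileInputFormat.addInputPath(job, new Path("/input"));
+   * FileOutputFormat.setOutputPath(job, new Path("/hfile-output"));
+   * HFileOutputFormat.configureIncrementalLoad(job, new HTable(conf, "mytable"));
+   * </pre>
+   * The hfiles written under the output path can then be loaded with
+   * {@link LoadIncrementalHFiles}.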
+   */
+  public static void configureIncrementalLoad(Job job, HTable table) throws IOException {
+    Configuration conf = job.getConfiguration();
+    job.setPartitionerClass(TotalOrderPartitioner.class);
+    job.setOutputKeyClass(ImmutableBytesWritable.class);
+    job.setOutputValueClass(KeyValue.class);
+    job.setOutputFormatClass(HFileOutputFormat.class);
+    
+    // Based on the configured map output class, set the correct reducer to properly
+    // sort the incoming values.
+    // TODO it would be nice to pick one or the other of these formats.
+    if (KeyValue.class.equals(job.getMapOutputValueClass())) {
+      job.setReducerClass(KeyValueSortReducer.class);
+    } else if (Put.class.equals(job.getMapOutputValueClass())) {
+      job.setReducerClass(PutSortReducer.class);
+    } else {
+      LOG.warn("Unknown map output value type:" + job.getMapOutputValueClass());
+    }
+    
+    LOG.info("Looking up current regions for table " + table);
+    List<ImmutableBytesWritable> startKeys = getRegionStartKeys(table);
+    LOG.info("Configuring " + startKeys.size() + " reduce partitions " +
+        "to match current region count");
+    job.setNumReduceTasks(startKeys.size());
+    
+    Path partitionsPath = new Path(job.getWorkingDirectory(),
+        "partitions_" + System.currentTimeMillis());
+    LOG.info("Writing partition information to " + partitionsPath);
+
+    FileSystem fs = partitionsPath.getFileSystem(conf);
+    writePartitions(conf, partitionsPath, startKeys);
+    partitionsPath = partitionsPath.makeQualified(fs);
+    URI cacheUri;
+    try {
+      cacheUri = new URI(partitionsPath.toString() + "#" +
+          TotalOrderPartitioner.DEFAULT_PATH);
+    } catch (URISyntaxException e) {
+      throw new IOException(e);
+    }
+    DistributedCache.addCacheFile(cacheUri, conf);
+    DistributedCache.createSymlink(conf);
+    
+    LOG.info("Incremental table output configured.");
+  }
+  
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
new file mode 100644
index 0000000..e42d500
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/HRegionPartitioner.java
@@ -0,0 +1,133 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Partitioner;
+
+/**
+ * This is used to partition the output keys into groups of keys.
+ * Keys are grouped according to the regions that currently exist
+ * so that each reducer fills a single region and the load is distributed.
+ *
+ * <p>This class is not suitable as a partitioner for creating hfiles
+ * for incremental bulk loads, as the region spread will likely change between the
+ * time of hfile creation and load time. See {@link LoadIncrementalHFiles}
+ * and <a href="http://hbase.apache.org/docs/current/bulk-loads.html">Bulk Load</a>.
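+ *
+ * <p>A minimal sketch (the table name is illustrative): the partitioner is typically
+ * wired up through TableMapReduceUtil.initTableReducerJob, but it can also be set
+ * directly:
+ * <pre>
+ * job.setPartitionerClass(HRegionPartitioner.class);
+ * // setConf() looks up the target table under TableOutputFormat.OUTPUT_TABLE.
+ * job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "mytable");
+ * </pre>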
+ *
+ * @param <KEY>  The type of the key.
+ * @param <VALUE>  The type of the value.
+ */
+public class HRegionPartitioner<KEY, VALUE>
+extends Partitioner<ImmutableBytesWritable, VALUE>
+implements Configurable {
+
+  private final Log LOG = LogFactory.getLog(TableInputFormat.class);
+  private Configuration conf = null;
+  private HTable table;
+  private byte[][] startKeys;
+
+  /**
+   * Gets the partition number for a given key (hence record) given the total
+   * number of partitions i.e. number of reduce-tasks for the job.
+   *
+ * <p>Typically a hash function on all or a subset of the key.</p>
+   *
+   * @param key  The key to be partitioned.
+   * @param value  The entry value.
+   * @param numPartitions  The total number of partitions.
+   * @return The partition number for the <code>key</code>.
+   * @see org.apache.hadoop.mapreduce.Partitioner#getPartition(
+   *   java.lang.Object, java.lang.Object, int)
+   */
+  @Override
+  public int getPartition(ImmutableBytesWritable key,
+      VALUE value, int numPartitions) {
+    byte[] region = null;
+    // Only one region, so return partition 0
+    if (this.startKeys.length == 1){
+      return 0;
+    }
+    try {
+      // Not sure if this is cached after a split so we could have problems
+      // here if a region splits while mapping
+      region = table.getRegionLocation(key.get()).getRegionInfo().getStartKey();
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+    for (int i = 0; i < this.startKeys.length; i++){
+      if (Bytes.compareTo(region, this.startKeys[i]) == 0 ){
+        if (i >= numPartitions-1){
+          // cover the case where we have fewer reducers than regions.
+          return (Integer.toString(i).hashCode()
+              & Integer.MAX_VALUE) % numPartitions;
+        }
+        return i;
+      }
+    }
+    // if the above fails to find a matching start key, we still need to return something
+    return 0;
+  }
+
+  /**
+   * Returns the current configuration.
+   *
+   * @return The current configuration.
+   * @see org.apache.hadoop.conf.Configurable#getConf()
+   */
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  /**
+   * Sets the configuration. This is used to determine the start keys for the
+   * given table.
+   *
+   * @param configuration  The configuration to set.
+   * @see org.apache.hadoop.conf.Configurable#setConf(
+   *   org.apache.hadoop.conf.Configuration)
+   */
+  @Override
+  public void setConf(Configuration configuration) {
+    this.conf = configuration;
+    try {
+      HBaseConfiguration.addHbaseResources(conf);
+      this.table = new HTable(this.conf,
+        configuration.get(TableOutputFormat.OUTPUT_TABLE));
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+    try {
+      this.startKeys = this.table.getStartKeys();
+    } catch (IOException e) {
+      LOG.error(e);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.java
new file mode 100644
index 0000000..fd5d8fe
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableMapper.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapreduce.Job;
+
+/**
+ * Pass the given key and record as-is to the reduce phase.
+ */
+public class IdentityTableMapper
+extends TableMapper<ImmutableBytesWritable, Result> {
+
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up
+   * the job.
+   *
+   * @param table  The table name.
+   * @param scan  The scan with the columns to scan.
+   * @param mapper  The mapper class.
+   * @param job  The job configuration.
+   * @throws IOException When setting up the job fails.
+   */
+  @SuppressWarnings("unchecked")
+  public static void initJob(String table, Scan scan,
+    Class<? extends TableMapper> mapper, Job job) throws IOException {
+    TableMapReduceUtil.initTableMapperJob(table, scan, mapper,
+      ImmutableBytesWritable.class, Result.class, job);
+  }
+
+  /**
+   * Pass the key, value to reduce.
+   *
+   * @param key  The current key.
+   * @param value  The current value.
+   * @param context  The current context.
+   * @throws IOException When writing the record fails.
+   * @throws InterruptedException When the job is aborted.
+   */
+  public void map(ImmutableBytesWritable key, Result value, Context context)
+  throws IOException, InterruptedException {
+    context.write(key, value);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java
new file mode 100644
index 0000000..25f466e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.OutputFormat;
+
+/**
+ * Convenience class that simply writes all values (which must be
+ * {@link org.apache.hadoop.hbase.client.Put Put} or
+ * {@link org.apache.hadoop.hbase.client.Delete Delete} instances)
+ * passed to it out to the configured HBase table. This works in combination
+ * with {@link TableOutputFormat} which actually does the writing to HBase.<p>
+ *
+ * Keys are passed along but ignored in TableOutputFormat.  However, they can
+ * be used to control how your values will be divided up amongst the specified
+ * number of reducers. <p>
+ *
+ * You can also use the {@link TableMapReduceUtil} class to set up the two
+ * classes in one step:
+ * <blockquote><code>
+ * TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);
+ * </code></blockquote>
+ * This will also set the proper {@link TableOutputFormat} which is given the
+ * <code>table</code> parameter. The
+ * {@link org.apache.hadoop.hbase.client.Put Put} or
+ * {@link org.apache.hadoop.hbase.client.Delete Delete} define the
+ * row and columns implicitly.
+ */
+public class IdentityTableReducer
+extends TableReducer<Writable, Writable, Writable> {
+
+  @SuppressWarnings("unused")
+  private static final Log LOG = LogFactory.getLog(IdentityTableReducer.class);
+
+  /**
+   * Writes each given record, consisting of the row key and the given values,
+   * to the configured {@link OutputFormat}. It emits the row key and each
+   * {@link org.apache.hadoop.hbase.client.Put Put} or
+   * {@link org.apache.hadoop.hbase.client.Delete Delete} as a separate pair.
+   *
+   * @param key  The current row key.
+   * @param values  The {@link org.apache.hadoop.hbase.client.Put Put} or
+   *   {@link org.apache.hadoop.hbase.client.Delete Delete} list for the given
+   *   row.
+   * @param context  The context of the reduce.
+   * @throws IOException When writing the record fails.
+   * @throws InterruptedException When the job gets interrupted.
+   */
+  @Override
+  public void reduce(Writable key, Iterable<Writable> values, Context context)
+  throws IOException, InterruptedException {
+    for(Writable putOrDelete : values) {
+      context.write(key, putOrDelete);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java
new file mode 100644
index 0000000..653de67
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java
@@ -0,0 +1,126 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
+import org.apache.hadoop.util.GenericOptionsParser;
+
+/**
+ * Import data written by {@link Export}.
+ */
+public class Import {
+  final static String NAME = "import";
+
+  /**
+   * Mapper that converts each exported {@link Result} back into a {@link Put} for the target table.
+   */
+  static class Importer
+  extends TableMapper<ImmutableBytesWritable, Put> {
+    /**
+     * @param row  The current table row key.
+     * @param value  The columns.
+     * @param context  The current context.
+     * @throws IOException When something is broken with the data.
+     * @see org.apache.hadoop.mapreduce.Mapper#map(KEYIN, VALUEIN,
+     *   org.apache.hadoop.mapreduce.Mapper.Context)
+     */
+    @Override
+    public void map(ImmutableBytesWritable row, Result value,
+      Context context)
+    throws IOException {
+      try {
+        context.write(row, resultToPut(row, value));
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+    }
+
+    private static Put resultToPut(ImmutableBytesWritable key, Result result)
+    throws IOException {
+      Put put = new Put(key.get());
+      for (KeyValue kv : result.raw()) {
+        put.add(kv);
+      }
+      return put;
+    }
+  }
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param conf  The current configuration.
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   */
+  public static Job createSubmittableJob(Configuration conf, String[] args)
+  throws IOException {
+    String tableName = args[0];
+    Path inputDir = new Path(args[1]);
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJarByClass(Importer.class);
+    FileInputFormat.setInputPaths(job, inputDir);
+    job.setInputFormatClass(SequenceFileInputFormat.class);
+    job.setMapperClass(Importer.class);
+    // No reducers.  Just write straight to table.  Call initTableReducerJob
+    // because it sets up the TableOutputFormat.
+    TableMapReduceUtil.initTableReducerJob(tableName, null, job);
+    job.setNumReduceTasks(0);
+    return job;
+  }
+
+  /*
+   * @param errorMsg Error message.  Can be null.
+   */
+  private static void usage(final String errorMsg) {
+    if (errorMsg != null && errorMsg.length() > 0) {
+      System.err.println("ERROR: " + errorMsg);
+    }
+    System.err.println("Usage: Import <tablename> <inputdir>");
+  }
+
+  /**
+   * Main entry point.
+   *
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
+    if (otherArgs.length < 2) {
+      usage("Wrong number of arguments: " + otherArgs.length);
+      System.exit(-1);
+    }
+    Job job = createSubmittableJob(conf, otherArgs);
+    System.exit(job.waitForCompletion(true) ? 0 : 1);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
new file mode 100644
index 0000000..e28e06f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java
@@ -0,0 +1,374 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.BadTsvLineException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Counter;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.util.GenericOptionsParser;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Splitter;
+import com.google.common.collect.Lists;
+
+/**
+ * Tool to import data from a TSV file.
+ *
+ * This tool is rather simplistic - it doesn't do any quoting or
+ * escaping, but is useful for many data loads.
+ *
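+ * A minimal sketch of an invocation, mirroring the {@code usage(String)} text
+ * in this class (the column names {@code HBASE_ROW_KEY,d:c1,d:c2} and the
+ * launcher script are illustrative assumptions):
+ * <pre>
+ * $ bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
+ *     -Dimporttsv.columns=HBASE_ROW_KEY,d:c1,d:c2 &lt;tablename&gt; &lt;inputdir&gt;
+ * </pre>
+ *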
+ * @see ImportTsv#usage(String)
+ */
+public class ImportTsv {
+  final static String NAME = "importtsv";
+
+  final static String SKIP_LINES_CONF_KEY = "importtsv.skip.bad.lines";
+  final static String BULK_OUTPUT_CONF_KEY = "importtsv.bulk.output";
+  final static String COLUMNS_CONF_KEY = "importtsv.columns";
+  final static String SEPARATOR_CONF_KEY = "importtsv.separator";
+  final static String DEFAULT_SEPARATOR = "\t";
+
+  static class TsvParser {
+    /**
+     * Column families and qualifiers mapped to the TSV columns
+     */
+    private final byte[][] families;
+    private final byte[][] qualifiers;
+
+    private final byte separatorByte;
+
+    // Stays -1 until the HBASE_ROW_KEY column is seen, so callers can detect
+    // a columns specification that has no row key column.
+    private int rowKeyColumnIndex = -1;
+
+    public static String ROWKEY_COLUMN_SPEC = "HBASE_ROW_KEY";
+
+    /**
+     * @param columnsSpecification the list of columns to parse out, comma separated.
+     * The row key should be the special token TsvParser.ROWKEY_COLUMN_SPEC
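+     * For example, {@code "HBASE_ROW_KEY,info:name,info:age"} (illustrative
+     * names only) maps column 0 to the row key and columns 1 and 2 to the
+     * {@code info:name} and {@code info:age} family:qualifier pairs.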
+     */
+    public TsvParser(String columnsSpecification, String separatorStr) {
+      // Configure separator
+      byte[] separator = Bytes.toBytes(separatorStr);
+      Preconditions.checkArgument(separator.length == 1,
+        "TsvParser only supports single-byte separators");
+      separatorByte = separator[0];
+
+      // Configure columns
+      ArrayList<String> columnStrings = Lists.newArrayList(
+        Splitter.on(',').trimResults().split(columnsSpecification));
+      
+      families = new byte[columnStrings.size()][];
+      qualifiers = new byte[columnStrings.size()][];
+
+      for (int i = 0; i < columnStrings.size(); i++) {
+        String str = columnStrings.get(i);
+        if (ROWKEY_COLUMN_SPEC.equals(str)) {
+          rowKeyColumnIndex = i;
+          continue;
+        }
+        String[] parts = str.split(":", 2);
+        if (parts.length == 1) {
+          families[i] = str.getBytes();
+          qualifiers[i] = HConstants.EMPTY_BYTE_ARRAY;
+        } else {
+          families[i] = parts[0].getBytes();
+          qualifiers[i] = parts[1].getBytes();
+        }
+      }
+    }
+    
+    public int getRowKeyColumnIndex() {
+      return rowKeyColumnIndex;
+    }
+    public byte[] getFamily(int idx) {
+      return families[idx];
+    }
+    public byte[] getQualifier(int idx) {
+      return qualifiers[idx];
+    }
+    
+    public ParsedLine parse(byte[] lineBytes, int length)
+    throws BadTsvLineException {
+      // Enumerate separator offsets
+      ArrayList<Integer> tabOffsets = new ArrayList<Integer>(families.length);
+      for (int i = 0; i < length; i++) {
+        if (lineBytes[i] == separatorByte) {
+          tabOffsets.add(i);
+        }
+      }
+      if (tabOffsets.isEmpty()) {
+        throw new BadTsvLineException("No delimiter");
+      }
+
+      tabOffsets.add(length);
+
+      if (tabOffsets.size() > families.length) {
+        throw new BadTsvLineException("Excessive columns");
+      } else if (tabOffsets.size() <= getRowKeyColumnIndex()) {
+        throw new BadTsvLineException("No row key");
+      }
+      return new ParsedLine(tabOffsets, lineBytes);
+    }
+    
+    class ParsedLine {
+      private final ArrayList<Integer> tabOffsets;
+      private byte[] lineBytes;
+      
+      ParsedLine(ArrayList<Integer> tabOffsets, byte[] lineBytes) {
+        this.tabOffsets = tabOffsets;
+        this.lineBytes = lineBytes;
+      }
+      
+      public int getRowKeyOffset() {
+        return getColumnOffset(rowKeyColumnIndex);
+      }
+      public int getRowKeyLength() {
+        return getColumnLength(rowKeyColumnIndex);
+      }
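+      // Column idx starts one byte past the (idx - 1)th separator; column 0
+      // starts at offset 0.  Its length runs up to, but not including, the
+      // idx-th separator recorded in tabOffsets.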
+      public int getColumnOffset(int idx) {
+        if (idx > 0)
+          return tabOffsets.get(idx - 1) + 1;
+        else
+          return 0;
+      }      
+      public int getColumnLength(int idx) {
+        return tabOffsets.get(idx) - getColumnOffset(idx);
+      }
+      public int getColumnCount() {
+        return tabOffsets.size();
+      }
+      public byte[] getLineBytes() {
+        return lineBytes;
+      }
+    }
+    
+    public static class BadTsvLineException extends Exception {
+      public BadTsvLineException(String err) {
+        super(err);
+      }
+      private static final long serialVersionUID = 1L;
+    }
+  }
+  
+  /**
+   * Mapper that parses each line of TSV input and emits one {@link Put} per line.
+   */
+  static class TsvImporter
+  extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put>
+  {
+    
+    /** Timestamp for all inserted rows */
+    private long ts;
+
+    /** Should skip bad lines */
+    private boolean skipBadLines;
+    private Counter badLineCount;
+
+    private TsvParser parser;
+
+    @Override
+    protected void setup(Context context) {
+      Configuration conf = context.getConfiguration();
+      parser = new TsvParser(conf.get(COLUMNS_CONF_KEY),
+                             conf.get(SEPARATOR_CONF_KEY, DEFAULT_SEPARATOR));
+      if (parser.getRowKeyColumnIndex() == -1) {
+        throw new RuntimeException("No row key column specified");
+      }
+      ts = System.currentTimeMillis();
+
+      skipBadLines = context.getConfiguration().getBoolean(
+        SKIP_LINES_CONF_KEY, true);
+      badLineCount = context.getCounter("ImportTsv", "Bad Lines");
+    }
+
+    /**
+     * Convert a line of TSV text into an HBase table row.
+     */
+    @Override
+    public void map(LongWritable offset, Text value,
+      Context context)
+    throws IOException {
+      byte[] lineBytes = value.getBytes();
+
+      try {
+        TsvParser.ParsedLine parsed = parser.parse(
+            lineBytes, value.getLength());
+        ImmutableBytesWritable rowKey =
+          new ImmutableBytesWritable(lineBytes,
+              parsed.getRowKeyOffset(),
+              parsed.getRowKeyLength());
+
+        Put put = new Put(rowKey.copyBytes());
+        for (int i = 0; i < parsed.getColumnCount(); i++) {
+          if (i == parser.getRowKeyColumnIndex()) continue;
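+          // Build a KeyValue that reuses lineBytes for both the row key and
+          // the cell value, pairing it with the family/qualifier configured
+          // for this column position.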
+          KeyValue kv = new KeyValue(
+              lineBytes, parsed.getRowKeyOffset(), parsed.getRowKeyLength(),
+              parser.getFamily(i), 0, parser.getFamily(i).length,
+              parser.getQualifier(i), 0, parser.getQualifier(i).length,
+              ts,
+              KeyValue.Type.Put,
+              lineBytes, parsed.getColumnOffset(i), parsed.getColumnLength(i));
+          put.add(kv);
+        }
+        context.write(rowKey, put);
+      } catch (BadTsvLineException badLine) {
+        if (skipBadLines) {
+          System.err.println(
+              "Bad line at offset: " + offset.get() + ":\n" +
+              badLine.getMessage());
+          badLineCount.increment(1);
+          return;
+        } else {
+          throw new IOException(badLine);
+        }
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+    }
+  }
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param conf  The current configuration.
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   */
+  public static Job createSubmittableJob(Configuration conf, String[] args)
+  throws IOException {
+    String tableName = args[0];
+    Path inputDir = new Path(args[1]);
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJarByClass(TsvImporter.class);
+    FileInputFormat.setInputPaths(job, inputDir);
+    job.setInputFormatClass(TextInputFormat.class);
+    job.setMapperClass(TsvImporter.class);
+
+    String hfileOutPath = conf.get(BULK_OUTPUT_CONF_KEY);
+    if (hfileOutPath != null) {
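+      // Bulk-load mode: sort the Puts and write HFiles for a later
+      // completebulkload pass instead of writing through TableOutputFormat.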
+      HTable table = new HTable(conf, tableName);
+      job.setReducerClass(PutSortReducer.class);
+      Path outputDir = new Path(hfileOutPath);
+      FileOutputFormat.setOutputPath(job, outputDir);
+      job.setMapOutputKeyClass(ImmutableBytesWritable.class);
+      job.setMapOutputValueClass(Put.class);
+      HFileOutputFormat.configureIncrementalLoad(job, table);
+    } else {
+      // No reducers.  Just write straight to table.  Call initTableReducerJob
+      // to set up the TableOutputFormat.
+      TableMapReduceUtil.initTableReducerJob(tableName, null, job);
+      job.setNumReduceTasks(0);
+    }
+    
+    TableMapReduceUtil.addDependencyJars(job);
+    TableMapReduceUtil.addDependencyJars(job.getConfiguration(), 
+        com.google.common.base.Function.class /* Guava used by TsvParser */);
+    return job;
+  }
+
+  /*
+   * @param errorMsg Error message.  Can be null.
+   */
+  private static void usage(final String errorMsg) {
+    if (errorMsg != null && errorMsg.length() > 0) {
+      System.err.println("ERROR: " + errorMsg);
+    }
+    String usage = 
+      "Usage: " + NAME + " -Dimporttsv.columns=a,b,c <tablename> <inputdir>\n" +
+      "\n" +
+      "Imports the given input directory of TSV data into the specified table.\n" +
+      "\n" +
+      "The column names of the TSV data must be specified using the -Dimporttsv.columns\n" +
+      "option. This option takes the form of comma-separated column names, where each\n" +
+      "column name is either a simple column family, or a columnfamily:qualifier. The special\n" +
+      "column name HBASE_ROW_KEY is used to designate that this column should be used\n" +
+      "as the row key for each imported record. You must specify exactly one column\n" +
+      "to be the row key.\n" +
+      "\n" +
+      "In order to prepare data for a bulk data load, pass the option:\n" +
+      "  -D" + BULK_OUTPUT_CONF_KEY + "=/path/for/output\n" +
+      "\n" +
+      "Other options that may be specified with -D include:\n" +
+      "  -D" + SKIP_LINES_CONF_KEY + "=false - fail if encountering an invalid line\n" +
+      "  '-D" + SEPARATOR_CONF_KEY + "=|' - eg separate on pipes instead of tabs";
+    System.err.println(usage);
+  }
+
+  /**
+   * Main entry point.
+   *
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
+    if (otherArgs.length < 2) {
+      usage("Wrong number of arguments: " + otherArgs.length);
+      System.exit(-1);
+    }
+
+    // Make sure columns are specified
+    String columns[] = conf.getStrings(COLUMNS_CONF_KEY);
+    if (columns == null) {
+      usage("No columns specified. Please specify with -D" +
+            COLUMNS_CONF_KEY+"=...");
+      System.exit(-1);
+    }
+
+    // Make sure they specify exactly one column as the row key
+    int rowkeysFound=0;
+    for (String col : columns) {
+      if (col.equals(TsvParser.ROWKEY_COLUMN_SPEC)) rowkeysFound++;
+    }
+    if (rowkeysFound != 1) {
+      usage("Must specify exactly one column as " + TsvParser.ROWKEY_COLUMN_SPEC);
+      System.exit(-1);
+    }
+
+    // Make sure at least one column besides the row key is specified
+    if (columns.length < 2) {
+      usage("One or more columns in addition to the row key are required");
+      System.exit(-1);
+    }
+
+    Job job = createSubmittableJob(conf, otherArgs);
+    System.exit(job.waitForCompletion(true) ? 0 : 1);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.java
new file mode 100644
index 0000000..1f1567e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapreduce.Reducer;
+
+/**
+ * Emits sorted KeyValues.
+ * Reads in all KeyValues from passed Iterator, sorts them, then emits
+ * KeyValues in sorted order.  If lots of columns per row, it will use lots of
+ * memory sorting.
+ * @see HFileOutputFormat
+ */
+public class KeyValueSortReducer extends Reducer<ImmutableBytesWritable, KeyValue, ImmutableBytesWritable, KeyValue> {
+  protected void reduce(ImmutableBytesWritable row, java.lang.Iterable<KeyValue> kvs,
+      org.apache.hadoop.mapreduce.Reducer<ImmutableBytesWritable, KeyValue, ImmutableBytesWritable, KeyValue>.Context context)
+  throws java.io.IOException, InterruptedException {
+    TreeSet<KeyValue> map = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
+    for (KeyValue kv: kvs) {
+      map.add(kv.clone());
+    }
+    context.setStatus("Read " + map.getClass());
+    int index = 0;
+    for (KeyValue kv: map) {
+      context.write(row, kv);
+      index++;
+      if (index % 100 == 0) context.setStatus("Wrote " + index);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
new file mode 100644
index 0000000..e051c58
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
@@ -0,0 +1,321 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.Deque;
+import java.util.LinkedList;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.ServerCallable;
+import org.apache.hadoop.hbase.io.HalfStoreFileReader;
+import org.apache.hadoop.hbase.io.Reference;
+import org.apache.hadoop.hbase.io.Reference.Range;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.io.hfile.Compression.Algorithm;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFile.BloomType;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Tool to load the output of HFileOutputFormat into an existing table.
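+ * <p>
+ * A sketch of a typical command line, mirroring {@code usage()} (the launcher
+ * script is an assumption, not fixed by this class):
+ * <pre>
+ * $ bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles \
+ *     /path/to/hfileoutputformat-output tablename
+ * </pre>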
+ * @see #usage()
+ */
+public class LoadIncrementalHFiles extends Configured implements Tool {
+
+  static Log LOG = LogFactory.getLog(LoadIncrementalHFiles.class);
+
+  public static String NAME = "completebulkload";
+
+  public LoadIncrementalHFiles(Configuration conf) {
+    super(conf);
+  }
+
+  public LoadIncrementalHFiles() {
+    super();
+  }
+
+
+  private void usage() {
+    System.err.println("usage: " + NAME +
+        " /path/to/hfileoutputformat-output " +
+        "tablename");
+  }
+
+  /**
+   * Represents an HFile waiting to be loaded. A queue is used
+   * in this class in order to support the case where a region has
+   * split during the process of the load. When this happens,
+   * the HFile is split into two physical parts across the new
+   * region boundary, and each part is added back into the queue.
+   * The import process finishes when the queue is empty.
+   */
+  private static class LoadQueueItem {
+    final byte[] family;
+    final Path hfilePath;
+
+    public LoadQueueItem(byte[] family, Path hfilePath) {
+      this.family = family;
+      this.hfilePath = hfilePath;
+    }
+  }
+
+  /**
+   * Walk the given directory for all HFiles, and return a Queue
+   * containing all such files.
+   */
+  private Deque<LoadQueueItem> discoverLoadQueue(Path hfofDir)
+  throws IOException {
+    FileSystem fs = hfofDir.getFileSystem(getConf());
+
+    if (!fs.exists(hfofDir)) {
+      throw new FileNotFoundException("HFileOutputFormat dir " +
+          hfofDir + " not found");
+    }
+
+    FileStatus[] familyDirStatuses = fs.listStatus(hfofDir);
+    if (familyDirStatuses == null) {
+      throw new FileNotFoundException("No families found in " + hfofDir);
+    }
+
+    Deque<LoadQueueItem> ret = new LinkedList<LoadQueueItem>();
+    for (FileStatus stat : familyDirStatuses) {
+      if (!stat.isDir()) {
+        LOG.warn("Skipping non-directory " + stat.getPath());
+        continue;
+      }
+      Path familyDir = stat.getPath();
+      // Skip _logs, etc
+      if (familyDir.getName().startsWith("_")) continue;
+      byte[] family = familyDir.getName().getBytes();
+      Path[] hfiles = FileUtil.stat2Paths(fs.listStatus(familyDir));
+      for (Path hfile : hfiles) {
+        if (hfile.getName().startsWith("_")) continue;
+        ret.add(new LoadQueueItem(family, hfile));
+      }
+    }
+    return ret;
+  }
+
+  /**
+   * Perform a bulk load of the given directory into the given
+   * pre-existing table.
+   * @param hfofDir the directory that was provided as the output path
+   * of a job using HFileOutputFormat
+   * @param table the table to load into
+   * @throws TableNotFoundException if table does not yet exist
+   */
+  public void doBulkLoad(Path hfofDir, HTable table)
+    throws TableNotFoundException, IOException
+  {
+    HConnection conn = table.getConnection();
+
+    if (!conn.isTableAvailable(table.getTableName())) {
+      throw new TableNotFoundException("Table " +
+          Bytes.toStringBinary(table.getTableName()) +
+          "is not currently available.");
+    }
+
+    Deque<LoadQueueItem> queue = null;
+    try {
+      queue = discoverLoadQueue(hfofDir);
+      while (!queue.isEmpty()) {
+        LoadQueueItem item = queue.remove();
+        tryLoad(item, conn, table.getTableName(), queue);
+      }
+    } finally {
+      if (queue != null && !queue.isEmpty()) {
+        StringBuilder err = new StringBuilder();
+        err.append("-------------------------------------------------\n");
+        err.append("Bulk load aborted with some files not yet loaded:\n");
+        err.append("-------------------------------------------------\n");
+        for (LoadQueueItem q : queue) {
+          err.append("  ").append(q.hfilePath).append('\n');
+        }
+        LOG.error(err);
+      }
+    }
+  }
+
+  /**
+   * Attempt to load the given load queue item into its target region server.
+   * If the hfile's key range no longer fits inside a single region, physically
+   * splits the hfile at the region boundary so that the new bottom half will
+   * fit, and adds both resulting hfiles back into the load queue.
+   */
+  private void tryLoad(final LoadQueueItem item,
+      HConnection conn, final byte[] table,
+      final Deque<LoadQueueItem> queue)
+  throws IOException {
+    final Path hfilePath = item.hfilePath;
+    final FileSystem fs = hfilePath.getFileSystem(getConf());
+    HFile.Reader hfr = new HFile.Reader(fs, hfilePath, null, false);
+    final byte[] first, last;
+    try {
+      hfr.loadFileInfo();
+      first = hfr.getFirstRowKey();
+      last = hfr.getLastRowKey();
+    }  finally {
+      hfr.close();
+    }
+
+    LOG.info("Trying to load hfile=" + hfilePath +
+        " first=" + Bytes.toStringBinary(first) +
+        " last="  + Bytes.toStringBinary(last));
+    if (first == null || last == null) {
+      assert first == null && last == null;
+      LOG.info("hfile " + hfilePath + " has no entries, skipping");
+      return;
+    }
+
+    // We use a '_' prefix which is ignored when walking directory trees
+    // above.
+    final Path tmpDir = new Path(item.hfilePath.getParent(), "_tmp");
+
+    conn.getRegionServerWithRetries(
+      new ServerCallable<Void>(conn, table, first) {
+        @Override
+        public Void call() throws Exception {
+          LOG.debug("Going to connect to server " + location +
+              "for row " + Bytes.toStringBinary(row));
+          HRegionInfo hri = location.getRegionInfo();
+          if (!hri.containsRange(first, last)) {
+            LOG.info("HFile at " + hfilePath + " no longer fits inside a single " +
+                "region. Splitting...");
+
+            HColumnDescriptor familyDesc = hri.getTableDesc().getFamily(item.family);
+            Path botOut = new Path(tmpDir, hri.getEncodedName() + ".bottom");
+            Path topOut = new Path(tmpDir, hri.getEncodedName() + ".top");
+            splitStoreFile(getConf(), hfilePath, familyDesc, hri.getEndKey(),
+                botOut, topOut);
+
+            // Add these back at the *front* of the queue, so there's a lower
+            // chance that the region will just split again before we get there.
+            queue.addFirst(new LoadQueueItem(item.family, botOut));
+            queue.addFirst(new LoadQueueItem(item.family, topOut));
+            LOG.info("Successfully split into new HFiles " + botOut + " and " + topOut);
+            return null;
+          }
+
+          byte[] regionName = location.getRegionInfo().getRegionName();
+          server.bulkLoadHFile(hfilePath.toString(), regionName, item.family);
+          return null;
+        }
+      });
+  }
+
+  /**
+   * Split a storefile into a top and bottom half, maintaining
+   * the metadata, recreating bloom filters, etc.
+   */
+  static void splitStoreFile(
+      Configuration conf, Path inFile,
+      HColumnDescriptor familyDesc, byte[] splitKey,
+      Path bottomOut, Path topOut) throws IOException
+  {
+    // Create top and bottom references at the split key; copyHFileHalf opens
+    // the reader (no block cache) and writes each half out.
+    Reference topReference = new Reference(splitKey, Range.top);
+    Reference bottomReference = new Reference(splitKey, Range.bottom);
+
+    copyHFileHalf(conf, inFile, topOut, topReference, familyDesc);
+    copyHFileHalf(conf, inFile, bottomOut, bottomReference, familyDesc);
+  }
+
+  /**
+   * Copy half of an HFile into a new HFile.
+   */
+  private static void copyHFileHalf(
+      Configuration conf, Path inFile, Path outFile, Reference reference,
+      HColumnDescriptor familyDescriptor)
+  throws IOException {
+    FileSystem fs = inFile.getFileSystem(conf);
+    HalfStoreFileReader halfReader = null;
+    StoreFile.Writer halfWriter = null;
+    try {
+      halfReader = new HalfStoreFileReader(fs, inFile, null, reference);
+      Map<byte[], byte[]> fileInfo = halfReader.loadFileInfo();
+
+      int blocksize = familyDescriptor.getBlocksize();
+      Algorithm compression = familyDescriptor.getCompression();
+      BloomType bloomFilterType = familyDescriptor.getBloomFilterType();
+
+      halfWriter = new StoreFile.Writer(
+          fs, outFile, blocksize, compression, conf, KeyValue.COMPARATOR,
+          bloomFilterType, 0);
+      HFileScanner scanner = halfReader.getScanner(false, false);
+      scanner.seekTo();
+      do {
+        KeyValue kv = scanner.getKeyValue();
+        halfWriter.append(kv);
+      } while (scanner.next());
+
+      for (Map.Entry<byte[],byte[]> entry : fileInfo.entrySet()) {
+        if (shouldCopyHFileMetaKey(entry.getKey())) {
+          halfWriter.appendFileInfo(entry.getKey(), entry.getValue());
+        }
+      }
+    } finally {
+      if (halfWriter != null) halfWriter.close();
+      if (halfReader != null) halfReader.close();
+    }
+  }
+
+  private static boolean shouldCopyHFileMetaKey(byte[] key) {
+    return !HFile.isReservedFileInfoKey(key);
+  }
+
+
+  @Override
+  public int run(String[] args) throws Exception {
+    if (args.length != 2) {
+      usage();
+      return -1;
+    }
+
+    Path hfofDir = new Path(args[0]);
+    HTable table = new HTable(args[1]);
+
+    doBulkLoad(hfofDir, table);
+    return 0;
+  }
+
+  public static void main(String[] args) throws Exception {
+    ToolRunner.run(new LoadIncrementalHFiles(), args);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
new file mode 100644
index 0000000..81d2746
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java
@@ -0,0 +1,163 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.OutputFormat;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * <p>
+ * Hadoop output format that writes to one or more HBase tables. The key is
+ * taken to be the table name while the output value <em>must</em> be either a
+ * {@link Put} or a {@link Delete} instance. All tables must already exist, and
+ * all Puts and Deletes must reference only valid column families.
+ * </p>
+ *
+ * <p>
+ * Write-ahead logging (HLog) for Puts can be disabled by setting
+ * {@link #WAL_PROPERTY} to {@link #WAL_OFF}. Default value is {@link #WAL_ON}.
+ * Note that disabling write-ahead logging is only appropriate for jobs where
+ * loss of data due to region server failure can be tolerated (for example,
+ * because it is easy to rerun a bulk import).
+ * </p>
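+ *
+ * <p>
+ * A minimal sketch of use from inside a map or reduce task (the table, row and
+ * column names are illustrative only):
+ * <pre>
+ *   ImmutableBytesWritable tableName =
+ *       new ImmutableBytesWritable(Bytes.toBytes("table1"));
+ *   Put put = new Put(Bytes.toBytes("row1"));
+ *   put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
+ *   context.write(tableName, put);
+ * </pre>
+ * </p>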
+ */
+public class MultiTableOutputFormat extends OutputFormat<ImmutableBytesWritable, Writable> {
+  /** Set this to {@link #WAL_OFF} to turn off write-ahead logging (HLog) */
+  public static final String WAL_PROPERTY = "hbase.mapreduce.multitableoutputformat.wal";
+  /** Property value to use write-ahead logging */
+  public static final boolean WAL_ON = true;
+  /** Property value to disable write-ahead logging */
+  public static final boolean WAL_OFF = false;
+  /**
+   * Record writer for outputting to multiple HTables.
+   */
+  protected static class MultiTableRecordWriter extends
+      RecordWriter<ImmutableBytesWritable, Writable> {
+    private static final Log LOG = LogFactory.getLog(MultiTableRecordWriter.class);
+    Map<ImmutableBytesWritable, HTable> tables;
+    Configuration conf;
+    boolean useWriteAheadLogging;
+
+    /**
+     * @param conf
+     *          HBaseConfiguration to use
+     * @param useWriteAheadLogging
+     *          whether to use write ahead logging. This can be turned off (
+     *          <tt>false</tt>) to improve performance when bulk loading data.
+     */
+    public MultiTableRecordWriter(Configuration conf,
+        boolean useWriteAheadLogging) {
+      LOG.debug("Created new MultiTableRecordReader with WAL "
+          + (useWriteAheadLogging ? "on" : "off"));
+      this.tables = new HashMap<ImmutableBytesWritable, HTable>();
+      this.conf = conf;
+      this.useWriteAheadLogging = useWriteAheadLogging;
+    }
+
+    /**
+     * @param tableName
+     *          the name of the table, as a string
+     * @return the named table
+     * @throws IOException
+     *           if there is a problem opening a table
+     */
+    HTable getTable(ImmutableBytesWritable tableName) throws IOException {
+      if (!tables.containsKey(tableName)) {
+        LOG.debug("Opening HTable \"" + Bytes.toString(tableName.get())+ "\" for writing");
+        HTable table = new HTable(conf, tableName.get());
+        table.setAutoFlush(false);
+        tables.put(tableName, table);
+      }
+      return tables.get(tableName);
+    }
+
+    @Override
+    public void close(TaskAttemptContext context) throws IOException {
+      for (HTable table : tables.values()) {
+        table.flushCommits();
+      }
+    }
+
+    /**
+     * Writes an action (Put or Delete) to the specified table.
+     *
+     * @param tableName
+     *          the table being updated.
+     * @param action
+     *          the update, either a put or a delete.
+     * @throws IllegalArgumentException
+     *          if the action is not a put or a delete.
+     */
+    @Override
+    public void write(ImmutableBytesWritable tableName, Writable action) throws IOException {
+      HTable table = getTable(tableName);
+      // The actions are not immutable, so we defensively copy them
+      if (action instanceof Put) {
+        Put put = new Put((Put) action);
+        put.setWriteToWAL(useWriteAheadLogging);
+        table.put(put);
+      } else if (action instanceof Delete) {
+        Delete delete = new Delete((Delete) action);
+        table.delete(delete);
+      } else
+        throw new IllegalArgumentException(
+            "action must be either Delete or Put");
+    }
+  }
+
+  @Override
+  public void checkOutputSpecs(JobContext context) throws IOException,
+      InterruptedException {
+    // we can't know ahead of time if it's going to blow up when the user
+    // passes a table name that doesn't exist, so nothing useful here.
+  }
+
+  @Override
+  public OutputCommitter getOutputCommitter(TaskAttemptContext context)
+      throws IOException, InterruptedException {
+    return new TableOutputCommitter();
+  }
+
+  @Override
+  public RecordWriter<ImmutableBytesWritable, Writable> getRecordWriter(TaskAttemptContext context)
+      throws IOException, InterruptedException {
+    Configuration conf = context.getConfiguration();
+    return new MultiTableRecordWriter(HBaseConfiguration.create(conf),
+        conf.getBoolean(WAL_PROPERTY, WAL_ON));
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java
new file mode 100644
index 0000000..5fb3e83
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/PutSortReducer.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.util.List;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapreduce.Reducer;
+
+/**
+ * Emits sorted Puts.
+ * Reads in all Puts from passed Iterator, sorts them, then emits
+ * Puts in sorted order.  If lots of columns per row, it will use lots of
+ * memory sorting.
+ * @see HFileOutputFormat
+ * @see KeyValueSortReducer
+ */
+public class PutSortReducer extends
+    Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, KeyValue> {
+  
+  @Override
+  protected void reduce(
+      ImmutableBytesWritable row,
+      java.lang.Iterable<Put> puts,
+      Reducer<ImmutableBytesWritable, Put,
+              ImmutableBytesWritable, KeyValue>.Context context)
+      throws java.io.IOException, InterruptedException
+  {
+    TreeSet<KeyValue> map = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
+  
+    for (Put p : puts) {
+      for (List<KeyValue> kvs : p.getFamilyMap().values()) {
+        for (KeyValue kv : kvs) {
+          map.add(kv.clone());
+        }
+      }
+    }
+    context.setStatus("Read " + map.getClass());
+    int index = 0;
+    for (KeyValue kv : map) {
+      context.write(row, kv);
+      index++;
+      if (index % 100 == 0)
+        context.setStatus("Wrote " + index);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
new file mode 100644
index 0000000..9141fb6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
@@ -0,0 +1,136 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.hadoop.util.GenericOptionsParser;
+
+/**
+ * A map-only job that counts the rows of a table. The map increments a
+ * counter for each input row that has at least one column with content.
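+ * <p>
+ * A sketch of an invocation, mirroring the usage text printed by {@code main}
+ * (the launcher script is an assumption):
+ * <pre>
+ * $ bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter &lt;tablename&gt; [&lt;column1&gt; &lt;column2&gt;...]
+ * </pre>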
+ */
+public class RowCounter {
+
+  /** Name of this 'program'. */
+  static final String NAME = "rowcounter";
+
+  /**
+   * Mapper that runs the count.
+   */
+  static class RowCounterMapper
+  extends TableMapper<ImmutableBytesWritable, Result> {
+
+    /** Counter enumeration to count the actual rows. */
+    public static enum Counters {ROWS}
+
+    /**
+     * Maps the data.
+     *
+     * @param row  The current table row key.
+     * @param values  The columns.
+     * @param context  The current context.
+     * @throws IOException When something is broken with the data.
+     * @see org.apache.hadoop.mapreduce.Mapper#map(KEYIN, VALUEIN,
+     *   org.apache.hadoop.mapreduce.Mapper.Context)
+     */
+    @Override
+    public void map(ImmutableBytesWritable row, Result values,
+      Context context)
+    throws IOException {
+      for (KeyValue value: values.list()) {
+        if (value.getValue().length > 0) {
+          context.getCounter(Counters.ROWS).increment(1);
+          break;
+        }
+      }
+    }
+  }
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param conf  The current configuration.
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   */
+  public static Job createSubmittableJob(Configuration conf, String[] args)
+  throws IOException {
+    String tableName = args[0];
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJarByClass(RowCounter.class);
+    // Columns are space delimited
+    StringBuilder sb = new StringBuilder();
+    final int columnoffset = 1;
+    for (int i = columnoffset; i < args.length; i++) {
+      if (i > columnoffset) {
+        sb.append(" ");
+      }
+      sb.append(args[i]);
+    }
+    Scan scan = new Scan();
+    scan.setFilter(new FirstKeyOnlyFilter());
+    if (sb.length() > 0) {
+      for (String columnName :sb.toString().split(" ")) {
+        String [] fields = columnName.split(":");
+        if(fields.length == 1) {
+          scan.addFamily(Bytes.toBytes(fields[0]));
+        } else {
+          scan.addColumn(Bytes.toBytes(fields[0]), Bytes.toBytes(fields[1]));
+        }
+      }
+    }
+    // This job produces no output; rows are tallied via the ROWS counter.
+    job.setOutputFormatClass(NullOutputFormat.class);
+    TableMapReduceUtil.initTableMapperJob(tableName, scan,
+      RowCounterMapper.class, ImmutableBytesWritable.class, Result.class, job);
+    job.setNumReduceTasks(0);
+    return job;
+  }
+
+  /**
+   * Main entry point.
+   *
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
+    if (otherArgs.length < 1) {
+      System.err.println("ERROR: Wrong number of parameters: " + args.length);
+      System.err.println("Usage: RowCounter <tablename> [<column1> <column2>...]");
+      System.exit(-1);
+    }
+    Job job = createSubmittableJob(conf, otherArgs);
+    System.exit(job.waitForCompletion(true) ? 0 : 1);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/SimpleTotalOrderPartitioner.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/SimpleTotalOrderPartitioner.java
new file mode 100644
index 0000000..0cda76e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/SimpleTotalOrderPartitioner.java
@@ -0,0 +1,139 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Partitioner;
+
+/**
+ * A partitioner that takes start and end keys and uses bigdecimal to figure
+ * which reduce a key belongs to.  Set the start and end keys in the
+ * Configuration with {@link #setStartKey(Configuration, byte[])} and
+ * {@link #setEndKey(Configuration, byte[])}; the plain-string
+ * <code>hbase.simpletotalorder.start</code> and
+ * <code>hbase.simpletotalorder.end</code> properties are deprecated.  The end
+ * key needs to be exclusive; i.e. one larger than the biggest key in your
+ * key space.
+ * You may be surprised at how this class partitions the space; it may not
+ * align with preconceptions; e.g. a start key of zero and an end key of 100
+ * divided in ten will not make regions whose range is 0-10, 10-20, and so on.
+ * Make your own partitioner if you need the region spacing to come out a
+ * particular way.
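+ * <p>
+ * A minimal sketch of wiring this partitioner into a job using the static
+ * setters defined below (the key values are illustrative only):
+ * <pre>
+ *   SimpleTotalOrderPartitioner.setStartKey(conf, Bytes.toBytes("aaa"));
+ *   SimpleTotalOrderPartitioner.setEndKey(conf, Bytes.toBytes("zzz"));
+ *   job.setPartitionerClass(SimpleTotalOrderPartitioner.class);
+ * </pre>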
+ * @param <VALUE>
+ * @see #START
+ * @see #END
+ */
+public class SimpleTotalOrderPartitioner<VALUE> extends Partitioner<ImmutableBytesWritable, VALUE>
+implements Configurable {
+  private final static Log LOG = LogFactory.getLog(SimpleTotalOrderPartitioner.class);
+
+  @Deprecated
+  public static final String START = "hbase.simpletotalorder.start";
+  @Deprecated
+  public static final String END = "hbase.simpletotalorder.end";
+  
+  static final String START_BASE64 = "hbase.simpletotalorder.start.base64";
+  static final String END_BASE64 = "hbase.simpletotalorder.end.base64";
+  
+  private Configuration c;
+  private byte [] startkey;
+  private byte [] endkey;
+  private byte [][] splits;
+  private int lastReduces = -1;
+
+  public static void setStartKey(Configuration conf, byte[] startKey) {
+    conf.set(START_BASE64, Base64.encodeBytes(startKey));
+  }
+  
+  public static void setEndKey(Configuration conf, byte[] endKey) {
+    conf.set(END_BASE64, Base64.encodeBytes(endKey));
+  }
+  
+  @SuppressWarnings("deprecation")
+  static byte[] getStartKey(Configuration conf) {
+    return getKeyFromConf(conf, START_BASE64, START);
+  }
+  
+  @SuppressWarnings("deprecation")
+  static byte[] getEndKey(Configuration conf) {
+    return getKeyFromConf(conf, END_BASE64, END);
+  }
+  
+  private static byte[] getKeyFromConf(Configuration conf,
+      String base64Key, String deprecatedKey) {
+    String encoded = conf.get(base64Key);
+    if (encoded != null) {
+      return Base64.decode(encoded);
+    }
+    String oldStyleVal = conf.get(deprecatedKey);
+    if (oldStyleVal == null) {
+      return null;
+    }
+    LOG.warn("Using deprecated configuration " + deprecatedKey +
+        " - please use static accessor methods instead.");
+    return Bytes.toBytes(oldStyleVal);
+  }
+  
+  @Override
+  public int getPartition(final ImmutableBytesWritable key, final VALUE value,
+      final int reduces) {
+    if (reduces == 1) return 0;
+    if (this.lastReduces != reduces) {
+      this.splits = Bytes.split(this.startkey, this.endkey, reduces - 1);
+      for (int i = 0; i < splits.length; i++) {
+        LOG.info(Bytes.toString(splits[i]));
+      }
+      // Remember the reduce count so the splits are only recomputed when it
+      // changes.
+      this.lastReduces = reduces;
+    }
+    int pos = Bytes.binarySearch(this.splits, key.get(), key.getOffset(),
+      key.getLength(), Bytes.BYTES_RAWCOMPARATOR);
+    // Below code is from hfile index search.
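+    // A negative result means the key is not an exact split boundary; the
+    // arithmetic below converts it into the index of the split range that
+    // contains the key, or throws if the key sorts before the start key.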
+    if (pos < 0) {
+      pos++;
+      pos *= -1;
+      if (pos == 0) {
+        // falls before the beginning of the file.
+        throw new RuntimeException("Key outside start/stop range: " +
+          key.toString());
+      }
+      pos--;
+    }
+    return pos;
+  }
+
+  @Override
+  public Configuration getConf() {
+    return this.c;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.c = conf;
+    this.startkey = getStartKey(conf);
+    this.endkey = getEndKey(conf);
+    if (startkey == null || endkey == null) {
+      throw new RuntimeException(this.getClass() + " not configured");
+    }
+    LOG.info("startkey=" + Bytes.toStringBinary(startkey) +
+        ", endkey=" + Bytes.toStringBinary(endkey));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormat.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormat.java
new file mode 100644
index 0000000..5dfb13e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormat.java
@@ -0,0 +1,143 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Convert HBase tabular data into a format that is consumable by Map/Reduce.
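+ * <p>
+ * Jobs usually do not set the {@code hbase.mapreduce.*} keys below by hand; as
+ * a common path, {@link TableMapReduceUtil#initTableMapperJob} stores a
+ * serialized {@link Scan} under {@link #SCAN}. Setting individual keys
+ * directly, as parsed in {@link #setConf(Configuration)}, is also supported.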
+ */
+public class TableInputFormat extends TableInputFormatBase
+implements Configurable {
+
+  private final Log LOG = LogFactory.getLog(TableInputFormat.class);
+
+  /** Job parameter that specifies the input table. */
+  public static final String INPUT_TABLE = "hbase.mapreduce.inputtable";
+  /** Base-64 encoded scanner. All other SCAN_ confs are ignored if this is specified.
+   * See {@link TableMapReduceUtil#convertScanToString(Scan)} for more details.
+   */
+  public static final String SCAN = "hbase.mapreduce.scan";
+  /** Column Family to Scan */
+  public static final String SCAN_COLUMN_FAMILY = "hbase.mapreduce.scan.column.family";
+  /** Space delimited list of columns to scan. */
+  public static final String SCAN_COLUMNS = "hbase.mapreduce.scan.columns";
+  /** The timestamp used to filter columns with a specific timestamp. */
+  public static final String SCAN_TIMESTAMP = "hbase.mapreduce.scan.timestamp";
+  /** The starting timestamp used to filter columns with a specific range of versions. */
+  public static final String SCAN_TIMERANGE_START = "hbase.mapreduce.scan.timerange.start";
+  /** The ending timestamp used to filter columns with a specific range of versions. */
+  public static final String SCAN_TIMERANGE_END = "hbase.mapreduce.scan.timerange.end";
+  /** The maximum number of versions to return. */
+  public static final String SCAN_MAXVERSIONS = "hbase.mapreduce.scan.maxversions";
+  /** Set to false to disable server-side caching of blocks for this scan. */
+  public static final String SCAN_CACHEBLOCKS = "hbase.mapreduce.scan.cacheblocks";
+  /** The number of rows for caching that will be passed to scanners. */
+  public static final String SCAN_CACHEDROWS = "hbase.mapreduce.scan.cachedrows";
+
+  /** The configuration. */
+  private Configuration conf = null;
+
+  /**
+   * Returns the current configuration.
+   *
+   * @return The current configuration.
+   * @see org.apache.hadoop.conf.Configurable#getConf()
+   */
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  /**
+   * Sets the configuration. This is used to set the details for the table to
+   * be scanned.
+   *
+   * @param configuration  The configuration to set.
+   * @see org.apache.hadoop.conf.Configurable#setConf(
+   *   org.apache.hadoop.conf.Configuration)
+   */
+  @Override
+  public void setConf(Configuration configuration) {
+    this.conf = configuration;
+    String tableName = conf.get(INPUT_TABLE);
+    try {
+      setHTable(new HTable(new Configuration(conf), tableName));
+    } catch (Exception e) {
+      LOG.error(StringUtils.stringifyException(e));
+    }
+
+    Scan scan = null;
+
+    if (conf.get(SCAN) != null) {
+      try {
+        scan = TableMapReduceUtil.convertStringToScan(conf.get(SCAN));
+      } catch (IOException e) {
+        LOG.error("An error occurred.", e);
+      }
+    } else {
+      try {
+        scan = new Scan();
+
+        if (conf.get(SCAN_COLUMNS) != null) {
+          scan.addColumns(conf.get(SCAN_COLUMNS));
+        }
+
+        if (conf.get(SCAN_COLUMN_FAMILY) != null) {
+          scan.addFamily(Bytes.toBytes(conf.get(SCAN_COLUMN_FAMILY)));
+        }
+
+        if (conf.get(SCAN_TIMESTAMP) != null) {
+          scan.setTimeStamp(Long.parseLong(conf.get(SCAN_TIMESTAMP)));
+        }
+
+        if (conf.get(SCAN_TIMERANGE_START) != null && conf.get(SCAN_TIMERANGE_END) != null) {
+          scan.setTimeRange(
+              Long.parseLong(conf.get(SCAN_TIMERANGE_START)),
+              Long.parseLong(conf.get(SCAN_TIMERANGE_END)));
+        }
+
+        if (conf.get(SCAN_MAXVERSIONS) != null) {
+          scan.setMaxVersions(Integer.parseInt(conf.get(SCAN_MAXVERSIONS)));
+        }
+
+        if (conf.get(SCAN_CACHEDROWS) != null) {
+          scan.setCaching(Integer.parseInt(conf.get(SCAN_CACHEDROWS)));
+        }
+
+        // false by default, full table scans generate too much BC churn
+        scan.setCacheBlocks((conf.getBoolean(SCAN_CACHEBLOCKS, false)));
+      } catch (Exception e) {
+          LOG.error(StringUtils.stringifyException(e));
+      }
+    }
+
+    setScan(scan);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
new file mode 100644
index 0000000..c813c49
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java
@@ -0,0 +1,239 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * A base for {@link TableInputFormat}s. Receives an {@link HTable} and a
+ * {@link Scan} instance that defines the input columns, etc. Subclasses may
+ * use other TableRecordReader implementations.
+ * <p>
+ * An example of a subclass:
+ * <pre>
+ *   class ExampleTIF extends TableInputFormatBase implements JobConfigurable {
+ *
+ *     public void configure(JobConf job) {
+ *       HTable exampleTable = new HTable(HBaseConfiguration.create(job),
+ *         Bytes.toBytes("exampleTable"));
+ *       // mandatory
+ *       setHTable(exampleTable);
+ *       byte[][] inputColumns = new byte[][] { Bytes.toBytes("columnA"),
+ *         Bytes.toBytes("columnB") };
+ *       Scan scan = new Scan();
+ *       for (byte[] family : inputColumns) {
+ *         scan.addFamily(family);
+ *       }
+ *       // optional
+ *       Filter exampleFilter = new PrefixFilter(Bytes.toBytes("keyPrefix"));
+ *       scan.setFilter(exampleFilter);
+ *       // mandatory
+ *       setScan(scan);
+ *     }
+ *
+ *     public void validateInput(JobConf job) throws IOException {
+ *     }
+ *   }
+ * </pre>
+ */
+public abstract class TableInputFormatBase
+extends InputFormat<ImmutableBytesWritable, Result> {
+
+  final Log LOG = LogFactory.getLog(TableInputFormatBase.class);
+
+  /** Holds the details for the internal scanner. */
+  private Scan scan = null;
+  /** The table to scan. */
+  private HTable table = null;
+  /** The reader scanning the table, can be a custom one. */
+  private TableRecordReader tableRecordReader = null;
+
+
+  /**
+   * Builds a TableRecordReader. If no TableRecordReader was provided, uses
+   * the default.
+   *
+   * @param split  The split to work with.
+   * @param context  The current context.
+   * @return The newly created record reader.
+   * @throws IOException When creating the reader fails.
+   * @see org.apache.hadoop.mapreduce.InputFormat#createRecordReader(
+   *   org.apache.hadoop.mapreduce.InputSplit,
+   *   org.apache.hadoop.mapreduce.TaskAttemptContext)
+   */
+  @Override
+  public RecordReader<ImmutableBytesWritable, Result> createRecordReader(
+      InputSplit split, TaskAttemptContext context)
+  throws IOException {
+    if (table == null) {
+      throw new IOException("Cannot create a record reader because of a" +
+          " previous error. Please look at the previous log lines from" +
+          " the task's full log for more details.");
+    }
+    TableSplit tSplit = (TableSplit) split;
+    TableRecordReader trr = this.tableRecordReader;
+    // if no table record reader was provided use default
+    if (trr == null) {
+      trr = new TableRecordReader();
+    }
+    Scan sc = new Scan(this.scan);
+    sc.setStartRow(tSplit.getStartRow());
+    sc.setStopRow(tSplit.getEndRow());
+    trr.setScan(sc);
+    trr.setHTable(table);
+    trr.init();
+    return trr;
+  }
+
+  /**
+   * Calculates the splits that will serve as input for the map tasks. The
+   * number of splits matches the number of regions in a table.
+   *
+   * @param context  The current job context.
+   * @return The list of input splits.
+   * @throws IOException When creating the list of splits fails.
+   * @see org.apache.hadoop.mapreduce.InputFormat#getSplits(
+   *   org.apache.hadoop.mapreduce.JobContext)
+   */
+  @Override
+  public List<InputSplit> getSplits(JobContext context) throws IOException {
+    if (table == null) {
+      throw new IOException("No table was provided.");
+    }
+    Pair<byte[][], byte[][]> keys = table.getStartEndKeys();
+    if (keys == null || keys.getFirst() == null ||
+        keys.getFirst().length == 0) {
+      throw new IOException("Expecting at least one region.");
+    }
+    int count = 0;
+    List<InputSplit> splits = new ArrayList<InputSplit>(keys.getFirst().length);
+    for (int i = 0; i < keys.getFirst().length; i++) {
+      if ( !includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) {
+        continue;
+      }
+      String regionLocation = table.getRegionLocation(keys.getFirst()[i]).
+        getServerAddress().getHostname();
+      byte[] startRow = scan.getStartRow();
+      byte[] stopRow = scan.getStopRow();
+      // determine if the given start and stop keys fall into the region
+      if ((startRow.length == 0 || keys.getSecond()[i].length == 0 ||
+           Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) &&
+          (stopRow.length == 0 ||
+           Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0)) {
+        byte[] splitStart = startRow.length == 0 ||
+          Bytes.compareTo(keys.getFirst()[i], startRow) >= 0 ?
+            keys.getFirst()[i] : startRow;
+        byte[] splitStop = (stopRow.length == 0 ||
+          Bytes.compareTo(keys.getSecond()[i], stopRow) <= 0) &&
+          keys.getSecond()[i].length > 0 ?
+            keys.getSecond()[i] : stopRow;
+        InputSplit split = new TableSplit(table.getTableName(),
+          splitStart, splitStop, regionLocation);
+        splits.add(split);
+        if (LOG.isDebugEnabled())
+          LOG.debug("getSplits: split -> " + (count++) + " -> " + split);
+      }
+    }
+    return splits;
+  }
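+
+  /*
+   * Worked example of the split calculation above (illustrative): for a table
+   * with regions [ , "b"), ["b", "d") and ["d", ) and a scan over ["a", "c"),
+   * getSplits produces two splits, ["a", "b") and ["b", "c"); the last region
+   * is skipped because its start key "d" is at or past the scan's stop row.
+   */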
+
+  /**
+   * Tests whether the given region should be included in the InputSplits
+   * generated while splitting the regions of a table.
+   * <p>
+   * This optimization is effective when there is a specific reason to exclude
+   * an entire region from the M-R job (and hence from the InputSplits), given
+   * the start and end keys of that region. It is useful, for example, when we
+   * need to remember the last-processed top record and repeatedly revisit the
+   * [last, current) interval for M-R processing. Besides reducing the number
+   * of InputSplits, it also reduces the load on the region server, due to the
+   * ordering of the keys.
+   * <br>
+   * Note: it is possible that <code>endKey.length() == 0</code> for the last
+   * (most recent) region.
+   * <br>
+   * Override this method if you want to bulk-exclude regions from the M-R
+   * job. By default no region is excluded, i.e. all regions are included.
+   *
+   * @param startKey Start key of the region
+   * @param endKey End key of the region
+   * @return true if this region should be included as part of the input (default).
+   */
+  protected boolean includeRegionInSplit(final byte[] startKey, final byte [] endKey) {
+    return true;
+  }
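+
+  /*
+   * Override sketch (illustrative; "checkpointRow" is a placeholder field):
+   * skip regions that end at or before a previously processed checkpoint row.
+   *
+   *   protected boolean includeRegionInSplit(byte[] startKey, byte[] endKey) {
+   *     return endKey.length == 0 || Bytes.compareTo(endKey, checkpointRow) > 0;
+   *   }
+   */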
+
+  /**
+   * Allows subclasses to get the {@link HTable}.
+   */
+  protected HTable getHTable() {
+    return this.table;
+  }
+
+  /**
+   * Allows subclasses to set the {@link HTable}.
+   *
+   * @param table  The table to get the data from.
+   */
+  protected void setHTable(HTable table) {
+    this.table = table;
+  }
+
+  /**
+   * Gets the scan defining the actual details like columns etc.
+   *
+   * @return The internal scan instance.
+   */
+  public Scan getScan() {
+    if (this.scan == null) this.scan = new Scan();
+    return scan;
+  }
+
+  /**
+   * Sets the scan defining the actual details like columns etc.
+   *
+   * @param scan  The scan to set.
+   */
+  public void setScan(Scan scan) {
+    this.scan = scan;
+  }
+
+  /**
+   * Allows subclasses to set the {@link TableRecordReader}.
+   *
+   * @param tableRecordReader A different {@link TableRecordReader}
+   *   implementation.
+   */
+  protected void setTableRecordReader(TableRecordReader tableRecordReader) {
+    this.tableRecordReader = tableRecordReader;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
new file mode 100644
index 0000000..361a2c5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
@@ -0,0 +1,411 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.net.URL;
+import java.net.URLDecoder;
+import java.util.Enumeration;
+import java.util.HashSet;
+import java.util.Set;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Utility for {@link TableMapper} and {@link TableReducer}
+ */
+@SuppressWarnings("unchecked")
+public class TableMapReduceUtil {
+  static Log LOG = LogFactory.getLog(TableMapReduceUtil.class);
+  
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up
+   * the job.
+   *
+   * @param table  The table name to read from.
+   * @param scan  The scan instance with the columns, time range etc.
+   * @param mapper  The mapper class to use.
+   * @param outputKeyClass  The class of the output key.
+   * @param outputValueClass  The class of the output value.
+   * @param job  The current job to adjust.  Make sure the passed job is
+   * carrying all necessary HBase configuration.
+   * @throws IOException When setting up the details fails.
+   */
+  public static void initTableMapperJob(String table, Scan scan,
+      Class<? extends TableMapper> mapper,
+      Class<? extends WritableComparable> outputKeyClass,
+      Class<? extends Writable> outputValueClass, Job job)
+  throws IOException {
+    initTableMapperJob(table, scan, mapper, outputKeyClass, outputValueClass,
+        job, true);
+  }
+
+  /**
+   * Use this before submitting a TableMap job. It will appropriately set up
+   * the job.
+   *
+   * @param table  The table name to read from.
+   * @param scan  The scan instance with the columns, time range etc.
+   * @param mapper  The mapper class to use.
+   * @param outputKeyClass  The class of the output key.
+   * @param outputValueClass  The class of the output value.
+   * @param job  The current job to adjust.  Make sure the passed job is
+   * carrying all necessary HBase configuration.
+   * @param addDependencyJars upload HBase jars and jars for any of the configured
+   *           job classes via the distributed cache (tmpjars).
+   * @throws IOException When setting up the details fails.
+   */
+  public static void initTableMapperJob(String table, Scan scan,
+      Class<? extends TableMapper> mapper,
+      Class<? extends WritableComparable> outputKeyClass,
+      Class<? extends Writable> outputValueClass, Job job,
+      boolean addDependencyJars)
+  throws IOException {
+    job.setInputFormatClass(TableInputFormat.class);
+    if (outputValueClass != null) job.setMapOutputValueClass(outputValueClass);
+    if (outputKeyClass != null) job.setMapOutputKeyClass(outputKeyClass);
+    job.setMapperClass(mapper);
+    job.getConfiguration().set(TableInputFormat.INPUT_TABLE, table);
+    job.getConfiguration().set(TableInputFormat.SCAN,
+      convertScanToString(scan));
+    if (addDependencyJars) {
+      addDependencyJars(job);
+    }
+  }
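+
+  /*
+   * Illustrative usage sketch: setting up a map phase over a table.  The
+   * table name "access_logs" and the mapper class MyMapper are placeholders.
+   *
+   *   Scan scan = new Scan();
+   *   scan.setCaching(500);
+   *   scan.setCacheBlocks(false);
+   *   Job job = new Job(HBaseConfiguration.create(), "scan access_logs");
+   *   TableMapReduceUtil.initTableMapperJob("access_logs", scan, MyMapper.class,
+   *     ImmutableBytesWritable.class, Result.class, job);
+   */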
+
+  /**
+   * Writes the given scan into a Base64 encoded string.
+   *
+   * @param scan  The scan to write out.
+   * @return The scan saved in a Base64 encoded string.
+   * @throws IOException When writing the scan fails.
+   */
+  static String convertScanToString(Scan scan) throws IOException {
+    ByteArrayOutputStream out = new ByteArrayOutputStream();
+    DataOutputStream dos = new DataOutputStream(out);
+    scan.write(dos);
+    return Base64.encodeBytes(out.toByteArray());
+  }
+
+  /**
+   * Converts the given Base64 string back into a Scan instance.
+   *
+   * @param base64  The scan details.
+   * @return The newly created Scan instance.
+   * @throws IOException When reading the scan instance fails.
+   */
+  static Scan convertStringToScan(String base64) throws IOException {
+    ByteArrayInputStream bis = new ByteArrayInputStream(Base64.decode(base64));
+    DataInputStream dis = new DataInputStream(bis);
+    Scan scan = new Scan();
+    scan.readFields(dis);
+    return scan;
+  }
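+
+  /*
+   * Round-trip sketch (illustrative): the two helpers above are inverses, so
+   * a Scan survives being carried through the job configuration as a string.
+   *
+   *   Scan scan = new Scan();
+   *   String encoded = convertScanToString(scan);
+   *   Scan decoded = convertStringToScan(encoded);
+   */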
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job to adjust.
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReducerJob(String table,
+    Class<? extends TableReducer> reducer, Job job)
+  throws IOException {
+    initTableReducerJob(table, reducer, job, null);
+  }
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job to adjust.
+   * @param partitioner  Partitioner to use. Pass <code>null</code> to use
+   * default partitioner.
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReducerJob(String table,
+    Class<? extends TableReducer> reducer, Job job,
+    Class partitioner) throws IOException {
+    initTableReducerJob(table, reducer, job, partitioner, null, null, null);
+  }
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job to adjust.  Make sure the passed job is
+   * carrying all necessary HBase configuration.
+   * @param partitioner  Partitioner to use. Pass <code>null</code> to use
+   * default partitioner.
+   * @param quorumAddress Distant cluster to write to; default is null for
+   * output to the cluster that is designated in <code>hbase-site.xml</code>.
+   * Set this String to the zookeeper ensemble of an alternate remote cluster
+   * when you would have the reduce write to a cluster other than the
+   * default; e.g. when copying tables between clusters, the source would be
+   * designated by <code>hbase-site.xml</code> and this param would have the
+   * ensemble address of the remote cluster.  The format to pass is particular.
+   * Pass <code> &lt;hbase.zookeeper.quorum>:&lt;hbase.zookeeper.client.port>:&lt;zookeeper.znode.parent>
+   * </code> such as <code>server,server2,server3:2181:/hbase</code>.
+   * @param serverClass redefined hbase.regionserver.class
+   * @param serverImpl redefined hbase.regionserver.impl
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReducerJob(String table,
+    Class<? extends TableReducer> reducer, Job job,
+    Class partitioner, String quorumAddress, String serverClass,
+    String serverImpl) throws IOException {
+    initTableReducerJob(table, reducer, job, partitioner, quorumAddress,
+        serverClass, serverImpl, true);
+  }
+
+  /**
+   * Use this before submitting a TableReduce job. It will
+   * appropriately set up the JobConf.
+   *
+   * @param table  The output table.
+   * @param reducer  The reducer class to use.
+   * @param job  The current job to adjust.  Make sure the passed job is
+   * carrying all necessary HBase configuration.
+   * @param partitioner  Partitioner to use. Pass <code>null</code> to use
+   * default partitioner.
+   * @param quorumAddress Distant cluster to write to; default is null for
+   * output to the cluster that is designated in <code>hbase-site.xml</code>.
+   * Set this String to the zookeeper ensemble of an alternate remote cluster
+   * when you would have the reduce write to a cluster other than the
+   * default; e.g. when copying tables between clusters, the source would be
+   * designated by <code>hbase-site.xml</code> and this param would have the
+   * ensemble address of the remote cluster.  The format to pass is particular.
+   * Pass <code> &lt;hbase.zookeeper.quorum>:&lt;hbase.zookeeper.client.port>:&lt;zookeeper.znode.parent>
+   * </code> such as <code>server,server2,server3:2181:/hbase</code>.
+   * @param serverClass redefined hbase.regionserver.class
+   * @param serverImpl redefined hbase.regionserver.impl
+   * @param addDependencyJars upload HBase jars and jars for any of the configured
+   *           job classes via the distributed cache (tmpjars).
+   * @throws IOException When determining the region count fails.
+   */
+  public static void initTableReducerJob(String table,
+    Class<? extends TableReducer> reducer, Job job,
+    Class partitioner, String quorumAddress, String serverClass,
+    String serverImpl, boolean addDependencyJars) throws IOException {
+
+    Configuration conf = job.getConfiguration();
+    job.setOutputFormatClass(TableOutputFormat.class);
+    if (reducer != null) job.setReducerClass(reducer);
+    conf.set(TableOutputFormat.OUTPUT_TABLE, table);
+    // If passed a quorum/ensemble address, pass it on to TableOutputFormat.
+    if (quorumAddress != null) {
+      // Calling this will validate the format
+      ZKUtil.transformClusterKey(quorumAddress);
+      conf.set(TableOutputFormat.QUORUM_ADDRESS, quorumAddress);
+    }
+    if (serverClass != null && serverImpl != null) {
+      conf.set(TableOutputFormat.REGION_SERVER_CLASS, serverClass);
+      conf.set(TableOutputFormat.REGION_SERVER_IMPL, serverImpl);
+    }
+    job.setOutputKeyClass(ImmutableBytesWritable.class);
+    job.setOutputValueClass(Writable.class);
+    if (partitioner == HRegionPartitioner.class) {
+      job.setPartitionerClass(HRegionPartitioner.class);
+      HTable outputTable = new HTable(conf, table);
+      int regions = outputTable.getRegionsInfo().size();
+      if (job.getNumReduceTasks() > regions) {
+        job.setNumReduceTasks(regions);
+      }
+    } else if (partitioner != null) {
+      job.setPartitionerClass(partitioner);
+    }
+
+    if (addDependencyJars) {
+      addDependencyJars(job);
+    }
+  }
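+
+  /*
+   * Illustrative usage sketch: writing reduce output to a table on a second
+   * cluster.  The table name, MyTableReducer and the ensemble address are
+   * placeholders; the address format follows the javadoc above.
+   *
+   *   TableMapReduceUtil.initTableReducerJob("target_table",
+   *     MyTableReducer.class, job, null,
+   *     "server1,server2,server3:2181:/hbase", null, null);
+   */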
+
+  /**
+   * Ensures that the given number of reduce tasks for the given job
+   * configuration does not exceed the number of regions for the given table.
+   *
+   * @param table  The table to get the region count for.
+   * @param job  The current job to adjust.
+   * @throws IOException When retrieving the table details fails.
+   */
+  public static void limitNumReduceTasks(String table, Job job)
+  throws IOException {
+    HTable outputTable = new HTable(job.getConfiguration(), table);
+    int regions = outputTable.getRegionsInfo().size();
+    if (job.getNumReduceTasks() > regions)
+      job.setNumReduceTasks(regions);
+  }
+
+  /**
+   * Sets the number of reduce tasks for the given job configuration to the
+   * number of regions the given table has.
+   *
+   * @param table  The table to get the region count for.
+   * @param job  The current job to adjust.
+   * @throws IOException When retrieving the table details fails.
+   */
+  public static void setNumReduceTasks(String table, Job job)
+  throws IOException {
+    HTable outputTable = new HTable(job.getConfiguration(), table);
+    int regions = outputTable.getRegionsInfo().size();
+    job.setNumReduceTasks(regions);
+  }
+
+  /**
+   * Sets the number of rows to return and cache with each scanner iteration.
+   * Higher caching values will enable faster mapreduce jobs at the expense of
+   * requiring more heap to contain the cached rows.
+   *
+   * @param job The current job to adjust.
+   * @param batchSize The number of rows to return in batch with each scanner
+   * iteration.
+   */
+  public static void setScannerCaching(Job job, int batchSize) {
+    job.getConfiguration().setInt("hbase.client.scanner.caching", batchSize);
+  }
+
+  /**
+   * Add the HBase dependency jars as well as jars for any of the configured
+   * job classes to the job configuration, so that JobClient will ship them
+   * to the cluster and add them to the DistributedCache.
+   */
+  public static void addDependencyJars(Job job) throws IOException {
+    try {
+      addDependencyJars(job.getConfiguration(),
+          org.apache.zookeeper.ZooKeeper.class,
+          job.getMapOutputKeyClass(),
+          job.getMapOutputValueClass(),
+          job.getInputFormatClass(),
+          job.getOutputKeyClass(),
+          job.getOutputValueClass(),
+          job.getOutputFormatClass(),
+          job.getPartitionerClass(),
+          job.getCombinerClass());
+    } catch (ClassNotFoundException e) {
+      throw new IOException(e);
+    }    
+  }
+  
+  /**
+   * Add the jars containing the given classes to the job's configuration
+   * such that JobClient will ship them to the cluster and add them to
+   * the DistributedCache.
+   */
+  public static void addDependencyJars(Configuration conf,
+      Class... classes) throws IOException {
+
+    FileSystem localFs = FileSystem.getLocal(conf);
+
+    Set<String> jars = new HashSet<String>();
+
+    // Add jars that are already in the tmpjars variable
+    jars.addAll( conf.getStringCollection("tmpjars") );
+
+    // Add jars containing the specified classes
+    for (Class clazz : classes) {
+      if (clazz == null) continue;
+
+      String pathStr = findContainingJar(clazz);
+      if (pathStr == null) {
+        LOG.warn("Could not find jar for class " + clazz +
+                 " in order to ship it to the cluster.");
+        continue;
+      }
+      Path path = new Path(pathStr);
+      if (!localFs.exists(path)) {
+        LOG.warn("Could not validate jar file " + path + " for class "
+                 + clazz);
+        continue;
+      }
+      jars.add(path.makeQualified(localFs).toString());
+    }
+    if (jars.isEmpty()) return;
+
+    conf.set("tmpjars",
+             StringUtils.arrayToString(jars.toArray(new String[0])));
+  }
+
+  /** 
+   * Find a jar that contains a class of the same name, if any.
+   * It will return a jar file, even if that is not the first thing
+   * on the class path that has a class with the same name.
+   * 
+   * This is shamelessly copied from JobConf
+   * 
+   * @param my_class the class to find.
+   * @return a jar file that contains the class, or null.
+   */
+  private static String findContainingJar(Class my_class) {
+    ClassLoader loader = my_class.getClassLoader();
+    String class_file = my_class.getName().replaceAll("\\.", "/") + ".class";
+    try {
+      for(Enumeration itr = loader.getResources(class_file);
+          itr.hasMoreElements();) {
+        URL url = (URL) itr.nextElement();
+        if ("jar".equals(url.getProtocol())) {
+          String toReturn = url.getPath();
+          if (toReturn.startsWith("file:")) {
+            toReturn = toReturn.substring("file:".length());
+          }
+          // URLDecoder is a misnamed class, since it actually decodes
+          // x-www-form-urlencoded MIME type rather than actual
+          // URL encoding (which the file path has). Therefore it would
+          // decode +s to ' 's which is incorrect (spaces are actually
+          // either unencoded or encoded as "%20"). Replace +s first, so
+          // that they are kept sacred during the decoding process.
+          toReturn = toReturn.replaceAll("\\+", "%2B");
+          toReturn = URLDecoder.decode(toReturn, "UTF-8");
+          return toReturn.replaceAll("!.*$", "");
+        }
+      }
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+    return null;
+  }
+
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapper.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapper.java
new file mode 100644
index 0000000..bbceb63
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapper.java
@@ -0,0 +1,37 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapreduce.Mapper;
+
+/**
+ * Extends the base <code>Mapper</code> class to add the required input key
+ * and value classes.
+ *
+ * @param <KEYOUT>  The type of the key.
+ * @param <VALUEOUT>  The type of the value.
+ * @see org.apache.hadoop.mapreduce.Mapper
+ */
+public abstract class TableMapper<KEYOUT, VALUEOUT>
+extends Mapper<ImmutableBytesWritable, Result, KEYOUT, VALUEOUT> {
+
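+  /*
+   * Illustrative subclass sketch (RowCountMapper and its output types are
+   * placeholders): emit one count per scanned row.
+   *
+   *   public static class RowCountMapper
+   *       extends TableMapper<ImmutableBytesWritable, LongWritable> {
+   *     public void map(ImmutableBytesWritable row, Result values,
+   *         Context context) throws IOException, InterruptedException {
+   *       context.write(row, new LongWritable(1));
+   *     }
+   *   }
+   */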
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputCommitter.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputCommitter.java
new file mode 100644
index 0000000..5289da7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputCommitter.java
@@ -0,0 +1,58 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Small committer class that does not do anything.
+ */
+public class TableOutputCommitter extends OutputCommitter {
+
+  @Override
+  public void abortTask(TaskAttemptContext arg0) throws IOException {
+  }
+
+  @Override
+  public void cleanupJob(JobContext arg0) throws IOException {
+  }
+
+  @Override
+  public void commitTask(TaskAttemptContext arg0) throws IOException {
+  }
+
+  @Override
+  public boolean needsTaskCommit(TaskAttemptContext arg0) throws IOException {
+    return false;
+  }
+
+  @Override
+  public void setupJob(JobContext arg0) throws IOException {
+  }
+
+  @Override
+  public void setupTask(TaskAttemptContext arg0) throws IOException {
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java
new file mode 100644
index 0000000..d4a1ed6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java
@@ -0,0 +1,203 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.OutputFormat;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Convert Map/Reduce output and write it to an HBase table. The KEY is ignored
+ * while the output value <u>must</u> be either a {@link Put} or a
+ * {@link Delete} instance.
+ *
+ * @param <KEY>  The type of the key. Ignored in this class.
+ */
+public class TableOutputFormat<KEY> extends OutputFormat<KEY, Writable>
+implements Configurable {
+
+  private final Log LOG = LogFactory.getLog(TableOutputFormat.class);
+
+  /** Job parameter that specifies the output table. */
+  public static final String OUTPUT_TABLE = "hbase.mapred.outputtable";
+
+  /**
+   * Optional job parameter to specify a peer cluster.
+   * Used to specify a remote cluster when copying between HBase clusters (the
+   * source is picked up from <code>hbase-site.xml</code>).
+   * @see TableMapReduceUtil#initTableReducerJob(String, Class, org.apache.hadoop.mapreduce.Job, Class, String, String, String)
+   */
+  public static final String QUORUM_ADDRESS = "hbase.mapred.output.quorum";
+
+  /** Optional specification of the region server class name of the peer cluster */
+  public static final String
+      REGION_SERVER_CLASS = "hbase.mapred.output.rs.class";
+  /** Optional specification of the region server implementation name of the peer cluster */
+  public static final String
+      REGION_SERVER_IMPL = "hbase.mapred.output.rs.impl";
+
+  /** The configuration. */
+  private Configuration conf = null;
+
+  private HTable table;
+
+  /**
+   * Writes the reducer output to an HBase table.
+   *
+   * @param <KEY>  The type of the key.
+   */
+  protected static class TableRecordWriter<KEY>
+  extends RecordWriter<KEY, Writable> {
+
+    /** The table to write to. */
+    private HTable table;
+
+    /**
+     * Instantiate a TableRecordWriter with the HBase HClient for writing.
+     *
+     * @param table  The table to write to.
+     */
+    public TableRecordWriter(HTable table) {
+      this.table = table;
+    }
+
+    /**
+     * Closes the writer, in this case flushing the table's commits.
+     *
+     * @param context  The context.
+     * @throws IOException When closing the writer fails.
+     * @see org.apache.hadoop.mapreduce.RecordWriter#close(org.apache.hadoop.mapreduce.TaskAttemptContext)
+     */
+    @Override
+    public void close(TaskAttemptContext context)
+    throws IOException {
+      table.flushCommits();
+      // The following call will shut down all connections to the cluster from
+      // this JVM.  It will close out our zk session; otherwise zk will log
+      // expired sessions rather than closed ones.  If any other HTable instances
+      // are running in this JVM, this next call will damage them.  The
+      // presumption is that the above this.table is the only instance.
+      HConnectionManager.deleteAllConnections(true);
+    }
+
+    /**
+     * Writes a key/value pair into the table.
+     *
+     * @param key  The key.
+     * @param value  The value.
+     * @throws IOException When writing fails.
+     * @see org.apache.hadoop.mapreduce.RecordWriter#write(java.lang.Object, java.lang.Object)
+     */
+    @Override
+    public void write(KEY key, Writable value)
+    throws IOException {
+      if (value instanceof Put) this.table.put(new Put((Put)value));
+      else if (value instanceof Delete) this.table.delete(new Delete((Delete)value));
+      else throw new IOException("Pass a Delete or a Put");
+    }
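+
+    /*
+     * Illustrative sketch of values a reducer could emit towards this writer
+     * (row key, family and qualifier below are placeholders):
+     *
+     *   Put put = new Put(Bytes.toBytes("row1"));
+     *   put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("v"));
+     *   context.write(null, put);                              // stored as a Put
+     *   context.write(null, new Delete(Bytes.toBytes("row2"))); // deletes row2
+     */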
+  }
+
+  /**
+   * Creates a new record writer.
+   *
+   * @param context  The current task context.
+   * @return The newly created writer instance.
+   * @throws IOException When creating the writer fails.
+   * @throws InterruptedException When the job is cancelled.
+   * @see org.apache.hadoop.mapreduce.OutputFormat#getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext)
+   */
+  @Override
+  public RecordWriter<KEY, Writable> getRecordWriter(
+    TaskAttemptContext context)
+  throws IOException, InterruptedException {
+    return new TableRecordWriter<KEY>(this.table);
+  }
+
+  /**
+   * Checks if the output target exists.
+   *
+   * @param context  The current context.
+   * @throws IOException When the check fails.
+   * @throws InterruptedException When the job is aborted.
+   * @see org.apache.hadoop.mapreduce.OutputFormat#checkOutputSpecs(org.apache.hadoop.mapreduce.JobContext)
+   */
+  @Override
+  public void checkOutputSpecs(JobContext context) throws IOException,
+      InterruptedException {
+    // TODO Check if the table exists?
+
+  }
+
+  /**
+   * Returns the output committer.
+   *
+   * @param context  The current context.
+   * @return The committer.
+   * @throws IOException When creating the committer fails.
+   * @throws InterruptedException When the job is aborted.
+   * @see org.apache.hadoop.mapreduce.OutputFormat#getOutputCommitter(org.apache.hadoop.mapreduce.TaskAttemptContext)
+   */
+  @Override
+  public OutputCommitter getOutputCommitter(TaskAttemptContext context)
+  throws IOException, InterruptedException {
+    return new TableOutputCommitter();
+  }
+
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    String tableName = conf.get(OUTPUT_TABLE);
+    String address = conf.get(QUORUM_ADDRESS);
+    String serverClass = conf.get(REGION_SERVER_CLASS);
+    String serverImpl = conf.get(REGION_SERVER_IMPL);
+    try {
+      if (address != null) {
+        ZKUtil.applyClusterKeyToConf(conf, address);
+      }
+      if (serverClass != null) {
+        conf.set(HConstants.REGION_SERVER_CLASS, serverClass);
+        conf.set(HConstants.REGION_SERVER_IMPL, serverImpl);
+      }
+      this.table = new HTable(conf, tableName);
+      table.setAutoFlush(false);
+      LOG.info("Created table instance for "  + tableName);
+    } catch(IOException e) {
+      LOG.error(e);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReader.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReader.java
new file mode 100644
index 0000000..fa7de8f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReader.java
@@ -0,0 +1,155 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Iterates over HBase table data, returning (ImmutableBytesWritable, Result)
+ * pairs.
+ */
+public class TableRecordReader
+extends RecordReader<ImmutableBytesWritable, Result> {
+
+  private TableRecordReaderImpl recordReaderImpl = new TableRecordReaderImpl();
+
+  /**
+   * Restart from survivable exceptions by creating a new scanner.
+   *
+   * @param firstRow  The first row to start at.
+   * @throws IOException When restarting fails.
+   */
+  public void restart(byte[] firstRow) throws IOException {
+    this.recordReaderImpl.restart(firstRow);
+  }
+
+  /**
+   * Build the scanner. Not done in constructor to allow for extension.
+   *
+   * @throws IOException When restarting the scan fails.
+   */
+  public void init() throws IOException {
+    this.recordReaderImpl.init();
+  }
+
+  /**
+   * Sets the HBase table.
+   *
+   * @param htable  The {@link HTable} to scan.
+   */
+  public void setHTable(HTable htable) {
+    this.recordReaderImpl.setHTable(htable);
+  }
+
+  /**
+   * Sets the scan defining the actual details like columns etc.
+   *
+   * @param scan  The scan to set.
+   */
+  public void setScan(Scan scan) {
+    this.recordReaderImpl.setScan(scan);
+  }
+
+  /**
+   * Closes the split.
+   *
+   * @see org.apache.hadoop.mapreduce.RecordReader#close()
+   */
+  @Override
+  public void close() {
+    this.recordReaderImpl.close();
+  }
+
+  /**
+   * Returns the current key.
+   *
+   * @return The current key.
+   * @throws IOException
+   * @throws InterruptedException When the job is aborted.
+   * @see org.apache.hadoop.mapreduce.RecordReader#getCurrentKey()
+   */
+  @Override
+  public ImmutableBytesWritable getCurrentKey() throws IOException,
+      InterruptedException {
+    return this.recordReaderImpl.getCurrentKey();
+  }
+
+  /**
+   * Returns the current value.
+   *
+   * @return The current value.
+   * @throws IOException When the value is faulty.
+   * @throws InterruptedException When the job is aborted.
+   * @see org.apache.hadoop.mapreduce.RecordReader#getCurrentValue()
+   */
+  @Override
+  public Result getCurrentValue() throws IOException, InterruptedException {
+    return this.recordReaderImpl.getCurrentValue();
+  }
+
+  /**
+   * Initializes the reader.
+   *
+   * @param inputsplit  The split to work with.
+   * @param context  The current task context.
+   * @throws IOException When setting up the reader fails.
+   * @throws InterruptedException When the job is aborted.
+   * @see org.apache.hadoop.mapreduce.RecordReader#initialize(
+   *   org.apache.hadoop.mapreduce.InputSplit,
+   *   org.apache.hadoop.mapreduce.TaskAttemptContext)
+   */
+  @Override
+  public void initialize(InputSplit inputsplit,
+      TaskAttemptContext context) throws IOException,
+      InterruptedException {
+  }
+
+  /**
+   * Positions the record reader to the next record.
+   *
+   * @return <code>true</code> if there was another record.
+   * @throws IOException When reading the record failed.
+   * @throws InterruptedException When the job was aborted.
+   * @see org.apache.hadoop.mapreduce.RecordReader#nextKeyValue()
+   */
+  @Override
+  public boolean nextKeyValue() throws IOException, InterruptedException {
+    return this.recordReaderImpl.nextKeyValue();
+  }
+
+  /**
+   * The current progress of the record reader through its data.
+   *
+   * @return A number between 0.0 and 1.0, the fraction of the data read.
+   * @see org.apache.hadoop.mapreduce.RecordReader#getProgress()
+   */
+  @Override
+  public float getProgress() {
+    return this.recordReaderImpl.getProgress();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
new file mode 100644
index 0000000..dbb11e8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReaderImpl.java
@@ -0,0 +1,164 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Iterates over HBase table data, returning (ImmutableBytesWritable, Result)
+ * pairs.
+ */
+public class TableRecordReaderImpl {
+
+  static final Log LOG = LogFactory.getLog(TableRecordReaderImpl.class);
+
+  private ResultScanner scanner = null;
+  private Scan scan = null;
+  private HTable htable = null;
+  private byte[] lastRow = null;
+  private ImmutableBytesWritable key = null;
+  private Result value = null;
+
+  /**
+   * Restart from survivable exceptions by creating a new scanner.
+   *
+   * @param firstRow  The first row to start at.
+   * @throws IOException When restarting fails.
+   */
+  public void restart(byte[] firstRow) throws IOException {
+    Scan newScan = new Scan(scan);
+    newScan.setStartRow(firstRow);
+    this.scanner = this.htable.getScanner(newScan);
+  }
+
+  /**
+   * Build the scanner. Not done in constructor to allow for extension.
+   *
+   * @throws IOException When restarting the scan fails.
+   */
+  public void init() throws IOException {
+    restart(scan.getStartRow());
+  }
+
+  /**
+   * Sets the HBase table.
+   *
+   * @param htable  The {@link HTable} to scan.
+   */
+  public void setHTable(HTable htable) {
+    this.htable = htable;
+  }
+
+  /**
+   * Sets the scan defining the actual details like columns etc.
+   *
+   * @param scan  The scan to set.
+   */
+  public void setScan(Scan scan) {
+    this.scan = scan;
+  }
+
+  /**
+   * Closes the split.
+   */
+  public void close() {
+    this.scanner.close();
+  }
+
+  /**
+   * Returns the current key.
+   *
+   * @return The current key.
+   * @throws IOException
+   * @throws InterruptedException When the job is aborted.
+   */
+  public ImmutableBytesWritable getCurrentKey() throws IOException,
+      InterruptedException {
+    return key;
+  }
+
+  /**
+   * Returns the current value.
+   *
+   * @return The current value.
+   * @throws IOException When the value is faulty.
+   * @throws InterruptedException When the job is aborted.
+   */
+  public Result getCurrentValue() throws IOException, InterruptedException {
+    return value;
+  }
+
+
+  /**
+   * Positions the record reader to the next record.
+   *
+   * @return <code>true</code> if there was another record.
+   * @throws IOException When reading the record failed.
+   * @throws InterruptedException When the job was aborted.
+   */
+  public boolean nextKeyValue() throws IOException, InterruptedException {
+    if (key == null) key = new ImmutableBytesWritable();
+    if (value == null) value = new Result();
+    try {
+      value = this.scanner.next();
+    } catch (IOException e) {
+      LOG.debug("recovered from " + StringUtils.stringifyException(e));
+      if (lastRow == null) {
+        LOG.warn("We are restarting the first next() invocation;" +
+            " if your mapper has restarted a few other times like this" +
+            " then you should consider killing this job and investigating" +
+            " why it's taking so long.");
+        lastRow = scan.getStartRow();
+      }
+      restart(lastRow);
+      scanner.next();    // skip presumed already mapped row
+      value = scanner.next();
+    }
+    if (value != null && value.size() > 0) {
+      key.set(value.getRow());
+      lastRow = key.get();
+      return true;
+    }
+    return false;
+  }
+
+  /**
+   * The current progress of the record reader through its data.
+   *
+   * @return A number between 0.0 and 1.0, the fraction of the data read.
+   */
+  public float getProgress() {
+    // Depends on the total number of tuples
+    return 0;
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableReducer.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableReducer.java
new file mode 100644
index 0000000..d087f85
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableReducer.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.Reducer;
+
+/**
+ * Extends the basic <code>Reducer</code> class to add the required key and
+ * value input/output classes. While the input key and value as well as the
+ * output key can be anything handed in from the previous map phase, the output
+ * value <u>must</u> be either a {@link org.apache.hadoop.hbase.client.Put Put}
+ * or a {@link org.apache.hadoop.hbase.client.Delete Delete} instance when
+ * using the {@link TableOutputFormat} class.
+ * <p>
+ * This class is extended by {@link IdentityTableReducer} but can also be
+ * subclassed to implement similar features or any custom code needed. It has
+ * the advantage of enforcing the output value to a specific basic type.
+ *
+ * @param <KEYIN>  The type of the input key.
+ * @param <VALUEIN>  The type of the input value.
+ * @param <KEYOUT>  The type of the output key.
+ * @see org.apache.hadoop.mapreduce.Reducer
+ */
+public abstract class TableReducer<KEYIN, VALUEIN, KEYOUT>
+extends Reducer<KEYIN, VALUEIN, KEYOUT, Writable> {
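+
+  /*
+   * Illustrative subclass sketch (SumReducer and the column names are
+   * placeholders): sum LongWritable counts and persist the total as a Put.
+   *
+   *   public static class SumReducer extends
+   *       TableReducer<ImmutableBytesWritable, LongWritable, ImmutableBytesWritable> {
+   *     public void reduce(ImmutableBytesWritable key, Iterable<LongWritable> values,
+   *         Context context) throws IOException, InterruptedException {
+   *       long sum = 0;
+   *       for (LongWritable v : values) sum += v.get();
+   *       Put put = new Put(key.get());
+   *       put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"), Bytes.toBytes(sum));
+   *       context.write(key, put);
+   *     }
+   *   }
+   */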
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSplit.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSplit.java
new file mode 100644
index 0000000..082c931
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/TableSplit.java
@@ -0,0 +1,186 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+
+/**
+ * A table split corresponds to a key range (low, high). All references to row
+ * below refer to the key of the row.
+ */
+public class TableSplit extends InputSplit
+implements Writable, Comparable<TableSplit> {
+
+  private byte [] tableName;
+  private byte [] startRow;
+  private byte [] endRow;
+  private String regionLocation;
+
+  /** Default constructor. */
+  public TableSplit() {
+    this(HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY,
+      HConstants.EMPTY_BYTE_ARRAY, "");
+  }
+
+  /**
+   * Creates a new instance while assigning all variables.
+   *
+   * @param tableName  The name of the current table.
+   * @param startRow  The start row of the split.
+   * @param endRow  The end row of the split.
+   * @param location  The location of the region.
+   */
+  public TableSplit(byte [] tableName, byte [] startRow, byte [] endRow,
+      final String location) {
+    this.tableName = tableName;
+    this.startRow = startRow;
+    this.endRow = endRow;
+    this.regionLocation = location;
+  }
+
+  /**
+   * Returns the table name.
+   *
+   * @return The table name.
+   */
+  public byte [] getTableName() {
+    return tableName;
+  }
+
+  /**
+   * Returns the start row.
+   *
+   * @return The start row.
+   */
+  public byte [] getStartRow() {
+    return startRow;
+  }
+
+  /**
+   * Returns the end row.
+   *
+   * @return The end row.
+   */
+  public byte [] getEndRow() {
+    return endRow;
+  }
+
+  /**
+   * Returns the region location.
+   *
+   * @return The region's location.
+   */
+  public String getRegionLocation() {
+    return regionLocation;
+  }
+
+  /**
+   * Returns the region's location as an array.
+   *
+   * @return The array containing the region location.
+   * @see org.apache.hadoop.mapreduce.InputSplit#getLocations()
+   */
+  @Override
+  public String[] getLocations() {
+    return new String[] {regionLocation};
+  }
+
+  /**
+   * Returns the length of the split.
+   *
+   * @return The length of the split.
+   * @see org.apache.hadoop.mapreduce.InputSplit#getLength()
+   */
+  @Override
+  public long getLength() {
+    // Not clear how to obtain this... seems to be used only for sorting splits
+    return 0;
+  }
+
+  /**
+   * Reads the values of each field.
+   *
+   * @param in  The input to read from.
+   * @throws IOException When reading the input fails.
+   */
+  @Override
+  public void readFields(DataInput in) throws IOException {
+    tableName = Bytes.readByteArray(in);
+    startRow = Bytes.readByteArray(in);
+    endRow = Bytes.readByteArray(in);
+    regionLocation = Bytes.toString(Bytes.readByteArray(in));
+  }
+
+  /**
+   * Writes the field values to the output.
+   *
+   * @param out  The output to write to.
+   * @throws IOException When writing the values to the output fails.
+   */
+  @Override
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, tableName);
+    Bytes.writeByteArray(out, startRow);
+    Bytes.writeByteArray(out, endRow);
+    Bytes.writeByteArray(out, Bytes.toBytes(regionLocation));
+  }
+
+  /**
+   * Returns the details about this instance as a string.
+   *
+   * @return The values of this instance as a string.
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    return regionLocation + ":" +
+      Bytes.toStringBinary(startRow) + "," + Bytes.toStringBinary(endRow);
+  }
+
+  /**
+   * Compares this split against the given one.
+   *
+   * @param split  The split to compare to.
+   * @return The result of the comparison.
+   * @see java.lang.Comparable#compareTo(java.lang.Object)
+   */
+  @Override
+  public int compareTo(TableSplit split) {
+    return Bytes.compareTo(getStartRow(), split.getStartRow());
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (o == null || !(o instanceof TableSplit)) {
+      return false;
+    }
+    return Bytes.equals(tableName, ((TableSplit)o).tableName) &&
+      Bytes.equals(startRow, ((TableSplit)o).startRow) &&
+      Bytes.equals(endRow, ((TableSplit)o).endRow) &&
+      regionLocation.equals(((TableSplit)o).regionLocation);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/hadoopbackport/InputSampler.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/hadoopbackport/InputSampler.java
new file mode 100644
index 0000000..13be81e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/hadoopbackport/InputSampler.java
@@ -0,0 +1,413 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.mapreduce.hadoopbackport;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskAttemptID;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Utility for collecting samples and writing a partition file for
+ * {@link TotalOrderPartitioner}.
+ *
+ * This is an identical copy of o.a.h.mapreduce.lib.partition.InputSampler
+ * from Hadoop trunk at r910774, with the exception of replacing
+ * TaskAttemptContextImpl with TaskAttemptContext.
+ */
+public class InputSampler<K,V> extends Configured implements Tool  {
+
+  private static final Log LOG = LogFactory.getLog(InputSampler.class);
+
+  static int printUsage() {
+    System.out.println("sampler -r <reduces>\n" +
+      "      [-inFormat <input format class>]\n" +
+      "      [-keyClass <map input & output key class>]\n" +
+      "      [-splitRandom <double pcnt> <numSamples> <maxsplits> | " +
+      "// Sample from random splits at random (general)\n" +
+      "       -splitSample <numSamples> <maxsplits> | " +
+      "             // Sample from first records in splits (random data)\n"+
+      "       -splitInterval <double pcnt> <maxsplits>]" +
+      "             // Sample from splits at intervals (sorted data)");
+    System.out.println("Default sampler: -splitRandom 0.1 10000 10");
+    ToolRunner.printGenericCommandUsage(System.out);
+    return -1;
+  }
+
+  public InputSampler(Configuration conf) {
+    setConf(conf);
+  }
+
+  /**
+   * Interface to sample using an 
+   * {@link org.apache.hadoop.mapreduce.InputFormat}.
+   */
+  public interface Sampler<K,V> {
+    /**
+     * For a given job, collect and return a subset of the keys from the
+     * input data.
+     */
+    K[] getSample(InputFormat<K,V> inf, Job job) 
+    throws IOException, InterruptedException;
+  }
+
+  /**
+   * Samples the first n records from s splits.
+   * Inexpensive way to sample random data.
+   */
+  public static class SplitSampler<K,V> implements Sampler<K,V> {
+
+    private final int numSamples;
+    private final int maxSplitsSampled;
+
+    /**
+     * Create a SplitSampler sampling <em>all</em> splits.
+     * Takes the first numSamples / numSplits records from each split.
+     * @param numSamples Total number of samples to obtain from all selected
+     *                   splits.
+     */
+    public SplitSampler(int numSamples) {
+      this(numSamples, Integer.MAX_VALUE);
+    }
+
+    /**
+     * Create a new SplitSampler.
+     * @param numSamples Total number of samples to obtain from all selected
+     *                   splits.
+     * @param maxSplitsSampled The maximum number of splits to examine.
+     */
+    public SplitSampler(int numSamples, int maxSplitsSampled) {
+      this.numSamples = numSamples;
+      this.maxSplitsSampled = maxSplitsSampled;
+    }
+
+    /**
+     * From each split sampled, take the first numSamples / numSplits records.
+     */
+    @SuppressWarnings("unchecked") // ArrayList::toArray doesn't preserve type
+    public K[] getSample(InputFormat<K,V> inf, Job job) 
+        throws IOException, InterruptedException {
+      List<InputSplit> splits = inf.getSplits(job);
+      ArrayList<K> samples = new ArrayList<K>(numSamples);
+      int splitsToSample = Math.min(maxSplitsSampled, splits.size());
+      int splitStep = splits.size() / splitsToSample;
+      int samplesPerSplit = numSamples / splitsToSample;
+      long records = 0;
+      for (int i = 0; i < splitsToSample; ++i) {
+        RecordReader<K,V> reader = inf.createRecordReader(
+          splits.get(i * splitStep), 
+          new TaskAttemptContext(job.getConfiguration(), 
+                                 new TaskAttemptID()));
+        while (reader.nextKeyValue()) {
+          samples.add(reader.getCurrentKey());
+          ++records;
+          if ((i+1) * samplesPerSplit <= records) {
+            break;
+          }
+        }
+        reader.close();
+      }
+      return (K[])samples.toArray();
+    }
+  }
+
+  /**
+   * Sample from random points in the input.
+   * General-purpose sampler. Takes numSamples / maxSplitsSampled inputs from
+   * each split.
+   */
+  public static class RandomSampler<K,V> implements Sampler<K,V> {
+    private double freq;
+    private final int numSamples;
+    private final int maxSplitsSampled;
+
+    /**
+     * Create a new RandomSampler sampling <em>all</em> splits.
+     * This will read every split at the client, which is very expensive.
+     * @param freq Probability with which a key will be chosen.
+     * @param numSamples Total number of samples to obtain from all selected
+     *                   splits.
+     */
+    public RandomSampler(double freq, int numSamples) {
+      this(freq, numSamples, Integer.MAX_VALUE);
+    }
+
+    /**
+     * Create a new RandomSampler.
+     * @param freq Probability with which a key will be chosen.
+     * @param numSamples Total number of samples to obtain from all selected
+     *                   splits.
+     * @param maxSplitsSampled The maximum number of splits to examine.
+     */
+    public RandomSampler(double freq, int numSamples, int maxSplitsSampled) {
+      this.freq = freq;
+      this.numSamples = numSamples;
+      this.maxSplitsSampled = maxSplitsSampled;
+    }
+
+    /**
+     * Randomize the split order, then take the specified number of keys from
+     * each split sampled, where each key is selected with the specified
+     * probability and possibly replaced by a subsequently selected key when
+     * the quota of keys from that split is satisfied.
+     */
+    @SuppressWarnings("unchecked") // ArrayList::toArray doesn't preserve type
+    public K[] getSample(InputFormat<K,V> inf, Job job) 
+        throws IOException, InterruptedException {
+      List<InputSplit> splits = inf.getSplits(job);
+      ArrayList<K> samples = new ArrayList<K>(numSamples);
+      int splitsToSample = Math.min(maxSplitsSampled, splits.size());
+
+      Random r = new Random();
+      long seed = r.nextLong();
+      r.setSeed(seed);
+      LOG.debug("seed: " + seed);
+      // shuffle splits
+      for (int i = 0; i < splits.size(); ++i) {
+        InputSplit tmp = splits.get(i);
+        int j = r.nextInt(splits.size());
+        splits.set(i, splits.get(j));
+        splits.set(j, tmp);
+      }
+      // our target rate is in terms of the maximum number of sample splits,
+      // but we accept the possibility of sampling additional splits to hit
+      // the target sample keyset
+      for (int i = 0; i < splitsToSample ||
+                     (i < splits.size() && samples.size() < numSamples); ++i) {
+        RecordReader<K,V> reader = inf.createRecordReader(splits.get(i), 
+          new TaskAttemptContext(job.getConfiguration(), 
+                                 new TaskAttemptID()));
+        while (reader.nextKeyValue()) {
+          if (r.nextDouble() <= freq) {
+            if (samples.size() < numSamples) {
+              samples.add(reader.getCurrentKey());
+            } else {
+              // When exceeding the maximum number of samples, replace a
+              // random element with this one, then adjust the frequency
+              // to reflect the possibility of existing elements being
+              // pushed out
+              int ind = r.nextInt(numSamples);
+              if (ind != numSamples) {
+                samples.set(ind, reader.getCurrentKey());
+              }
+              freq *= (numSamples - 1) / (double) numSamples;
+            }
+          }
+        }
+        reader.close();
+      }
+      return (K[])samples.toArray();
+    }
+  }
+
+  /**
+   * Sample from s splits at regular intervals.
+   * Useful for sorted data.
+   */
+  public static class IntervalSampler<K,V> implements Sampler<K,V> {
+    private final double freq;
+    private final int maxSplitsSampled;
+
+    /**
+     * Create a new IntervalSampler sampling <em>all</em> splits.
+     * @param freq The frequency with which records will be emitted.
+     */
+    public IntervalSampler(double freq) {
+      this(freq, Integer.MAX_VALUE);
+    }
+
+    /**
+     * Create a new IntervalSampler.
+     * @param freq The frequency with which records will be emitted.
+     * @param maxSplitsSampled The maximum number of splits to examine.
+     * @see #getSample
+     */
+    public IntervalSampler(double freq, int maxSplitsSampled) {
+      this.freq = freq;
+      this.maxSplitsSampled = maxSplitsSampled;
+    }
+
+    /**
+     * For each split sampled, emit when the ratio of the number of records
+     * retained to the total record count is less than the specified
+     * frequency.
+     */
+    @SuppressWarnings("unchecked") // ArrayList::toArray doesn't preserve type
+    public K[] getSample(InputFormat<K,V> inf, Job job) 
+        throws IOException, InterruptedException {
+      List<InputSplit> splits = inf.getSplits(job);
+      ArrayList<K> samples = new ArrayList<K>();
+      int splitsToSample = Math.min(maxSplitsSampled, splits.size());
+      int splitStep = splits.size() / splitsToSample;
+      long records = 0;
+      long kept = 0;
+      for (int i = 0; i < splitsToSample; ++i) {
+        RecordReader<K,V> reader = inf.createRecordReader(
+          splits.get(i * splitStep),
+          new TaskAttemptContext(job.getConfiguration(), 
+                                 new TaskAttemptID()));
+        while (reader.nextKeyValue()) {
+          ++records;
+          if ((double) kept / records < freq) {
+            ++kept;
+            samples.add(reader.getCurrentKey());
+          }
+        }
+        reader.close();
+      }
+      return (K[])samples.toArray();
+    }
+  }
+
+  /**
+   * Write a partition file for the given job, using the Sampler provided.
+   * Queries the sampler for a sample keyset, sorts by the output key
+   * comparator, selects the keys for each rank, and writes to the destination
+   * returned from {@link TotalOrderPartitioner#getPartitionFile}.
+   */
+  @SuppressWarnings("unchecked") // getInputFormat, getOutputKeyComparator
+  public static <K,V> void writePartitionFile(Job job, Sampler<K,V> sampler) 
+      throws IOException, ClassNotFoundException, InterruptedException {
+    Configuration conf = job.getConfiguration();
+    final InputFormat inf = 
+        ReflectionUtils.newInstance(job.getInputFormatClass(), conf);
+    int numPartitions = job.getNumReduceTasks();
+    K[] samples = sampler.getSample(inf, job);
+    LOG.info("Using " + samples.length + " samples");
+    RawComparator<K> comparator =
+      (RawComparator<K>) job.getSortComparator();
+    Arrays.sort(samples, comparator);
+    Path dst = new Path(TotalOrderPartitioner.getPartitionFile(conf));
+    FileSystem fs = dst.getFileSystem(conf);
+    if (fs.exists(dst)) {
+      fs.delete(dst, false);
+    }
+    SequenceFile.Writer writer = SequenceFile.createWriter(fs, 
+      conf, dst, job.getMapOutputKeyClass(), NullWritable.class);
+    NullWritable nullValue = NullWritable.get();
+    float stepSize = samples.length / (float) numPartitions;
+    int last = -1;
+    for(int i = 1; i < numPartitions; ++i) {
+      int k = Math.round(stepSize * i);
+      while (last >= k && comparator.compare(samples[last], samples[k]) == 0) {
+        ++k;
+      }
+      writer.append(samples[k], nullValue);
+      last = k;
+    }
+    writer.close();
+  }
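+
+  /*
+   * Illustrative sketch only (not part of the original class): one way a job
+   * might be wired up to use this sampler together with TotalOrderPartitioner.
+   * The job name, partition file path, reducer count and key/value types below
+   * are assumptions for the example, not requirements of this API.
+   *
+   *   Job job = new Job(conf, "total-order-sample");
+   *   job.setNumReduceTasks(8);
+   *   job.setPartitionerClass(TotalOrderPartitioner.class);
+   *   TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
+   *       new Path("/tmp/_partitions.lst"));
+   *   InputSampler.Sampler<ImmutableBytesWritable, Result> sampler =
+   *       new InputSampler.RandomSampler<ImmutableBytesWritable, Result>(0.1, 10000, 10);
+   *   InputSampler.writePartitionFile(job, sampler);
+   */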
+
+  /**
+   * Driver for InputSampler from the command line.
+   * Configures a {@link Job} instance and calls {@link #writePartitionFile}.
+   */
+  public int run(String[] args) throws Exception {
+    Job job = new Job(getConf());
+    ArrayList<String> otherArgs = new ArrayList<String>();
+    Sampler<K,V> sampler = null;
+    for(int i=0; i < args.length; ++i) {
+      try {
+        if ("-r".equals(args[i])) {
+          job.setNumReduceTasks(Integer.parseInt(args[++i]));
+        } else if ("-inFormat".equals(args[i])) {
+          job.setInputFormatClass(
+              Class.forName(args[++i]).asSubclass(InputFormat.class));
+        } else if ("-keyClass".equals(args[i])) {
+          job.setMapOutputKeyClass(
+              Class.forName(args[++i]).asSubclass(WritableComparable.class));
+        } else if ("-splitSample".equals(args[i])) {
+          int numSamples = Integer.parseInt(args[++i]);
+          int maxSplits = Integer.parseInt(args[++i]);
+          if (0 >= maxSplits) maxSplits = Integer.MAX_VALUE;
+          sampler = new SplitSampler<K,V>(numSamples, maxSplits);
+        } else if ("-splitRandom".equals(args[i])) {
+          double pcnt = Double.parseDouble(args[++i]);
+          int numSamples = Integer.parseInt(args[++i]);
+          int maxSplits = Integer.parseInt(args[++i]);
+          if (0 >= maxSplits) maxSplits = Integer.MAX_VALUE;
+          sampler = new RandomSampler<K,V>(pcnt, numSamples, maxSplits);
+        } else if ("-splitInterval".equals(args[i])) {
+          double pcnt = Double.parseDouble(args[++i]);
+          int maxSplits = Integer.parseInt(args[++i]);
+          if (0 >= maxSplits) maxSplits = Integer.MAX_VALUE;
+          sampler = new IntervalSampler<K,V>(pcnt, maxSplits);
+        } else {
+          otherArgs.add(args[i]);
+        }
+      } catch (NumberFormatException except) {
+        System.out.println("ERROR: Integer expected instead of " + args[i]);
+        return printUsage();
+      } catch (ArrayIndexOutOfBoundsException except) {
+        System.out.println("ERROR: Required parameter missing from " +
+            args[i-1]);
+        return printUsage();
+      }
+    }
+    if (job.getNumReduceTasks() <= 1) {
+      System.err.println("Sampler requires more than one reducer");
+      return printUsage();
+    }
+    if (otherArgs.size() < 2) {
+      System.out.println("ERROR: Wrong number of parameters: ");
+      return printUsage();
+    }
+    if (null == sampler) {
+      sampler = new RandomSampler<K,V>(0.1, 10000, 10);
+    }
+
+    Path outf = new Path(otherArgs.remove(otherArgs.size() - 1));
+    TotalOrderPartitioner.setPartitionFile(getConf(), outf);
+    for (String s : otherArgs) {
+      FileInputFormat.addInputPath(job, new Path(s));
+    }
+    InputSampler.<K,V>writePartitionFile(job, sampler);
+
+    return 0;
+  }
+
+  public static void main(String[] args) throws Exception {
+    InputSampler<?,?> sampler = new InputSampler(new Configuration());
+    int res = ToolRunner.run(sampler, args);
+    System.exit(res);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/hadoopbackport/TotalOrderPartitioner.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/hadoopbackport/TotalOrderPartitioner.java
new file mode 100644
index 0000000..065e844
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/hadoopbackport/TotalOrderPartitioner.java
@@ -0,0 +1,401 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce.hadoopbackport;
+
+import java.io.IOException;
+import java.lang.reflect.Array;
+import java.util.ArrayList;
+import java.util.Arrays;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.BinaryComparable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Partitioner;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Partitioner effecting a total order by reading split points from
+ * an externally generated source.
+ * 
+ * This is an identical copy of o.a.h.mapreduce.lib.partition.TotalOrderPartitioner
+ * from Hadoop trunk at r910774.
+ */
+public class TotalOrderPartitioner<K extends WritableComparable<?>,V>
+    extends Partitioner<K,V> implements Configurable {
+
+  private Node partitions;
+  public static final String DEFAULT_PATH = "_partition.lst";
+  public static final String PARTITIONER_PATH = 
+    "mapreduce.totalorderpartitioner.path";
+  public static final String MAX_TRIE_DEPTH = 
+    "mapreduce.totalorderpartitioner.trie.maxdepth"; 
+  public static final String NATURAL_ORDER = 
+    "mapreduce.totalorderpartitioner.naturalorder";
+  Configuration conf;
+
+  public TotalOrderPartitioner() { }
+
+  /**
+   * Read in the partition file and build indexing data structures.
+   * If the keytype is {@link org.apache.hadoop.io.BinaryComparable} and
+   * <tt>mapreduce.totalorderpartitioner.naturalorder</tt> is not false, a trie
+   * of the first <tt>mapreduce.totalorderpartitioner.trie.maxdepth</tt> (default 200) bytes
+   * will be built. Otherwise, keys will be located using a binary search of
+   * the partition keyset using the {@link org.apache.hadoop.io.RawComparator}
+   * defined for this job. The input file must be sorted with the same
+   * comparator and contain {@link Job#getNumReduceTasks()} - 1 keys.
+   */
+  @SuppressWarnings("unchecked") // keytype from conf not static
+  public void setConf(Configuration conf) {
+    try {
+      this.conf = conf;
+      String parts = getPartitionFile(conf);
+      final Path partFile = new Path(parts);
+      final FileSystem fs = (DEFAULT_PATH.equals(parts))
+        ? FileSystem.getLocal(conf)     // assume in DistributedCache
+        : partFile.getFileSystem(conf);
+
+      Job job = new Job(conf);
+      Class<K> keyClass = (Class<K>)job.getMapOutputKeyClass();
+      K[] splitPoints = readPartitions(fs, partFile, keyClass, conf);
+      if (splitPoints.length != job.getNumReduceTasks() - 1) {
+        throw new IOException("Wrong number of partitions in keyset:"
+            + splitPoints.length);
+      }
+      RawComparator<K> comparator =
+        (RawComparator<K>) job.getSortComparator();
+      for (int i = 0; i < splitPoints.length - 1; ++i) {
+        if (comparator.compare(splitPoints[i], splitPoints[i+1]) >= 0) {
+          throw new IOException("Split points are out of order");
+        }
+      }
+      boolean natOrder =
+        conf.getBoolean(NATURAL_ORDER, true);
+      if (natOrder && BinaryComparable.class.isAssignableFrom(keyClass)) {
+        partitions = buildTrie((BinaryComparable[])splitPoints, 0,
+            splitPoints.length, new byte[0],
+            // Now that blocks of identical splitless trie nodes are 
+            // represented reentrantly, and we develop a leaf for any trie
+            // node with only one split point, the only reason for a depth
+            // limit is to avoid stack overflow or bloat in the pathological
+            // case where the split points are long and mostly look like bytes 
+            // iii...iixii...iii   .  Therefore, we make the default depth
+            // limit large but not huge.
+            conf.getInt(MAX_TRIE_DEPTH, 200));
+      } else {
+        partitions = new BinarySearchNode(splitPoints, comparator);
+      }
+    } catch (IOException e) {
+      throw new IllegalArgumentException("Can't read partitions file", e);
+    }
+  }
+
+  public Configuration getConf() {
+    return conf;
+  }
+  
+  // by construction, we know if our keytype
+  @SuppressWarnings("unchecked") // is memcmp-able and uses the trie
+  public int getPartition(K key, V value, int numPartitions) {
+    return partitions.findPartition(key);
+  }
+
+  /**
+   * Set the path to the SequenceFile storing the sorted partition keyset.
+   * It must be the case that for <tt>R</tt> reduces, there are <tt>R-1</tt>
+   * keys in the SequenceFile.
+   */
+  public static void setPartitionFile(Configuration conf, Path p) {
+    conf.set(PARTITIONER_PATH, p.toString());
+  }
+
+  /**
+   * Get the path to the SequenceFile storing the sorted partition keyset.
+   * @see #setPartitionFile(Configuration, Path)
+   */
+  public static String getPartitionFile(Configuration conf) {
+    return conf.get(PARTITIONER_PATH, DEFAULT_PATH);
+  }
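+
+  /*
+   * Illustrative sketch only (not part of the original class): the knobs this
+   * partitioner reads from the job configuration.  The path and values shown
+   * are example settings, not defaults beyond those visible in setConf above.
+   *
+   *   conf.set(TotalOrderPartitioner.PARTITIONER_PATH, "/user/hbase/_partition.lst");
+   *   conf.setBoolean(TotalOrderPartitioner.NATURAL_ORDER, true); // byte trie for BinaryComparable keys
+   *   conf.setInt(TotalOrderPartitioner.MAX_TRIE_DEPTH, 200);     // cap trie depth (200 is the code's default)
+   */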
+
+  /**
+   * Interface to the partitioner to locate a key in the partition keyset.
+   */
+  interface Node<T> {
+    /**
+     * Locate partition in keyset K, st [Ki..Ki+1) defines a partition,
+     * with implicit K0 = -inf, Kn = +inf, and |K| = #partitions - 1.
+     */
+    int findPartition(T key);
+  }
+
+  /**
+   * Base class for trie nodes. If the keytype is memcomp-able, this builds
+   * tries of the first <tt>total.order.partitioner.max.trie.depth</tt>
+   * bytes.
+   */
+  static abstract class TrieNode implements Node<BinaryComparable> {
+    private final int level;
+    TrieNode(int level) {
+      this.level = level;
+    }
+    int getLevel() {
+      return level;
+    }
+  }
+
+  /**
+   * For types that are not {@link org.apache.hadoop.io.BinaryComparable} or
+   * where disabled by <tt>total.order.partitioner.natural.order</tt>,
+   * search the partition keyset with a binary search.
+   */
+  class BinarySearchNode implements Node<K> {
+    private final K[] splitPoints;
+    private final RawComparator<K> comparator;
+    BinarySearchNode(K[] splitPoints, RawComparator<K> comparator) {
+      this.splitPoints = splitPoints;
+      this.comparator = comparator;
+    }
+    public int findPartition(K key) {
+      final int pos = Arrays.binarySearch(splitPoints, key, comparator) + 1;
+      return (pos < 0) ? -pos : pos;
+    }
+  }
+
+  /**
+   * An inner trie node that contains 256 children based on the next
+   * character.
+   */
+  class InnerTrieNode extends TrieNode {
+    private TrieNode[] child = new TrieNode[256];
+
+    InnerTrieNode(int level) {
+      super(level);
+    }
+    public int findPartition(BinaryComparable key) {
+      int level = getLevel();
+      if (key.getLength() <= level) {
+        return child[0].findPartition(key);
+      }
+      return child[0xFF & key.getBytes()[level]].findPartition(key);
+    }
+  }
+  
+  /**
+   * @param level        the tree depth at this node
+   * @param splitPoints  the full split point vector, which holds
+   *                     the split point or points this leaf node
+   *                     should contain
+   * @param lower        first INcluded element of splitPoints
+   * @param upper        first EXcluded element of splitPoints
+   * @return  a leaf node.  They come in three kinds: no split points 
+   *          [and the findParttion returns a canned index], one split
+   *          point [and we compare with a single comparand], or more
+   *          than one [and we do a binary search].  The last case is
+   *          rare.
+   */
+  private TrieNode LeafTrieNodeFactory
+             (int level, BinaryComparable[] splitPoints, int lower, int upper) {
+      switch (upper - lower) {
+      case 0:
+          return new UnsplitTrieNode(level, lower);
+          
+      case 1:
+          return new SinglySplitTrieNode(level, splitPoints, lower);
+          
+      default:
+          return new LeafTrieNode(level, splitPoints, lower, upper);
+      }
+  }
+
+  /**
+   * A leaf trie node that scans for the key between lower..upper.
+   * 
+   * We don't generate many of these now, since we usually continue trie-ing 
+   * when more than one split point remains at this level, and we make different
+   * objects for nodes with 0 or 1 split point.
+   */
+  private class LeafTrieNode extends TrieNode {
+    final int lower;
+    final int upper;
+    final BinaryComparable[] splitPoints;
+    LeafTrieNode(int level, BinaryComparable[] splitPoints, int lower, int upper) {
+      super(level);
+      this.lower = lower;
+      this.upper = upper;
+      this.splitPoints = splitPoints;
+    }
+    public int findPartition(BinaryComparable key) {
+      final int pos = Arrays.binarySearch(splitPoints, lower, upper, key) + 1;
+      return (pos < 0) ? -pos : pos;
+    }
+  }
+  
+  private class UnsplitTrieNode extends TrieNode {
+      final int result;
+      
+      UnsplitTrieNode(int level, int value) {
+          super(level);
+          this.result = value;
+      }
+      
+      public int findPartition(BinaryComparable key) {
+          return result;
+      }
+  }
+  
+  private class SinglySplitTrieNode extends TrieNode {
+      final int               lower;
+      final BinaryComparable  mySplitPoint;
+      
+      SinglySplitTrieNode(int level, BinaryComparable[] splitPoints, int lower) {
+          super(level);
+          this.lower = lower;
+          this.mySplitPoint = splitPoints[lower];
+      }
+      
+      public int findPartition(BinaryComparable key) {
+          return lower + (key.compareTo(mySplitPoint) < 0 ? 0 : 1);
+      }
+  }
+
+
+  /**
+   * Read the cut points from the given sequence file.
+   * @param fs The file system
+   * @param p The path to read
+   * @param keyClass The map output key class
+   * @param conf The job configuration
+   * @throws IOException
+   */
+                                 // matching key types enforced by passing in
+  @SuppressWarnings("unchecked") // map output key class
+  private K[] readPartitions(FileSystem fs, Path p, Class<K> keyClass,
+      Configuration conf) throws IOException {
+    SequenceFile.Reader reader = new SequenceFile.Reader(fs, p, conf);
+    ArrayList<K> parts = new ArrayList<K>();
+    K key = ReflectionUtils.newInstance(keyClass, conf);
+    NullWritable value = NullWritable.get();
+    while (reader.next(key, value)) {
+      parts.add(key);
+      key = ReflectionUtils.newInstance(keyClass, conf);
+    }
+    reader.close();
+    return parts.toArray((K[])Array.newInstance(keyClass, parts.size()));
+  }
+  
+  /**
+   * 
+   * This object carries a reusable, unsplit trie node, if one exists.  Two
+   * adjacent trie node slots that contain no split points can be filled with
+   * the same trie node, even if they are not on the same level.  See
+   * buildTrieRec, below.
+   *
+   */  
+  private class CarriedTrieNodeRef
+  {
+      TrieNode   content;
+      
+      CarriedTrieNodeRef() {
+          content = null;
+      }
+  }
+
+  
+  /**
+   * Given a sorted set of cut points, build a trie that will find the correct
+   * partition quickly.
+   * @param splits the list of cut points
+   * @param lower the lower bound of partitions 0..numPartitions-1
+   * @param upper the upper bound of partitions 0..numPartitions-1
+   * @param prefix the prefix that we have already checked against
+   * @param maxDepth the maximum depth we will build a trie for
+   * @return the trie node that will divide the splits correctly
+   */
+  private TrieNode buildTrie(BinaryComparable[] splits, int lower,
+          int upper, byte[] prefix, int maxDepth) {
+      return buildTrieRec
+               (splits, lower, upper, prefix, maxDepth, new CarriedTrieNodeRef());
+  }
+  
+  /**
+   * This is the core of buildTrie.  The interface, and stub, above, just adds
+   * an empty CarriedTrieNodeRef.  
+   * 
+   * We build trie nodes in depth first order, which is also in key space
+   * order.  Every leaf node is referenced as a slot in a parent internal
+   * node.  If two adjacent slots [in the DFO] hold leaf nodes that have
+   * no split point, then they are not separated by a split point either, 
+   * because there's no place in key space for that split point to exist.
+   * 
+   * When that happens, the leaf nodes would be semantically identical, and
+   * we reuse the object.  A single CarriedTrieNodeRef "ref" lives for the 
+   * duration of the tree-walk.  ref carries a potentially reusable, unsplit
+   * leaf node for such reuse until a leaf node with a split arises, which 
+   * breaks the chain until we need to make a new unsplit leaf node.
+   * 
+   * Note that this use of CarriedTrieNodeRef means that if this code is
+   * modified in any way, internal nodes must still make or fill in their
+   * subnodes in key space order.
+   */
+  private TrieNode buildTrieRec(BinaryComparable[] splits, int lower,
+      int upper, byte[] prefix, int maxDepth, CarriedTrieNodeRef ref) {
+    final int depth = prefix.length;
+    // We generate leaves for a single split point as well as for 
+    // no split points.
+    if (depth >= maxDepth || lower >= upper - 1) {
+        // If we have two consecutive requests for an unsplit trie node, we
+        // can deliver the same one the second time.
+        if (lower == upper && ref.content != null) {
+            return ref.content;
+        }
+        TrieNode  result = LeafTrieNodeFactory(depth, splits, lower, upper);
+        ref.content = lower == upper ? result : null;
+        return result;
+    }
+    InnerTrieNode result = new InnerTrieNode(depth);
+    byte[] trial = Arrays.copyOf(prefix, prefix.length + 1);
+    // append an extra byte on to the prefix
+    int         currentBound = lower;
+    for(int ch = 0; ch < 0xFF; ++ch) {
+      trial[depth] = (byte) (ch + 1);
+      lower = currentBound;
+      while (currentBound < upper) {
+        if (splits[currentBound].compareTo(trial, 0, trial.length) >= 0) {
+          break;
+        }
+        currentBound += 1;
+      }
+      trial[depth] = (byte) ch;
+      result.child[0xFF & ch]
+                   = buildTrieRec(splits, lower, currentBound, trial, maxDepth, ref);
+    }
+    // pick up the rest
+    trial[depth] = (byte)0xFF;
+    result.child[0xFF] 
+                 = buildTrieRec(splits, lower, currentBound, trial, maxDepth, ref);
+    
+    return result;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/package-info.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/package-info.java
new file mode 100644
index 0000000..affb940
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/package-info.java
@@ -0,0 +1,164 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+Provides HBase <a href="http://wiki.apache.org/hadoop/HadoopMapReduce">MapReduce</a>
+Input/OutputFormats, a table indexing MapReduce job, and utility methods.
+
+<h2>Table of Contents</h2>
+<ul>
+<li><a href="#classpath">HBase, MapReduce and the CLASSPATH</a></li>
+<li><a href="#driver">Bundled HBase MapReduce Jobs</a></li>
+<li><a href="#sink">HBase as MapReduce job data source and sink</a></li>
+<li><a href="#bulk">Bulk Import writing HFiles directly</a></li>
+<li><a href="#examples">Example Code</a></li>
+</ul>
+
+<h2><a name="classpath">HBase, MapReduce and the CLASSPATH</a></h2>
+
+<p>MapReduce jobs deployed to a MapReduce cluster do not by default have access
+to the HBase configuration under <code>$HBASE_CONF_DIR</code> nor to HBase classes.
+You could add <code>hbase-site.xml</code> to
+<code>$HADOOP_HOME/conf</code> and add
+HBase jars to the <code>$HADOOP_HOME/lib</code> and copy these
+changes across your cluster (or edit conf/hadoop-env.sh and add them to the
+<code>HADOOP_CLASSPATH</code> variable) but this will pollute your
+hadoop install with HBase references; it's also obnoxious, requiring a restart of
+the hadoop cluster before it'll notice your HBase additions.</p>
+
+<p>As of 0.90.x, HBase will just add its dependency jars to the job
+configuration; the dependencies just need to be available on the local
+<code>CLASSPATH</code>.  For example, to run the bundled HBase
+{@link org.apache.hadoop.hbase.mapreduce.RowCounter} mapreduce job against a table named <code>usertable</code>,
+type:
+
+<blockquote><pre>
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0.jar rowcounter usertable
+</pre></blockquote>
+
+Expand <code>$HBASE_HOME</code> and <code>$HADOOP_HOME</code> in the above
+appropriately to suit your local environment.  The content of <code>HADOOP_CLASSPATH</code>
+is set to the HBase <code>CLASSPATH</code> via backticking the command
+<code>${HBASE_HOME}/bin/hbase classpath</code>.
+
+<p>When the above runs, internally, the HBase jar finds its zookeeper and
+<a href="http://code.google.com/p/guava-libraries/">guava</a>,
+etc., dependencies on the passed
+<code>HADOOP_CLASSPATH</code> and adds the found jars to the mapreduce
+job configuration. See the source at
+<code>TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job)</code>
+for how this is done.
+</p>
+<p>The above may not work if you are running your HBase from its build directory;
+i.e. you've done <code>$ mvn test install</code> at
+<code>${HBASE_HOME}</code> and you are now
+trying to use this build in your mapreduce job.  If you get
+<blockquote><pre>java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper
+...
+</pre></blockquote>
+exception thrown, try doing the following:
+<blockquote><pre>
+$ HADOOP_CLASSPATH=${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/target/hbase-0.90.0-SNAPSHOT.jar rowcounter usertable
+</pre></blockquote>
+Notice how we preface the backtick invocation setting
+<code>HADOOP_CLASSPATH</code> with reference to the built HBase jar over in
+the <code>target</code> directory.
+</p>
+
+<h2><a name="driver">Bundled HBase MapReduce Jobs</a></h2>
+<p>The HBase jar also serves as a Driver for some bundled mapreduce jobs. To
+learn about the bundled mapreduce jobs, run:
+<blockquote><pre>
+$ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-0.90.0-SNAPSHOT.jar
+An example program must be given as the first argument.
+Valid program names are:
+  copytable: Export a table from local cluster to peer cluster
+  completebulkload: Complete a bulk data load.
+  export: Write table data to HDFS.
+  import: Import data written by Export.
+  importtsv: Import data in TSV format.
+  rowcounter: Count rows in HBase table
+</pre></blockquote>
+
+<h2><a name="sink">HBase as MapReduce job data source and sink</a></h2>
+
+<p>HBase can be used as a data source, {@link org.apache.hadoop.hbase.mapreduce.TableInputFormat TableInputFormat},
+and data sink, {@link org.apache.hadoop.hbase.mapreduce.TableOutputFormat TableOutputFormat}
+or {@link org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat MultiTableOutputFormat},
+for MapReduce jobs.
+Writing MapReduce jobs that read or write HBase, you'll probably want to subclass
+{@link org.apache.hadoop.hbase.mapreduce.TableMapper TableMapper} and/or
+{@link org.apache.hadoop.hbase.mapreduce.TableReducer TableReducer}.  See the do-nothing
+pass-through classes {@link org.apache.hadoop.hbase.mapreduce.IdentityTableMapper IdentityTableMapper} and
+{@link org.apache.hadoop.hbase.mapreduce.IdentityTableReducer IdentityTableReducer} for basic usage.  For a more
+involved example, see {@link org.apache.hadoop.hbase.mapreduce.RowCounter}
+or review the <code>org.apache.hadoop.hbase.mapreduce.TestTableMapReduce</code> unit test.
+</p>
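+
+<p>As a rough sketch (the class, table and job names here are made up for the
+example, not part of the API), a TableMapper subclass and its job setup could
+look like:
+<blockquote><pre>
+public class MyTableMapper extends TableMapper&lt;ImmutableBytesWritable, Put&gt; {
+  public void map(ImmutableBytesWritable row, Result value, Context context)
+  throws IOException, InterruptedException {
+    // examine the Result; emit a Put keyed by the row if it should be written back
+  }
+}
+
+Job job = new Job(conf, "mytable-job");
+Scan scan = new Scan();
+TableMapReduceUtil.initTableMapperJob("mytable", scan, MyTableMapper.class,
+  ImmutableBytesWritable.class, Put.class, job);
+</pre></blockquote>
+</p>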
+
+<p>Running mapreduce jobs that have HBase as source or sink, you'll need to
+specify source/sink table and column names in your configuration.</p>
+
+<p>Reading from HBase, the TableInputFormat asks HBase for the list of
+regions and makes a map-per-region or <code>mapred.map.tasks</code> maps,
+whichever is smaller (if your job only has two maps, up <code>mapred.map.tasks</code>
+to a number &gt; the number of regions). Maps will run on the adjacent TaskTracker
+if you are running a TaskTracker and RegionServer per node.
+When writing, it may make sense to avoid the reduce step and write back into
+HBase from inside your map. You'd do this when your job does not need the sort
+and collation that mapreduce does on the map-emitted data; on insert,
+HBase 'sorts' so there is no point double-sorting (and shuffling data around
+your mapreduce cluster) unless you need to. If you do not need the reduce,
+you might have your map emit counts of records processed, just so the
+framework's report at the end of your job has meaning, or set the number of
+reduces to zero and use TableOutputFormat. See the example code
+below. If running the reduce step makes sense in your case, it's usually better
+to have lots of reducers so load is spread across the HBase cluster.</p>
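+
+<p>A minimal sketch of the reduce-less write-back described above, reusing the
+hypothetical MyTableMapper from the earlier sketch (the table names are again
+only illustrative):
+<blockquote><pre>
+Job job = new Job(conf, "map-only-write");
+TableMapReduceUtil.initTableMapperJob("source_table", new Scan(),
+  MyTableMapper.class, ImmutableBytesWritable.class, Put.class, job);
+TableMapReduceUtil.initTableReducerJob("target_table", null, job);
+job.setNumReduceTasks(0);
+</pre></blockquote>
+</p>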
+
+<p>There is also a new HBase partitioner that will run as many reducers as
+currently existing regions.  The
+{@link org.apache.hadoop.hbase.mapreduce.HRegionPartitioner} is suitable
+when your table is large and your upload is not such that it will greatly
+alter the number of existing regions when done; otherwise use the default
+partitioner.
+</p>
+
+<h2><a name="bulk">Bulk import writing HFiles directly</a></h2>
+<p>If importing into a new table, it's possible to bypass the HBase API
+and write your content directly to the filesystem properly formatted as
+HBase data files (HFiles).  Your import will run faster, perhaps an order of
+magnitude faster if not more.  For more on how this mechanism works, see
+<a href="http://hbase.apache.org/docs/current/bulk-loads.html">Bulk Loads</code>
+documentation.
+</p>
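+
+<p>A rough sketch of the usual client-side wiring for this path (the table name
+and output directory below are placeholders, and a mapper emitting
+ImmutableBytesWritable/Put pairs is assumed to be configured as well):
+<blockquote><pre>
+Job job = new Job(conf, "bulk-import");
+HTable table = new HTable(conf, "target_table");
+HFileOutputFormat.configureIncrementalLoad(job, table);
+FileOutputFormat.setOutputPath(job, new Path("/tmp/bulk_output"));
+</pre></blockquote>
+The generated HFiles are then moved into the table with the bundled
+<code>completebulkload</code> job listed above.
+</p>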
+
+<h2><a name="examples">Example Code</a></h2>
+<h3>Sample Row Counter</h3>
+<p>See {@link org.apache.hadoop.hbase.mapreduce.RowCounter}.  This job uses
+{@link org.apache.hadoop.hbase.mapreduce.TableInputFormat TableInputFormat} and
+does a count of all rows in the specified table.
+You should be able to run
+it by doing: <code>% ./bin/hadoop jar hbase-X.X.X.jar</code>.  This will invoke
+the hbase MapReduce Driver class.  Select 'rowcounter' from the choice of jobs
+offered. This will emit rowcounter 'usage'.  Specify tablename, column to count
+and output directory.  You may need to add the hbase conf directory to <code>$HADOOP_HOME/conf/hadoop-env.sh#HADOOP_CLASSPATH</code>
+so the rowcounter gets pointed at the right hbase cluster (or, build a new jar
+with an appropriate hbase-site.xml built into your job jar).
+</p>
+*/
+package org.apache.hadoop.hbase.mapreduce;
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
new file mode 100644
index 0000000..ed88bfa
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java
@@ -0,0 +1,283 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce.replication;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.mapreduce.TableMapper;
+import org.apache.hadoop.hbase.replication.ReplicationPeer;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * This map-only job compares the data from a local table with a remote one.
+ * Every cell is compared and must have exactly the same keys (even timestamp)
+ * as well as same value. It is possible to restrict the job by time range and
+ * families. The peer id that's provided must match the one given when the
+ * replication stream was setup.
+ * <p>
+ * Two counters are provided, Verifier.Counters.GOODROWS and BADROWS. The reason
+ * why a row is different is shown in the map's log.
+ */
+public class VerifyReplication {
+
+  private static final Log LOG =
+      LogFactory.getLog(VerifyReplication.class);
+
+  public final static String NAME = "verifyrep";
+  static long startTime = 0;
+  static long endTime = 0;
+  static String tableName = null;
+  static String families = null;
+  static String peerId = null;
+
+  /**
+   * Map-only comparator for 2 tables
+   */
+  public static class Verifier
+      extends TableMapper<ImmutableBytesWritable, Put> {
+
+    public static enum Counters {GOODROWS, BADROWS}
+
+    private ResultScanner replicatedScanner;
+
+    /**
+     * Map method that compares every scanned row with the equivalent from
+     * a distant cluster.
+     * @param row  The current table row key.
+     * @param value  The columns.
+     * @param context  The current context.
+     * @throws IOException When something is broken with the data.
+     */
+    @Override
+    public void map(ImmutableBytesWritable row, Result value,
+                    Context context)
+        throws IOException {
+      if (replicatedScanner == null) {
+        Configuration conf = context.getConfiguration();
+        Scan scan = new Scan();
+        scan.setCaching(conf.getInt(TableInputFormat.SCAN_CACHEDROWS, 1));
+        long startTime = conf.getLong(NAME + ".startTime", 0);
+        long endTime = conf.getLong(NAME + ".endTime", 0);
+        String families = conf.get(NAME + ".families", null);
+        if(families != null) {
+          String[] fams = families.split(",");
+          for(String fam : fams) {
+            scan.addFamily(Bytes.toBytes(fam));
+          }
+        }
+        if (startTime != 0) {
+          scan.setTimeRange(startTime,
+              endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
+        }
+        try {
+          HConnection conn = HConnectionManager.getConnection(conf);
+          ReplicationZookeeper zk = new ReplicationZookeeper(conn, conf,
+              conn.getZooKeeperWatcher());
+          ReplicationPeer peer = zk.getPeer(conf.get(NAME+".peerId"));
+          HTable replicatedTable = new HTable(peer.getConfiguration(),
+              conf.get(NAME+".tableName"));
+          scan.setStartRow(value.getRow());
+          replicatedScanner = replicatedTable.getScanner(scan);
+        } catch (KeeperException e) {
+          throw new IOException("Got a ZK exception", e);
+        }
+      }
+      Result res = replicatedScanner.next();
+      try {
+        Result.compareResults(value, res);
+        context.getCounter(Counters.GOODROWS).increment(1);
+      } catch (Exception e) {
+        LOG.warn("Bad row", e);
+        context.getCounter(Counters.BADROWS).increment(1);
+      }
+    }
+
+    protected void cleanup(Context context) {
+      replicatedScanner.close();
+    }
+  }
+
+  /**
+   * Sets up the actual job.
+   *
+   * @param conf  The current configuration.
+   * @param args  The command line parameters.
+   * @return The newly created job.
+   * @throws java.io.IOException When setting up the job fails.
+   */
+  public static Job createSubmittableJob(Configuration conf, String[] args)
+  throws IOException {
+    if (!doCommandLine(args)) {
+      return null;
+    }
+    if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY, false)) {
+      throw new IOException("Replication needs to be enabled to verify it.");
+    }
+    try {
+      HConnection conn = HConnectionManager.getConnection(conf);
+      ReplicationZookeeper zk = new ReplicationZookeeper(conn, conf,
+          conn.getZooKeeperWatcher());
+      // Just verifying that we can connect
+      ReplicationPeer peer = zk.getPeer(peerId);
+      if (peer == null) {
+        throw new IOException("Couldn't get access to the slave cluster," +
+            "please see the log");
+      }
+    } catch (KeeperException ex) {
+      throw new IOException("Couldn't get access to the slave cluster" +
+          " because: ", ex);
+    }
+    conf.set(NAME+".peerId", peerId);
+    conf.set(NAME+".tableName", tableName);
+    conf.setLong(NAME+".startTime", startTime);
+    conf.setLong(NAME+".endTime", endTime);
+    if (families != null) {
+      conf.set(NAME+".families", families);
+    }
+    Job job = new Job(conf, NAME + "_" + tableName);
+    job.setJarByClass(VerifyReplication.class);
+
+    Scan scan = new Scan();
+    if (startTime != 0) {
+      scan.setTimeRange(startTime,
+          endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime);
+    }
+    if(families != null) {
+      String[] fams = families.split(",");
+      for(String fam : fams) {
+        scan.addFamily(Bytes.toBytes(fam));
+      }
+    }
+    TableMapReduceUtil.initTableMapperJob(tableName, scan,
+        Verifier.class, null, null, job);
+    job.setOutputFormatClass(NullOutputFormat.class);
+    job.setNumReduceTasks(0);
+    return job;
+  }
+
+  private static boolean doCommandLine(final String[] args) {
+    if (args.length < 2) {
+      printUsage(null);
+      return false;
+    }
+    try {
+      for (int i = 0; i < args.length; i++) {
+        String cmd = args[i];
+        if (cmd.equals("-h") || cmd.startsWith("--h")) {
+          printUsage(null);
+          return false;
+        }
+
+        final String startTimeArgKey = "--starttime=";
+        if (cmd.startsWith(startTimeArgKey)) {
+          startTime = Long.parseLong(cmd.substring(startTimeArgKey.length()));
+          continue;
+        }
+
+        final String endTimeArgKey = "--endtime=";
+        if (cmd.startsWith(endTimeArgKey)) {
+          endTime = Long.parseLong(cmd.substring(endTimeArgKey.length()));
+          continue;
+        }
+
+        final String familiesArgKey = "--families=";
+        if (cmd.startsWith(familiesArgKey)) {
+          families = cmd.substring(familiesArgKey.length());
+          continue;
+        }
+
+        if (i == args.length-2) {
+          peerId = cmd;
+        }
+
+        if (i == args.length-1) {
+          tableName = cmd;
+        }
+      }
+    } catch (Exception e) {
+      e.printStackTrace();
+      printUsage("Can't start because " + e.getMessage());
+      return false;
+    }
+    return true;
+  }
+
+  /*
+   * @param errorMsg Error message.  Can be null.
+   */
+  private static void printUsage(final String errorMsg) {
+    if (errorMsg != null && errorMsg.length() > 0) {
+      System.err.println("ERROR: " + errorMsg);
+    }
+    System.err.println("Usage: verifyrep [--starttime=X]" +
+        " [--stoptime=Y] [--families=A] <peerid> <tablename>");
+    System.err.println();
+    System.err.println("Options:");
+    System.err.println(" starttime    beginning of the time range");
+    System.err.println("              without endtime means from starttime to forever");
+    System.err.println(" stoptime     end of the time range");
+    System.err.println(" families     comma-separated list of families to copy");
+    System.err.println();
+    System.err.println("Args:");
+    System.err.println(" peerid       Id of the peer used for verification, must match the one given for replication");
+    System.err.println(" tablename    Name of the table to verify");
+    System.err.println();
+    System.err.println("Examples:");
+    System.err.println(" To verify the data replicated from TestTable for a 1 hour window with peer #5 ");
+    System.err.println(" $ bin/hbase " +
+        "org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication" +
+        " --starttime=1265875194289 --stoptime=1265878794289 5 TestTable ");
+  }
+
+  /**
+   * Main entry point.
+   *
+   * @param args  The command line parameters.
+   * @throws Exception When running the job fails.
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    Job job = createSubmittableJob(conf, args);
+    if (job != null) {
+      System.exit(job.waitForCompletion(true) ? 0 : 1);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
new file mode 100644
index 0000000..66a3345
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/ActiveMasterManager.java
@@ -0,0 +1,191 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Handles everything on master-side related to master election.
+ *
+ * <p>Listens and responds to ZooKeeper notifications on the master znode,
+ * both <code>nodeCreated</code> and <code>nodeDeleted</code>.
+ *
+ * <p>Contains blocking methods which will hold up backup masters, waiting
+ * for the active master to fail.
+ *
+ * <p>This class is instantiated in the HMaster constructor and the method
+ * {@link #blockUntilBecomingActiveMaster()} is called to wait until becoming
+ * the active master of the cluster.
+ */
+class ActiveMasterManager extends ZooKeeperListener {
+  private static final Log LOG = LogFactory.getLog(ActiveMasterManager.class);
+
+  final AtomicBoolean clusterHasActiveMaster = new AtomicBoolean(false);
+
+  private final HServerAddress address;
+  private final Server master;
+
+  ActiveMasterManager(ZooKeeperWatcher watcher, HServerAddress address,
+      Server master) {
+    super(watcher);
+    this.address = address;
+    this.master = master;
+  }
+
+  @Override
+  public void nodeCreated(String path) {
+    if(path.equals(watcher.masterAddressZNode) && !master.isStopped()) {
+      handleMasterNodeChange();
+    }
+  }
+
+  @Override
+  public void nodeDeleted(String path) {
+    if(path.equals(watcher.masterAddressZNode) && !master.isStopped()) {
+      handleMasterNodeChange();
+    }
+  }
+
+  /**
+   * Handle a change in the master node.  Doesn't matter whether this was called
+   * from a nodeCreated or nodeDeleted event because there are no guarantees
+   * that the current state of the master node matches the event at the time of
+   * our next ZK request.
+   *
+   * <p>Uses the watchAndCheckExists method which watches the master address node
+   * regardless of whether it exists or not.  If it does exist (there is an
+   * active master), it returns true.  Otherwise it returns false.
+   *
+   * <p>A watcher is set which guarantees that this method will get called again if
+   * there is another change in the master node.
+   */
+  private void handleMasterNodeChange() {
+    // Watch the node and check if it exists.
+    try {
+      synchronized(clusterHasActiveMaster) {
+        if(ZKUtil.watchAndCheckExists(watcher, watcher.masterAddressZNode)) {
+          // A master node exists, there is an active master
+          LOG.debug("A master is now available");
+          clusterHasActiveMaster.set(true);
+        } else {
+          // Node is no longer there, cluster does not have an active master
+          LOG.debug("No master available. Notifying waiting threads");
+          clusterHasActiveMaster.set(false);
+          // Notify any thread waiting to become the active master
+          clusterHasActiveMaster.notifyAll();
+        }
+      }
+    } catch (KeeperException ke) {
+      master.abort("Received an unexpected KeeperException, aborting", ke);
+    }
+  }
+
+  /**
+   * Block until becoming the active master.
+   *
+   * Method blocks until there is not another active master and our attempt
+   * to become the new active master is successful.
+   *
+   * This also makes sure that we are watching the master znode so will be
+   * notified if another master dies.
+   * @return True if there was no issue becoming the active master; false if
+   * another master was already running or some other problem occurred
+   * (zookeeper error, or the stop flag has been set on this Master).
+   */
+  boolean blockUntilBecomingActiveMaster() {
+    boolean cleanSetOfActiveMaster = true;
+    // Try to become the active master, watch if there is another master
+    try {
+      if (ZKUtil.setAddressAndWatch(this.watcher,
+          this.watcher.masterAddressZNode, this.address)) {
+        // We are the master, return
+        this.clusterHasActiveMaster.set(true);
+        LOG.info("Master=" + this.address);
+        return cleanSetOfActiveMaster;
+      }
+      cleanSetOfActiveMaster = false;
+
+      // There is another active master running elsewhere or this is a restart
+      // and the master ephemeral node has not expired yet.
+      this.clusterHasActiveMaster.set(true);
+      HServerAddress currentMaster =
+        ZKUtil.getDataAsAddress(this.watcher, this.watcher.masterAddressZNode);
+      if (currentMaster != null && currentMaster.equals(this.address)) {
+        LOG.info("Current master has this master's address, " + currentMaster +
+          "; master was restarted?  Waiting on znode to expire...");
+        // Hurry along the expiration of the znode.
+        ZKUtil.deleteNode(this.watcher, this.watcher.masterAddressZNode);
+      } else {
+        LOG.info("Another master is the active master, " + currentMaster +
+          "; waiting to become the next active master");
+      }
+    } catch (KeeperException ke) {
+      master.abort("Received an unexpected KeeperException, aborting", ke);
+      return false;
+    }
+    synchronized (this.clusterHasActiveMaster) {
+      while (this.clusterHasActiveMaster.get() && !this.master.isStopped()) {
+        try {
+          this.clusterHasActiveMaster.wait();
+        } catch (InterruptedException e) {
+          // We expect to be interrupted when a master dies, will fall out if so
+          LOG.debug("Interrupted waiting for master to die", e);
+        }
+      }
+      if (this.master.isStopped()) {
+        return cleanSetOfActiveMaster;
+      }
+      // Try to become active master again now that there is no active master
+      blockUntilBecomingActiveMaster();
+    }
+    return cleanSetOfActiveMaster;
+  }
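+  /*
+   * Illustrative sketch only (not part of the original class): a hypothetical
+   * master startup routine might drive the method above roughly like this,
+   * where "startServices" is an assumed placeholder for whatever work follows
+   * becoming the active master:
+   *
+   *   boolean becameActive = blockUntilBecomingActiveMaster();
+   *   if (becameActive && !master.isStopped()) {
+   *     // We are the active master; continue with startup.
+   *     startServices();
+   *   }
+   */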
+
+  /**
+   * @return True if cluster has an active master.
+   */
+  public boolean isActiveMaster() {
+    return this.clusterHasActiveMaster.get();
+  }
+
+  public void stop() {
+    try {
+      // If our address is in ZK, delete it on our way out
+      HServerAddress zkAddress =
+        ZKUtil.getDataAsAddress(watcher, watcher.masterAddressZNode);
+      // TODO: redo this to make it atomic (only added for tests)
+      if(zkAddress != null &&
+          zkAddress.equals(address)) {
+        ZKUtil.deleteNode(watcher, watcher.masterAddressZNode);
+      }
+    } catch (KeeperException e) {
+      LOG.error(this.watcher.prefix("Error deleting our own master address node"), e);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
new file mode 100644
index 0000000..2b345fb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
@@ -0,0 +1,1910 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.EOFException;
+import java.net.ConnectException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.catalog.RootLocationEditor;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.executor.RegionTransitionData;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.master.LoadBalancer.RegionPlan;
+import org.apache.hadoop.hbase.master.handler.ClosedRegionHandler;
+import org.apache.hadoop.hbase.master.handler.OpenedRegionHandler;
+import org.apache.hadoop.hbase.master.handler.ServerShutdownHandler;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.hadoop.hbase.zookeeper.ZKTable;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil.NodeAndData;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.zookeeper.AsyncCallback;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.NoNodeException;
+import org.apache.zookeeper.data.Stat;
+
+/**
+ * Manages and performs region assignment.
+ * <p>
+ * Monitors ZooKeeper for events related to regions in transition.
+ * <p>
+ * Handles existing regions in transition during master failover.
+ */
+public class AssignmentManager extends ZooKeeperListener {
+  private static final Log LOG = LogFactory.getLog(AssignmentManager.class);
+
+  protected Server master;
+
+  private ServerManager serverManager;
+
+  private CatalogTracker catalogTracker;
+
+  private TimeoutMonitor timeoutMonitor;
+
+  /*
+   * Maximum times we recurse an assignment.  See below in {@link #assign()}.
+   */
+  private final int maximumAssignmentAttempts;
+
+  /**
+   * Regions currently in transition.  Map of encoded region names to the master
+   * in-memory state for that region.
+   */
+  final ConcurrentSkipListMap<String, RegionState> regionsInTransition =
+    new ConcurrentSkipListMap<String, RegionState>();
+
+  /** Plans for region movement. Key is the encoded version of a region name. */
+  // TODO: When do plans get cleaned out?  Ever? In server open and in server
+  // shutdown processing -- St.Ack
+  // All access to this Map must be synchronized.
+  final NavigableMap<String, RegionPlan> regionPlans =
+    new TreeMap<String, RegionPlan>();
+
+  private final ZKTable zkTable;
+
+  /**
+   * Server to regions assignment map.
+   * Contains the set of regions currently assigned to a given server.
+   * This Map and {@link #regions} are tied.  Always update this in tandem
+   * with the other under a lock on {@link #regions}
+   * @see #regions
+   */
+  private final NavigableMap<HServerInfo, List<HRegionInfo>> servers =
+    new TreeMap<HServerInfo, List<HRegionInfo>>();
+
+  /**
+   * Region to server assignment map.
+   * Contains the server a given region is currently assigned to.
+   * This Map and {@link #servers} are tied.  Always update this in tandem
+   * with the other under a lock on {@link #regions}
+   * @see #servers
+   */
+  private final SortedMap<HRegionInfo,HServerInfo> regions =
+    new TreeMap<HRegionInfo,HServerInfo>();
+
+  private final ExecutorService executorService;
+
+  /**
+   * Constructs a new assignment manager.
+   *
+   * @param master
+   * @param serverManager
+   * @param catalogTracker
+   * @param service
+   * @throws KeeperException
+   */
+  public AssignmentManager(Server master, ServerManager serverManager,
+      CatalogTracker catalogTracker, final ExecutorService service)
+  throws KeeperException {
+    super(master.getZooKeeper());
+    this.master = master;
+    this.serverManager = serverManager;
+    this.catalogTracker = catalogTracker;
+    this.executorService = service;
+    Configuration conf = master.getConfiguration();
+    this.timeoutMonitor = new TimeoutMonitor(
+      conf.getInt("hbase.master.assignment.timeoutmonitor.period", 10000),
+      master,
+      conf.getInt("hbase.master.assignment.timeoutmonitor.timeout", 30000));
+    Threads.setDaemonThreadRunning(timeoutMonitor,
+      master.getServerName() + ".timeoutMonitor");
+    this.zkTable = new ZKTable(this.master.getZooKeeper());
+    this.maximumAssignmentAttempts =
+      this.master.getConfiguration().getInt("hbase.assignment.maximum.attempts", 10);
+  }
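+  /*
+   * A minimal sketch (assumption, not from the original source) of overriding
+   * the two timeout-monitor settings read in the constructor above; the values
+   * shown are simply the defaults used there, not a tuning recommendation, and
+   * they would normally be set in hbase-site.xml before the master starts:
+   *
+   *   Configuration conf = master.getConfiguration();
+   *   conf.setInt("hbase.master.assignment.timeoutmonitor.period", 10000);
+   *   conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 30000);
+   */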
+
+  /**
+   * @return Instance of ZKTable.
+   */
+  public ZKTable getZKTable() {
+    // These are 'expensive' to make involving trip to zk ensemble so allow
+    // sharing.
+    return this.zkTable;
+  }
+
+  /**
+   * Reset all unassigned znodes.  Called on startup of master.
+   * Call {@link #assignAllUserRegions()} after root and meta have been assigned.
+   * @throws IOException
+   * @throws KeeperException
+   */
+  void cleanoutUnassigned() throws IOException, KeeperException {
+    // Cleanup any existing ZK nodes and start watching
+    ZKAssign.deleteAllNodes(watcher);
+    ZKUtil.listChildrenAndWatchForNewChildren(this.watcher,
+      this.watcher.assignmentZNode);
+  }
+
+  /**
+   * Handle failover.  Restore state from META and ZK.  Handle any regions in
+   * transition.  Presumes <code>.META.</code> and <code>-ROOT-</code> deployed.
+   * @throws KeeperException
+   * @throws IOException
+   */
+  void processFailover() throws KeeperException, IOException {
+    // Concurrency note: In the below the accesses on regionsInTransition are
+    // outside of a synchronization block where usually all accesses to RIT are
+    // synchronized.  The presumption is that in this case it is safe since this
+    // method is being played by a single thread on startup.
+
+    // TODO: Check list of user regions and their assignments against regionservers.
+    // TODO: Regions that have a null location and are not in regionsInTransitions
+    // need to be handled.
+
+    // Scan META to build list of existing regions, servers, and assignment
+    // Returns servers who have not checked in (assumed dead) and their regions
+    Map<HServerInfo,List<Pair<HRegionInfo,Result>>> deadServers =
+      rebuildUserRegions();
+    // Process list of dead servers
+    processDeadServers(deadServers);
+    // Check existing regions in transition
+    List<String> nodes = ZKUtil.listChildrenAndWatchForNewChildren(watcher,
+        watcher.assignmentZNode);
+    if (nodes.isEmpty()) {
+      LOG.info("No regions in transition in ZK to process on failover");
+      return;
+    }
+    LOG.info("Failed-over master needs to process " + nodes.size() +
+        " regions in transition");
+    for (String encodedRegionName: nodes) {
+      processRegionInTransition(encodedRegionName, null);
+    }
+  }
+
+  /**
+   * If the region is up in zk in transition, do fixup and block until the
+   * region is assigned and out of transition.  Used on startup for
+   * catalog regions.
+   * @param hri Region to look for.
+   * @return True if we processed a region in transition else false if region
+   * was not up in zk in transition.
+   * @throws InterruptedException
+   * @throws KeeperException
+   * @throws IOException
+   */
+  boolean processRegionInTransitionAndBlockUntilAssigned(final HRegionInfo hri)
+  throws InterruptedException, KeeperException, IOException {
+    boolean inTransition = processRegionInTransition(hri.getEncodedName(), hri);
+    if (!inTransition) return inTransition;
+    synchronized(this.regionsInTransition) {
+      while (!this.master.isStopped() &&
+          this.regionsInTransition.containsKey(hri.getEncodedName())) {
+        this.regionsInTransition.wait();
+      }
+    }
+    return inTransition;
+  }
+
+  /**
+   * Process failover of the region named <code>encodedRegionName</code>.
+   * @param encodedRegionName Region to process failover for.
+   * @param regionInfo RegionInfo for the region.  If null we'll go get it from
+   * the meta table.
+   * @return True if the region was in transition up in zk and was processed,
+   * else false.
+   * @throws KeeperException
+   * @throws IOException
+   */
+  boolean processRegionInTransition(final String encodedRegionName,
+      final HRegionInfo regionInfo)
+  throws KeeperException, IOException {
+    RegionTransitionData data = ZKAssign.getData(watcher, encodedRegionName);
+    if (data == null) return false;
+    HRegionInfo hri = regionInfo;
+    if (hri == null) {
+      Pair<HRegionInfo, HServerAddress> p =
+        MetaReader.getRegion(catalogTracker, data.getRegionName());
+      if (p == null) return false;
+      hri = p.getFirst();
+    }
+    processRegionsInTransition(data, hri);
+    return true;
+  }
+
+  void processRegionsInTransition(final RegionTransitionData data,
+      final HRegionInfo regionInfo)
+  throws KeeperException {
+    String encodedRegionName = regionInfo.getEncodedName();
+    LOG.info("Processing region " + regionInfo.getRegionNameAsString() +
+      " in state " + data.getEventType());
+    synchronized (regionsInTransition) {
+      switch (data.getEventType()) {
+      case RS_ZK_REGION_CLOSING:
+        // Just insert region into RIT.
+        // If this never updates the timeout will trigger new assignment
+        regionsInTransition.put(encodedRegionName, new RegionState(
+            regionInfo, RegionState.State.CLOSING, data.getStamp()));
+        break;
+
+      case RS_ZK_REGION_CLOSED:
+        // Region is closed, insert into RIT and handle it
+        regionsInTransition.put(encodedRegionName, new RegionState(
+            regionInfo, RegionState.State.CLOSED, data.getStamp()));
+        new ClosedRegionHandler(master, this, regionInfo).process();
+        break;
+
+      case M_ZK_REGION_OFFLINE:
+        // Region is offline, insert into RIT and handle it like a closed
+        regionsInTransition.put(encodedRegionName, new RegionState(
+            regionInfo, RegionState.State.OFFLINE, data.getStamp()));
+        new ClosedRegionHandler(master, this, regionInfo).process();
+        break;
+
+      case RS_ZK_REGION_OPENING:
+        // Just insert region into RIT
+        // If this never updates the timeout will trigger new assignment
+        regionsInTransition.put(encodedRegionName, new RegionState(
+            regionInfo, RegionState.State.OPENING, data.getStamp()));
+        break;
+
+      case RS_ZK_REGION_OPENED:
+        // Region is opened, insert into RIT and handle it
+        regionsInTransition.put(encodedRegionName, new RegionState(
+            regionInfo, RegionState.State.OPENING, data.getStamp()));
+        HServerInfo hsi = serverManager.getServerInfo(data.getServerName());
+        // hsi could be null if this server is no longer online.  If that is
+        // the case, just let this RIT time out; it will be assigned to a new
+        // server then.
+        if (hsi == null) {
+          LOG.warn("Region in transition " + regionInfo.getEncodedName() +
+            " references a server no longer up " + data.getServerName() +
+            "; letting RIT timeout so will be assigned elsewhere");
+          break;
+        }
+        new OpenedRegionHandler(master, this, regionInfo, hsi).process();
+        break;
+      }
+    }
+  }
+
+  /**
+   * Handles various states an unassigned node can be in.
+   * <p>
+   * Method is called when a state change is suspected for an unassigned node.
+   * <p>
+   * This deals with skipped transitions (we got a CLOSED but didn't see CLOSING
+   * yet).
+   * @param data
+   */
+  private void handleRegion(final RegionTransitionData data) {
+    synchronized(regionsInTransition) {
+      if (data == null || data.getServerName() == null) {
+        LOG.warn("Unexpected NULL input " + data);
+        return;
+      }
+      // Check if this is a special HBCK transition
+      if (data.getServerName().equals(HConstants.HBCK_CODE_NAME)) {
+        handleHBCK(data);
+        return;
+      }
+      // Verify this is a known server
+      if (!serverManager.isServerOnline(data.getServerName()) &&
+          !this.master.getServerName().equals(data.getServerName())) {
+        LOG.warn("Attempted to handle region transition for server but " +
+          "server is not online: " + data.getRegionName());
+        return;
+      }
+      String encodedName = HRegionInfo.encodeRegionName(data.getRegionName());
+      String prettyPrintedRegionName = HRegionInfo.prettyPrint(encodedName);
+      LOG.debug("Handling transition=" + data.getEventType() +
+        ", server=" + data.getServerName() + ", region=" + prettyPrintedRegionName);
+      RegionState regionState = regionsInTransition.get(encodedName);
+      switch (data.getEventType()) {
+        case M_ZK_REGION_OFFLINE:
+          // Nothing to do.
+          break;
+
+        case RS_ZK_REGION_CLOSING:
+          // Should see CLOSING after we have asked it to CLOSE or additional
+          // times after already being in state of CLOSING
+          if (regionState == null ||
+              (!regionState.isPendingClose() && !regionState.isClosing())) {
+            LOG.warn("Received CLOSING for region " + prettyPrintedRegionName +
+              " from server " + data.getServerName() + " but region was in " +
+              " the state " + regionState + " and not " +
+              "in expected PENDING_CLOSE or CLOSING states");
+            return;
+          }
+          // Transition to CLOSING (or update stamp if already CLOSING)
+          regionState.update(RegionState.State.CLOSING, data.getStamp());
+          break;
+
+        case RS_ZK_REGION_CLOSED:
+          // Should see CLOSED after CLOSING but possible after PENDING_CLOSE
+          if (regionState == null ||
+              (!regionState.isPendingClose() && !regionState.isClosing())) {
+            LOG.warn("Received CLOSED for region " + prettyPrintedRegionName +
+                " from server " + data.getServerName() + " but region was in " +
+                " the state " + regionState + " and not " +
+                "in expected PENDING_CLOSE or CLOSING states");
+            return;
+          }
+          // Handle CLOSED by assigning elsewhere or stopping if a disable
+          // If we got here all is good.  Need to update RegionState -- else
+          // what follows will fail because not in expected state.
+          regionState.update(RegionState.State.CLOSED, data.getStamp());
+          this.executorService.submit(new ClosedRegionHandler(master,
+            this, regionState.getRegion()));
+          break;
+
+        case RS_ZK_REGION_OPENING:
+          // Should see OPENING after we have asked it to OPEN or additional
+          // times after already being in state of OPENING
+          if(regionState == null ||
+              (!regionState.isPendingOpen() && !regionState.isOpening())) {
+            LOG.warn("Received OPENING for region " +
+                prettyPrintedRegionName +
+                " from server " + data.getServerName() + " but region was in " +
+                " the state " + regionState + " and not " +
+                "in expected PENDING_OPEN or OPENING states");
+            return;
+          }
+          // Transition to OPENING (or update stamp if already OPENING)
+          regionState.update(RegionState.State.OPENING, data.getStamp());
+          break;
+
+        case RS_ZK_REGION_OPENED:
+          // Should see OPENED after OPENING but possible after PENDING_OPEN
+          if(regionState == null ||
+              (!regionState.isPendingOpen() && !regionState.isOpening())) {
+            LOG.warn("Received OPENED for region " +
+                prettyPrintedRegionName +
+                " from server " + data.getServerName() + " but region was in " +
+                " the state " + regionState + " and not " +
+                "in expected PENDING_OPEN or OPENING states");
+            return;
+          }
+          // Handle OPENED by removing it from transition and deleting the zk node
+          regionState.update(RegionState.State.OPEN, data.getStamp());
+          this.executorService.submit(
+            new OpenedRegionHandler(master, this, regionState.getRegion(),
+              this.serverManager.getServerInfo(data.getServerName())));
+          break;
+      }
+    }
+  }
+
+  /**
+   * Handle a ZK unassigned node transition triggered by HBCK repair tool.
+   * <p>
+   * This is handled in a separate code path because it breaks the normal rules.
+   * @param data
+   */
+  private void handleHBCK(RegionTransitionData data) {
+    String encodedName = HRegionInfo.encodeRegionName(data.getRegionName());
+    LOG.info("Handling HBCK triggered transition=" + data.getEventType() +
+      ", server=" + data.getServerName() + ", region=" +
+      HRegionInfo.prettyPrint(encodedName));
+    RegionState regionState = regionsInTransition.get(encodedName);
+    switch (data.getEventType()) {
+      case M_ZK_REGION_OFFLINE:
+        HRegionInfo regionInfo = null;
+        if (regionState != null) {
+          regionInfo = regionState.getRegion();
+        } else {
+          try {
+            regionInfo = MetaReader.getRegion(catalogTracker,
+                data.getRegionName()).getFirst();
+          } catch (IOException e) {
+            LOG.info("Exception reading META doing HBCK repair operation", e);
+            return;
+          }
+        }
+        LOG.info("HBCK repair is triggering assignment of region=" +
+            regionInfo.getRegionNameAsString());
+        // trigger assign, node is already in OFFLINE so don't need to update ZK
+        assign(regionInfo, false);
+        break;
+
+      default:
+        LOG.warn("Received unexpected region state from HBCK (" +
+            data.getEventType() + ")");
+        break;
+    }
+  }
+
+  // ZooKeeper events
+
+  /**
+   * New unassigned node has been created.
+   *
+   * <p>This happens when an RS begins the OPENING or CLOSING of a region by
+   * creating an unassigned node.
+   *
+   * <p>When this happens we must:
+   * <ol>
+   *   <li>Watch the node for further events</li>
+   *   <li>Read and handle the state in the node</li>
+   * </ol>
+   */
+  @Override
+  public void nodeCreated(String path) {
+    if(path.startsWith(watcher.assignmentZNode)) {
+      synchronized(regionsInTransition) {
+        try {
+          RegionTransitionData data = ZKAssign.getData(watcher, path);
+          if(data == null) {
+            return;
+          }
+          handleRegion(data);
+        } catch (KeeperException e) {
+          master.abort("Unexpected ZK exception reading unassigned node data", e);
+        }
+      }
+    }
+  }
+
+  /**
+   * Existing unassigned node has had data changed.
+   *
+   * <p>This happens when an RS transitions from OFFLINE to OPENING, or between
+   * OPENING/OPENED and CLOSING/CLOSED.
+   *
+   * <p>When this happens we must:
+   * <ol>
+   *   <li>Watch the node for further events</li>
+   *   <li>Read and handle the state in the node</li>
+   * </ol>
+   */
+  @Override
+  public void nodeDataChanged(String path) {
+    if(path.startsWith(watcher.assignmentZNode)) {
+      synchronized(regionsInTransition) {
+        try {
+          RegionTransitionData data = ZKAssign.getData(watcher, path);
+          if(data == null) {
+            return;
+          }
+          handleRegion(data);
+        } catch (KeeperException e) {
+          master.abort("Unexpected ZK exception reading unassigned node data", e);
+        }
+      }
+    }
+  }
+
+  /**
+   * New unassigned node(s) have been created under the assignment znode.
+   *
+   * <p>This happens when an RS begins the OPENING or CLOSING of a region by
+   * creating an unassigned node.
+   *
+   * <p>When this happens we must:
+   * <ol>
+   *   <li>Watch the node for further children changed events</li>
+   *   <li>Watch all new children for changed events</li>
+   *   <li>Read all children and handle them</li>
+   * </ol>
+   */
+  @Override
+  public void nodeChildrenChanged(String path) {
+    if(path.equals(watcher.assignmentZNode)) {
+      synchronized(regionsInTransition) {
+        try {
+          List<NodeAndData> newNodes = ZKUtil.watchAndGetNewChildren(watcher,
+              watcher.assignmentZNode);
+          for(NodeAndData newNode : newNodes) {
+            LOG.debug("Handling new unassigned node: " + newNode);
+            handleRegion(RegionTransitionData.fromBytes(newNode.getData()));
+          }
+        } catch(KeeperException e) {
+          master.abort("Unexpected ZK exception reading unassigned children", e);
+        }
+      }
+    }
+  }
+
+  /**
+   * Marks the region as online.  Removes it from regions in transition and
+   * updates the in-memory assignment information.
+   * <p>
+   * Used when a region has been successfully opened on a region server.
+   * @param regionInfo
+   * @param serverInfo
+   */
+  public void regionOnline(HRegionInfo regionInfo, HServerInfo serverInfo) {
+    synchronized (this.regionsInTransition) {
+      RegionState rs =
+        this.regionsInTransition.remove(regionInfo.getEncodedName());
+      if (rs != null) {
+        this.regionsInTransition.notifyAll();
+      }
+    }
+    synchronized (this.regions) {
+      // Add check
+      HServerInfo hsi = this.regions.get(regionInfo);
+      if (hsi != null) LOG.warn("Overwriting " + regionInfo.getEncodedName() +
+        " on " + hsi);
+      this.regions.put(regionInfo, serverInfo);
+      addToServers(serverInfo, regionInfo);
+    }
+    // Remove plan if one.
+    clearRegionPlan(regionInfo);
+    // Update timers for all regions in transition going against this server.
+    updateTimers(serverInfo);
+  }
+
+  /**
+   * Touch timers for all regions in transition that have the passed
+   * <code>hsi</code> in common.
+   * Call this method whenever a server checks in.  Doing so helps the case where
+   * a new regionserver has joined the cluster and it has been given 1k regions
+   * to open.  If this method is tickled every time a region reports a
+   * successful open, then the 1k-th region won't be timed out just because it
+   * is sitting behind the open of 999 other regions.  This method is NOT used
+   * as part of bulk assign -- there we have a different mechanism for extending
+   * the regions-in-transition timer (we turn it off temporarily because there
+   * is no region plan involved when bulk assigning).
+   * @param hsi
+   */
+  private void updateTimers(final HServerInfo hsi) {
+    // This loop could be expensive.
+    // First make a copy of the current regionPlans rather than holding the
+    // sync while looping, because holding the sync can cause deadlock.  It's
+    // ok in this loop if the Map we're going against is a little stale.
+    Map<String, RegionPlan> copy = new HashMap<String, RegionPlan>();
+    synchronized(this.regionPlans) {
+      copy.putAll(this.regionPlans);
+    }
+    for (Map.Entry<String, RegionPlan> e: copy.entrySet()) {
+      if (!e.getValue().getDestination().equals(hsi)) continue;
+      RegionState rs = null;
+      synchronized (this.regionsInTransition) {
+        rs = this.regionsInTransition.get(e.getKey());
+      }
+      if (rs == null) continue;
+      synchronized (rs) {
+        rs.update(rs.getState());
+      }
+    }
+  }
+
+  /**
+   * Marks the region as offline.  Removes it from regions in transition and
+   * removes in-memory assignment information.
+   * <p>
+   * Used when a region has been closed and should remain closed.
+   * @param regionInfo
+   */
+  public void regionOffline(final HRegionInfo regionInfo) {
+    synchronized(this.regionsInTransition) {
+      if (this.regionsInTransition.remove(regionInfo.getEncodedName()) != null) {
+        this.regionsInTransition.notifyAll();
+      }
+    }
+    // remove the region plan as well just in case.
+    clearRegionPlan(regionInfo);
+    setOffline(regionInfo);
+  }
+
+  /**
+   * Sets the region as offline by removing in-memory assignment information but
+   * retaining transition information.
+   * <p>
+   * Used when a region has been closed but should be reassigned.
+   * @param regionInfo
+   */
+  public void setOffline(HRegionInfo regionInfo) {
+    synchronized (this.regions) {
+      HServerInfo serverInfo = this.regions.remove(regionInfo);
+      if (serverInfo == null) return;
+      List<HRegionInfo> serverRegions = this.servers.get(serverInfo);
+      if (!serverRegions.remove(regionInfo)) {
+        LOG.warn("No " + regionInfo + " on " + serverInfo);
+      }
+    }
+  }
+
+  public void offlineDisabledRegion(HRegionInfo regionInfo) {
+    // Disabling so should not be reassigned, just delete the CLOSED node
+    LOG.debug("Table being disabled so deleting ZK node and removing from " +
+        "regions in transition, skipping assignment of region " +
+          regionInfo.getRegionNameAsString());
+    try {
+      if (!ZKAssign.deleteClosedNode(watcher, regionInfo.getEncodedName())) {
+        // Could also be in OFFLINE mode
+        ZKAssign.deleteOfflineNode(watcher, regionInfo.getEncodedName());
+      }
+    } catch (KeeperException.NoNodeException nne) {
+      LOG.debug("Tried to delete closed node for " + regionInfo + " but it " +
+          "does not exist so just offlining");
+    } catch (KeeperException e) {
+      this.master.abort("Error deleting CLOSED node in ZK", e);
+    }
+    regionOffline(regionInfo);
+  }
+
+  // Assignment methods
+
+  /**
+   * Assigns the specified region.
+   * <p>
+   * If a RegionPlan is available with a valid destination then it will be used
+   * to determine which server the region is assigned to.  If no RegionPlan is
+   * available, the region will be assigned to a random available server.
+   * <p>
+   * Updates the RegionState and sends the OPEN RPC.
+   * <p>
+   * This will only succeed if the region is in transition and in a CLOSED or
+   * OFFLINE state or not in transition (in-memory not zk), and of course, the
+   * chosen server is up and running (It may have just crashed!).  If the
+   * in-memory checks pass, the zk node is forced to OFFLINE before assigning.
+   *
+   * @param region region to be assigned
+   * @param setOfflineInZK whether ZK node should be created/transitioned to an
+   *                       OFFLINE state before assigning the region
+   */
+  public void assign(HRegionInfo region, boolean setOfflineInZK) {
+    assign(region, setOfflineInZK, false);
+  }
+
+  public void assign(HRegionInfo region, boolean setOfflineInZK,
+      boolean forceNewPlan) {
+    String tableName = region.getTableDesc().getNameAsString();
+    boolean disabled = this.zkTable.isDisabledTable(tableName);
+    if (disabled || this.zkTable.isDisablingTable(tableName)) {
+      LOG.info("Table " + tableName + (disabled? " disabled;": " disabling;") +
+        " skipping assign of " + region.getRegionNameAsString());
+      offlineDisabledRegion(region);
+      return;
+    }
+    if (this.serverManager.isClusterShutdown()) {
+      LOG.info("Cluster shutdown is set; skipping assign of " +
+        region.getRegionNameAsString());
+      return;
+    }
+    RegionState state = addToRegionsInTransition(region);
+    synchronized (state) {
+      assign(state, setOfflineInZK, forceNewPlan);
+    }
+  }
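+  /*
+   * Hypothetical usage sketch (not part of the original class): a caller that
+   * already holds an HRegionInfo might trigger assignment with the zk node
+   * forced to OFFLINE first, as the javadoc above describes:
+   *
+   *   assign(hri, true);         // normal assign, sets OFFLINE in zk first
+   *   assign(hri, true, true);   // same, but forces a fresh RegionPlan
+   *
+   * Here "hri" is an assumed, caller-supplied HRegionInfo.
+   */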
+
+  /**
+   * Bulk assign regions to <code>destination</code>.  If we fail in any way,
+   * we'll abort the server.
+   * @param destination
+   * @param regions Regions to assign.
+   */
+  void assign(final HServerInfo destination,
+      final List<HRegionInfo> regions) {
+    LOG.debug("Bulk assigning " + regions.size() + " region(s) to " +
+      destination.getServerName());
+
+    List<RegionState> states = new ArrayList<RegionState>(regions.size());
+    synchronized (this.regionsInTransition) {
+      for (HRegionInfo region: regions) {
+        states.add(forceRegionStateToOffline(region));
+      }
+    }
+    // Presumption is that only this thread will be updating the state at this
+    // time; i.e. handlers on backend won't be trying to set it to OPEN, etc.
+    AtomicInteger counter = new AtomicInteger(0);
+    CreateUnassignedAsyncCallback cb =
+      new CreateUnassignedAsyncCallback(this.watcher, destination, counter);
+    for (RegionState state: states) {
+      if (!asyncSetOfflineInZooKeeper(state, cb, state)) {
+        return;
+      }
+    }
+    // Wait until all unassigned nodes have been put up and watchers set.
+    int total = regions.size();
+    for (int oldCounter = 0; true;) {
+      int count = counter.get();
+      if (oldCounter != count) {
+        LOG.info(destination.getServerName() + " unassigned znodes=" + count +
+          " of total=" + total);
+        oldCounter = count;
+      }
+      if (count == total) break;
+      Threads.sleep(1);
+    }
+    // Move on to open regions.
+    try {
+      // Send OPEN RPC. This can fail if the server on the other end is not up.
+      this.serverManager.sendRegionOpen(destination, regions);
+    } catch (Throwable t) {
+      this.master.abort("Failed assignment of regions to " + destination, t);
+      return;
+    }
+    LOG.debug("Bulk assigning done for " + destination.getServerName());
+  }
+
+  /**
+   * Callback handler for create unassigned znodes used during bulk assign.
+   */
+  static class CreateUnassignedAsyncCallback implements AsyncCallback.StringCallback {
+    private final Log LOG = LogFactory.getLog(CreateUnassignedAsyncCallback.class);
+    private final ZooKeeperWatcher zkw;
+    private final HServerInfo destination;
+    private final AtomicInteger counter;
+
+    CreateUnassignedAsyncCallback(final ZooKeeperWatcher zkw,
+        final HServerInfo destination, final AtomicInteger counter) {
+      this.zkw = zkw;
+      this.destination = destination;
+      this.counter = counter;
+    }
+
+    @Override
+    public void processResult(int rc, String path, Object ctx, String name) {
+      if (rc != 0) {
+        // This is the result code.  If non-zero, we need to resubmit.
+        LOG.warn("rc != 0 for " + path + " -- retryable connection loss -- " +
+          "FIX see http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A2");
+        this.zkw.abort("Connection loss writing unassigned at " + path +
+          ", rc=" + rc, null);
+        return;
+      }
+      LOG.debug("rs=" + (RegionState)ctx + ", server=" + this.destination.getServerName());
+      // Async exists to set a watcher so we'll get triggered when
+      // unassigned node changes.
+      this.zkw.getZooKeeper().exists(path, this.zkw,
+        new ExistsUnassignedAsyncCallback(this.counter), ctx);
+    }
+  }
+
+  /**
+   * Callback handler for the exists call that sets watcher on unassigned znodes.
+   * Used during bulk assign on startup.
+   */
+  static class ExistsUnassignedAsyncCallback implements AsyncCallback.StatCallback {
+    private final Log LOG = LogFactory.getLog(ExistsUnassignedAsyncCallback.class);
+    private final AtomicInteger counter;
+
+    ExistsUnassignedAsyncCallback(final AtomicInteger counter) {
+      this.counter = counter;
+    }
+
+    @Override
+    public void processResult(int rc, String path, Object ctx, Stat stat) {
+      if (rc != 0) {
+        // This is the result code.  If non-zero, we need to resubmit.
+        LOG.warn("rc != 0 for " + path + " -- retryable connection loss -- " +
+          "FIX see http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A2");
+        return;
+      }
+      RegionState state = (RegionState)ctx;
+      LOG.debug("rs=" + state);
+      // Transition RegionState to PENDING_OPEN here in master; means we've
+      // sent the open.  We're a little ahead of ourselves here since we've not
+      // yet sent out the actual open but putting this state change after the
+      // call to open risks our writing PENDING_OPEN after state has been moved
+      // to OPENING by the regionserver.
+      state.update(RegionState.State.PENDING_OPEN);
+      this.counter.addAndGet(1);
+    }
+  }
+
+  /**
+   * @param region
+   * @return
+   */
+  private RegionState addToRegionsInTransition(final HRegionInfo region) {
+    synchronized (regionsInTransition) {
+      return forceRegionStateToOffline(region);
+    }
+  }
+
+  /**
+   * Sets the region's {@link RegionState} to {@link RegionState.State#OFFLINE}.
+   * Caller must hold lock on this.regionsInTransition.
+   * @param region
+   * @return Amended RegionState.
+   */
+  private RegionState forceRegionStateToOffline(final HRegionInfo region) {
+    String encodedName = region.getEncodedName();
+    RegionState state = this.regionsInTransition.get(encodedName);
+    if (state == null) {
+      state = new RegionState(region, RegionState.State.OFFLINE);
+      this.regionsInTransition.put(encodedName, state);
+    } else {
+      LOG.debug("Forcing OFFLINE; was=" + state);
+      state.update(RegionState.State.OFFLINE);
+    }
+    return state;
+  }
+
+  /**
+   * Caller must hold lock on the passed <code>state</code> object.
+   * @param state
+   * @param setOfflineInZK
+   * @param forceNewPlan
+   */
+  private void assign(final RegionState state, final boolean setOfflineInZK,
+      final boolean forceNewPlan) {
+    for (int i = 0; i < this.maximumAssignmentAttempts; i++) {
+      if (setOfflineInZK && !setOfflineInZooKeeper(state)) return;
+      if (this.master.isStopped()) {
+        LOG.debug("Server stopped; skipping assign of " + state);
+        return;
+      }
+      RegionPlan plan = getRegionPlan(state, forceNewPlan);
+      if (plan == null) return; // Should get reassigned later when RIT times out.
+      try {
+        LOG.debug("Assigning region " + state.getRegion().getRegionNameAsString() +
+          " to " + plan.getDestination().getServerName());
+        // Transition RegionState to PENDING_OPEN
+        state.update(RegionState.State.PENDING_OPEN);
+        // Send OPEN RPC. This can fail if the server on the other end is not up.
+        serverManager.sendRegionOpen(plan.getDestination(), state.getRegion());
+        break;
+      } catch (Throwable t) {
+        LOG.warn("Failed assignment of " +
+          state.getRegion().getRegionNameAsString() + " to " +
+          plan.getDestination() + ", trying to assign elsewhere instead; " +
+          "retry=" + i, t);
+        // Clean out the plan we failed to execute; it doesn't look like it'll
+        // succeed anyway, so we need a new plan.
+        // Transition back to OFFLINE
+        state.update(RegionState.State.OFFLINE);
+        // Force a new plan and reassign.  Will return null if no servers.
+        if (getRegionPlan(state, plan.getDestination(), true) == null) {
+          LOG.warn("Unable to find a viable location to assign region " +
+            state.getRegion().getRegionNameAsString());
+          return;
+        }
+      }
+    }
+  }
+
+  /**
+   * Set region as OFFLINED up in zookeeper
+   * @param state
+   * @return True if we succeeded, false otherwise (State was incorrect or failed
+   * updating zk).
+   */
+  boolean setOfflineInZooKeeper(final RegionState state) {
+    if (!state.isClosed() && !state.isOffline()) {
+      // Abort with an exception that carries the unexpected state.
+      this.master.abort("Unexpected state trying to OFFLINE; " + state,
+        new IllegalStateException("Unexpected state trying to OFFLINE; " + state));
+      return false;
+    }
+    state.update(RegionState.State.OFFLINE);
+    try {
+      if(!ZKAssign.createOrForceNodeOffline(master.getZooKeeper(),
+          state.getRegion(), master.getServerName())) {
+        LOG.warn("Attempted to create/force node into OFFLINE state before " +
+          "completing assignment but failed to do so for " + state);
+        return false;
+      }
+    } catch (KeeperException e) {
+      master.abort("Unexpected ZK exception creating/setting node OFFLINE", e);
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * Set region as OFFLINED up in zookeeper asynchronously.
+   * @param state
+   * @return True if we succeeded, false otherwise (State was incorrect or failed
+   * updating zk).
+   */
+  boolean asyncSetOfflineInZooKeeper(final RegionState state,
+      final AsyncCallback.StringCallback cb, final Object ctx) {
+    if (!state.isClosed() && !state.isOffline()) {
+      // Abort with an exception that carries the unexpected state.
+      this.master.abort("Unexpected state trying to OFFLINE; " + state,
+        new IllegalStateException("Unexpected state trying to OFFLINE; " + state));
+      return false;
+    }
+    state.update(RegionState.State.OFFLINE);
+    try {
+      ZKAssign.asyncCreateNodeOffline(master.getZooKeeper(), state.getRegion(),
+        master.getServerName(), cb, ctx);
+    } catch (KeeperException e) {
+      master.abort("Unexpected ZK exception creating/setting node OFFLINE", e);
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * @param state
+   * @return Plan for passed <code>state</code> (If none currently, it creates one or
+   * if no servers to assign, it returns null).
+   */
+  RegionPlan getRegionPlan(final RegionState state,
+      final boolean forceNewPlan) {
+    return getRegionPlan(state, null, forceNewPlan);
+  }
+
+  /**
+   * @param state
+   * @param serverToExclude Server to exclude (we know its bad). Pass null if
+   * all servers are thought to be assignable.
+   * @param forceNewPlan If true, then if an existing plan exists, a new plan
+   * will be generated.
+   * @return Plan for passed <code>state</code> (If none currently, it creates one or
+   * if no servers to assign, it returns null).
+   */
+  RegionPlan getRegionPlan(final RegionState state,
+      final HServerInfo serverToExclude, final boolean forceNewPlan) {
+    // Pickup existing plan or make a new one
+    String encodedName = state.getRegion().getEncodedName();
+    List<HServerInfo> servers = this.serverManager.getOnlineServersList();
+    // The remove below hinges on the fact that the call to
+    // serverManager.getOnlineServersList() returns a copy
+    if (serverToExclude != null) servers.remove(serverToExclude);
+    if (servers.isEmpty()) return null;
+    RegionPlan randomPlan = new RegionPlan(state.getRegion(), null,
+      LoadBalancer.randomAssignment(servers));
+    boolean newPlan = false;
+    RegionPlan existingPlan = null;
+    synchronized (this.regionPlans) {
+      existingPlan = this.regionPlans.get(encodedName);
+      if (existingPlan == null || forceNewPlan ||
+          (existingPlan != null && existingPlan.getDestination().equals(serverToExclude))) {
+        newPlan = true;
+        this.regionPlans.put(encodedName, randomPlan);
+      }
+    }
+    if (newPlan) {
+      LOG.debug("No previous transition plan was found (or we are ignoring " +
+        "an existing plan) for " + state.getRegion().getRegionNameAsString() +
+        " so generated a random one; " + randomPlan + "; " +
+        serverManager.countOfRegionServers() +
+        " (online=" + serverManager.getOnlineServers().size() +
+        ", exclude=" + serverToExclude + ") available servers");
+      return randomPlan;
+    }
+    LOG.debug("Using pre-existing plan for region " +
+      state.getRegion().getRegionNameAsString() + "; plan=" + existingPlan);
+    return existingPlan;
+  }
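+  /*
+   * Illustrative sketch only: the two getRegionPlan variants above differ only
+   * in the excluded server.  A hypothetical caller retrying a failed
+   * assignment might do something like:
+   *
+   *   RegionPlan plan = getRegionPlan(state, false);            // reuse if present
+   *   RegionPlan retry = getRegionPlan(state, badServer, true); // force a new plan
+   *
+   * where "state" and "badServer" are assumed caller-held RegionState and
+   * HServerInfo instances; a null return means no servers were available.
+   */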
+
+  /**
+   * Unassigns the specified region.
+   * <p>
+   * Updates the RegionState and sends the CLOSE RPC.
+   * <p>
+   * If a RegionPlan is already set, it will remain.
+   *
+   * @param region region to be unassigned
+   */
+  public void unassign(HRegionInfo region) {
+    unassign(region, false);
+  }
+
+  /**
+   * Unassigns the specified region.
+   * <p>
+   * Updates the RegionState and sends the CLOSE RPC.
+   * <p>
+   * If a RegionPlan is already set, it will remain.
+   *
+   * @param region region to be unassigned
+   * @param force if region should be closed even if already closing
+   */
+  public void unassign(HRegionInfo region, boolean force) {
+    LOG.debug("Starting unassignment of region " +
+      region.getRegionNameAsString() + " (offlining)");
+    synchronized (this.regions) {
+      // Check if this region is currently assigned
+      if (!regions.containsKey(region)) {
+        LOG.debug("Attempted to unassign region " +
+          region.getRegionNameAsString() + " but it is not " +
+          "currently assigned anywhere");
+        return;
+      }
+    }
+    String encodedName = region.getEncodedName();
+    // Grab the state of this region and synchronize on it
+    RegionState state;
+    synchronized (regionsInTransition) {
+      state = regionsInTransition.get(encodedName);
+      if (state == null) {
+        state = new RegionState(region, RegionState.State.PENDING_CLOSE);
+        regionsInTransition.put(encodedName, state);
+      } else if (force && state.isPendingClose()) {
+        LOG.debug("Attempting to unassign region " +
+            region.getRegionNameAsString() + " which is already pending close "
+            + "but forcing an additional close");
+        state.update(RegionState.State.PENDING_CLOSE);
+      } else {
+        LOG.debug("Attempting to unassign region " +
+          region.getRegionNameAsString() + " but it is " +
+          "already in transition (" + state.getState() + ")");
+        return;
+      }
+    }
+    // Send CLOSE RPC
+    HServerInfo server = null;
+    synchronized (this.regions) {
+      server = regions.get(region);
+    }
+    try {
+      // TODO: We should consider making this look more like it does for the
+      // region open where we catch all throwables and never abort
+      if (serverManager.sendRegionClose(server, state.getRegion())) {
+        LOG.debug("Sent CLOSE to " + server + " for region " +
+          region.getRegionNameAsString());
+        return;
+      }
+      // This never happens.  Currently the regionserver close always returns true.
+      LOG.debug("Server " + server + " region CLOSE RPC returned false for " +
+        region.getEncodedName());
+    } catch (NotServingRegionException nsre) {
+      LOG.info("Server " + server + " returned " + nsre + " for " +
+        region.getEncodedName());
+      // Presume that master has stale data.  Presume remote side just split.
+      // Presume that the split message when it comes in will fix up the master's
+      // in memory cluster state.
+      return;
+    } catch (ConnectException e) {
+      LOG.info("Failed connect to " + server + ", message=" + e.getMessage() +
+        ", region=" + region.getEncodedName());
+      // Presume that regionserver just failed and we haven't got expired
+      // server from zk yet.  Let expired server deal with clean up.
+    } catch (java.net.SocketTimeoutException e) {
+      LOG.info("Server " + server + " returned " + e.getMessage() + " for " +
+        region.getEncodedName());
+      // Presume retry or server will expire.
+    } catch (EOFException e) {
+      LOG.info("Server " + server + " returned " + e.getMessage() + " for " +
+        region.getEncodedName());
+      // Presume retry or server will expire.
+    } catch (RemoteException re) {
+      IOException ioe = re.unwrapRemoteException();
+      if (ioe instanceof NotServingRegionException) {
+        // Failed to close, so pass through and reassign
+        LOG.debug("Server " + server + " returned " + ioe + " for " +
+          region.getEncodedName());
+      } else {
+        this.master.abort("Remote unexpected exception", ioe);
+      }
+    } catch (Throwable t) {
+      // For now call abort if unexpected exception -- radical, but will get
+      // fellas attention. St.Ack 20101012
+      this.master.abort("Remote unexpected exception", t);
+    }
+  }
+
+  /**
+   * Waits until the specified region has completed assignment.
+   * <p>
+   * If the region is already assigned, returns immediately.  Otherwise, method
+   * blocks until the region is assigned.
+   * @param regionInfo region to wait on assignment for
+   * @throws InterruptedException
+   */
+  public void waitForAssignment(HRegionInfo regionInfo)
+  throws InterruptedException {
+    synchronized(regions) {
+      while(!regions.containsKey(regionInfo)) {
+        regions.wait();
+      }
+    }
+  }
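+  /*
+   * Hypothetical usage sketch (not from the original source): startup code
+   * could pair an assign call with the wait above to block until a catalog
+   * region is online, for example:
+   *
+   *   assignMeta();
+   *   waitForAssignment(HRegionInfo.FIRST_META_REGIONINFO);
+   *
+   * waitForAssignment throws InterruptedException, so a real caller would
+   * need to handle or propagate it.
+   */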
+
+  /**
+   * Assigns the ROOT region.
+   * <p>
+   * Assumes that ROOT is currently closed and is not being actively served by
+   * any RegionServer.
+   * <p>
+   * Forcibly unsets the current root region location in ZooKeeper and assigns
+   * ROOT to a random RegionServer.
+   * @throws KeeperException
+   */
+  public void assignRoot() throws KeeperException {
+    RootLocationEditor.deleteRootLocation(this.master.getZooKeeper());
+    assign(HRegionInfo.ROOT_REGIONINFO, true);
+  }
+
+  /**
+   * Assigns the META region.
+   * <p>
+   * Assumes that META is currently closed and is not being actively served by
+   * any RegionServer.
+   * <p>
+   * Forcibly assigns META to a random RegionServer.
+   */
+  public void assignMeta() {
+    // Force assignment to a random server
+    assign(HRegionInfo.FIRST_META_REGIONINFO, true);
+  }
+
+  /**
+   * Assigns all user regions, if any exist.  Used during cluster startup.
+   * <p>
+   * This is a synchronous call and will return once every region has been
+   * assigned.  If anything fails, an exception is thrown and the cluster
+   * should be shutdown.
+   * @throws InterruptedException
+   * @throws IOException
+   */
+  public void assignAllUserRegions() throws IOException, InterruptedException {
+    // Get all available servers
+    List<HServerInfo> servers = serverManager.getOnlineServersList();
+
+    // Scan META for all user regions, skipping any disabled tables
+    Map<HRegionInfo,HServerAddress> allRegions =
+      MetaReader.fullScan(catalogTracker, this.zkTable.getDisabledTables(), true);
+    if (allRegions == null || allRegions.isEmpty()) return;
+
+    // Determine what type of assignment to do on startup
+    boolean retainAssignment = master.getConfiguration().
+      getBoolean("hbase.master.startup.retainassign", true);
+
+    Map<HServerInfo, List<HRegionInfo>> bulkPlan = null;
+    if (retainAssignment) {
+      // Reuse existing assignment info
+      bulkPlan = LoadBalancer.retainAssignment(allRegions, servers);
+    } else {
+      // Generate a round-robin bulk assignment plan
+      bulkPlan = LoadBalancer.roundRobinAssignment(
+          new ArrayList<HRegionInfo>(allRegions.keySet()), servers);
+    }
+    LOG.info("Bulk assigning " + allRegions.size() + " region(s) across " +
+      servers.size() + " server(s), retainAssignment=" + retainAssignment);
+
+    // Use fixed count thread pool assigning.
+    BulkAssigner ba = new BulkStartupAssigner(this.master, bulkPlan, this);
+    ba.bulkAssign();
+    LOG.info("Bulk assigning done");
+  }
+
+  /**
+   * Run bulk assign on startup.
+   */
+  static class BulkStartupAssigner extends BulkAssigner {
+    private final Map<HServerInfo, List<HRegionInfo>> bulkPlan;
+    private final AssignmentManager assignmentManager;
+
+    BulkStartupAssigner(final Server server,
+        final Map<HServerInfo, List<HRegionInfo>> bulkPlan,
+        final AssignmentManager am) {
+      super(server);
+      this.bulkPlan = bulkPlan;
+      this.assignmentManager = am;
+    }
+
+    @Override
+    public boolean bulkAssign() throws InterruptedException {
+      // Disable timing out regions in transition up in zk while bulk assigning.
+      this.assignmentManager.timeoutMonitor.bulkAssign(true);
+      try {
+        return super.bulkAssign();
+      } finally {
+        // Re-enable timing out regions in transition up in zk.
+        this.assignmentManager.timeoutMonitor.bulkAssign(false);
+      }
+    }
+
+    @Override
+    protected String getThreadNamePrefix() {
+      return super.getThreadNamePrefix() + "-startup";
+    }
+
+    @Override
+    protected void populatePool(java.util.concurrent.ExecutorService pool) {
+      for (Map.Entry<HServerInfo, List<HRegionInfo>> e: this.bulkPlan.entrySet()) {
+        pool.execute(new SingleServerBulkAssigner(e.getKey(), e.getValue(),
+          this.assignmentManager));
+      }
+    }
+
+    protected boolean waitUntilDone(final long timeout)
+    throws InterruptedException {
+      return this.assignmentManager.waitUntilNoRegionsInTransition(timeout);
+    }
+  }
+
+  /**
+   * Manage bulk assigning to a server.
+   */
+  static class SingleServerBulkAssigner implements Runnable {
+    private final HServerInfo regionserver;
+    private final List<HRegionInfo> regions;
+    private final AssignmentManager assignmentManager;
+
+    SingleServerBulkAssigner(final HServerInfo regionserver,
+        final List<HRegionInfo> regions, final AssignmentManager am) {
+      this.regionserver = regionserver;
+      this.regions = regions;
+      this.assignmentManager = am;
+    }
+    @Override
+    public void run() {
+      this.assignmentManager.assign(this.regionserver, this.regions);
+    }
+  }
+
+  /**
+   * Wait until no regions in transition.
+   * @param timeout How long to wait.
+   * @return True if nothing in regions in transition.
+   * @throws InterruptedException
+   */
+  boolean waitUntilNoRegionsInTransition(final long timeout)
+  throws InterruptedException {
+    // Blocks until there are no regions in transition.  It is possible that
+    // there are regions in transition immediately after this returns, but this
+    // method guarantees that if it returns without an exception, there was a
+    // period of time with no regions in transition from the point-of-view of
+    // the in-memory state of the Master.
+    long startTime = System.currentTimeMillis();
+    long remaining = timeout;
+    synchronized (regionsInTransition) {
+      while (regionsInTransition.size() > 0 && !this.master.isStopped()
+          && remaining > 0) {
+        regionsInTransition.wait(remaining);
+        remaining = timeout - (System.currentTimeMillis() - startTime);
+      }
+    }
+    return regionsInTransition.isEmpty();
+  }
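+  /*
+   * Illustrative sketch only: a hypothetical caller giving in-flight
+   * assignments up to a minute to settle might use the method above like:
+   *
+   *   boolean quiesced = waitUntilNoRegionsInTransition(60 * 1000);
+   *   if (!quiesced) {
+   *     LOG.warn("Regions still in transition after timeout");
+   *   }
+   */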
+
+  /**
+   * Rebuild the list of user regions and assignment information.
+   * <p>
+   * Returns a map of servers that are not found to be online and the regions
+   * they were hosting.
+   * @return map of servers not online to their assigned regions, as stored
+   *         in META
+   * @throws IOException
+   */
+  private Map<HServerInfo,List<Pair<HRegionInfo,Result>>> rebuildUserRegions()
+  throws IOException {
+    // Region assignment from META
+    List<Result> results = MetaReader.fullScanOfResults(catalogTracker);
+    // Map of offline servers and their regions to be returned
+    Map<HServerInfo,List<Pair<HRegionInfo,Result>>> offlineServers =
+      new TreeMap<HServerInfo,List<Pair<HRegionInfo,Result>>>();
+    // Iterate regions in META
+    for (Result result : results) {
+      Pair<HRegionInfo,HServerInfo> region =
+        MetaReader.metaRowToRegionPairWithInfo(result);
+      if (region == null) continue;
+      HServerInfo regionLocation = region.getSecond();
+      HRegionInfo regionInfo = region.getFirst();
+      if (regionLocation == null) {
+        // Region not being served, add to region map with no assignment
+        // If this needs to be assigned out, it will also be in ZK as RIT
+        this.regions.put(regionInfo, null);
+      } else if (!serverManager.isServerOnline(
+          regionLocation.getServerName())) {
+        // Region is located on a server that isn't online
+        List<Pair<HRegionInfo,Result>> offlineRegions =
+          offlineServers.get(regionLocation);
+        if (offlineRegions == null) {
+          offlineRegions = new ArrayList<Pair<HRegionInfo,Result>>(1);
+          offlineServers.put(regionLocation, offlineRegions);
+        }
+        offlineRegions.add(new Pair<HRegionInfo,Result>(regionInfo, result));
+      } else {
+        // Region is being served and on an active server
+        regions.put(regionInfo, regionLocation);
+        addToServers(regionLocation, regionInfo);
+      }
+    }
+    return offlineServers;
+  }
+
+  /**
+   * Processes list of dead servers from result of META scan.
+   * <p>
+   * This is used as part of failover to handle RegionServers which failed
+   * while there was no active master.
+   * <p>
+   * Method stubs in-memory data to be as expected by the normal server shutdown
+   * handler.
+   *
+   * @param deadServers
+   * @throws IOException
+   * @throws KeeperException
+   */
+  private void processDeadServers(
+      Map<HServerInfo, List<Pair<HRegionInfo, Result>>> deadServers)
+  throws IOException, KeeperException {
+    for (Map.Entry<HServerInfo, List<Pair<HRegionInfo,Result>>> deadServer :
+      deadServers.entrySet()) {
+      List<Pair<HRegionInfo,Result>> regions = deadServer.getValue();
+      for (Pair<HRegionInfo,Result> region : regions) {
+        HRegionInfo regionInfo = region.getFirst();
+        Result result = region.getSecond();
+        // If region was in transition (was in zk) force it offline for reassign
+        try {
+          ZKAssign.createOrForceNodeOffline(watcher, regionInfo,
+              master.getServerName());
+        } catch (KeeperException.NoNodeException nne) {
+          // This is fine
+        }
+        // Process with existing RS shutdown code
+        ServerShutdownHandler.processDeadRegion(regionInfo, result, this,
+            this.catalogTracker);
+      }
+    }
+  }
+
+  /*
+   * Presumes the caller has taken care of the locking needed when modifying
+   * the servers Map.
+   * @param hsi
+   * @param hri
+   */
+  private void addToServers(final HServerInfo hsi, final HRegionInfo hri) {
+    List<HRegionInfo> hris = servers.get(hsi);
+    if (hris == null) {
+      hris = new ArrayList<HRegionInfo>();
+      servers.put(hsi, hris);
+    }
+    hris.add(hri);
+  }
+
+  /**
+   * @return A copy of the Map of regions currently in transition.
+   */
+  public NavigableMap<String, RegionState> getRegionsInTransition() {
+    synchronized (this.regionsInTransition) {
+      return new TreeMap<String, RegionState>(this.regionsInTransition);
+    }
+  }
+
+  /**
+   * @return True if there are regions in transition.
+   */
+  public boolean isRegionsInTransition() {
+    synchronized (this.regionsInTransition) {
+      return !this.regionsInTransition.isEmpty();
+    }
+  }
+
+  /**
+   * @param hri Region to check.
+   * @return null if the passed region is not in transition, else the current
+   * RegionState
+   */
+  public RegionState isRegionInTransition(final HRegionInfo hri) {
+    synchronized (this.regionsInTransition) {
+      return this.regionsInTransition.get(hri.getEncodedName());
+    }
+  }
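+
+  /*
+   * A minimal usage sketch of the accessors above, showing how a caller can
+   * poll transition state without holding the regionsInTransition lock across
+   * calls ('am' and 'hri' are assumed names, not part of this class):
+   *
+   *   if (am.isRegionsInTransition()) {
+   *     RegionState rs = am.isRegionInTransition(hri);
+   *     if (rs != null) LOG.info("Still in transition: " + rs);
+   *   }
+   */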
+
+  /**
+   * Clears the specified region from being in transition.
+   * <p>
+   * Used only by HBCK tool.
+   * @param hri
+   */
+  public void clearRegionFromTransition(HRegionInfo hri) {
+    synchronized (this.regionsInTransition) {
+      this.regionsInTransition.remove(hri.getEncodedName());
+    }
+    synchronized (this.regions) {
+      this.regions.remove(hri);
+      for (List<HRegionInfo> regions : this.servers.values()) {
+        for (int i=0;i<regions.size();i++) {
+          if (regions.get(i).equals(hri)) {
+            regions.remove(i);
+            break;
+          }
+        }
+      }
+    }
+    clearRegionPlan(hri);
+  }
+
+  /**
+   * @param region Region whose plan we are to clear.
+   */
+  void clearRegionPlan(final HRegionInfo region) {
+    synchronized (this.regionPlans) {
+      this.regionPlans.remove(region.getEncodedName());
+    }
+  }
+
+  /**
+   * Wait on region to clear regions-in-transition.
+   * @param hri Region to wait on.
+   * @throws IOException
+   */
+  public void waitOnRegionToClearRegionsInTransition(final HRegionInfo hri)
+  throws IOException {
+    if (isRegionInTransition(hri) == null) return;
+    RegionState rs = null;
+    // There is already a timeout monitor on regions in transition so I
+    // should not have to have one here too?
+    while(!this.master.isStopped() && (rs = isRegionInTransition(hri)) != null) {
+      Threads.sleep(1000);
+      LOG.info("Waiting on " + rs + " to clear regions-in-transition");
+    }
+    if (this.master.isStopped()) {
+      LOG.info("Giving up wait on regions in " +
+        "transition because stoppable.isStopped is set");
+    }
+  }
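+
+  /*
+   * Sketch of how a caller might use the wait above ('am' and 'hri' are
+   * assumed names): the call blocks in one-second sleeps until the region
+   * leaves regions-in-transition or the master is stopped, so callers should
+   * re-check master state afterwards.
+   *
+   *   am.waitOnRegionToClearRegionsInTransition(hri);
+   *   // region is no longer in transition, or the master was stopped
+   */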
+
+
+  /**
+   * Gets the online regions of the specified table.
+   * This method looks at the in-memory state.  It does not go to <code>.META.</code>.
+   * Only returns <em>online</em> regions.  If a region on this table has been
+   * closed during a disable, etc., it will not be included in the returned list.
+   * So, the returned list may not necessarily be ALL regions in this table, it
+   * is only the ONLINE regions in the table.
+   * @param tableName
+   * @return Online regions from <code>tableName</code>
+   */
+  public List<HRegionInfo> getRegionsOfTable(byte[] tableName) {
+    List<HRegionInfo> tableRegions = new ArrayList<HRegionInfo>();
+    HRegionInfo boundary =
+      new HRegionInfo(new HTableDescriptor(tableName), null, null);
+    synchronized (this.regions) {
+      for (HRegionInfo regionInfo: this.regions.tailMap(boundary).keySet()) {
+        if(Bytes.equals(regionInfo.getTableDesc().getName(), tableName)) {
+          tableRegions.add(regionInfo);
+        } else {
+          break;
+        }
+      }
+    }
+    return tableRegions;
+  }
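+
+  /*
+   * Usage sketch (the table name and the 'am' reference are assumptions): the
+   * boundary region built above has null start/end keys, so the tailMap walk
+   * starts at the first region of the table and stops at the first region
+   * belonging to a different table.
+   *
+   *   List<HRegionInfo> online = am.getRegionsOfTable(Bytes.toBytes("t1"));
+   *   LOG.debug("Online regions of t1: " + online.size());
+   */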
+
+  /**
+   * Monitor to check for time outs on region transition operations
+   */
+  public class TimeoutMonitor extends Chore {
+    private final int timeout;
+    private boolean bulkAssign = false;
+
+    /**
+     * Creates a periodic monitor to check for time outs on region transition
+     * operations.  This will deal with retries if for some reason something
+     * doesn't happen within the specified timeout.
+     * @param period
+     * @param stopper When {@link Stoppable#isStopped()} is true, this thread
+     * will cleanup and exit cleanly.
+     * @param timeout
+     */
+    public TimeoutMonitor(final int period, final Stoppable stopper,
+        final int timeout) {
+      super("AssignmentTimeoutMonitor", period, stopper);
+      this.timeout = timeout;
+    }
+
+    /**
+     * @param bulkAssign If true, we'll suspend checking regions in transition
+     * up in zookeeper.  If false, the check is re-enabled.
+     * @return Old setting for bulkAssign.
+     */
+    public boolean bulkAssign(final boolean bulkAssign) {
+      boolean result = this.bulkAssign;
+      this.bulkAssign = bulkAssign;
+      return result;
+    }
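+
+    /*
+     * Sketch of the intended toggle pattern ('timeoutMonitor' is an assumed
+     * caller-side reference): suspend timeout checks for the duration of a
+     * bulk assign, then restore the previous setting.
+     *
+     *   boolean old = timeoutMonitor.bulkAssign(true);
+     *   try {
+     *     // run the bulk assignment
+     *   } finally {
+     *     timeoutMonitor.bulkAssign(old);
+     *   }
+     */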
+
+    @Override
+    protected void chore() {
+      // If bulkAssign in progress, suspend checks
+      if (this.bulkAssign) return;
+      synchronized (regionsInTransition) {
+        // Iterate all regions in transition checking for time outs
+        long now = System.currentTimeMillis();
+        for (RegionState regionState : regionsInTransition.values()) {
+          if(regionState.getStamp() + timeout <= now) {
+            HRegionInfo regionInfo = regionState.getRegion();
+            LOG.info("Regions in transition timed out:  " + regionState);
+            // Expired!  Do a retry.
+            switch (regionState.getState()) {
+              case CLOSED:
+                LOG.info("Region has been CLOSED for too long, " +
+                    "retriggering ClosedRegionHandler");
+                AssignmentManager.this.executorService.submit(
+                    new ClosedRegionHandler(master, AssignmentManager.this,
+                        regionState.getRegion()));
+                break;
+              case OFFLINE:
+                LOG.info("Region has been OFFLINE for too long, " +
+                  "reassigning " + regionInfo.getRegionNameAsString() +
+                  " to a random server");
+                assign(regionState.getRegion(), false);
+                break;
+              case PENDING_OPEN:
+                LOG.info("Region has been PENDING_OPEN for too " +
+                    "long, reassigning region=" +
+                    regionInfo.getRegionNameAsString());
+                assign(regionState.getRegion(), false, true);
+                break;
+              case OPENING:
+                LOG.info("Region has been OPENING for too " +
+                  "long, reassigning region=" +
+                  regionInfo.getRegionNameAsString());
+                // Should have a ZK node in OPENING state
+                try {
+                  String node = ZKAssign.getNodeName(watcher,
+                      regionInfo.getEncodedName());
+                  Stat stat = new Stat();
+                  RegionTransitionData data = ZKAssign.getDataNoWatch(watcher,
+                      node, stat);
+                  if (data.getEventType() == EventType.RS_ZK_REGION_OPENED) {
+                    LOG.debug("Region has transitioned to OPENED, allowing " +
+                        "watched event handlers to process");
+                    break;
+                  } else if (data.getEventType() !=
+                      EventType.RS_ZK_REGION_OPENING) {
+                    LOG.warn("While timing out a region in state OPENING, " +
+                        "found ZK node in unexpected state: " +
+                        data.getEventType());
+                    break;
+                  }
+                  // Attempt to transition node into OFFLINE
+                  try {
+                    data = new RegionTransitionData(
+                      EventType.M_ZK_REGION_OFFLINE, regionInfo.getRegionName(),
+                      master.getServerName());
+                    if (ZKUtil.setData(watcher, node, data.getBytes(),
+                        stat.getVersion())) {
+                      // Node is now OFFLINE, let's trigger another assignment
+                      ZKUtil.getDataAndWatch(watcher, node); // re-set the watch
+                      LOG.info("Successfully transitioned region=" +
+                          regionInfo.getRegionNameAsString() + " into OFFLINE" +
+                          " and forcing a new assignment");
+                      assign(regionState, false, true);
+                    }
+                  } catch (KeeperException.NoNodeException nne) {
+                    // Node did not exist, can't time this out
+                  }
+                } catch (KeeperException ke) {
+                  LOG.error("Unexpected ZK exception timing out CLOSING region",
+                      ke);
+                  break;
+                }
+                break;
+              case OPEN:
+                LOG.error("Region has been OPEN for too long, " +
+                "we don't know where region was opened so can't do anything");
+                break;
+              case PENDING_CLOSE:
+                LOG.info("Region has been PENDING_CLOSE for too " +
+                    "long, running forced unassign again on region=" +
+                    regionInfo.getRegionNameAsString());
+                try {
+                  // If the server got the RPC, it will transition the node
+                  // to CLOSING, so only do something here if no node exists
+                  if (!ZKUtil.watchAndCheckExists(watcher,
+                      ZKAssign.getNodeName(watcher,
+                          regionInfo.getEncodedName()))) {
+                    unassign(regionInfo, true);
+                  }
+                } catch (NoNodeException e) {
+                  LOG.debug("Node no longer existed so not forcing another " +
+                      "unassignment");
+                } catch (KeeperException e) {
+                  LOG.warn("Unexpected ZK exception timing out a region " +
+                      "close", e);
+                }
+                break;
+              case CLOSING:
+                LOG.info("Region has been CLOSING for too " +
+                  "long, this should eventually complete or the server will " +
+                  "expire, doing nothing");
+                break;
+            }
+          }
+        }
+      }
+    }
+  }
+
+  /**
+   * Process a shutdown server, removing any assignments.
+   * @param hsi Server that went down.
+   * @return list of regions in transition on this server
+   */
+  public List<RegionState> processServerShutdown(final HServerInfo hsi) {
+    // Clean out any existing assignment plans for this server
+    synchronized (this.regionPlans) {
+      for (Iterator <Map.Entry<String, RegionPlan>> i =
+          this.regionPlans.entrySet().iterator(); i.hasNext();) {
+        Map.Entry<String, RegionPlan> e = i.next();
+        if (e.getValue().getDestination().equals(hsi)) {
+          // Use iterator's remove else we'll get CME
+          i.remove();
+        }
+      }
+    }
+    // TODO: Do we want to sync on RIT here?
+    // Remove this server from map of servers to regions, and remove all regions
+    // of this server from online map of regions.
+    Set<HRegionInfo> deadRegions = null;
+    List<RegionState> rits = new ArrayList<RegionState>();
+    synchronized (this.regions) {
+      List<HRegionInfo> assignedRegions = this.servers.remove(hsi);
+      if (assignedRegions == null || assignedRegions.isEmpty()) {
+        // No regions on this server, we are done, return empty list of RITs
+        return rits;
+      }
+      deadRegions = new TreeSet<HRegionInfo>(assignedRegions);
+      for (HRegionInfo region : deadRegions) {
+        this.regions.remove(region);
+      }
+    }
+    // See if any of the regions that were online on this server were in RIT
+    // If they are, normal timeouts will deal with them appropriately so
+    // let's skip a manual re-assignment.
+    synchronized (regionsInTransition) {
+      for (RegionState region : this.regionsInTransition.values()) {
+        if (deadRegions.remove(region.getRegion())) {
+          rits.add(region);
+        }
+      }
+    }
+    return rits;
+  }
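+
+  /*
+   * Usage sketch ('am' and 'hsi' are assumed names): a server shutdown
+   * handler would call this to drop stale plans and in-memory assignments,
+   * then leave the returned regions-in-transition to the existing timeout
+   * and ZK machinery rather than reassigning them directly.
+   *
+   *   List<RegionState> rits = am.processServerShutdown(hsi);
+   *   LOG.info(rits.size() + " regions were already in transition");
+   */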
+
+  /**
+   * Update in-memory structures.
+   * @param hsi Server that reported the split
+   * @param parent Parent region that was split
+   * @param a Daughter region A
+   * @param b Daughter region B
+   */
+  public void handleSplitReport(final HServerInfo hsi, final HRegionInfo parent,
+      final HRegionInfo a, final HRegionInfo b) {
+    regionOffline(parent);
+    // Remove any CLOSING node, if one exists, due to a race between master & rs
+    // for close & split.  Not putting into regionOffline method because it is
+    // called from various locations.
+    try {
+      RegionTransitionData node = ZKAssign.getDataNoWatch(this.watcher,
+        parent.getEncodedName(), null);
+      if (node != null) {
+        if (node.getEventType().equals(EventType.RS_ZK_REGION_CLOSING)) {
+          ZKAssign.deleteClosingNode(this.watcher, parent);
+        } else {
+          LOG.warn("Split report has RIT node (shouldnt have one): " +
+            parent + " node: " + node);
+        }
+      }
+    } catch (KeeperException e) {
+      LOG.warn("Exception while validating RIT during split report", e);
+    }
+
+    regionOnline(a, hsi);
+    regionOnline(b, hsi);
+
+    // There's a possibility that the region was splitting while a user asked
+    // the master to disable the table; we need to make sure we close those
+    // regions in that case. This does not race with the region server itself
+    // since the RS report is done after the split transaction completes.
+    if (this.zkTable.isDisablingOrDisabledTable(
+        parent.getTableDesc().getNameAsString())) {
+      unassign(a);
+      unassign(b);
+    }
+  }
+
+  /**
+   * @return A clone of current assignments. Note, this is assignments only.
+   * If a new server has come in and it has no regions, it will not be included
+   * in the returned Map.
+   */
+  Map<HServerInfo, List<HRegionInfo>> getAssignments() {
+    // This is an EXPENSIVE clone.  Cloning though is the safest thing to do.
+    // Can't let out original since it can change and at least the loadbalancer
+    // wants to iterate this exported list.  We need to synchronize on regions
+    // since all access to this.servers is under a lock on this.regions.
+    Map<HServerInfo, List<HRegionInfo>> result = null;
+    synchronized (this.regions) {
+      result = new HashMap<HServerInfo, List<HRegionInfo>>(this.servers.size());
+      for (Map.Entry<HServerInfo, List<HRegionInfo>> e: this.servers.entrySet()) {
+        List<HRegionInfo> shallowCopy = new ArrayList<HRegionInfo>(e.getValue());
+        HServerInfo clone = new HServerInfo(e.getKey());
+        // Set into server load the number of regions this server is carrying
+        // The load balancer calculation needs it at least, and it's handy.
+        clone.getLoad().setNumberOfRegions(e.getValue().size());
+        result.put(clone, shallowCopy);
+      }
+    }
+    return result;
+  }
+
+  /**
+   * @param encodedRegionName Region encoded name.
+   * @return Null or a {@link Pair} instance that holds the full {@link HRegionInfo}
+   * and the hosting server's {@link HServerInfo}.
+   */
+  Pair<HRegionInfo, HServerInfo> getAssignment(final byte [] encodedRegionName) {
+    String name = Bytes.toString(encodedRegionName);
+    synchronized(this.regions) {
+      for (Map.Entry<HRegionInfo, HServerInfo> e: this.regions.entrySet()) {
+        if (e.getKey().getEncodedName().equals(name)) {
+          return new Pair<HRegionInfo, HServerInfo>(e.getKey(), e.getValue());
+        }
+      }
+    }
+    return null;
+  }
+
+  /**
+   * @param plan Plan to execute.
+   */
+  void balance(final RegionPlan plan) {
+    synchronized (this.regionPlans) {
+      this.regionPlans.put(plan.getRegionName(), plan);
+    }
+    unassign(plan.getRegionInfo());
+  }
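+
+  /*
+   * Sketch of the balance flow implied by getAssignments() and balance()
+   * (the 'plans' collection is an assumption; how plans are computed belongs
+   * to the load balancer, not this class):
+   *
+   *   Map<HServerInfo, List<HRegionInfo>> snapshot = am.getAssignments();
+   *   // a load balancer derives RegionPlans from the snapshot...
+   *   for (RegionPlan plan : plans) {
+   *     am.balance(plan);  // record the plan, then unassign the region
+   *   }
+   */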
+
+  /**
+   * State of a Region while undergoing transitions.
+   */
+  public static class RegionState implements Writable {
+    private HRegionInfo region;
+
+    public enum State {
+      OFFLINE,        // region is in an offline state
+      PENDING_OPEN,   // sent rpc to server to open but has not begun
+      OPENING,        // server has begun to open but not yet done
+      OPEN,           // server opened region and updated meta
+      PENDING_CLOSE,  // sent rpc to server to close but has not begun
+      CLOSING,        // server has begun to close but not yet done
+      CLOSED          // server closed region and updated meta
+    }
+
+    private State state;
+    private long stamp;
+
+    public RegionState() {}
+
+    RegionState(HRegionInfo region, State state) {
+      this(region, state, System.currentTimeMillis());
+    }
+
+    RegionState(HRegionInfo region, State state, long stamp) {
+      this.region = region;
+      this.state = state;
+      this.stamp = stamp;
+    }
+
+    public void update(State state, long stamp) {
+      this.state = state;
+      this.stamp = stamp;
+    }
+
+    public void update(State state) {
+      this.state = state;
+      this.stamp = System.currentTimeMillis();
+    }
+
+    public State getState() {
+      return state;
+    }
+
+    public long getStamp() {
+      return stamp;
+    }
+
+    public HRegionInfo getRegion() {
+      return region;
+    }
+
+    public boolean isClosing() {
+      return state == State.CLOSING;
+    }
+
+    public boolean isClosed() {
+      return state == State.CLOSED;
+    }
+
+    public boolean isPendingClose() {
+      return state == State.PENDING_CLOSE;
+    }
+
+    public boolean isOpening() {
+      return state == State.OPENING;
+    }
+
+    public boolean isOpened() {
+      return state == State.OPEN;
+    }
+
+    public boolean isPendingOpen() {
+      return state == State.PENDING_OPEN;
+    }
+
+    public boolean isOffline() {
+      return state == State.OFFLINE;
+    }
+
+    @Override
+    public String toString() {
+      return region.getRegionNameAsString() + " state=" + state +
+        ", ts=" + stamp;
+    }
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      region = new HRegionInfo();
+      region.readFields(in);
+      state = State.valueOf(in.readUTF());
+      stamp = in.readLong();
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      region.write(out);
+      out.writeUTF(state.name());
+      out.writeLong(stamp);
+    }
+  }
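+
+  /*
+   * RegionState is a Writable, so a round trip through DataOutput/DataInput
+   * should preserve region, state and stamp.  Sketch only; the stream
+   * plumbing below is an assumption for the example.
+   *
+   *   ByteArrayOutputStream bos = new ByteArrayOutputStream();
+   *   regionState.write(new DataOutputStream(bos));
+   *   RegionState copy = new RegionState();
+   *   copy.readFields(new DataInputStream(
+   *       new ByteArrayInputStream(bos.toByteArray())));
+   *   assert copy.getState() == regionState.getState();
+   */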
+
+  public void stop() {
+    this.timeoutMonitor.interrupt();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/BulkAssigner.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/BulkAssigner.java
new file mode 100644
index 0000000..32da475
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/BulkAssigner.java
@@ -0,0 +1,105 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.lang.Thread.UncaughtExceptionHandler;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.hbase.Server;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+/**
+ * Base class used for bulk assigning and unassigning regions.
+ * Encapsulates a fixed-size thread pool of executors to run assignment/unassignment.
+ * Implement {@link #populatePool(java.util.concurrent.ExecutorService)} and
+ * {@link #waitUntilDone(long)}.
+ */
+public abstract class BulkAssigner {
+  final Server server;
+
+  /**
+   * @param server An instance of Server
+   */
+  public BulkAssigner(final Server server) {
+    this.server = server;
+  }
+
+  protected String getThreadNamePrefix() {
+    return this.server.getServerName() + "-BulkAssigner";
+  }
+
+  protected UncaughtExceptionHandler getUncaughtExceptionHandler() {
+    return new UncaughtExceptionHandler() {
+      @Override
+      public void uncaughtException(Thread t, Throwable e) {
+        // Abort if exception of any kind.
+        server.abort("Uncaught exception in " + t.getName(), e);
+      }
+    };
+  }
+
+  protected int getThreadCount() {
+    return this.server.getConfiguration().
+      getInt("hbase.bulk.assignment.threadpool.size", 20);
+  }
+
+  protected long getTimeoutOnRIT() {
+    return this.server.getConfiguration().
+      getLong("hbase.bulk.assignment.waiton.empty.rit", 10 * 60 * 1000);
+  }
+
+  protected abstract void populatePool(final java.util.concurrent.ExecutorService pool);
+
+  /**
+   * Run the bulk assign.
+   * @throws InterruptedException
+   * @return True if done.
+   */
+  public boolean bulkAssign() throws InterruptedException {
+    boolean result = false;
+    ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
+    builder.setDaemon(true);
+    builder.setNameFormat(getThreadNamePrefix() + "-%1$d");
+    builder.setUncaughtExceptionHandler(getUncaughtExceptionHandler());
+    int threadCount = getThreadCount();
+    java.util.concurrent.ExecutorService pool =
+      Executors.newFixedThreadPool(threadCount, builder.build());
+    try {
+      populatePool(pool);
+      // How long to wait on empty regions-in-transition.  If we timeout, the
+      // RIT monitor should do fixup.
+      result = waitUntilDone(getTimeoutOnRIT());
+    } finally {
+      // We're done with the pool.  It'll exit when it's done with all in its queue.
+      pool.shutdown();
+    }
+    return result;
+  }
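+
+  /*
+   * Minimal sketch of a concrete subclass (the class name and the work done
+   * in populatePool are assumptions): populatePool() submits one Runnable per
+   * unit of work and waitUntilDone() blocks until that work is observed to be
+   * complete.
+   *
+   *   class ExampleBulkAssigner extends BulkAssigner {
+   *     ExampleBulkAssigner(Server server) { super(server); }
+   *     protected void populatePool(java.util.concurrent.ExecutorService pool) {
+   *       pool.submit(new Runnable() {
+   *         public void run() { // assign or unassign regions here
+   *         }
+   *       });
+   *     }
+   *     protected boolean waitUntilDone(long timeout) throws InterruptedException {
+   *       return true; // e.g. wait until regions-in-transition is empty
+   *     }
+   *   }
+   */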
+
+  /**
+   * Wait until bulk assign is done.
+   * @param timeout How long to wait.
+   * @throws InterruptedException
+   * @return True if the condition we were waiting on happened.
+   */
+  protected abstract boolean waitUntilDone(final long timeout)
+  throws InterruptedException;
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
new file mode 100644
index 0000000..da4e1b8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
@@ -0,0 +1,271 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.util.Writables;
+
+/**
+ * A janitor for the catalog tables.  Scans the <code>.META.</code> catalog
+ * table on a period looking for unused regions to garbage collect.
+ */
+class CatalogJanitor extends Chore {
+  private static final Log LOG = LogFactory.getLog(CatalogJanitor.class.getName());
+  private final Server server;
+  private final MasterServices services;
+
+  CatalogJanitor(final Server server, final MasterServices services) {
+    super(server.getServerName() + "-CatalogJanitor",
+      server.getConfiguration().getInt("hbase.catalogjanitor.interval", 300000),
+      server);
+    this.server = server;
+    this.services = services;
+  }
+
+  @Override
+  protected boolean initialChore() {
+    try {
+      scan();
+    } catch (IOException e) {
+      LOG.warn("Failed initial scan of catalog table", e);
+      return false;
+    }
+    return true;
+  }
+
+  @Override
+  protected void chore() {
+    try {
+      scan();
+    } catch (IOException e) {
+      LOG.warn("Failed scan of catalog table", e);
+    }
+  }
+
+  /**
+   * Run janitorial scan of catalog <code>.META.</code> table looking for
+   * garbage to collect.
+   * @throws IOException
+   */
+  void scan() throws IOException {
+    // TODO: Only works with single .META. region currently.  Fix.
+    final AtomicInteger count = new AtomicInteger(0);
+    // Keep Map of found split parents.  These are candidates for cleanup.
+    final Map<HRegionInfo, Result> splitParents =
+      new TreeMap<HRegionInfo, Result>();
+    // This visitor collects split parents and counts rows in the .META. table
+    MetaReader.Visitor visitor = new MetaReader.Visitor() {
+      @Override
+      public boolean visit(Result r) throws IOException {
+        if (r == null || r.isEmpty()) return true;
+        count.incrementAndGet();
+        HRegionInfo info = getHRegionInfo(r);
+        if (info == null) return true; // Keep scanning
+        if (info.isSplitParent()) splitParents.put(info, r);
+        // Returning true means "keep scanning"
+        return true;
+      }
+    };
+    // Run full scan of .META. catalog table passing in our custom visitor
+    MetaReader.fullScan(this.server.getCatalogTracker(), visitor);
+    // Now work on our list of found parents. See if any we can clean up.
+    int cleaned = 0;
+    for (Map.Entry<HRegionInfo, Result> e : splitParents.entrySet()) {
+      if (cleanParent(e.getKey(), e.getValue())) cleaned++;
+    }
+    if (cleaned != 0) {
+      LOG.info("Scanned " + count.get() + " catalog row(s) and gc'd " + cleaned +
+        " unreferenced parent region(s)");
+    } else if (LOG.isDebugEnabled()) {
+      LOG.debug("Scanned " + count.get() + " catalog row(s) and gc'd " + cleaned +
+      " unreferenced parent region(s)");
+    }
+  }
+
+  /**
+   * Get HRegionInfo from the passed catalog row Result.
+   * @param result Result to do the lookup in.
+   * @return Null if not found (and logs fact that expected COL_REGIONINFO
+   * was missing) else deserialized {@link HRegionInfo}
+   * @throws IOException
+   */
+  static HRegionInfo getHRegionInfo(final Result result)
+  throws IOException {
+    byte [] bytes =
+      result.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    if (bytes == null) {
+      LOG.warn("REGIONINFO_QUALIFIER is empty in " + result);
+      return null;
+    }
+    return Writables.getHRegionInfo(bytes);
+  }
+
+  /**
+   * If daughters no longer hold references to the parent, delete the parent.
+   * @param parent HRegionInfo of the split, offlined parent
+   * @param rowContent Content of the <code>parent</code> row in
+   * <code>.META.</code>
+   * @return True if we removed <code>parent</code> from meta table and from
+   * the filesystem.
+   * @throws IOException
+   */
+  boolean cleanParent(final HRegionInfo parent,
+    Result rowContent)
+  throws IOException {
+    boolean result = false;
+    // Run checks on each daughter split.
+    boolean hasReferencesA =
+      checkDaughter(parent, rowContent, HConstants.SPLITA_QUALIFIER);
+    boolean hasReferencesB =
+      checkDaughter(parent, rowContent, HConstants.SPLITB_QUALIFIER);
+    if (!hasReferencesA && !hasReferencesB) {
+      LOG.debug("Deleting region " + parent.getRegionNameAsString() +
+        " because daughter splits no longer hold references");
+      // This latter regionOffline should not be necessary but is done for now
+      // until we let go of regionserver to master heartbeats.  See HBASE-3368.
+      if (this.services.getAssignmentManager() != null) {
+        // The mock used in testing CatalogJanitor returns null for getAssignmentManager.
+        // Allow for null result out of getAssignmentManager.
+        this.services.getAssignmentManager().regionOffline(parent);
+      }
+      FileSystem fs = this.services.getMasterFileSystem().getFileSystem();
+      Path rootdir = this.services.getMasterFileSystem().getRootDir();
+      HRegion.deleteRegion(fs, rootdir, parent);
+      MetaEditor.deleteRegion(this.server.getCatalogTracker(), parent);
+      result = true;
+    }
+    return result;
+  }
+
+  
+  /**
+   * See if the passed daughter has references in the filesystem to the parent
+   * and if not, remove the note of daughter region in the parent row: its
+   * column info:splitA or info:splitB.
+   * @param parent
+   * @param rowContent
+   * @param qualifier
+   * @return True if this daughter still has references to the parent.
+   * @throws IOException
+   */
+  boolean checkDaughter(final HRegionInfo parent,
+    final Result rowContent, final byte [] qualifier)
+  throws IOException {
+    HRegionInfo hri = getDaughterRegionInfo(rowContent, qualifier);
+    return hasReferences(parent, rowContent, hri, qualifier);
+  }
+
+  /**
+   * Get daughter HRegionInfo out of parent info:splitA/info:splitB columns.
+   * @param result
+   * @param which Whether "info:splitA" or "info:splitB" column
+   * @return Deserialized content of the info:splitA or info:splitB as a
+   * HRegionInfo
+   * @throws IOException
+   */
+  private HRegionInfo getDaughterRegionInfo(final Result result,
+    final byte [] which)
+  throws IOException {
+    byte [] bytes = result.getValue(HConstants.CATALOG_FAMILY, which);
+    return Writables.getHRegionInfoOrNull(bytes);
+  }
+
+  /**
+   * Remove mention of daughter from the parent row.
+   * @param parent
+   * @param split
+   * @param qualifier
+   * @throws IOException
+   */
+  private void removeDaughterFromParent(final HRegionInfo parent,
+    final HRegionInfo split, final byte [] qualifier)
+  throws IOException {
+    MetaEditor.deleteDaughterReferenceInParent(this.server.getCatalogTracker(),
+      parent, qualifier, split);
+  }
+
+  /**
+   * Checks if a daughter region -- either splitA or splitB -- still holds
+   * references to parent.  If not, removes reference to the split from
+   * the parent meta region row so we don't check it any more.
+   * @param parent Parent region.
+   * @param rowContent Keyed content of the parent row in the meta region.
+   * @param split Daughter region deserialized from the split column.
+   * @param qualifier Which daughter column to look at, info:splitA or
+   * info:splitB.
+   * @return True if still has references to parent.
+   * @throws IOException
+   */
+  boolean hasReferences(final HRegionInfo parent,
+    final Result rowContent, final HRegionInfo split,
+    final byte [] qualifier)
+  throws IOException {
+    boolean result = false;
+    if (split == null)  return result;
+    FileSystem fs = this.services.getMasterFileSystem().getFileSystem();
+    Path rootdir = this.services.getMasterFileSystem().getRootDir();
+    Path tabledir = new Path(rootdir, split.getTableDesc().getNameAsString());
+    for (HColumnDescriptor family: split.getTableDesc().getFamilies()) {
+      Path p = Store.getStoreHomedir(tabledir, split.getEncodedName(),
+        family.getName());
+      // Look for reference files.  Call listStatus with anonymous instance of PathFilter.
+      FileStatus [] ps = fs.listStatus(p,
+          new PathFilter () {
+            public boolean accept(Path path) {
+              return StoreFile.isReference(path);
+            }
+          }
+      );
+
+      if (ps != null && ps.length > 0) {
+        result = true;
+        break;
+      }
+    }
+    if (!result) {
+      removeDaughterFromParent(parent, split, qualifier);
+    }
+    return result;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
new file mode 100644
index 0000000..efcbb99
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/DeadServer.java
@@ -0,0 +1,169 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.commons.lang.NotImplementedException;
+import org.apache.hadoop.hbase.HServerInfo;
+
+/**
+ * Class to hold the dead servers list and utilities for querying it.
+ */
+public class DeadServer implements Set<String> {
+  /**
+   * Set of known dead servers.  On znode expiration, servers are added here.
+   * This is needed in case of a network partitioning where the server's lease
+   * expires, but the server is still running. After the network is healed
+   * and its server logs are recovered, it will be told to call server startup
+   * because by then, its regions have probably been reassigned.
+   */
+  private final Set<String> deadServers = new HashSet<String>();
+
+  /** Linked list of dead servers used to bound size of dead server set */
+  private final List<String> deadServerList = new LinkedList<String>();
+
+  /** Maximum number of dead servers to keep track of */
+  private final int maxDeadServers;
+
+  /** Number of dead servers currently being processed */
+  private int numProcessing;
+
+  public DeadServer(int maxDeadServers) {
+    super();
+    this.maxDeadServers = maxDeadServers;
+    this.numProcessing = 0;
+  }
+
+  /**
+   * @param serverName
+   * @return true if server is dead
+   */
+  public boolean isDeadServer(final String serverName) {
+    return isDeadServer(serverName, false);
+  }
+
+  /**
+   * @param serverName Servername as either <code>host:port</code> or
+   * <code>host,port,startcode</code>.
+   * @param hostAndPortOnly True if <code>serverName</code> is host and
+   * port only (<code>host:port</code>) and if so, then we do a prefix compare
+   * (ignoring start codes) looking for dead server.
+   * @return true if server is dead
+   */
+  boolean isDeadServer(final String serverName, final boolean hostAndPortOnly) {
+    return HServerInfo.isServer(this, serverName, hostAndPortOnly);
+  }
+
+  /**
+   * Checks if there are currently any dead servers being processed by the
+   * master.  Returns true if at least one region server is currently being
+   * processed as dead.
+   * @return true if any RS are being processed as dead
+   */
+  public boolean areDeadServersInProgress() {
+    return numProcessing != 0;
+  }
+
+  public synchronized Set<String> clone() {
+    Set<String> clone = new HashSet<String>(this.deadServers.size());
+    clone.addAll(this.deadServers);
+    return clone;
+  }
+
+  public synchronized boolean add(String e) {
+    this.numProcessing++;
+    // Check to see if we are at capacity for dead servers
+    if (deadServerList.size() == this.maxDeadServers) {
+      deadServers.remove(deadServerList.remove(0));
+    }
+    deadServerList.add(e);
+    return deadServers.add(e);
+  }
+
+  public synchronized void finish(String e) {
+    this.numProcessing--;
+  }
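+
+  /*
+   * Expected lifecycle sketch ('deadServerSet' and the server name string are
+   * assumptions): add() marks a server dead and bumps the in-progress
+   * counter; finish() is called once its shutdown processing completes.  The
+   * backing list is bounded by maxDeadServers, so the oldest entry is evicted
+   * on overflow.
+   *
+   *   deadServerSet.add("host,60020,1292000000000");
+   *   // ... server shutdown handling runs ...
+   *   deadServerSet.finish("host,60020,1292000000000");
+   *   boolean busy = deadServerSet.areDeadServersInProgress();
+   */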
+
+  public synchronized int size() {
+    return deadServers.size();
+  }
+
+  public synchronized boolean isEmpty() {
+    return deadServers.isEmpty();
+  }
+
+  public synchronized boolean contains(Object o) {
+    return deadServers.contains(o);
+  }
+
+  public Iterator<String> iterator() {
+    return this.deadServers.iterator();
+  }
+
+  public synchronized Object[] toArray() {
+    return deadServers.toArray();
+  }
+
+  public synchronized <T> T[] toArray(T[] a) {
+    return deadServers.toArray(a);
+  }
+
+  public synchronized boolean remove(Object o) {
+    throw new UnsupportedOperationException();
+  }
+
+  public synchronized boolean containsAll(Collection<?> c) {
+    return deadServers.containsAll(c);
+  }
+
+  public synchronized boolean addAll(Collection<? extends String> c) {
+    return deadServers.addAll(c);
+  }
+
+  public synchronized boolean retainAll(Collection<?> c) {
+    return deadServers.retainAll(c);
+  }
+
+  public synchronized boolean removeAll(Collection<?> c) {
+    return deadServers.removeAll(c);
+  }
+
+  public synchronized void clear() {
+    throw new NotImplementedException();
+  }
+
+  public synchronized boolean equals(Object o) {
+    return deadServers.equals(o);
+  }
+
+  public synchronized int hashCode() {
+    return deadServers.hashCode();
+  }
+
+  public synchronized String toString() {
+    return this.deadServers.toString();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/HMaster.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
new file mode 100644
index 0000000..43cef45
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -0,0 +1,1063 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.UnknownRegionException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.MetaScanner;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.executor.ExecutorService.ExecutorType;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HBaseServer;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.hbase.ipc.HMasterRegionInterface;
+import org.apache.hadoop.hbase.master.LoadBalancer.RegionPlan;
+import org.apache.hadoop.hbase.master.handler.DeleteTableHandler;
+import org.apache.hadoop.hbase.master.handler.DisableTableHandler;
+import org.apache.hadoop.hbase.master.handler.EnableTableHandler;
+import org.apache.hadoop.hbase.master.handler.ModifyTableHandler;
+import org.apache.hadoop.hbase.master.handler.TableAddFamilyHandler;
+import org.apache.hadoop.hbase.master.handler.TableDeleteFamilyHandler;
+import org.apache.hadoop.hbase.master.handler.TableModifyFamilyHandler;
+import org.apache.hadoop.hbase.master.metrics.MasterMetrics;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.replication.regionserver.Replication;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.InfoServer;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Sleeper;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.util.VersionInfo;
+import org.apache.hadoop.hbase.zookeeper.ClusterStatusTracker;
+import org.apache.hadoop.hbase.zookeeper.RegionServerTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.net.DNS;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.Watcher;
+
+/**
+ * HMaster is the "master server" for HBase. An HBase cluster has one active
+ * master.  If many masters are started, all compete.  Whichever wins goes on to
+ * run the cluster.  All others park themselves in their constructor until
+ * master or cluster shutdown or until the active master loses its lease in
+ * zookeeper.  Thereafter, all running masters jostle to take over the master
+ * role.
+ *
+ * <p>The Master can be asked to shutdown the cluster. See {@link #shutdown()}.  In
+ * this case it will tell all regionservers to go down and then wait on them
+ * all reporting in that they are down.  This master will then shut itself down.
+ *
+ * <p>You can also shutdown just this master.  Call {@link #stopMaster()}.
+ *
+ * @see HMasterInterface
+ * @see HMasterRegionInterface
+ * @see Watcher
+ */
+public class HMaster extends Thread
+implements HMasterInterface, HMasterRegionInterface, MasterServices, Server {
+  private static final Log LOG = LogFactory.getLog(HMaster.class.getName());
+
+  // MASTER is the name of the webapp and the attribute name used for stuffing
+  // this instance into the web context.
+  public static final String MASTER = "master";
+
+  // The configuration for the Master
+  private final Configuration conf;
+  // server for the web ui
+  private InfoServer infoServer;
+
+  // Our zk client.
+  private ZooKeeperWatcher zooKeeper;
+  // Manager and zk listener for master election
+  private ActiveMasterManager activeMasterManager;
+  // Region server tracker
+  private RegionServerTracker regionServerTracker;
+
+  // RPC server for the HMaster
+  private final HBaseServer rpcServer;
+  // Address of the HMaster
+  private final HServerAddress address;
+  // Metrics for the HMaster
+  private final MasterMetrics metrics;
+  // file system manager for the master FS operations
+  private MasterFileSystem fileSystemManager;
+
+  private HConnection connection;
+
+  // server manager to deal with region server info
+  private ServerManager serverManager;
+
+  // manager of assignment nodes in zookeeper
+  AssignmentManager assignmentManager;
+  // manager of catalog regions
+  private CatalogTracker catalogTracker;
+  // Cluster status zk tracker and local setter
+  private ClusterStatusTracker clusterStatusTracker;
+
+  // This flag is for stopping this Master instance.  It's set when we are
+  // stopping or aborting.
+  private volatile boolean stopped = false;
+  // Set on abort -- usually failure of our zk session.
+  private volatile boolean abort = false;
+  // flag set after we become the active master (used for testing)
+  private volatile boolean isActiveMaster = false;
+  // flag set after we complete initialization once active (used for testing)
+  private volatile boolean initialized = false;
+
+  // Instance of the hbase executor service.
+  ExecutorService executorService;
+
+  private LoadBalancer balancer = new LoadBalancer();
+  private Thread balancerChore;
+  // If 'true', the balancer is 'on'.  If 'false', the balancer will not run.
+  private volatile boolean balanceSwitch = true;
+
+  private Thread catalogJanitorChore;
+  private LogCleaner logCleaner;
+
+  /**
+   * Initializes the HMaster. The steps are as follows:
+   * <p>
+   * <ol>
+   * <li>Initialize HMaster RPC and address
+   * <li>Connect to ZooKeeper.
+   * </ol>
+   * <p>
+   * Remaining steps of initialization occur in {@link #run()} so that they
+   * run in their own thread rather than within the context of the constructor.
+   * @throws InterruptedException
+   */
+  public HMaster(final Configuration conf)
+  throws IOException, KeeperException, InterruptedException {
+    this.conf = conf;
+
+    /*
+     * Determine address and initialize RPC server (but do not start).
+     * The RPC server ports can be ephemeral. Create a ZKW instance.
+     */
+    HServerAddress a = new HServerAddress(getMyAddress(this.conf));
+    int numHandlers = conf.getInt("hbase.regionserver.handler.count", 10);
+    this.rpcServer = HBaseRPC.getServer(this,
+      new Class<?>[]{HMasterInterface.class, HMasterRegionInterface.class},
+      a.getBindAddress(), a.getPort(),
+      numHandlers,
+      0, // we don't use high priority handlers in master
+      false, conf,
+      0); // this is a DNC w/o high priority handlers
+    this.address = new HServerAddress(rpcServer.getListenerAddress());
+
+    // set the thread name now we have an address
+    setName(MASTER + "-" + this.address);
+
+    Replication.decorateMasterConfiguration(this.conf);
+
+    this.rpcServer.startThreads();
+
+    // Hack! Maps DFSClient => Master for logs.  HDFS made this
+    // config param for task trackers, but we can piggyback off of it.
+    if (this.conf.get("mapred.task.id") == null) {
+      this.conf.set("mapred.task.id", "hb_m_" + this.address.toString() +
+        "_" + System.currentTimeMillis());
+    }
+
+    this.zooKeeper = new ZooKeeperWatcher(conf, MASTER + ":" +
+        address.getPort(), this);
+
+    this.metrics = new MasterMetrics(getServerName());
+  }
+
+  /**
+   * Stall startup if we are designated a backup master; i.e. we want someone
+   * else to become the master before proceeding.
+   * @param c
+   * @param amm
+   * @throws InterruptedException
+   */
+  private static void stallIfBackupMaster(final Configuration c,
+      final ActiveMasterManager amm)
+  throws InterruptedException {
+    // If we're a backup master, stall until a primary writes its address
+    if (!c.getBoolean(HConstants.MASTER_TYPE_BACKUP,
+      HConstants.DEFAULT_MASTER_TYPE_BACKUP)) {
+      return;
+    }
+    LOG.debug("HMaster started in backup mode.  " +
+      "Stalling until master znode is written.");
+    // This will only be a minute or so while the cluster starts up,
+    // so don't worry about setting watches on the parent znode
+    while (!amm.isActiveMaster()) {
+      LOG.debug("Waiting for master address ZNode to be written " +
+        "(Also watching cluster state node)");
+      Thread.sleep(c.getInt("zookeeper.session.timeout", 180 * 1000));
+    }
+  }
+
+  /**
+   * Main processing loop for the HMaster.
+   * <ol>
+   * <li>Block until becoming active master
+   * <li>Finish initialization via {@link #finishInitialization()}
+   * <li>Enter loop until we are stopped
+   * <li>Stop services and perform cleanup once stopped
+   * </ol>
+   */
+  @Override
+  public void run() {
+    try {
+      /*
+       * Block on becoming the active master.
+       *
+       * We race with other masters to write our address into ZooKeeper.  If we
+       * succeed, we are the primary/active master and finish initialization.
+       *
+       * If we do not succeed, there is another active master and we should
+       * now wait until it dies to try and become the next active master.  If we
+       * do not succeed on our first attempt, this is no longer a cluster startup.
+       */
+      this.activeMasterManager = new ActiveMasterManager(zooKeeper, address, this);
+      this.zooKeeper.registerListener(activeMasterManager);
+      stallIfBackupMaster(this.conf, this.activeMasterManager);
+      this.activeMasterManager.blockUntilBecomingActiveMaster();
+      // We are either the active master or we were asked to shutdown
+      if (!this.stopped) {
+        finishInitialization();
+        loop();
+      }
+    } catch (Throwable t) {
+      abort("Unhandled exception. Starting shutdown.", t);
+    } finally {
+      stopChores();
+      // Wait for all the remaining region servers to report in IFF we were
+      // running a cluster shutdown AND we were NOT aborting.
+      if (!this.abort && this.serverManager != null &&
+          this.serverManager.isClusterShutdown()) {
+        this.serverManager.letRegionServersShutdown();
+      }
+      stopServiceThreads();
+      // Stop services started for both backup and active masters
+      if (this.activeMasterManager != null) this.activeMasterManager.stop();
+      if (this.catalogTracker != null) this.catalogTracker.stop();
+      if (this.serverManager != null) this.serverManager.stop();
+      if (this.assignmentManager != null) this.assignmentManager.stop();
+      HConnectionManager.deleteConnection(this.conf, true);
+      this.zooKeeper.close();
+    }
+    LOG.info("HMaster main thread exiting");
+  }
+
+  private void loop() {
+    // Check if we should stop every second.
+    Sleeper sleeper = new Sleeper(1000, this);
+    while (!this.stopped) {
+      sleeper.sleep();
+    }
+  }
+
+  /**
+   * Finish initialization of HMaster after becoming the primary master.
+   *
+   * <ol>
+   * <li>Initialize master components - file system manager, server manager,
+   *     assignment manager, region server tracker, catalog tracker, etc</li>
+   * <li>Start necessary service threads - rpc server, info server,
+   *     executor services, etc</li>
+   * <li>Set cluster as UP in ZooKeeper</li>
+   * <li>Wait for RegionServers to check-in</li>
+   * <li>Split logs and perform data recovery, if necessary</li>
+   * <li>Ensure assignment of root and meta regions<li>
+   * <li>Handle either fresh cluster start or master failover</li>
+   * </ol>
+   *
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws KeeperException
+   */
+  private void finishInitialization()
+  throws IOException, InterruptedException, KeeperException {
+
+    isActiveMaster = true;
+
+    /*
+     * We are active master now... go initialize components we need to run.
+     * Note, there may be dross in zk from previous runs; it'll get addressed
+     * below after we determine if cluster startup or failover.
+     */
+
+    // TODO: Do this using Dependency Injection, using PicoContainer, Guice or Spring.
+    this.fileSystemManager = new MasterFileSystem(this, metrics);
+    this.connection = HConnectionManager.getConnection(conf);
+    this.executorService = new ExecutorService(getServerName());
+
+    this.serverManager = new ServerManager(this, this, metrics);
+
+    this.catalogTracker = new CatalogTracker(this.zooKeeper, this.connection,
+      this, conf.getInt("hbase.master.catalog.timeout", Integer.MAX_VALUE));
+    this.catalogTracker.start();
+
+    this.assignmentManager = new AssignmentManager(this, serverManager,
+      this.catalogTracker, this.executorService);
+    zooKeeper.registerListenerFirst(assignmentManager);
+
+    this.regionServerTracker = new RegionServerTracker(zooKeeper, this,
+      this.serverManager);
+    this.regionServerTracker.start();
+
+    // Set the cluster as up.  If new RSs, they'll be waiting on this before
+    // going ahead with their startup.
+    this.clusterStatusTracker = new ClusterStatusTracker(getZooKeeper(), this);
+    this.clusterStatusTracker.start();
+    boolean wasUp = this.clusterStatusTracker.isClusterUp();
+    if (!wasUp) this.clusterStatusTracker.setClusterUp();
+
+    LOG.info("Server active/primary master; " + this.address +
+      ", sessionid=0x" +
+      Long.toHexString(this.zooKeeper.getZooKeeper().getSessionId()) +
+      ", cluster-up flag was=" + wasUp);
+
+    // start up all service threads.
+    startServiceThreads();
+
+    // Wait for region servers to report in.  Returns count of regions.
+    int regionCount = this.serverManager.waitForRegionServers();
+
+    // TODO: Should do this in background rather than block master startup
+    this.fileSystemManager.
+      splitLogAfterStartup(this.serverManager.getOnlineServers());
+
+    // Make sure root and meta assigned before proceeding.
+    assignRootAndMeta();
+
+    // Is this fresh start with no regions assigned or are we a master joining
+    // an already-running cluster?  If regionCount == 0, then for sure a
+    // fresh start.  TODO: Be fancier.  If regionCount == 2, perhaps the
+    // 2 are .META. and -ROOT- and we should fall into the fresh startup
+    // branch below.  For now, do processFailover.
+    if (regionCount == 0) {
+      LOG.info("Master startup proceeding: cluster startup");
+      this.assignmentManager.cleanoutUnassigned();
+      this.assignmentManager.assignAllUserRegions();
+    } else {
+      LOG.info("Master startup proceeding: master failover");
+      this.assignmentManager.processFailover();
+    }
+
+    // Start balancer and meta catalog janitor after meta and regions have
+    // been assigned.
+    this.balancerChore = getAndStartBalancerChore(this);
+    this.catalogJanitorChore =
+      Threads.setDaemonThreadRunning(new CatalogJanitor(this, this));
+
+    LOG.info("Master has completed initialization");
+    initialized = true;
+  }
+
+  /**
+   * Check <code>-ROOT-</code> and <code>.META.</code> are assigned.  If not,
+   * assign them.
+   * @throws InterruptedException
+   * @throws IOException
+   * @throws KeeperException
+   * @return Count of regions we assigned.
+   */
+  int assignRootAndMeta()
+  throws InterruptedException, IOException, KeeperException {
+    int assigned = 0;
+    long timeout = this.conf.getLong("hbase.catalog.verification.timeout", 1000);
+
+    // Work on ROOT region.  Is it in zk in transition?
+    boolean rit = this.assignmentManager.
+      processRegionInTransitionAndBlockUntilAssigned(HRegionInfo.ROOT_REGIONINFO);
+    if (!catalogTracker.verifyRootRegionLocation(timeout)) {
+      this.assignmentManager.assignRoot();
+      this.catalogTracker.waitForRoot();
+      assigned++;
+    }
+    LOG.info("-ROOT- assigned=" + assigned + ", rit=" + rit +
+      ", location=" + catalogTracker.getRootLocation());
+
+    // Work on meta region
+    rit = this.assignmentManager.
+      processRegionInTransitionAndBlockUntilAssigned(HRegionInfo.FIRST_META_REGIONINFO);
+    if (!this.catalogTracker.verifyMetaRegionLocation(timeout)) {
+      this.assignmentManager.assignMeta();
+      this.catalogTracker.waitForMeta();
+      // Above check waits for general meta availability but this does not
+      // guarantee that the transition has completed
+      this.assignmentManager.waitForAssignment(HRegionInfo.FIRST_META_REGIONINFO);
+      assigned++;
+    }
+    LOG.info(".META. assigned=" + assigned + ", rit=" + rit +
+      ", location=" + catalogTracker.getMetaLocation());
+    return assigned;
+  }
+
+  /*
+   * @return This master's address.
+   * @throws UnknownHostException
+   */
+  private static String getMyAddress(final Configuration c)
+  throws UnknownHostException {
+    // Find out our address up in DNS.
+    String s = DNS.getDefaultHost(c.get("hbase.master.dns.interface","default"),
+      c.get("hbase.master.dns.nameserver","default"));
+    s += ":" + c.get(HConstants.MASTER_PORT,
+        Integer.toString(HConstants.DEFAULT_MASTER_PORT));
+    return s;
+  }
+
+  /** @return HServerAddress of the master server */
+  public HServerAddress getMasterAddress() {
+    return this.address;
+  }
+
+  public long getProtocolVersion(String protocol, long clientVersion) {
+    return HBaseRPCProtocolVersion.versionID;
+  }
+
+  /** @return InfoServer object. May be null. */
+  public InfoServer getInfoServer() {
+    return this.infoServer;
+  }
+
+  @Override
+  public Configuration getConfiguration() {
+    return this.conf;
+  }
+
+  @Override
+  public ServerManager getServerManager() {
+    return this.serverManager;
+  }
+
+  @Override
+  public ExecutorService getExecutorService() {
+    return this.executorService;
+  }
+
+  @Override
+  public MasterFileSystem getMasterFileSystem() {
+    return this.fileSystemManager;
+  }
+
+  /**
+   * Get the ZK wrapper object - needed by master_jsp.java
+   * @return the zookeeper wrapper
+   */
+  public ZooKeeperWatcher getZooKeeperWatcher() {
+    return this.zooKeeper;
+  }
+
+  /*
+   * Start up all services. If any of these threads gets an unhandled exception
+   * then they just die with a logged message.  This should be fine because
+   * in general, we do not expect the master to get such unhandled exceptions
+   * as OOMEs; it should be lightly loaded. See what HRegionServer does if we
+   * need to install an uncaught exception handler.
+   */
+  private void startServiceThreads() {
+    try {
+      // Start the executor service pools
+      this.executorService.startExecutorService(ExecutorType.MASTER_OPEN_REGION,
+        conf.getInt("hbase.master.executor.openregion.threads", 5));
+      this.executorService.startExecutorService(ExecutorType.MASTER_CLOSE_REGION,
+        conf.getInt("hbase.master.executor.closeregion.threads", 5));
+      this.executorService.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS,
+        conf.getInt("hbase.master.executor.serverops.threads", 3));
+      this.executorService.startExecutorService(ExecutorType.MASTER_META_SERVER_OPERATIONS,
+        conf.getInt("hbase.master.executor.serverops.threads", 2));
+      // We depend on there being only one instance of this executor running
+      // at a time.  To allow concurrency, we would need fencing of the
+      // enable/disable of tables.
+      this.executorService.startExecutorService(ExecutorType.MASTER_TABLE_OPERATIONS, 1);
+
+      // Start log cleaner thread
+      String n = Thread.currentThread().getName();
+      this.logCleaner =
+        new LogCleaner(conf.getInt("hbase.master.cleaner.interval", 60 * 1000),
+          this, conf, getMasterFileSystem().getFileSystem(),
+          getMasterFileSystem().getOldLogDir());
+      Threads.setDaemonThreadRunning(logCleaner, n + ".oldLogCleaner");
+
+      // Put up info server.
+      int port = this.conf.getInt("hbase.master.info.port", 60010);
+      if (port >= 0) {
+        String a = this.conf.get("hbase.master.info.bindAddress", "0.0.0.0");
+        this.infoServer = new InfoServer(MASTER, a, port, false);
+        this.infoServer.setAttribute(MASTER, this);
+        this.infoServer.start();
+      }
+      // Start allowing requests to happen.
+      this.rpcServer.openServer();
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("Started service threads");
+      }
+    } catch (IOException e) {
+      if (e instanceof RemoteException) {
+        e = ((RemoteException)e).unwrapRemoteException();
+      }
+      // Something happened during startup. Shut things down.
+      abort("Failed startup", e);
+    }
+  }
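+
+  /*
+   * Illustrative hbase-site.xml sketch only (the keys are the ones read in
+   * startServiceThreads() above; the values here are hypothetical examples):
+   *
+   *   <property>
+   *     <name>hbase.master.executor.openregion.threads</name>
+   *     <value>10</value>
+   *   </property>
+   *   <property>
+   *     <name>hbase.master.info.port</name>
+   *     <value>60010</value>  <!-- a negative value skips starting the info server -->
+   *   </property>
+   */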
+
+  private void stopServiceThreads() {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Stopping service threads");
+    }
+    if (this.rpcServer != null) this.rpcServer.stop();
+    // Clean up and close up shop
+    if (this.logCleaner!= null) this.logCleaner.interrupt();
+    if (this.infoServer != null) {
+      LOG.info("Stopping infoServer");
+      try {
+        this.infoServer.stop();
+      } catch (Exception ex) {
+        ex.printStackTrace();
+      }
+    }
+    if (this.executorService != null) this.executorService.shutdown();
+  }
+
+  private static Thread getAndStartBalancerChore(final HMaster master) {
+    String name = master.getServerName() + "-BalancerChore";
+    int period = master.getConfiguration().getInt("hbase.balancer.period", 300000);
+    // Start up the load balancer chore
+    Chore chore = new Chore(name, period, master) {
+      @Override
+      protected void chore() {
+        master.balance();
+      }
+    };
+    return Threads.setDaemonThreadRunning(chore);
+  }
+
+  private void stopChores() {
+    if (this.balancerChore != null) {
+      this.balancerChore.interrupt();
+    }
+    if (this.catalogJanitorChore != null) {
+      this.catalogJanitorChore.interrupt();
+    }
+  }
+
+  @Override
+  public MapWritable regionServerStartup(final HServerInfo serverInfo,
+    final long serverCurrentTime)
+  throws IOException {
+    // Set the ip into the passed in serverInfo.  Its ip is more than likely
+    // not the ip that the master sees here.  See the end of this method where
+    // we pass it back to the regionserver by setting "hbase.regionserver.address".
+    // Ever after, the HSI combination 'server name' is what uniquely identifies
+    // the incoming RegionServer.
+    InetSocketAddress address = new InetSocketAddress(
+        HBaseServer.getRemoteIp().getHostName(),
+        serverInfo.getServerAddress().getPort());
+    serverInfo.setServerAddress(new HServerAddress(address));
+
+    // Register with server manager
+    this.serverManager.regionServerStartup(serverInfo, serverCurrentTime);
+    // Send back some config info
+    MapWritable mw = createConfigurationSubset();
+    mw.put(new Text("hbase.regionserver.address"),
+      serverInfo.getServerAddress());
+    return mw;
+  }
+
+  /**
+   * @return Subset of configuration to pass initializing regionservers: e.g.
+   * the filesystem to use and root directory to use.
+   */
+  protected MapWritable createConfigurationSubset() {
+    MapWritable mw = addConfig(new MapWritable(), HConstants.HBASE_DIR);
+    return addConfig(mw, "fs.default.name");
+  }
+
+  private MapWritable addConfig(final MapWritable mw, final String key) {
+    mw.put(new Text(key), new Text(this.conf.get(key)));
+    return mw;
+  }
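+
+  /*
+   * A sketch of the handshake payload built above (the keys come from the
+   * code; the values below are examples only, not real cluster settings).
+   * The MapWritable returned to a starting regionserver would contain
+   * entries along the lines of:
+   *
+   *   hbase.rootdir              -> hdfs://namenode:9000/hbase
+   *   fs.default.name            -> hdfs://namenode:9000
+   *   hbase.regionserver.address -> the address as seen by the master
+   */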
+
+  @Override
+  public HMsg [] regionServerReport(HServerInfo serverInfo, HMsg msgs[],
+    HRegionInfo[] mostLoadedRegions)
+  throws IOException {
+    return adornRegionServerAnswer(serverInfo,
+      this.serverManager.regionServerReport(serverInfo, msgs, mostLoadedRegions));
+  }
+
+  /**
+   * Override if you would like to add messages to return to regionserver
+   * <code>hsi</code> or to send an exception.
+   * @param hsi RegionServer the returned messages are destined for
+   * @param msgs Messages to add to
+   * @return Messages to return to the regionserver
+   * @throws IOException exceptions that were injected for the region servers
+   */
+  protected HMsg [] adornRegionServerAnswer(final HServerInfo hsi,
+      final HMsg [] msgs) throws IOException {
+    return msgs;
+  }
+
+  public boolean isMasterRunning() {
+    return !isStopped();
+  }
+
+  @Override
+  public boolean balance() {
+    // If the balance switch is off, don't run the balancer.
+    if (!this.balanceSwitch) return false;
+    synchronized (this.balancer) {
+      // Only allow one balance run at a time.
+      if (this.assignmentManager.isRegionsInTransition()) {
+        LOG.debug("Not running balancer because " +
+          this.assignmentManager.getRegionsInTransition().size() +
+          " region(s) in transition: " +
+          org.apache.commons.lang.StringUtils.
+            abbreviate(this.assignmentManager.getRegionsInTransition().toString(), 256));
+        return false;
+      }
+      if (this.serverManager.areDeadServersInProgress()) {
+        LOG.debug("Not running balancer because processing dead regionserver(s): "  +
+          this.serverManager.getDeadServers());
+        return false;
+      }
+      Map<HServerInfo, List<HRegionInfo>> assignments =
+        this.assignmentManager.getAssignments();
+      // Returned Map from AM does not include mention of servers w/o assignments.
+      for (Map.Entry<String, HServerInfo> e:
+          this.serverManager.getOnlineServers().entrySet()) {
+        HServerInfo hsi = e.getValue();
+        if (!assignments.containsKey(hsi)) {
+          assignments.put(hsi, new ArrayList<HRegionInfo>());
+        }
+      }
+      List<RegionPlan> plans = this.balancer.balanceCluster(assignments);
+      if (plans != null && !plans.isEmpty()) {
+        for (RegionPlan plan: plans) {
+          LOG.info("balance " + plan);
+          this.assignmentManager.balance(plan);
+        }
+      }
+    }
+    return true;
+  }
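+
+  /*
+   * A minimal sketch (assuming a caller holding a reference 'master' to a
+   * running instance of this class) of driving the balancer through the
+   * methods defined above:
+   *
+   *   master.balanceSwitch(true);    // make sure the balancer is enabled
+   *   boolean ran = master.balance();
+   *   // ran == false if regions were in transition or dead servers were
+   *   // still being processed
+   */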
+
+  @Override
+  public boolean balanceSwitch(final boolean b) {
+    boolean oldValue = this.balanceSwitch;
+    this.balanceSwitch = b;
+    LOG.info("Balance=" + b);
+    return oldValue;
+  }
+
+  @Override
+  public void move(final byte[] encodedRegionName, final byte[] destServerName)
+  throws UnknownRegionException {
+    Pair<HRegionInfo, HServerInfo> p =
+      this.assignmentManager.getAssignment(encodedRegionName);
+    if (p == null)
+      throw new UnknownRegionException(Bytes.toString(encodedRegionName));
+    HRegionInfo hri = p.getFirst();
+    HServerInfo dest = null;
+    if (destServerName == null || destServerName.length == 0) {
+      LOG.info("Passed destination servername is null/empty so " +
+        "choosing a server at random");
+      this.assignmentManager.clearRegionPlan(hri);
+      // Unassign will reassign it elsewhere choosing random server.
+      this.assignmentManager.unassign(hri);
+    } else {
+      dest = this.serverManager.getServerInfo(new String(destServerName));
+      RegionPlan rp = new RegionPlan(p.getFirst(), p.getSecond(), dest);
+      this.assignmentManager.balance(rp);
+    }
+  }
+
+  public void createTable(HTableDescriptor desc, byte [][] splitKeys)
+  throws IOException {
+    createTable(desc, splitKeys, false);
+  }
+
+  public void createTable(HTableDescriptor desc, byte [][] splitKeys,
+      boolean sync)
+  throws IOException {
+    if (!isMasterRunning()) {
+      throw new MasterNotRunningException();
+    }
+    HRegionInfo [] newRegions = null;
+    if(splitKeys == null || splitKeys.length == 0) {
+      newRegions = new HRegionInfo [] { new HRegionInfo(desc, null, null) };
+    } else {
+      int numRegions = splitKeys.length + 1;
+      newRegions = new HRegionInfo[numRegions];
+      byte [] startKey = null;
+      byte [] endKey = null;
+      for(int i=0;i<numRegions;i++) {
+        endKey = (i == splitKeys.length) ? null : splitKeys[i];
+        newRegions[i] = new HRegionInfo(desc, startKey, endKey);
+        startKey = endKey;
+      }
+    }
+    int timeout = conf.getInt("hbase.client.catalog.timeout", 10000);
+    // Need META availability to create a table
+    try {
+      if(catalogTracker.waitForMeta(timeout) == null) {
+        throw new NotAllMetaRegionsOnlineException();
+      }
+    } catch (InterruptedException e) {
+      LOG.warn("Interrupted waiting for meta availability", e);
+      throw new IOException(e);
+    }
+    createTable(newRegions, sync);
+  }
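+
+  /*
+   * Worked example of the split-key handling above (key values are
+   * illustrative only): splitKeys = {"b", "m"} yields three regions with
+   * boundaries
+   *
+   *   [null, "b"), ["b", "m"), ["m", null)
+   *
+   * i.e. splitKeys.length + 1 regions, open-ended at the start of the first
+   * and at the end of the last.
+   */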
+
+  private synchronized void createTable(final HRegionInfo [] newRegions,
+      boolean sync)
+  throws IOException {
+    String tableName = newRegions[0].getTableDesc().getNameAsString();
+    if(MetaReader.tableExists(catalogTracker, tableName)) {
+      throw new TableExistsException(tableName);
+    }
+    for(HRegionInfo newRegion : newRegions) {
+
+      // 1. Set table enabling flag up in zk.
+      try {
+        assignmentManager.getZKTable().setEnabledTable(tableName);
+      } catch (KeeperException e) {
+        throw new IOException("Unable to ensure that the table will be" +
+            " enabled because of a ZooKeeper issue", e);
+      }
+
+      // 2. Create HRegion
+      HRegion region = HRegion.createHRegion(newRegion,
+          fileSystemManager.getRootDir(), conf);
+
+      // 3. Insert into META
+      MetaEditor.addRegionToMeta(catalogTracker, region.getRegionInfo());
+
+      // 4. Close the new region to flush to disk.  Close log file too.
+      region.close();
+      region.getLog().closeAndDelete();
+
+      // 5. Trigger immediate assignment of this region
+      assignmentManager.assign(region.getRegionInfo(), true);
+    }
+
+    // 6. If sync, wait for assignment of regions
+    if(sync) {
+      LOG.debug("Waiting for " + newRegions.length + " region(s) to be " +
+          "assigned before returning");
+      for(HRegionInfo regionInfo : newRegions) {
+        try {
+          assignmentManager.waitForAssignment(regionInfo);
+        } catch (InterruptedException e) {
+          LOG.info("Interrupted waiting for region to be assigned during " +
+              "create table call");
+          return;
+        }
+      }
+    }
+  }
+
+  private static boolean isCatalogTable(final byte [] tableName) {
+    return Bytes.equals(tableName, HConstants.ROOT_TABLE_NAME) ||
+           Bytes.equals(tableName, HConstants.META_TABLE_NAME);
+  }
+
+  public void deleteTable(final byte [] tableName) throws IOException {
+    this.executorService.submit(new DeleteTableHandler(tableName, this, this));
+  }
+
+  public void addColumn(byte [] tableName, HColumnDescriptor column)
+  throws IOException {
+    new TableAddFamilyHandler(tableName, column, this, this).process();
+  }
+
+  public void modifyColumn(byte [] tableName, HColumnDescriptor descriptor)
+  throws IOException {
+    new TableModifyFamilyHandler(tableName, descriptor, this, this).process();
+  }
+
+  public void deleteColumn(final byte [] tableName, final byte [] c)
+  throws IOException {
+    new TableDeleteFamilyHandler(tableName, c, this, this).process();
+  }
+
+  public void enableTable(final byte [] tableName) throws IOException {
+    this.executorService.submit(new EnableTableHandler(this, tableName,
+      catalogTracker, assignmentManager));
+  }
+
+  public void disableTable(final byte [] tableName) throws IOException {
+    this.executorService.submit(new DisableTableHandler(this, tableName,
+      catalogTracker, assignmentManager));
+  }
+
+  /**
+   * Return the region and current deployment for the region containing
+   * the given row. If the region cannot be found, returns null. If it
+   * is found, but not currently deployed, the second element of the pair
+   * may be null.
+   */
+  Pair<HRegionInfo,HServerAddress> getTableRegionForRow(
+      final byte [] tableName, final byte [] rowKey)
+  throws IOException {
+    final AtomicReference<Pair<HRegionInfo, HServerAddress>> result =
+      new AtomicReference<Pair<HRegionInfo, HServerAddress>>(null);
+
+    MetaScannerVisitor visitor =
+      new MetaScannerVisitor() {
+        @Override
+        public boolean processRow(Result data) throws IOException {
+          if (data == null || data.size() <= 0) {
+            return true;
+          }
+          Pair<HRegionInfo, HServerAddress> pair =
+            MetaReader.metaRowToRegionPair(data);
+          if (pair == null) {
+            return false;
+          }
+          if (!Bytes.equals(pair.getFirst().getTableDesc().getName(),
+                tableName)) {
+            return false;
+          }
+          result.set(pair);
+          return true;
+        }
+    };
+
+    MetaScanner.metaScan(conf, visitor, tableName, rowKey, 1);
+    return result.get();
+  }
+
+  @Override
+  public void modifyTable(final byte[] tableName, HTableDescriptor htd)
+  throws IOException {
+    this.executorService.submit(new ModifyTableHandler(tableName, htd, this, this));
+  }
+
+  @Override
+  public void checkTableModifiable(final byte [] tableName)
+  throws IOException {
+    String tableNameStr = Bytes.toString(tableName);
+    if (isCatalogTable(tableName)) {
+      throw new IOException("Can't modify catalog tables");
+    }
+    if (!MetaReader.tableExists(getCatalogTracker(), tableNameStr)) {
+      throw new TableNotFoundException(tableNameStr);
+    }
+    if (!getAssignmentManager().getZKTable().
+        isDisabledTable(Bytes.toString(tableName))) {
+      throw new TableNotDisabledException(tableName);
+    }
+  }
+
+  public void clearFromTransition(HRegionInfo hri) {
+    if (this.assignmentManager.isRegionInTransition(hri) != null) {
+      this.assignmentManager.clearRegionFromTransition(hri);
+    }
+  }
+  /**
+   * @return cluster status
+   */
+  public ClusterStatus getClusterStatus() {
+    ClusterStatus status = new ClusterStatus();
+    status.setHBaseVersion(VersionInfo.getVersion());
+    status.setServerInfo(serverManager.getOnlineServers().values());
+    status.setDeadServers(serverManager.getDeadServers());
+    status.setRegionsInTransition(assignmentManager.getRegionsInTransition());
+    return status;
+  }
+
+  @Override
+  public void abort(final String msg, final Throwable t) {
+    if (t != null) LOG.fatal(msg, t);
+    else LOG.fatal(msg);
+    this.abort = true;
+    stop("Aborting");
+  }
+
+  @Override
+  public ZooKeeperWatcher getZooKeeper() {
+    return zooKeeper;
+  }
+
+  @Override
+  public String getServerName() {
+    return address.toString();
+  }
+
+  @Override
+  public CatalogTracker getCatalogTracker() {
+    return catalogTracker;
+  }
+
+  @Override
+  public AssignmentManager getAssignmentManager() {
+    return this.assignmentManager;
+  }
+
+  @Override
+  public void shutdown() {
+    this.serverManager.shutdownCluster();
+    try {
+      this.clusterStatusTracker.setClusterDown();
+    } catch (KeeperException e) {
+      LOG.error("ZooKeeper exception trying to set cluster as down in ZK", e);
+    }
+  }
+
+  @Override
+  public void stopMaster() {
+    stop("Stopped by " + Thread.currentThread().getName());
+  }
+
+  @Override
+  public void stop(final String why) {
+    LOG.info(why);
+    this.stopped = true;
+    // If we are a backup master, we need to interrupt wait
+    synchronized (this.activeMasterManager.clusterHasActiveMaster) {
+      this.activeMasterManager.clusterHasActiveMaster.notifyAll();
+    }
+  }
+
+  @Override
+  public boolean isStopped() {
+    return this.stopped;
+  }
+
+  /**
+   * Report whether this master is currently the active master or not.
+   * If not active master, we are parked on ZK waiting to become active.
+   *
+   * This method is used for testing.
+   *
+   * @return true if active master, false if not.
+   */
+  public boolean isActiveMaster() {
+    return isActiveMaster;
+  }
+
+  /**
+   * Report whether this master has completed with its initialization and is
+   * ready.  If ready, the master is also the active master.  A standby master
+   * is never ready.
+   *
+   * This method is used for testing.
+   *
+   * @return true if master is ready to go, false if not.
+   */
+  public boolean isInitialized() {
+    return initialized;
+  }
+
+  @Override
+  public void assign(final byte [] regionName, final boolean force)
+  throws IOException {
+    Pair<HRegionInfo, HServerAddress> pair =
+      MetaReader.getRegion(this.catalogTracker, regionName);
+    if (pair == null) throw new UnknownRegionException(Bytes.toString(regionName));
+    assignRegion(pair.getFirst());
+  }
+
+  public void assignRegion(HRegionInfo hri) {
+    assignmentManager.assign(hri, true);
+  }
+
+  @Override
+  public void unassign(final byte [] regionName, final boolean force)
+  throws IOException {
+    Pair<HRegionInfo, HServerAddress> pair =
+      MetaReader.getRegion(this.catalogTracker, regionName);
+    if (pair == null) throw new UnknownRegionException(Bytes.toString(regionName));
+    HRegionInfo hri = pair.getFirst();
+    if (force) this.assignmentManager.clearRegionFromTransition(hri);
+    this.assignmentManager.unassign(hri, force);
+  }
+
+  /**
+   * Utility for constructing an instance of the passed HMaster class.
+   * @param masterClass Concrete HMaster class to instantiate
+   * @param conf Configuration to pass to its constructor
+   * @return HMaster instance.
+   */
+  public static HMaster constructMaster(Class<? extends HMaster> masterClass,
+      final Configuration conf)  {
+    try {
+      Constructor<? extends HMaster> c =
+        masterClass.getConstructor(Configuration.class);
+      return c.newInstance(conf);
+    } catch (InvocationTargetException ite) {
+      Throwable target = ite.getTargetException() != null?
+        ite.getTargetException(): ite;
+      if (target.getCause() != null) target = target.getCause();
+      throw new RuntimeException("Failed construction of Master: " +
+        masterClass.toString(), target);
+    } catch (Exception e) {
+      throw new RuntimeException("Failed construction of Master: " +
+        masterClass.toString() + ((e.getCause() != null)?
+          e.getCause().getMessage(): ""), e);
+    }
+  }
+
+
+  /**
+   * @see org.apache.hadoop.hbase.master.HMasterCommandLine
+   */
+  public static void main(String [] args) throws Exception {
+    new HMasterCommandLine(HMaster.class).doMain(args);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java
new file mode 100644
index 0000000..c050eb9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java
@@ -0,0 +1,207 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.ServerCommandLine;
+import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+import org.apache.zookeeper.KeeperException;
+
+public class HMasterCommandLine extends ServerCommandLine {
+  private static final Log LOG = LogFactory.getLog(HMasterCommandLine.class);
+
+  private static final String USAGE =
+    "Usage: Master [opts] start|stop\n" +
+    " start  Start Master. If local mode, start Master and RegionServer in same JVM\n" +
+    " stop   Start cluster shutdown; Master signals RegionServer shutdown\n" +
+    " where [opts] are:\n" +
+    "   --minServers=<servers>    Minimum RegionServers needed to host user tables.\n" +
+    "   --backup                  Master should start in backup mode";
+
+  private final Class<? extends HMaster> masterClass;
+
+  public HMasterCommandLine(Class<? extends HMaster> masterClass) {
+    this.masterClass = masterClass;
+  }
+
+  protected String getUsage() {
+    return USAGE;
+  }
+
+
+  public int run(String args[]) throws Exception {
+    Options opt = new Options();
+    opt.addOption("minServers", true, "Minimum RegionServers needed to host user tables");
+    opt.addOption("backup", false, "Do not try to become HMaster until the primary fails");
+
+
+    CommandLine cmd;
+    try {
+      cmd = new GnuParser().parse(opt, args);
+    } catch (ParseException e) {
+      LOG.error("Could not parse: ", e);
+      usage(null);
+      return -1;
+    }
+
+
+    if (cmd.hasOption("minServers")) {
+      String val = cmd.getOptionValue("minServers");
+      getConf().setInt("hbase.regions.server.count.min",
+                  Integer.valueOf(val));
+      LOG.debug("minServers set to " + val);
+    }
+
+    // check if we are the backup master - override the conf if so
+    if (cmd.hasOption("backup")) {
+      getConf().setBoolean(HConstants.MASTER_TYPE_BACKUP, true);
+    }
+
+    List<String> remainingArgs = cmd.getArgList();
+    if (remainingArgs.size() != 1) {
+      usage(null);
+      return -1;
+    }
+
+    String command = remainingArgs.get(0);
+
+    if ("start".equals(command)) {
+      return startMaster();
+    } else if ("stop".equals(command)) {
+      return stopMaster();
+    } else {
+      usage("Invalid command: " + command);
+      return -1;
+    }
+  }
+
+  private int startMaster() {
+    Configuration conf = getConf();
+    try {
+      // If 'local', defer to LocalHBaseCluster instance.  Starts master
+      // and regionserver both in the one JVM.
+      if (LocalHBaseCluster.isLocal(conf)) {
+        final MiniZooKeeperCluster zooKeeperCluster =
+          new MiniZooKeeperCluster();
+        File zkDataPath = new File(conf.get("hbase.zookeeper.property.dataDir"));
+        int zkClientPort = conf.getInt("hbase.zookeeper.property.clientPort", 0);
+        if (zkClientPort == 0) {
+          throw new IOException("No config value for hbase.zookeeper.property.clientPort");
+        }
+        zooKeeperCluster.setClientPort(zkClientPort);
+        int clientPort = zooKeeperCluster.startup(zkDataPath);
+        if (clientPort != zkClientPort) {
+          String errorMsg = "Couldnt start ZK at requested address of " +
+            zkClientPort + ", instead got: " + clientPort + ". Aborting. Why? " +
+            "Because clients (eg shell) wont be able to find this ZK quorum";
+          System.err.println(errorMsg);
+          throw new IOException(errorMsg);
+        }
+        conf.set("hbase.zookeeper.property.clientPort",
+                 Integer.toString(clientPort));
+        // Need to have the zk cluster shut down when the master shuts down.
+        // Run a subclass that does the zk cluster shutdown on its way out.
+        LocalHBaseCluster cluster = new LocalHBaseCluster(conf, 1, 1,
+                                                          LocalHMaster.class, HRegionServer.class);
+        ((LocalHMaster)cluster.getMaster(0)).setZKCluster(zooKeeperCluster);
+        cluster.startup();
+      } else {
+        HMaster master = HMaster.constructMaster(masterClass, conf);
+        if (master.isStopped()) {
+          LOG.info("Won't bring the Master up as a shutdown is requested");
+          return -1;
+        }
+        master.start();
+        master.join();
+      }
+    } catch (Throwable t) {
+      LOG.error("Failed to start master", t);
+      return -1;
+    }
+    return 0;
+  }
+
+  private int stopMaster() {
+    HBaseAdmin adm = null;
+    try {
+      Configuration conf = getConf();
+      // Don't try more than once
+      conf.setInt("hbase.client.retries.number", 1);
+      adm = new HBaseAdmin(conf);
+    } catch (MasterNotRunningException e) {
+      LOG.error("Master not running");
+      return -1;
+    } catch (ZooKeeperConnectionException e) {
+      LOG.error("ZooKeeper not available");
+      return -1;
+    }
+    try {
+      adm.shutdown();
+    } catch (Throwable t) {
+      LOG.error("Failed to stop master", t);
+      return -1;
+    }
+    return 0;
+  }
+
+  /*
+   * Version of master that will shutdown the passed zk cluster on its way out.
+   */
+  public static class LocalHMaster extends HMaster {
+    private MiniZooKeeperCluster zkcluster = null;
+
+    public LocalHMaster(Configuration conf)
+    throws IOException, KeeperException, InterruptedException {
+      super(conf);
+    }
+
+    @Override
+    public void run() {
+      super.run();
+      if (this.zkcluster != null) {
+        try {
+          this.zkcluster.shutdown();
+        } catch (IOException e) {
+          e.printStackTrace();
+        }
+      }
+    }
+
+    void setZKCluster(final MiniZooKeeperCluster zkcluster) {
+      this.zkcluster = zkcluster;
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
new file mode 100644
index 0000000..15f65c6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
@@ -0,0 +1,652 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Random;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+
+/**
+ * Makes decisions about the placement and movement of Regions across
+ * RegionServers.
+ *
+ * <p>Cluster-wide load balancing will occur only when there are no regions in
+ * transition, and according to a fixed period of time, using {@link #balanceCluster(Map)}.
+ *
+ * <p>Inline region placement with {@link #immediateAssignment} can be used when
+ * the Master needs to handle closed regions that it currently does not have
+ * a destination set for.  This can happen during master failover.
+ *
+ * <p>On cluster startup, bulk assignment can be used to determine
+ * locations for all Regions in a cluster.
+ *
+ * <p>This class produces plans for the {@link AssignmentManager} to execute.
+ */
+public class LoadBalancer {
+  private static final Log LOG = LogFactory.getLog(LoadBalancer.class);
+  private static final Random rand = new Random();
+
+  /**
+   * Generate a global load balancing plan according to the specified map of
+   * server information to the most loaded regions of each server.
+   *
+   * The load balancing invariant is that all servers are within 1 region of the
+   * average number of regions per server.  If the average is an integer number,
+   * all servers will be balanced to the average.  Otherwise, all servers will
+   * have either floor(average) or ceiling(average) regions.
+   *
+   * The algorithm is currently implemented as such:
+   *
+   * <ol>
+   * <li>Determine the two valid numbers of regions each server should have,
+   *     <b>MIN</b>=floor(average) and <b>MAX</b>=ceiling(average).
+   *
+   * <li>Iterate down the most loaded servers, shedding regions from each so
+   *     each server hosts exactly <b>MAX</b> regions.  Stop once you reach a
+   *     server that already has &lt;= <b>MAX</b> regions.
+   *
+   * <li>Iterate down the least loaded servers, assigning regions so each server
+   *     has exactly <b>MIN</b> regions.  Stop once you reach a server that
+   *     already has &gt;= <b>MIN</b> regions.
+   *
+   *     Regions being assigned to underloaded servers are those that were shed
+   *     in the previous step.  It is possible that there were not enough
+   *     regions shed to fill each underloaded server to <b>MIN</b>.  If so we
+   *     end up with a number of regions required to do so, <b>neededRegions</b>.
+   *
+   *     It is also possible that we were able to fill each underloaded server
+   *     but ended up with regions that were shed from overloaded servers and
+   *     still do not have an assignment.
+   *
+   *     If neither of these conditions hold (no regions needed to fill the
+   *     underloaded servers, no regions leftover from overloaded servers),
+   *     we are done and return.  Otherwise we handle these cases below.
+   *
+   * <li>If <b>neededRegions</b> is non-zero (still have underloaded servers),
+   *     we iterate the most loaded servers again, shedding a single region from
+   *     each (this brings them from having <b>MAX</b> regions to having
+   *     <b>MIN</b> regions).
+   *
+   * <li>We now definitely have more regions that need assignment, either from
+   *     the previous step or from the original shedding from overloaded servers.
+   *
+   *     Iterate the least loaded servers filling each to <b>MIN</b>.
+   *
+   * <li>If we still have more regions that need assignment, again iterate the
+   *     least loaded servers, this time giving each one region (filling them to
+   *     <b>MAX</b>) until we run out.
+   *
+   * <li>All servers will now either host <b>MIN</b> or <b>MAX</b> regions.
+   *
+   *     In addition, any server hosting &gt;= <b>MAX</b> regions is guaranteed
+   *     to end up with <b>MAX</b> regions at the end of the balancing.  This
+   *     ensures the minimal number of regions possible are moved.
+   * </ol>
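+   *
+   * <p>Worked example (figures are illustrative, not taken from this code):
+   * 4 servers hosting 14 regions gives average = 3.5, so <b>MIN</b>=3 and
+   * <b>MAX</b>=4; a server reporting 6 regions sheds 2, a server reporting 2
+   * regions receives 1 or 2, and at the end every server holds either 3 or 4
+   * regions.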
+   *
+   * TODO: We can reassign at most as many regions away from a particular
+   *       server as it reports in its most-loaded-regions list.
+   *       Should we just keep all assignment in memory?  Any objections?
+   *       Does this mean we need HeapSize on HMaster?  Or just careful monitor?
+   *       (current thinking is we will hold all assignments in memory)
+   *
+   * @param clusterState Map of regionservers and their load/region information to
+   *                   a list of their most loaded regions
+   * @return a list of regions to be moved, including source and destination,
+   *         or null if cluster is already balanced
+   */
+  public List<RegionPlan> balanceCluster(
+      Map<HServerInfo,List<HRegionInfo>> clusterState) {
+    long startTime = System.currentTimeMillis();
+
+    // Make a map sorted by load and count regions
+    TreeMap<HServerInfo,List<HRegionInfo>> serversByLoad =
+      new TreeMap<HServerInfo,List<HRegionInfo>>(
+          new HServerInfo.LoadComparator());
+    int numServers = clusterState.size();
+    if (numServers == 0) {
+      LOG.debug("numServers=0 so skipping load balancing");
+      return null;
+    }
+    int numRegions = 0;
+    // Iterate so we can count regions as we build the map
+    for(Map.Entry<HServerInfo, List<HRegionInfo>> server:
+        clusterState.entrySet()) {
+      server.getKey().getLoad().setNumberOfRegions(server.getValue().size());
+      numRegions += server.getKey().getLoad().getNumberOfRegions();
+      serversByLoad.put(server.getKey(), server.getValue());
+    }
+
+    // Check if we even need to do any load balancing
+    float average = (float)numRegions / numServers; // for logging
+    int min = numRegions / numServers;
+    int max = numRegions % numServers == 0 ? min : min + 1;
+    if(serversByLoad.lastKey().getLoad().getNumberOfRegions() <= max &&
+       serversByLoad.firstKey().getLoad().getNumberOfRegions() >= min) {
+      // Skipped because no server outside (min,max) range
+      LOG.info("Skipping load balancing.  servers=" + numServers + " " +
+          "regions=" + numRegions + " average=" + average + " " +
+          "mostloaded=" + serversByLoad.lastKey().getLoad().getNumberOfRegions() +
+          " leastloaded=" + serversByLoad.lastKey().getLoad().getNumberOfRegions());
+      return null;
+    }
+
+    // Balance the cluster
+    // TODO: Look at data block locality or a more complex load to do this
+    List<RegionPlan> regionsToMove = new ArrayList<RegionPlan>();
+    int regionidx = 0; // track the index in above list for setting destination
+
+    // Walk down most loaded, pruning each to the max
+    int serversOverloaded = 0;
+    Map<HServerInfo,BalanceInfo> serverBalanceInfo =
+      new TreeMap<HServerInfo,BalanceInfo>();
+    for(Map.Entry<HServerInfo, List<HRegionInfo>> server :
+      serversByLoad.descendingMap().entrySet()) {
+      HServerInfo serverInfo = server.getKey();
+      int regionCount = serverInfo.getLoad().getNumberOfRegions();
+      if(regionCount <= max) {
+        serverBalanceInfo.put(serverInfo, new BalanceInfo(0, 0));
+        break;
+      }
+      serversOverloaded++;
+      List<HRegionInfo> regions = server.getValue();
+      int numToOffload = Math.min(regionCount - max, regions.size());
+      int numTaken = 0;
+      for (HRegionInfo hri: regions) {
+        // Don't rebalance meta regions.
+        if (hri.isMetaRegion()) continue;
+        regionsToMove.add(new RegionPlan(hri, serverInfo, null));
+        numTaken++;
+        if (numTaken >= numToOffload) break;
+      }
+      serverBalanceInfo.put(serverInfo,
+          new BalanceInfo(numToOffload, (-1)*numTaken));
+    }
+
+    // Walk down least loaded, filling each to the min
+    int serversUnderloaded = 0; // number of servers that get new regions
+    int neededRegions = 0; // number of regions needed to bring all up to min
+    for(Map.Entry<HServerInfo, List<HRegionInfo>> server :
+      serversByLoad.entrySet()) {
+      int regionCount = server.getKey().getLoad().getNumberOfRegions();
+      if(regionCount >= min) {
+        break;
+      }
+      serversUnderloaded++;
+      int numToTake = min - regionCount;
+      int numTaken = 0;
+      while(numTaken < numToTake && regionidx < regionsToMove.size()) {
+        regionsToMove.get(regionidx).setDestination(server.getKey());
+        numTaken++;
+        regionidx++;
+      }
+      serverBalanceInfo.put(server.getKey(), new BalanceInfo(0, numTaken));
+      // If we still want to take some, increment needed
+      if(numTaken < numToTake) {
+        neededRegions += (numToTake - numTaken);
+      }
+    }
+
+    // If none needed to fill all to min and none left to drain all to max,
+    // we are done
+    if(neededRegions == 0 && regionidx == regionsToMove.size()) {
+      long endTime = System.currentTimeMillis();
+      LOG.info("Calculated a load balance in " + (endTime-startTime) + "ms. " +
+          "Moving " + regionsToMove.size() + " regions off of " +
+          serversOverloaded + " overloaded servers onto " +
+          serversUnderloaded + " less loaded servers");
+      return regionsToMove;
+    }
+
+    // Need to do a second pass.
+    // Either more regions to assign out or servers that are still underloaded
+
+    // If we need more to fill min, grab one from each most loaded until enough
+    if (neededRegions != 0) {
+      // Walk down most loaded, grabbing one from each until we get enough
+      for(Map.Entry<HServerInfo, List<HRegionInfo>> server :
+        serversByLoad.descendingMap().entrySet()) {
+        BalanceInfo balanceInfo = serverBalanceInfo.get(server.getKey());
+        int idx =
+          balanceInfo == null ? 0 : balanceInfo.getNextRegionForUnload();
+        if (idx >= server.getValue().size()) break;
+        HRegionInfo region = server.getValue().get(idx);
+        if (region.isMetaRegion()) continue; // Don't move meta regions.
+        regionsToMove.add(new RegionPlan(region, server.getKey(), null));
+        if(--neededRegions == 0) {
+          // No more regions needed, done shedding
+          break;
+        }
+      }
+    }
+
+    // Now we have a set of regions that must be all assigned out
+    // Assign each underloaded up to the min, then if leftovers, assign to max
+
+    // Walk down least loaded, assigning to each to fill up to min
+    for(Map.Entry<HServerInfo, List<HRegionInfo>> server :
+      serversByLoad.entrySet()) {
+      int regionCount = server.getKey().getLoad().getNumberOfRegions();
+      if (regionCount >= min) break;
+      BalanceInfo balanceInfo = serverBalanceInfo.get(server.getKey());
+      if(balanceInfo != null) {
+        regionCount += balanceInfo.getNumRegionsAdded();
+      }
+      if(regionCount >= min) {
+        continue;
+      }
+      int numToTake = min - regionCount;
+      int numTaken = 0;
+      while(numTaken < numToTake && regionidx < regionsToMove.size()) {
+        regionsToMove.get(regionidx).setDestination(server.getKey());
+        numTaken++;
+        regionidx++;
+      }
+    }
+
+    // If we still have regions to dish out, assign underloaded to max
+    if(regionidx != regionsToMove.size()) {
+      for(Map.Entry<HServerInfo, List<HRegionInfo>> server :
+        serversByLoad.entrySet()) {
+        int regionCount = server.getKey().getLoad().getNumberOfRegions();
+        if(regionCount >= max) {
+          break;
+        }
+        regionsToMove.get(regionidx).setDestination(server.getKey());
+        regionidx++;
+        if(regionidx == regionsToMove.size()) {
+          break;
+        }
+      }
+    }
+
+    long endTime = System.currentTimeMillis();
+
+    if (regionidx != regionsToMove.size() || neededRegions != 0) {
+      // Emit data so can diagnose how balancer went astray.
+      LOG.warn("regionidx=" + regionidx + ", regionsToMove=" + regionsToMove.size() +
+      ", numServers=" + numServers + ", serversOverloaded=" + serversOverloaded +
+      ", serversUnderloaded=" + serversUnderloaded);
+      StringBuilder sb = new StringBuilder();
+      for (Map.Entry<HServerInfo, List<HRegionInfo>> e: clusterState.entrySet()) {
+        if (sb.length() > 0) sb.append(", ");
+        sb.append(e.getKey().getServerName());
+        sb.append(" ");
+        sb.append(e.getValue().size());
+      }
+      LOG.warn("Input " + sb.toString());
+    }
+
+    // All done!
+    LOG.info("Calculated a load balance in " + (endTime-startTime) + "ms. " +
+        "Moving " + regionsToMove.size() + " regions off of " +
+        serversOverloaded + " overloaded servers onto " +
+        serversUnderloaded + " less loaded servers");
+
+    return regionsToMove;
+  }
+
+  /**
+   * Stores additional per-server information about the regions added/removed
+   * during the run of the balancing algorithm.
+   *
+   * For servers that receive additional regions, we are not updating the number
+   * of regions in HServerInfo once we decide to reassign regions to a server,
+   * but we need this information later in the algorithm.  This is stored in
+   * <b>numRegionsAdded</b>.
+   *
+   * For servers that shed regions, we need to track which regions we have
+   * already shed.  <b>nextRegionForUnload</b> contains the index in the list
+   * of regions on the server that is the next to be shed.
+   */
+  private static class BalanceInfo {
+
+    private final int nextRegionForUnload;
+    private final int numRegionsAdded;
+
+    public BalanceInfo(int nextRegionForUnload, int numRegionsAdded) {
+      this.nextRegionForUnload = nextRegionForUnload;
+      this.numRegionsAdded = numRegionsAdded;
+    }
+
+    public int getNextRegionForUnload() {
+      return nextRegionForUnload;
+    }
+
+    public int getNumRegionsAdded() {
+      return numRegionsAdded;
+    }
+  }
+
+  /**
+   * Generates a bulk assignment plan to be used on cluster startup using a
+   * simple round-robin assignment.
+   * <p>
+   * Takes a list of all the regions and all the servers in the cluster and
+   * returns a map of each server to the regions that it should be assigned.
+   * <p>
+   * Currently implemented as a round-robin assignment.  Same invariant as
+   * load balancing, all servers holding floor(avg) or ceiling(avg).
+   *
+   * TODO: Use block locations from HDFS to place regions with their blocks
+   *
+   * @param regions all regions
+   * @param servers all servers
+   * @return map of server to the regions it should take, or null if no
+   *         assignment is possible (i.e. no regions or no servers)
+   */
+  public static Map<HServerInfo,List<HRegionInfo>> roundRobinAssignment(
+      List<HRegionInfo> regions, List<HServerInfo> servers) {
+    if(regions.size() == 0 || servers.size() == 0) {
+      return null;
+    }
+    Map<HServerInfo,List<HRegionInfo>> assignments =
+      new TreeMap<HServerInfo,List<HRegionInfo>>();
+    int numRegions = regions.size();
+    int numServers = servers.size();
+    int max = (int)Math.ceil((float)numRegions/numServers);
+    int serverIdx = 0;
+    for(HServerInfo server : servers) {
+      List<HRegionInfo> serverRegions = new ArrayList<HRegionInfo>(max);
+      for(int i=serverIdx;i<regions.size();i+=numServers) {
+        serverRegions.add(regions.get(i));
+      }
+      assignments.put(server, serverRegions);
+      serverIdx++;
+    }
+    return assignments;
+  }
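+
+  /*
+   * Worked example of the round-robin assignment above (region and server
+   * names are illustrative): with servers {s0, s1, s2} and regions {r0..r6},
+   * each server takes every numServers-th region starting at its index, so
+   *
+   *   s0 -> r0, r3, r6
+   *   s1 -> r1, r4
+   *   s2 -> r2, r5
+   *
+   * max = ceil(7/3) = 3, and every server ends up with floor(avg)=2 or
+   * ceil(avg)=3 regions.
+   */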
+
+  /**
+   * Generates a bulk assignment startup plan, attempting to reuse the existing
+   * assignment information from META, but adjusting for the specified list of
+   * available/online servers available for assignment.
+   * <p>
+   * Takes a map of all regions to their existing assignment from META.  Also
+   * takes a list of online servers for regions to be assigned to.  Attempts to
+   * retain all assignment, so in some instances initial assignment will not be
+   * completely balanced.
+   * <p>
+   * Any leftover regions without an existing server to be assigned to will be
+   * assigned randomly to available servers.
+   * @param regions regions and existing assignment from meta
+   * @param servers available servers
+   * @return map of servers and regions to be assigned to them
+   */
+  public static Map<HServerInfo, List<HRegionInfo>> retainAssignment(
+      Map<HRegionInfo, HServerAddress> regions, List<HServerInfo> servers) {
+    Map<HServerInfo, List<HRegionInfo>> assignments =
+      new TreeMap<HServerInfo, List<HRegionInfo>>();
+    // Build a map of server addresses to server info so we can match things up
+    Map<HServerAddress, HServerInfo> serverMap =
+      new TreeMap<HServerAddress, HServerInfo>();
+    for (HServerInfo server : servers) {
+      serverMap.put(server.getServerAddress(), server);
+      assignments.put(server, new ArrayList<HRegionInfo>());
+    }
+    for (Map.Entry<HRegionInfo, HServerAddress> region : regions.entrySet()) {
+      HServerAddress hsa = region.getValue();
+      HServerInfo server = hsa == null? null: serverMap.get(hsa);
+      if (server != null) {
+        assignments.get(server).add(region.getKey());
+      } else {
+        assignments.get(servers.get(rand.nextInt(servers.size()))).add(
+            region.getKey());
+      }
+    }
+    return assignments;
+  }
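+
+  /*
+   * Illustrative example of the retained-assignment logic above (server and
+   * region names are hypothetical): if META maps region A to rs1 and region B
+   * to rs2, but only rs1 and rs3 are online, then A stays on rs1 while B is
+   * handed to a randomly chosen online server (rs1 or rs3).
+   */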
+
+  /**
+   * Find the block locations for all of the files for the specified region.
+   *
+   * Returns an ordered list of hosts that are hosting the blocks for this
+   * region.  The weight of each host is the sum of the block lengths of all
+   * files on that host, so the first host in the list is the server which
+   * holds the most bytes of the given region's HFiles.
+   *
+   * TODO: Make this work.  Need to figure out how to match hadoop's hostnames
+   *       given for block locations with our HServerAddress.
+   * TODO: Use the right directory for the region
+   * TODO: Use getFileBlockLocations on the files not the directory
+   *
+   * @param fs the filesystem
+   * @param region region
+   * @return ordered list of hosts holding blocks of the specified region
+   * @throws IOException if any filesystem errors
+   */
+  @SuppressWarnings("unused")
+  private List<String> getTopBlockLocations(FileSystem fs, HRegionInfo region)
+  throws IOException {
+    String encodedName = region.getEncodedName();
+    Path path = new Path("/hbase/table/" + encodedName);
+    FileStatus status = fs.getFileStatus(path);
+    BlockLocation [] blockLocations =
+      fs.getFileBlockLocations(status, 0, status.getLen());
+    Map<HostAndWeight,HostAndWeight> hostWeights =
+      new TreeMap<HostAndWeight,HostAndWeight>(new HostAndWeight.HostComparator());
+    for(BlockLocation bl : blockLocations) {
+      String [] hosts = bl.getHosts();
+      long len = bl.getLength();
+      for(String host : hosts) {
+        // Look up by host; the map's HostComparator compares HostAndWeight
+        // instances, so probe with a zero-weight key rather than the raw String.
+        HostAndWeight haw = hostWeights.get(new HostAndWeight(host, 0));
+        if(haw == null) {
+          haw = new HostAndWeight(host, len);
+          hostWeights.put(haw, haw);
+        } else {
+          haw.addWeight(len);
+        }
+      }
+    }
+    NavigableSet<HostAndWeight> orderedHosts = new TreeSet<HostAndWeight>(
+        new HostAndWeight.WeightComparator());
+    orderedHosts.addAll(hostWeights.values());
+    List<String> topHosts = new ArrayList<String>(orderedHosts.size());
+    for(HostAndWeight haw : orderedHosts.descendingSet()) {
+      topHosts.add(haw.getHost());
+    }
+    return topHosts;
+  }
+
+  /**
+   * Stores the hostname and weight for that hostname.
+   *
+   * This is used when determining the physical locations of the blocks making
+   * up a region.
+   *
+   * To make a prioritized list of the hosts holding the most data of a region,
+   * this class is used to count the total weight for each host.  The weight is
+   * currently just the size of the file.
+   */
+  private static class HostAndWeight {
+
+    private final String host;
+    private long weight;
+
+    public HostAndWeight(String host, long weight) {
+      this.host = host;
+      this.weight = weight;
+    }
+
+    public void addWeight(long weight) {
+      this.weight += weight;
+    }
+
+    public String getHost() {
+      return host;
+    }
+
+    public long getWeight() {
+      return weight;
+    }
+
+    private static class HostComparator implements Comparator<HostAndWeight> {
+      @Override
+      public int compare(HostAndWeight l, HostAndWeight r) {
+        return l.getHost().compareTo(r.getHost());
+      }
+    }
+
+    private static class WeightComparator implements Comparator<HostAndWeight> {
+      @Override
+      public int compare(HostAndWeight l, HostAndWeight r) {
+        if(l.getWeight() == r.getWeight()) {
+          return l.getHost().compareTo(r.getHost());
+        }
+        return l.getWeight() < r.getWeight() ? -1 : 1;
+      }
+    }
+  }
+
+  /**
+   * Generates an immediate assignment plan to be used by a new master for
+   * regions in transition that do not have an already known destination.
+   *
+   * Takes a list of regions that need immediate assignment and a list of
+   * all available servers.  Returns a map of regions to the server they
+   * should be assigned to.
+   *
+   * This method will return quickly and does not do any intelligent
+   * balancing.  The goal is to make a fast decision not the best decision
+   * possible.
+   *
+   * Currently this is random.
+   *
+   * @param regions regions that need immediate assignment
+   * @param servers all available servers
+   * @return map of regions to the server it should be assigned to
+   */
+  public static Map<HRegionInfo,HServerInfo> immediateAssignment(
+      List<HRegionInfo> regions, List<HServerInfo> servers) {
+    Map<HRegionInfo,HServerInfo> assignments =
+      new TreeMap<HRegionInfo,HServerInfo>();
+    for(HRegionInfo region : regions) {
+      assignments.put(region, servers.get(rand.nextInt(servers.size())));
+    }
+    return assignments;
+  }
+
+  public static HServerInfo randomAssignment(List<HServerInfo> servers) {
+    if (servers == null || servers.isEmpty()) {
+      LOG.warn("Wanted to do random assignment but no servers to assign to");
+      return null;
+    }
+    return servers.get(rand.nextInt(servers.size()));
+  }
+
+  /**
+   * Stores the plan for the move of an individual region.
+   *
+   * Contains info for the region being moved, info for the server the region
+   * should be moved from, and info for the server the region should be moved
+   * to.
+   *
+   * The comparable implementation of this class compares only the region
+   * information and not the source/dest server info.
+   */
+  public static class RegionPlan implements Comparable<RegionPlan> {
+    private final HRegionInfo hri;
+    private final HServerInfo source;
+    private HServerInfo dest;
+
+    /**
+     * Instantiate a plan for a region move, moving the specified region from
+     * the specified source server to the specified destination server.
+     *
+     * Destination server can be instantiated as null and later set
+     * with {@link #setDestination(HServerInfo)}.
+     *
+     * @param hri region to be moved
+     * @param source regionserver region should be moved from
+     * @param dest regionserver region should be moved to
+     */
+    public RegionPlan(final HRegionInfo hri, HServerInfo source, HServerInfo dest) {
+      this.hri = hri;
+      this.source = source;
+      this.dest = dest;
+    }
+
+    /**
+     * Set the destination server for the plan for this region.
+     */
+    public void setDestination(HServerInfo dest) {
+      this.dest = dest;
+    }
+
+    /**
+     * Get the source server for the plan for this region.
+     * @return server info for source
+     */
+    public HServerInfo getSource() {
+      return source;
+    }
+
+    /**
+     * Get the destination server for the plan for this region.
+     * @return server info for destination
+     */
+    public HServerInfo getDestination() {
+      return dest;
+    }
+
+    /**
+     * Get the encoded region name for the region this plan is for.
+     * @return Encoded region name
+     */
+    public String getRegionName() {
+      return this.hri.getEncodedName();
+    }
+
+    public HRegionInfo getRegionInfo() {
+      return this.hri;
+    }
+
+    /**
+     * Compare the region info.
+     * @param o region plan you are comparing against
+     */
+    @Override
+    public int compareTo(RegionPlan o) {
+      return getRegionName().compareTo(o.getRegionName());
+    }
+
+    @Override
+    public String toString() {
+      return "hri=" + this.hri.getRegionNameAsString() + ", src=" +
+        (this.source == null? "": this.source.getServerName()) +
+        ", dest=" + (this.dest == null? "": this.dest.getServerName());
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/LogCleaner.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/LogCleaner.java
new file mode 100644
index 0000000..f4252a6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/LogCleaner.java
@@ -0,0 +1,178 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.LinkedList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+
+import static org.apache.hadoop.hbase.HConstants.HBASE_MASTER_LOGCLEANER_PLUGINS;
+
+/**
+ * This Chore, every time it runs, clears the WAL logs in the old logs folder
+ * that are deletable for every log cleaner in the chain.  To limit the number
+ * of deletes it sends, it deletes at most 20 logs in a single run.
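+ *
+ * <p>The per-run cap is read from "hbase.master.logcleaner.maxdeletedlogs"
+ * (default 20).  A sketch of raising it, assuming more deletes per run are
+ * acceptable for the cluster:
+ * <pre>
+ *   conf.setInt("hbase.master.logcleaner.maxdeletedlogs", 50);
+ * </pre>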
+ */
+public class LogCleaner extends Chore {
+  static final Log LOG = LogFactory.getLog(LogCleaner.class.getName());
+
+  // Max number we can delete on every chore, this is to make sure we don't
+  // issue thousands of delete commands around the same time
+  private final int maxDeletedLogs;
+  private final FileSystem fs;
+  private final Path oldLogDir;
+  private List<LogCleanerDelegate> logCleanersChain;
+  private final Configuration conf;
+
+  /**
+   *
+   * @param p the period of time to sleep between each run
+   * @param s the stopper
+   * @param conf configuration to use
+   * @param fs handle to the FS
+   * @param oldLogDir the path to the archived logs
+   */
+  public LogCleaner(final int p, final Stoppable s,
+                        Configuration conf, FileSystem fs,
+                        Path oldLogDir) {
+    super("LogsCleaner", p, s);
+
+    this.maxDeletedLogs =
+        conf.getInt("hbase.master.logcleaner.maxdeletedlogs", 20);
+    this.fs = fs;
+    this.oldLogDir = oldLogDir;
+    this.conf = conf;
+    this.logCleanersChain = new LinkedList<LogCleanerDelegate>();
+
+    initLogCleanersChain();
+  }
+
+  /*
+   * Initialize the chain of log cleaners from the configuration. The default
+   * three LogCleanerDelegates in this chain are: TimeToLiveLogCleaner,
+   * ReplicationLogCleaner and SnapshotLogCleaner.
+   */
+  private void initLogCleanersChain() {
+    String[] logCleaners = conf.getStrings(HBASE_MASTER_LOGCLEANER_PLUGINS);
+    if (logCleaners != null) {
+      for (String className : logCleaners) {
+        LogCleanerDelegate logCleaner = newLogCleaner(className, conf);
+        addLogCleaner(logCleaner);
+      }
+    }
+  }
+
+  /**
+   * A utility method to create new instances of LogCleanerDelegate based
+   * on the class name of the LogCleanerDelegate.
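+   * <p>A usage sketch (TimeToLiveLogCleaner ships in this package):
+   * <pre>
+   *   LogCleanerDelegate cleaner =
+   *     LogCleaner.newLogCleaner(TimeToLiveLogCleaner.class.getName(), conf);
+   * </pre>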
+   * @param className fully qualified class name of the LogCleanerDelegate
+   * @param conf configuration to set on the new cleaner
+   * @return the new instance
+   */
+  public static LogCleanerDelegate newLogCleaner(String className, Configuration conf) {
+    try {
+      Class<?> c = Class.forName(className);
+      LogCleanerDelegate cleaner = (LogCleanerDelegate) c.newInstance();
+      cleaner.setConf(conf);
+      return cleaner;
+    } catch(Exception e) {
+      LOG.warn("Can NOT create LogCleanerDelegate: " + className, e);
+      // skipping if can't instantiate
+      return null;
+    }
+  }
+
+  /**
+   * Add a LogCleanerDelegate to the log cleaner chain. A log file is deletable
+   * if it is deletable for each LogCleanerDelegate in the chain.
+   * @param logCleaner the cleaner to add to the chain
+   */
+  public void addLogCleaner(LogCleanerDelegate logCleaner) {
+    if (logCleaner != null && !logCleanersChain.contains(logCleaner)) {
+      logCleanersChain.add(logCleaner);
+      LOG.debug("Add log cleaner in chain: " + logCleaner.getClass().getName());
+    }
+  }
+
+  @Override
+  protected void chore() {
+    try {
+      FileStatus [] files = this.fs.listStatus(this.oldLogDir);
+      if (files == null) return;
+      int nbDeletedLog = 0;
+      FILE: for (FileStatus file : files) {
+        Path filePath = file.getPath();
+        if (HLog.validateHLogFilename(filePath.getName())) {
+          for (LogCleanerDelegate logCleaner : logCleanersChain) {
+            if (logCleaner.isStopped()) {
+              LOG.warn("A log cleaner is stopped, won't delete any log.");
+              return;
+            }
+
+            if (!logCleaner.isLogDeletable(filePath) ) {
+              // this log is not deletable, continue to process next log file
+              continue FILE;
+            }
+          }
+          // delete this log file if it passes all the log cleaners
+          this.fs.delete(filePath, true);
+          nbDeletedLog++;
+        } else {
+          LOG.warn("Found a wrongly formated file: "
+              + file.getPath().getName());
+          this.fs.delete(filePath, true);
+          nbDeletedLog++;
+        }
+        if (nbDeletedLog >= maxDeletedLogs) {
+          break;
+        }
+      }
+    } catch (IOException e) {
+      e = RemoteExceptionHandler.checkIOException(e);
+      LOG.warn("Error while cleaning the logs", e);
+    }
+  }
+
+  @Override
+  public void run() {
+    try {
+      super.run();
+    } finally {
+      for (LogCleanerDelegate lc: this.logCleanersChain) {
+        try {
+          lc.stop("Exiting");
+        } catch (Throwable t) {
+          LOG.warn("Stopping", t);
+        }
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/LogCleanerDelegate.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/LogCleanerDelegate.java
new file mode 100644
index 0000000..27ea161
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/LogCleanerDelegate.java
@@ -0,0 +1,48 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Stoppable;
+
+/**
+ * Interface for the log cleaning function inside the master. By default, three
+ * cleaners <code>TimeToLiveLogCleaner</code>,  <code>ReplicationLogCleaner</code>,
+ * <code>SnapshotLogCleaner</code> are called in order. If other behavior is
+ * needed, implement your own LogCleanerDelegate and add it to the configuration
+ * "hbase.master.logcleaner.plugins", which is a comma-separated list of fully
+ * qualified class names. LogCleaner will add it to the chain.
+ *
+ * <p>HBase ships with LogCleaner as the default implementation.
+ *
+ * <p>This interface extends Configurable, so setConf needs to be called once
+ * before using the cleaner.  Because LogCleanerDelegates are created in
+ * LogCleaner by reflection, classes that implement this interface must provide
+ * a default constructor.
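+ *
+ * <p>A minimal sketch of a custom delegate (the class name here is
+ * hypothetical, not part of HBase):
+ * <pre>
+ *   public class MyLogCleaner implements LogCleanerDelegate {
+ *     private Configuration conf;
+ *     private boolean stopped = false;
+ *     public boolean isLogDeletable(Path filePath) {
+ *       return true;                       // decide per log file
+ *     }
+ *     public void setConf(Configuration conf) { this.conf = conf; }
+ *     public Configuration getConf() { return conf; }
+ *     public void stop(String why) { this.stopped = true; }
+ *     public boolean isStopped() { return stopped; }
+ *   }
+ * </pre>
+ * Register it by listing its fully qualified class name in
+ * "hbase.master.logcleaner.plugins".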
+ */
+public interface LogCleanerDelegate extends Configurable, Stoppable {
+  /**
+   * Should the master delete the log or keep it?
+   * @param filePath full path to log.
+   * @return true if the log is deletable, false if not
+   */
+  public boolean isLogDeletable(Path filePath);
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
new file mode 100644
index 0000000..98cbc73
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
@@ -0,0 +1,304 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.master.metrics.MasterMetrics;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogSplitter;
+import org.apache.hadoop.hbase.regionserver.wal.OrphanHLogAfterSplitException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+
+/**
+ * This class abstracts the operations the HMaster performs against the
+ * underlying file system, such as splitting log files and checking file
+ * system status.
+ */
+public class MasterFileSystem {
+  private static final Log LOG = LogFactory.getLog(MasterFileSystem.class.getName());
+  // HBase configuration
+  Configuration conf;
+  // master status
+  Server master;
+  // metrics for master
+  MasterMetrics metrics;
+  // Keep around for convenience.
+  private final FileSystem fs;
+  // Is the filesystem ok?
+  private volatile boolean fsOk = true;
+  // The Path to the old logs dir
+  private final Path oldLogDir;
+  // root hbase directory on the FS
+  private final Path rootdir;
+  // create the split log lock
+  final Lock splitLogLock = new ReentrantLock();
+
+  public MasterFileSystem(Server master, MasterMetrics metrics)
+  throws IOException {
+    this.conf = master.getConfiguration();
+    this.master = master;
+    this.metrics = metrics;
+    // Set filesystem to be that of this.rootdir else we get complaints about
+    // mismatched filesystems if hbase.rootdir is hdfs and fs.defaultFS is
+    // default localfs.  Presumption is that rootdir is fully-qualified before
+    // we get to here with appropriate fs scheme.
+    this.rootdir = FSUtils.getRootDir(conf);
+    // Cover both bases, the old way of setting default fs and the new.
+    // We're supposed to run on 0.20 and 0.21 anyways.
+    conf.set("fs.default.name", this.rootdir.toString());
+    conf.set("fs.defaultFS", this.rootdir.toString());
+    // setup the filesystem variable
+    this.fs = FileSystem.get(conf);
+    // set up the archived logs path
+    this.oldLogDir = new Path(this.rootdir, HConstants.HREGION_OLDLOGDIR_NAME);
+    createInitialFileSystemLayout();
+  }
+
+  /**
+   * Create initial layout in filesystem.
+   * <ol>
+   * <li>Check if the root region exists and is readable; if not, create it.
+   * Create the hbase.version file and the -ROOT- directory if they do not exist.
+   * </li>
+   * <li>Create a log archive directory for RS to put archived logs</li>
+   * </ol>
+   * Idempotent.
+   */
+  private void createInitialFileSystemLayout() throws IOException {
+    // check if the root directory exists
+    checkRootDir(this.rootdir, conf, this.fs);
+
+    // Make sure the region servers can archive their old logs
+    if(!this.fs.exists(this.oldLogDir)) {
+      this.fs.mkdirs(this.oldLogDir);
+    }
+  }
+
+  public FileSystem getFileSystem() {
+    return this.fs;
+  }
+
+  /**
+   * Get the directory where old logs go
+   * @return the dir
+   */
+  public Path getOldLogDir() {
+    return this.oldLogDir;
+  }
+
+  /**
+   * Checks to see if the file system is still accessible.
+   * If not, aborts the master and marks the file system as unavailable.
+   * @return false if file system is not available
+   */
+  public boolean checkFileSystem() {
+    if (this.fsOk) {
+      try {
+        FSUtils.checkFileSystemAvailable(this.fs);
+      } catch (IOException e) {
+        master.abort("Shutting down HBase cluster: file system not available", e);
+        this.fsOk = false;
+      }
+    }
+    return this.fsOk;
+  }
+
+  /**
+   * @return HBase root dir.
+   */
+  public Path getRootDir() {
+    return this.rootdir;
+  }
+
+  /**
+   * Inspect the log directory to recover any log file without
+   * an active region server.
+   * @param onlineServers Map of online servers keyed by
+   * {@link HServerInfo#getServerName()}
+   */
+  void splitLogAfterStartup(final Map<String, HServerInfo> onlineServers) {
+    Path logsDirPath = new Path(this.rootdir, HConstants.HREGION_LOGDIR_NAME);
+    try {
+      if (!this.fs.exists(logsDirPath)) {
+        return;
+      }
+    } catch (IOException e) {
+      throw new RuntimeException("Failed exists test on " + logsDirPath, e);
+    }
+    FileStatus[] logFolders;
+    try {
+      logFolders = this.fs.listStatus(logsDirPath);
+    } catch (IOException e) {
+      throw new RuntimeException("Failed listing " + logsDirPath.toString(), e);
+    }
+    if (logFolders == null || logFolders.length == 0) {
+      LOG.debug("No log files to split, proceeding...");
+      return;
+    }
+    for (FileStatus status : logFolders) {
+      String serverName = status.getPath().getName();
+      if (onlineServers.get(serverName) == null) {
+        LOG.info("Log folder " + status.getPath() + " doesn't belong " +
+          "to a known region server, splitting");
+        splitLog(serverName);
+      } else {
+        LOG.info("Log folder " + status.getPath() +
+          " belongs to an existing region server");
+      }
+    }
+  }
+
+  public void splitLog(final String serverName) {
+    this.splitLogLock.lock();
+    long splitTime = 0, splitLogSize = 0;
+    Path logDir = new Path(this.rootdir, HLog.getHLogDirectoryName(serverName));
+    try {
+      HLogSplitter splitter = HLogSplitter.createLogSplitter(
+        conf, rootdir, logDir, oldLogDir, this.fs);
+      try {
+        splitter.splitLog();
+      } catch (OrphanHLogAfterSplitException e) {
+        LOG.warn("Retrying splitting because of:", e);
+        splitter.splitLog();
+      }
+      splitTime = splitter.getTime();
+      splitLogSize = splitter.getSize();
+    } catch (IOException e) {
+      LOG.error("Failed splitting " + logDir.toString(), e);
+    } finally {
+      this.splitLogLock.unlock();
+    }
+    if (this.metrics != null) {
+      this.metrics.addSplit(splitTime, splitLogSize);
+    }
+  }
+
+  /**
+   * Get the rootdir.  Make sure it's wholesome and exists before returning.
+   * @param rd
+   * @param c
+   * @param fs
+   * @return hbase.rootdir (after checks for existence and bootstrapping if
+   * needed populating the directory with necessary bootup files).
+   * @throws IOException
+   */
+  private static Path checkRootDir(final Path rd, final Configuration c,
+    final FileSystem fs)
+  throws IOException {
+    // If FS is in safe mode wait till out of it.
+    FSUtils.waitOnSafeMode(c, c.getInt(HConstants.THREAD_WAKE_FREQUENCY,
+        10 * 1000));
+    // Filesystem is good. Go ahead and check for hbase.rootdir.
+    if (!fs.exists(rd)) {
+      fs.mkdirs(rd);
+      FSUtils.setVersion(fs, rd);
+    } else {
+      FSUtils.checkVersion(fs, rd, true);
+    }
+    // Make sure the root region directory exists!
+    if (!FSUtils.rootRegionExists(fs, rd)) {
+      bootstrap(rd, c);
+    }
+    return rd;
+  }
+
+  private static void bootstrap(final Path rd, final Configuration c)
+  throws IOException {
+    LOG.info("BOOTSTRAP: creating ROOT and first META regions");
+    try {
+      // Bootstrapping, make sure blockcache is off.  Else, one will be
+      // created here in bootstrap and it'll need to be cleaned up.  Better to
+      // not make it in first place.  Turn off block caching for bootstrap.
+      // Enable after.
+      HRegionInfo rootHRI = new HRegionInfo(HRegionInfo.ROOT_REGIONINFO);
+      setInfoFamilyCaching(rootHRI, false);
+      HRegionInfo metaHRI = new HRegionInfo(HRegionInfo.FIRST_META_REGIONINFO);
+      setInfoFamilyCaching(metaHRI, false);
+      HRegion root = HRegion.createHRegion(rootHRI, rd, c);
+      HRegion meta = HRegion.createHRegion(metaHRI, rd, c);
+      setInfoFamilyCaching(rootHRI, true);
+      setInfoFamilyCaching(metaHRI, true);
+      // Add first region from the META table to the ROOT region.
+      HRegion.addRegionToMETA(root, meta);
+      root.close();
+      root.getLog().closeAndDelete();
+      meta.close();
+      meta.getLog().closeAndDelete();
+    } catch (IOException e) {
+      e = RemoteExceptionHandler.checkIOException(e);
+      LOG.error("bootstrap", e);
+      throw e;
+    }
+  }
+
+  /**
+   * Set block caching and in-memory on the catalog ('info') family of the
+   * passed region.
+   * @param hri region whose catalog family caching is being set
+   * @param b true to enable, false to disable
+   */
+  private static void setInfoFamilyCaching(final HRegionInfo hri, final boolean b) {
+    for (HColumnDescriptor hcd: hri.getTableDesc().families.values()) {
+      if (Bytes.equals(hcd.getName(), HConstants.CATALOG_FAMILY)) {
+        hcd.setBlockCacheEnabled(b);
+        hcd.setInMemory(b);
+      }
+    }
+  }
+
+  public void deleteRegion(HRegionInfo region) throws IOException {
+    fs.delete(HRegion.getRegionDir(rootdir, region), true);
+  }
+
+  public void deleteTable(byte[] tableName) throws IOException {
+    fs.delete(new Path(rootdir, Bytes.toString(tableName)), true);
+  }
+
+  public void updateRegionInfo(HRegionInfo region) {
+    // TODO implement this.  i think this is currently broken in trunk i don't
+    //      see this getting updated.
+    //      @see HRegion.checkRegioninfoOnFilesystem()
+  }
+
+  public void deleteFamily(HRegionInfo region, byte[] familyName)
+  throws IOException {
+    fs.delete(Store.getStoreHomedir(
+        new Path(rootdir, region.getTableDesc().getNameAsString()),
+        region.getEncodedName(), familyName), true);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
new file mode 100644
index 0000000..593254b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java
@@ -0,0 +1,59 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+
+/**
+ * Services Master supplies
+ */
+public interface MasterServices {
+  /**
+   * @return Master's instance of the {@link AssignmentManager}
+   */
+  public AssignmentManager getAssignmentManager();
+
+  /**
+   * @return Master's filesystem {@link MasterFileSystem} utility class.
+   */
+  public MasterFileSystem getMasterFileSystem();
+
+  /**
+   * @return Master's {@link ServerManager} instance.
+   */
+  public ServerManager getServerManager();
+
+  /**
+   * @return Master's instance of {@link ExecutorService}
+   */
+  public ExecutorService getExecutorService();
+
+  /**
+   * Check table is modifiable; i.e. exists and is offline.
+   * @param tableName Name of table to check.
+   * @throws TableNotDisabledException
+   * @throws TableNotFoundException 
+   */
+  public void checkTableModifiable(final byte [] tableName) throws IOException;
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
new file mode 100644
index 0000000..07f0660
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
@@ -0,0 +1,695 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClockOutOfSyncException;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.PleaseHoldException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.YouAreDeadException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.RetriesExhaustedException;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler;
+import org.apache.hadoop.hbase.master.handler.ServerShutdownHandler;
+import org.apache.hadoop.hbase.master.metrics.MasterMetrics;
+import org.apache.hadoop.hbase.regionserver.Leases.LeaseStillHeldException;
+
+/**
+ * The ServerManager class manages info about region servers - HServerInfo,
+ * load numbers, dying servers, etc.
+ * <p>
+ * Maintains lists of online and dead servers.  Processes the startups,
+ * shutdowns, and deaths of region servers.
+ * <p>
+ * Servers are distinguished in two different ways.  A given server has a
+ * location, specified by hostname and port, and of which there can only be one
+ * online at any given time.  A server instance is specified by the location
+ * (hostname and port) as well as the startcode (timestamp from when the server
+ * was started).  This is used to differentiate a restarted instance of a given
+ * server from the original instance.
+ */
+public class ServerManager {
+  private static final Log LOG = LogFactory.getLog(ServerManager.class);
+
+  // Set if we are to shutdown the cluster.
+  private volatile boolean clusterShutdown = false;
+
+  /** The map of known server names to server info */
+  private final Map<String, HServerInfo> onlineServers =
+    new ConcurrentHashMap<String, HServerInfo>();
+
+  // TODO: This is strange to have two maps but HSI above is used on both sides
+  /**
+   * Map from full server-instance name to the RPC connection for this server.
+   */
+  private final Map<String, HRegionInterface> serverConnections =
+    new HashMap<String, HRegionInterface>();
+
+  private final Server master;
+  private final MasterServices services;
+
+  // Reporting to track master metrics.
+  private final MasterMetrics metrics;
+
+  private final DeadServer deadservers;
+
+  private final long maxSkew;
+
+  /**
+   * Constructor.
+   * @param master
+   * @param services
+   * @param metrics
+   */
+  public ServerManager(final Server master, final MasterServices services,
+      MasterMetrics metrics) {
+    this.master = master;
+    this.services = services;
+    this.metrics = metrics;
+    Configuration c = master.getConfiguration();
+    maxSkew = c.getLong("hbase.master.maxclockskew", 30000);
+    this.deadservers =
+      new DeadServer(c.getInt("hbase.master.maxdeadservers", 100));
+  }
+
+  /**
+   * Let the server manager know a new regionserver has come online
+   * @param serverInfo
+   * @param serverCurrentTime The current time of the region server in ms
+   * @throws IOException
+   */
+  void regionServerStartup(final HServerInfo serverInfo, long serverCurrentTime)
+  throws IOException {
+    // Test for case where we get a region startup message from a regionserver
+    // that has been quickly restarted but whose znode expiration handler has
+    // not yet run, or from a server whose fail we are currently processing.
+    // Test its host+port combo is present in serverAddresstoServerInfo.  If it
+    // is, reject the server and trigger its expiration. The next time it comes
+    // in, it should have been removed from serverAddressToServerInfo and queued
+    // for processing by ProcessServerShutdown.
+    HServerInfo info = new HServerInfo(serverInfo);
+    checkIsDead(info.getServerName(), "STARTUP");
+    checkAlreadySameHostPort(info);
+    checkClockSkew(info, serverCurrentTime);
+    recordNewServer(info, false, null);
+  }
+
+  /**
+   * Test to see if we have a server of same host and port already.
+   * @param serverInfo
+   * @throws PleaseHoldException
+   */
+  void checkAlreadySameHostPort(final HServerInfo serverInfo)
+  throws PleaseHoldException {
+    String hostAndPort = serverInfo.getServerAddress().toString();
+    HServerInfo existingServer =
+      haveServerWithSameHostAndPortAlready(serverInfo.getHostnamePort());
+    if (existingServer != null) {
+      String message = "Server start rejected; we already have " + hostAndPort +
+        " registered; existingServer=" + existingServer + ", newServer=" + serverInfo;
+      LOG.info(message);
+      if (existingServer.getStartCode() < serverInfo.getStartCode()) {
+        LOG.info("Triggering server recovery; existingServer " +
+          existingServer.getServerName() + " looks stale");
+        expireServer(existingServer);
+      }
+      throw new PleaseHoldException(message);
+    }
+  }
+
+  private HServerInfo haveServerWithSameHostAndPortAlready(final String hostnamePort) {
+    synchronized (this.onlineServers) {
+      for (Map.Entry<String, HServerInfo> e: this.onlineServers.entrySet()) {
+        if (e.getValue().getHostnamePort().equals(hostnamePort)) {
+          return e.getValue();
+        }
+      }
+    }
+    return null;
+  }
+
+  /**
+   * Checks the clock skew between the server and the master. If the clock
+   * skew exceeds the configured maximum, throws an exception.
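+   * <p>The maximum allowed skew is read from "hbase.master.maxclockskew"
+   * (default 30000 ms); a sketch of loosening it:
+   * <pre>
+   *   conf.setLong("hbase.master.maxclockskew", 60000L);
+   * </pre>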
+   * @throws ClockOutOfSyncException
+   */
+  private void checkClockSkew(final HServerInfo serverInfo,
+      final long serverCurrentTime)
+  throws ClockOutOfSyncException {
+    long skew = System.currentTimeMillis() - serverCurrentTime;
+    if (skew > maxSkew) {
+      String message = "Server " + serverInfo.getServerName() + " has been " +
+        "rejected; Reported time is too far out of sync with master.  " +
+        "Time difference of " + skew + "ms > max allowed of " + maxSkew + "ms";
+      LOG.warn(message);
+      throw new ClockOutOfSyncException(message);
+    }
+  }
+
+  /**
+   * If this server is on the dead list, reject it with a YouAreDeadException.
+   * @param serverName Server name formatted as host_port_startcode.
+   * @param what START or REPORT
+   * @throws YouAreDeadException
+   */
+  private void checkIsDead(final String serverName, final String what)
+  throws YouAreDeadException {
+    if (!this.deadservers.isDeadServer(serverName)) return;
+    String message = "Server " + what + " rejected; currently processing " +
+      serverName + " as dead server";
+    LOG.debug(message);
+    throw new YouAreDeadException(message);
+  }
+
+  /**
+   * Adds the HSI to the RS list
+   * @param info The region server information
+   * @param useInfoLoad True if the load from the info should be used; e.g.
+   * under a master failover
+   * @param hri Region interface.  Can be null.
+   */
+  void recordNewServer(HServerInfo info, boolean useInfoLoad,
+      HRegionInterface hri) {
+    HServerLoad load = useInfoLoad? info.getLoad(): new HServerLoad();
+    String serverName = info.getServerName();
+    LOG.info("Registering server=" + serverName + ", regionCount=" +
+      load.getLoad() + ", userLoad=" + useInfoLoad);
+    info.setLoad(load);
+    // TODO: Why did we update the RS location ourself?  Shouldn't RS do this?
+    // masterStatus.getZooKeeper().updateRSLocationGetWatch(info, watcher);
+    // -- If I understand the question, the RS does not update the location
+    // because could be disagreement over locations because of DNS issues; only
+    // master does DNS now -- St.Ack 20100929.
+    this.onlineServers.put(serverName, info);
+    if (hri == null) {
+      serverConnections.remove(serverName);
+    } else {
+      serverConnections.put(serverName, hri);
+    }
+  }
+
+  /**
+   * Called to process the messages sent from the region server to the master
+   * along with the heart beat.
+   *
+   * @param serverInfo
+   * @param msgs
+   * @param mostLoadedRegions Array of regions the region server is submitting
+   * as candidates to be rebalanced, should it be overloaded
+   * @return messages from master to region server indicating what region
+   * server should do.
+   *
+   * @throws IOException
+   */
+  HMsg [] regionServerReport(final HServerInfo serverInfo,
+    final HMsg [] msgs, final HRegionInfo[] mostLoadedRegions)
+  throws IOException {
+    // Be careful. This method returns in the middle.
+    HServerInfo info = new HServerInfo(serverInfo);
+
+    // Check if dead.  If it is, it'll get a 'You Are Dead!' exception.
+    checkIsDead(info.getServerName(), "REPORT");
+
+    // If we don't know this server, tell it shutdown.
+    HServerInfo storedInfo = this.onlineServers.get(info.getServerName());
+    if (storedInfo == null) {
+      // Maybe we already have this host+port combo and it's just a different
+      // start code?
+      checkAlreadySameHostPort(info);
+      // Just let the server in. Presume master joining a running cluster.
+      // recordNewServer is what happens at the end of reportServerStartup.
+      // The only thing we are skipping is passing back to the regionserver
+      // the HServerInfo to use. Here we presume a master has already done
+      // that so we'll press on with whatever it gave us for HSI.
+      recordNewServer(info, true, null);
+      // If msgs, put off their processing but this is not enough because
+      // it's possible that the next time the server reports in, we'll still
+      // not be up and serving. For example, if a split, we'll need the
+      // regions and servers set up in the master before the below
+      // handleSplitReport will work. TODO: Fix!!
+      if (msgs.length > 0)
+        throw new PleaseHoldException("FIX! Putting off " +
+          "message processing because not yet ready but possibly we won't be " +
+          "ready on next report either");
+    }
+
+    // Check startcodes
+    if (raceThatShouldNotHappenAnymore(storedInfo, info)) {
+      return HMsg.STOP_REGIONSERVER_ARRAY;
+    }
+
+    for (HMsg msg: msgs) {
+      LOG.info("Received " + msg + " from " + serverInfo.getServerName());
+      switch (msg.getType()) {
+      case REGION_SPLIT:
+        this.services.getAssignmentManager().handleSplitReport(serverInfo,
+            msg.getRegionInfo(), msg.getDaughterA(), msg.getDaughterB());
+        break;
+
+        default:
+          LOG.error("Unhandled msg type " + msg);
+      }
+    }
+
+    HMsg [] reply = null;
+    int numservers = countOfRegionServers();
+    if (this.clusterShutdown) {
+      if (numservers <= 2) {
+        // Shutdown needs to be staggered; the meta regions need to close last
+        // in case they need to be updated during the close melee.  If <= 2
+        // servers left, then these are the two that were carrying root and meta
+        // most likely (TODO: This presumes unsplittable meta -- FIX). Tell
+        // these servers can shutdown now too.
+        reply = HMsg.STOP_REGIONSERVER_ARRAY;
+      }
+    }
+    return processRegionServerAllsWell(info, mostLoadedRegions, reply);
+  }
+
+  private boolean raceThatShouldNotHappenAnymore(final HServerInfo storedInfo,
+      final HServerInfo reportedInfo) {
+    // storedInfo can be null if the server was just registered above; no race then.
+    if (storedInfo != null &&
+        storedInfo.getStartCode() != reportedInfo.getStartCode()) {
+      // TODO: I don't think this possible any more.  We check startcodes when
+      // server comes in on regionServerStartup -- St.Ack
+      // This state is reachable if:
+      // 1) RegionServer A started
+      // 2) RegionServer B started on the same machine, then clobbered A in regionServerStartup.
+      // 3) RegionServer A returns, expecting to work as usual.
+      // The answer is to ask A to shut down for good.
+      LOG.warn("Race condition detected: " + reportedInfo.getServerName());
+      synchronized (this.onlineServers) {
+        removeServerInfo(reportedInfo.getServerName());
+        notifyOnlineServers();
+      }
+      return true;
+    }
+    return false;
+  }
+
+  /**
+   * RegionServer is checking in; no exceptional circumstances.
+   * @param serverInfo
+   * @param mostLoadedRegions
+   * @param msgs
+   * @return messages to pass back to the region server
+   * @throws IOException
+   */
+  private HMsg[] processRegionServerAllsWell(HServerInfo serverInfo,
+      final HRegionInfo[] mostLoadedRegions, HMsg[] msgs)
+  throws IOException {
+    // Refresh the info object and the load information
+    this.onlineServers.put(serverInfo.getServerName(), serverInfo);
+    HServerLoad load = serverInfo.getLoad();
+    if (load != null && this.metrics != null) {
+      this.metrics.incrementRequests(load.getNumberOfRequests());
+    }
+    // No more piggyback messages on heartbeats for other stuff
+    return msgs;
+  }
+
+  /**
+   * @param serverName
+   * @return True if we removed server from the list.
+   */
+  private boolean removeServerInfo(final String serverName) {
+    HServerInfo info = this.onlineServers.remove(serverName);
+    if (info != null) {
+      return true;
+    }
+    return false;
+  }
+
+  /**
+   * Compute the average load across all region servers.
+   * Currently, this uses a very naive computation - just uses the number of
+   * regions being served, ignoring stats about number of requests.
+   * @return the average load
+   */
+  public double getAverageLoad() {
+    int totalLoad = 0;
+    int numServers = 0;
+    for (HServerInfo hsi : onlineServers.values()) {
+      numServers++;
+      totalLoad += hsi.getLoad().getNumberOfRegions();
+    }
+    // Guard against division by zero when no servers have registered yet.
+    if (numServers == 0) {
+      return 0.0;
+    }
+    return (double)totalLoad / (double)numServers;
+  }
+
+  /** @return the count of active regionservers */
+  int countOfRegionServers() {
+    // Presumes onlineServers is a concurrent map
+    return this.onlineServers.size();
+  }
+
+  /**
+   * @param name server name
+   * @return HServerInfo for the given server name
+   */
+  public HServerInfo getServerInfo(String name) {
+    return this.onlineServers.get(name);
+  }
+
+  /**
+   * @return Read-only map of servers to serverinfo
+   */
+  public Map<String, HServerInfo> getOnlineServers() {
+    // Presumption is that iterating the returned Map is OK.
+    synchronized (this.onlineServers) {
+      return Collections.unmodifiableMap(this.onlineServers);
+    }
+  }
+
+  public Set<String> getDeadServers() {
+    return this.deadservers.clone();
+  }
+
+  /**
+   * Checks if any dead servers are currently in progress.
+   * @return true if any RS are being processed as dead, false if not
+   */
+  public boolean areDeadServersInProgress() {
+    return this.deadservers.areDeadServersInProgress();
+  }
+
+  /**
+   * @param hsa
+   * @return The HServerInfo whose HServerAddress is <code>hsa</code> or null
+   * if nothing found.
+   */
+  public HServerInfo getHServerInfo(final HServerAddress hsa) {
+    synchronized(this.onlineServers) {
+      // TODO: This is primitive.  Do a better search.
+      for (Map.Entry<String, HServerInfo> e: this.onlineServers.entrySet()) {
+        if (e.getValue().getServerAddress().equals(hsa)) {
+          return e.getValue();
+        }
+      }
+    }
+    return null;
+  }
+
+  private void notifyOnlineServers() {
+    synchronized (this.onlineServers) {
+      this.onlineServers.notifyAll();
+    }
+  }
+
+  /*
+   * Wait on regionservers to report in
+   * with {@link #regionServerReport(HServerInfo, HMsg[])} so they get notice
+   * the master is going down.  Waits until all region servers come back with
+   * a MSG_REGIONSERVER_STOP.
+   */
+  void letRegionServersShutdown() {
+    synchronized (onlineServers) {
+      while (onlineServers.size() > 0) {
+        StringBuilder sb = new StringBuilder();
+        for (String key: this.onlineServers.keySet()) {
+          if (sb.length() > 0) {
+            sb.append(", ");
+          }
+          sb.append(key);
+        }
+        LOG.info("Waiting on regionserver(s) to go down " + sb.toString());
+        try {
+          this.onlineServers.wait(1000);
+        } catch (InterruptedException e) {
+          // continue
+        }
+      }
+    }
+  }
+
+  /*
+   * Expire the passed server.  Add it to list of deadservers and queue a
+   * shutdown processing.
+   */
+  public synchronized void expireServer(final HServerInfo hsi) {
+    // First check that the server being expired is actually online.  ServerName
+    // is of the form: <hostname> , <port> , <startcode>
+    String serverName = hsi.getServerName();
+    HServerInfo info = this.onlineServers.get(serverName);
+    if (info == null) {
+      LOG.warn("Received expiration of " + hsi.getServerName() +
+        " but server is not currently online");
+      return;
+    }
+    if (this.deadservers.contains(serverName)) {
+      // TODO: Can this happen?  It shouldn't be online in this case?
+      LOG.warn("Received expiration of " + hsi.getServerName() +
+          " but server shutdown is already in progress");
+      return;
+    }
+    // Remove the server from the known servers lists and update load info BUT
+    // add to deadservers first; do this so it'll show in dead servers list if
+    // not in online servers list.
+    this.deadservers.add(serverName);
+    this.onlineServers.remove(serverName);
+    this.serverConnections.remove(serverName);
+    // If cluster is going down, yes, servers are going to be expiring; don't
+    // process as a dead server
+    if (this.clusterShutdown) {
+      LOG.info("Cluster shutdown set; " + hsi.getServerName() +
+        " expired; onlineServers=" + this.onlineServers.size());
+      if (this.onlineServers.isEmpty()) {
+        master.stop("Cluster shutdown set; onlineServer=0");
+      }
+      return;
+    }
+    CatalogTracker ct = this.master.getCatalogTracker();
+    // Was this server carrying root?
+    boolean carryingRoot;
+    try {
+      HServerAddress address = ct.getRootLocation();
+      carryingRoot = address != null &&
+        hsi.getServerAddress().equals(address);
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      LOG.info("Interrupted");
+      return;
+    }
+    // Was this server carrying meta?  Can't ask CatalogTracker because it
+    // may have reset the meta location as null already (it may have already
+    // run into the fact that meta is dead).  I can ask the assignment manager;
+    // it has an in-memory list of who has what.  This list will be cleared as
+    // we process the dead server, but it should be fine to ask it now.
+    HServerAddress address = ct.getMetaLocation();
+    boolean carryingMeta =
+      address != null && hsi.getServerAddress().equals(address);
+    if (carryingRoot || carryingMeta) {
+      this.services.getExecutorService().submit(new MetaServerShutdownHandler(this.master,
+        this.services, this.deadservers, info, carryingRoot, carryingMeta));
+    } else {
+      this.services.getExecutorService().submit(new ServerShutdownHandler(this.master,
+        this.services, this.deadservers, info));
+    }
+    LOG.debug("Added=" + serverName +
+      " to dead servers, submitted shutdown handler to be executed, root=" +
+        carryingRoot + ", meta=" + carryingMeta);
+  }
+
+  // RPC methods to region servers
+
+  /**
+   * Sends an OPEN RPC to the specified server to open the specified region.
+   * <p>
+   * Open should not fail but can if server just crashed.
+   * <p>
+   * @param server server to open a region
+   * @param region region to open
+   */
+  public void sendRegionOpen(HServerInfo server, HRegionInfo region)
+  throws IOException {
+    HRegionInterface hri = getServerConnection(server);
+    if (hri == null) {
+      LOG.warn("Attempting to send OPEN RPC to server " + server.getServerName()
+          + " failed because no RPC connection found to this server");
+      return;
+    }
+    hri.openRegion(region);
+  }
+
+  /**
+   * Sends an OPEN RPC to the specified server to open the specified region.
+   * <p>
+   * Open should not fail but can if server just crashed.
+   * <p>
+   * @param server server to open a region
+   * @param regions regions to open
+   */
+  public void sendRegionOpen(HServerInfo server, List<HRegionInfo> regions)
+  throws IOException {
+    HRegionInterface hri = getServerConnection(server);
+    if (hri == null) {
+      LOG.warn("Attempting to send OPEN RPC to server " + server.getServerName()
+          + " failed because no RPC connection found to this server");
+      return;
+    }
+    hri.openRegions(regions);
+  }
+
+  /**
+   * Sends a CLOSE RPC to the specified server to close the specified region.
+   * <p>
+   * A region server could reject the close request because it either does not
+   * have the specified region or the region is being split.
+   * @param server server to close a region
+   * @param region region to close
+   * @return true if server acknowledged close, false if not
+   * @throws IOException
+   */
+  public boolean sendRegionClose(HServerInfo server, HRegionInfo region)
+  throws IOException {
+    if (server == null) throw new NullPointerException("Passed server is null");
+    HRegionInterface hri = getServerConnection(server);
+    if (hri == null) {
+      throw new IOException("Attempting to send CLOSE RPC to server " +
+        server.getServerName() + " for region " +
+        region.getRegionNameAsString() +
+        " failed because no RPC connection found to this server");
+    }
+    return hri.closeRegion(region);
+  }
+
+  /**
+   * @param info
+   * @return RPC connection to the passed server
+   * @throws IOException
+   * @throws RetriesExhaustedException wrapping a ConnectException if failed
+   * putting up proxy.
+   */
+  private HRegionInterface getServerConnection(HServerInfo info)
+  throws IOException {
+    HConnection connection =
+      HConnectionManager.getConnection(this.master.getConfiguration());
+    HRegionInterface hri = serverConnections.get(info.getServerName());
+    if (hri == null) {
+      LOG.debug("New connection to " + info.getServerName());
+      hri = connection.getHRegionConnection(info.getServerAddress(), false);
+      this.serverConnections.put(info.getServerName(), hri);
+    }
+    return hri;
+  }
+
+  /**
+   * Waits for the regionservers to report in.
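+   * <p>The wait is governed by the configuration keys read below; a sketch of
+   * tuning it (values here are only illustrative):
+   * <pre>
+   *   conf.setLong("hbase.master.wait.on.regionservers.interval", 1500);
+   *   conf.setLong("hbase.master.wait.on.regionservers.timeout", 4500);
+   *   conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
+   *   conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 10);
+   * </pre>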
+   * @return Count of regions out on cluster
+   * @throws InterruptedException
+   */
+  public int waitForRegionServers()
+  throws InterruptedException {
+    long interval = this.master.getConfiguration().
+      getLong("hbase.master.wait.on.regionservers.interval", 1500);
+    long timeout = this.master.getConfiguration().
+      getLong("hbase.master.wait.on.regionservers.timeout", 4500);
+    int minToStart = this.master.getConfiguration().
+      getInt("hbase.master.wait.on.regionservers.mintostart", 1);
+    int maxToStart = this.master.getConfiguration().
+      getInt("hbase.master.wait.on.regionservers.maxtostart", Integer.MAX_VALUE);
+    // So: once the number of regionservers is > 0 and has settled for the
+    // configured timeout, break; else just stall here.
+    int count = 0;
+    long slept = 0;
+    for (int oldcount = countOfRegionServers(); !this.master.isStopped();) {
+      Thread.sleep(interval);
+      slept += interval;
+      count = countOfRegionServers();
+      if (count == oldcount && count >= minToStart && slept >= timeout) {
+        LOG.info("Finished waiting for regionserver count to settle; " +
+            "count=" + count + ", sleptFor=" + slept);
+        break;
+      }
+      if (count >= maxToStart) {
+        LOG.info("At least the max configured number of regionserver(s) have " +
+            "checked in: " + count);
+        break;
+      }
+      if (count == 0) {
+        LOG.info("Waiting on regionserver(s) to checkin");
+      } else {
+        LOG.info("Waiting on regionserver(s) count to settle; currently=" + count);
+      }
+      oldcount = count;
+    }
+    // Count how many regions deployed out on cluster.  If fresh start, it'll
+    // be none but if not a fresh start, we'll have registered servers when
+    // they came in on the {@link #regionServerReport(HServerInfo)} as opposed to
+    // {@link #regionServerStartup(HServerInfo)} and it'll be carrying an
+    // actual server load.
+    int regionCount = 0;
+    for (Map.Entry<String, HServerInfo> e: this.onlineServers.entrySet()) {
+      HServerLoad load = e.getValue().getLoad();
+      if (load != null) regionCount += load.getLoad();
+    }
+    LOG.info("Exiting wait on regionserver(s) to checkin; count=" + count +
+      ", stopped=" + this.master.isStopped() +
+      ", count of regions out on cluster=" + regionCount);
+    return regionCount;
+  }
+
+  /**
+   * @return A copy of the internal list of online servers.
+   */
+  public List<HServerInfo> getOnlineServersList() {
+    // TODO: optimize the load balancer call so we don't need to make a new list
+    return new ArrayList<HServerInfo>(onlineServers.values());
+  }
+
+  public boolean isServerOnline(String serverName) {
+    return onlineServers.containsKey(serverName);
+  }
+
+  public void shutdownCluster() {
+    this.clusterShutdown = true;
+    this.master.stop("Cluster shutdown requested");
+  }
+
+  public boolean isClusterShutdown() {
+    return this.clusterShutdown;
+  }
+
+  /**
+   * Stop the ServerManager.  Currently does nothing.
+   */
+  public void stop() {
+
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/TimeToLiveLogCleaner.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/TimeToLiveLogCleaner.java
new file mode 100644
index 0000000..55a47ca
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/TimeToLiveLogCleaner.java
@@ -0,0 +1,79 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Log cleaner that uses the timestamp of the hlog to determine if it should
+ * be deleted. By default logs are allowed to live for 10 minutes.
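+ *
+ * <p>The TTL is read from "hbase.master.logcleaner.ttl" in milliseconds
+ * (default 600000); a sketch of keeping logs for an hour instead:
+ * <pre>
+ *   conf.setLong("hbase.master.logcleaner.ttl", 3600000L);
+ * </pre>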
+ */
+public class TimeToLiveLogCleaner implements LogCleanerDelegate {
+  static final Log LOG = LogFactory.getLog(TimeToLiveLogCleaner.class.getName());
+  private Configuration conf;
+  // Configured time a log can be kept after it was closed
+  private long ttl;
+  private boolean stopped = false;
+
+  @Override
+  public boolean isLogDeletable(Path filePath) {
+    long time = 0;
+    long currentTime = System.currentTimeMillis();
+    String[] parts = filePath.getName().split("\\.");
+    try {
+      time = Long.parseLong(parts[parts.length-1]);
+    } catch (NumberFormatException e) {
+      LOG.error("Unable to parse the timestamp in " + filePath.getName() +
+          ", deleting it since it's invalid and may not be a hlog", e);
+      return true;
+    }
+    long life = currentTime - time;
+    if (life < 0) {
+      LOG.warn("Found a log newer than current time, " +
+          "probably a clock skew");
+      return false;
+    }
+    return life > ttl;
+  }
+
+  @Override
+  public void setConf(Configuration conf) {
+    this.conf = conf;
+    this.ttl = conf.getLong("hbase.master.logcleaner.ttl", 600000);
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void stop(String why) {
+    this.stopped = true;
+  }
+
+  @Override
+  public boolean isStopped() {
+    return this.stopped;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ClosedRegionHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ClosedRegionHandler.java
new file mode 100644
index 0000000..c98ed17
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ClosedRegionHandler.java
@@ -0,0 +1,94 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+
+/**
+ * Handles CLOSED region event on Master.
+ * <p>
+ * If table is being disabled, deletes ZK unassigned node and removes from
+ * regions in transition.
+ * <p>
+ * Otherwise, assigns the region to another server.
+ */
+public class ClosedRegionHandler extends EventHandler implements TotesHRegionInfo {
+  private static final Log LOG = LogFactory.getLog(ClosedRegionHandler.class);
+  private final AssignmentManager assignmentManager;
+  private final HRegionInfo regionInfo;
+  private final ClosedPriority priority;
+
+  private enum ClosedPriority {
+    ROOT (1),
+    META (2),
+    USER (3);
+
+    private final int value;
+    ClosedPriority(int value) {
+      this.value = value;
+    }
+    public int getValue() {
+      return value;
+    }
+  };
+
+  public ClosedRegionHandler(Server server, AssignmentManager assignmentManager,
+      HRegionInfo regionInfo) {
+    super(server, EventType.RS_ZK_REGION_CLOSED);
+    this.assignmentManager = assignmentManager;
+    this.regionInfo = regionInfo;
+    if(regionInfo.isRootRegion()) {
+      priority = ClosedPriority.ROOT;
+    } else if(regionInfo.isMetaRegion()) {
+      priority = ClosedPriority.META;
+    } else {
+      priority = ClosedPriority.USER;
+    }
+  }
+
+  @Override
+  public int getPriority() {
+    return priority.getValue();
+  }
+
+  @Override
+  public HRegionInfo getHRegionInfo() {
+    return this.regionInfo;
+  }
+
+  @Override
+  public void process() {
+    LOG.debug("Handling CLOSED event for " + regionInfo.getEncodedName());
+    // Check if this table is being disabled or not
+    if (this.assignmentManager.getZKTable().
+        isDisablingOrDisabledTable(this.regionInfo.getTableDesc().getNameAsString())) {
+      assignmentManager.offlineDisabledRegion(regionInfo);
+      return;
+    }
+    // ZK Node is in CLOSED state, assign it.
+    assignmentManager.setOffline(regionInfo);
+    assignmentManager.assign(regionInfo, true);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java
new file mode 100644
index 0000000..55f70f5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.zookeeper.KeeperException;
+
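+/**
+ * Handler to run delete of a table.  Waits on each region to exit the
+ * regions-in-transition set, removes it from META and the filesystem, then
+ * deletes the table directory.
+ */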
+public class DeleteTableHandler extends TableEventHandler {
+  private static final Log LOG = LogFactory.getLog(DeleteTableHandler.class);
+
+  public DeleteTableHandler(byte [] tableName, Server server,
+      final MasterServices masterServices)
+  throws IOException {
+    super(EventType.C_M_DELETE_TABLE, tableName, server, masterServices);
+  }
+
+  @Override
+  protected void handleTableOperation(List<HRegionInfo> regions)
+  throws IOException, KeeperException {
+    AssignmentManager am = this.masterServices.getAssignmentManager();
+    long waitTime = server.getConfiguration().
+      getLong("hbase.master.wait.on.region", 5 * 60 * 1000);
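+    // For each region, wait (up to hbase.master.wait.on.region) for it to
+    // clear regions in transition before removing it.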
+    for (HRegionInfo region : regions) {
+      long done = System.currentTimeMillis() + waitTime;
+      while (System.currentTimeMillis() < done) {
+        AssignmentManager.RegionState rs = am.isRegionInTransition(region);
+        if (rs == null) break;
+        Threads.sleep(1000);
+        LOG.debug("Waiting on  region to clear regions in transition; " + rs);
+      }
+      if (am.isRegionInTransition(region) != null) {
+        throw new IOException("Waited hbase.master.wait.on.region (" +
+          waitTime + "ms) for region to leave region " +
+          region.getRegionNameAsString() + " in transitions");
+      }
+      LOG.debug("Deleting region " + region.getRegionNameAsString() +
+        " from META and FS");
+      // Remove region from META
+      MetaEditor.deleteRegion(this.server.getCatalogTracker(), region);
+      // Delete region from FS
+      this.masterServices.getMasterFileSystem().deleteRegion(region);
+    }
+    // Delete table from FS
+    this.masterServices.getMasterFileSystem().deleteTable(tableName);
+
+    // If entry for this table in zk, and up in AssignmentManager, remove it.
+    // Marking the table enabled in zk does this. TODO: Make a more formal purge table.
+    am.getZKTable().setEnabledTable(Bytes.toString(tableName));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java
new file mode 100644
index 0000000..4b00636
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java
@@ -0,0 +1,153 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.ExecutorService;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.BulkAssigner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Handler to run disable of a table.
+ */
+public class DisableTableHandler extends EventHandler {
+  private static final Log LOG = LogFactory.getLog(DisableTableHandler.class);
+  private final byte [] tableName;
+  private final String tableNameStr;
+  private final AssignmentManager assignmentManager;
+
+  public DisableTableHandler(Server server, byte [] tableName,
+      CatalogTracker catalogTracker, AssignmentManager assignmentManager)
+  throws TableNotFoundException, IOException {
+    super(server, EventType.C_M_DISABLE_TABLE);
+    this.tableName = tableName;
+    this.tableNameStr = Bytes.toString(this.tableName);
+    this.assignmentManager = assignmentManager;
+    // Check if table exists
+    // TODO: do we want to keep this in-memory as well?  i guess this is
+    //       part of old master rewrite, schema to zk to check for table
+    //       existence and such
+    if (!MetaReader.tableExists(catalogTracker, this.tableNameStr)) {
+      throw new TableNotFoundException(Bytes.toString(tableName));
+    }
+  }
+
+  @Override
+  public void process() {
+    try {
+      LOG.info("Attemping to disable table " + this.tableNameStr);
+      handleDisableTable();
+    } catch (IOException e) {
+      LOG.error("Error trying to disable table " + this.tableNameStr, e);
+    } catch (KeeperException e) {
+      LOG.error("Error trying to disable table " + this.tableNameStr, e);
+    }
+  }
+
+  private void handleDisableTable() throws IOException, KeeperException {
+    if (this.assignmentManager.getZKTable().isDisabledTable(this.tableNameStr)) {
+      LOG.info("Table " + tableNameStr + " already disabled; skipping disable");
+      return;
+    }
+    // Set table disabling flag up in zk.
+    this.assignmentManager.getZKTable().setDisablingTable(this.tableNameStr);
+    boolean done = false;
+    while (true) {
+      // Get list of online regions that are of this table.  Regions that are
+      // already closed will not be included in this list; i.e. the returned
+      // list is not ALL regions in a table; it is all online regions according
+      // to the in-memory state on this master.
+      final List<HRegionInfo> regions =
+        this.assignmentManager.getRegionsOfTable(tableName);
+      if (regions.size() == 0) {
+        done = true;
+        break;
+      }
+      LOG.info("Offlining " + regions.size() + " regions.");
+      BulkDisabler bd = new BulkDisabler(this.server, regions);
+      try {
+        if (bd.bulkAssign()) {
+          done = true;
+          break;
+        }
+      } catch (InterruptedException e) {
+        LOG.warn("Disable was interrupted");
+        // Preserve the interrupt.
+        Thread.currentThread().interrupt();
+        break;
+      }
+    }
+    // Flip the table to disabled if success.
+    if (done) this.assignmentManager.getZKTable().setDisabledTable(this.tableNameStr);
+    LOG.info("Disabled table is done=" + done);
+  }
+
+  /**
+   * Run bulk disable.
+   */
+  class BulkDisabler extends BulkAssigner {
+    private final List<HRegionInfo> regions;
+
+    BulkDisabler(final Server server, final List<HRegionInfo> regions) {
+      super(server);
+      this.regions = regions;
+    }
+
+    @Override
+    protected void populatePool(ExecutorService pool) {
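+      // Skip regions already in transition; queue an unassign for the rest.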
+      for (HRegionInfo region: regions) {
+        if (assignmentManager.isRegionInTransition(region) != null) continue;
+        final HRegionInfo hri = region;
+        pool.execute(new Runnable() {
+          public void run() {
+            assignmentManager.unassign(hri);
+          }
+        });
+      }
+    }
+
+    @Override
+    protected boolean waitUntilDone(long timeout)
+    throws InterruptedException {
+      long startTime = System.currentTimeMillis();
+      long remaining = timeout;
+      List<HRegionInfo> regions = null;
+      while (!server.isStopped() && remaining > 0) {
+        Thread.sleep(1000);
+        regions = assignmentManager.getRegionsOfTable(tableName);
+        if (regions.isEmpty()) break;
+        remaining = timeout - (System.currentTimeMillis() - startTime);
+      }
+      return regions != null && regions.isEmpty();
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java
new file mode 100644
index 0000000..929b8cd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java
@@ -0,0 +1,179 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.ExecutorService;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.BulkAssigner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Handler to run enable of a table.
+ */
+public class EnableTableHandler extends EventHandler {
+  private static final Log LOG = LogFactory.getLog(EnableTableHandler.class);
+  private final byte [] tableName;
+  private final String tableNameStr;
+  private final AssignmentManager assignmentManager;
+  private final CatalogTracker ct;
+
+  public EnableTableHandler(Server server, byte [] tableName,
+      CatalogTracker catalogTracker, AssignmentManager assignmentManager)
+  throws TableNotFoundException, IOException {
+    super(server, EventType.C_M_ENABLE_TABLE);
+    this.tableName = tableName;
+    this.tableNameStr = Bytes.toString(tableName);
+    this.ct = catalogTracker;
+    this.assignmentManager = assignmentManager;
+    // Check if table exists
+    if (!MetaReader.tableExists(catalogTracker, this.tableNameStr)) {
+      throw new TableNotFoundException(Bytes.toString(tableName));
+    }
+  }
+
+  @Override
+  public void process() {
+    try {
+      LOG.info("Attemping to enable the table " + this.tableNameStr);
+      handleEnableTable();
+    } catch (IOException e) {
+      LOG.error("Error trying to enable the table " + this.tableNameStr, e);
+    } catch (KeeperException e) {
+      LOG.error("Error trying to enable the table " + this.tableNameStr, e);
+    }
+  }
+
+  private void handleEnableTable() throws IOException, KeeperException {
+    if (this.assignmentManager.getZKTable().isEnabledTable(this.tableNameStr)) {
+      LOG.info("Table " + tableNameStr + " is already enabled; skipping enable");
+      return;
+    }
+    // I could check table is disabling and if so, not enable but require
+    // that user first finish disabling but that might be obnoxious.
+
+    // Set table enabling flag up in zk.
+    this.assignmentManager.getZKTable().setEnablingTable(this.tableNameStr);
+    boolean done = false;
+    while (true) {
+      // Get the regions of this table. We're done when all listed
+      // regions are onlined.
+      List<HRegionInfo> regionsInMeta =
+        MetaReader.getTableRegions(this.ct, tableName, true);
+      int countOfRegionsInTable = regionsInMeta.size();
+      List<HRegionInfo> regions = regionsToAssign(regionsInMeta);
+      if (regions.size() == 0) {
+        done = true;
+        break;
+      }
+      LOG.info("Table has " + countOfRegionsInTable + " regions of which " +
+        regions.size() + " are online.");
+      BulkEnabler bd = new BulkEnabler(this.server, regions,
+        countOfRegionsInTable);
+      try {
+        if (bd.bulkAssign()) {
+          done = true;
+          break;
+        }
+      } catch (InterruptedException e) {
+        LOG.warn("Enable was interrupted");
+        // Preserve the interrupt.
+        Thread.currentThread().interrupt();
+        break;
+      }
+    }
+    // Flip the table to enabled if success.
+    if (done) this.assignmentManager.getZKTable().setEnabledTable(this.tableNameStr);
+    LOG.info("Enabled table is done=" + done);
+  }
+
+  /**
+   * @param regionsInMeta This list is modified in place by this method.
+   * @return The <code>regionsInMeta</code> list minus the regions that have
+   * been onlined; i.e. List of regions that need onlining.
+   * @throws IOException
+   */
+  private List<HRegionInfo> regionsToAssign(final List<HRegionInfo> regionsInMeta)
+  throws IOException {
+    final List<HRegionInfo> onlineRegions =
+      this.assignmentManager.getRegionsOfTable(tableName);
+    regionsInMeta.removeAll(onlineRegions);
+    return regionsInMeta;
+  }
+
+  /**
+   * Run bulk enable.
+   */
+  class BulkEnabler extends BulkAssigner {
+    private final List<HRegionInfo> regions;
+    // Count of regions in table at time this assign was launched.
+    private final int countOfRegionsInTable;
+
+    BulkEnabler(final Server server, final List<HRegionInfo> regions,
+        final int countOfRegionsInTable) {
+      super(server);
+      this.regions = regions;
+      this.countOfRegionsInTable = countOfRegionsInTable;
+    }
+
+    @Override
+    protected void populatePool(ExecutorService pool) {
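+      // Skip regions already in transition; queue an assign for the rest.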
+      for (HRegionInfo region: regions) {
+        if (assignmentManager.isRegionInTransition(region) != null) continue;
+        final HRegionInfo hri = region;
+        pool.execute(new Runnable() {
+          public void run() {
+            assignmentManager.assign(hri, true);
+          }
+        });
+      }
+    }
+
+    @Override
+    protected boolean waitUntilDone(long timeout)
+    throws InterruptedException {
+      long startTime = System.currentTimeMillis();
+      long remaining = timeout;
+      List<HRegionInfo> regions = null;
+      while (!server.isStopped() && remaining > 0) {
+        Thread.sleep(1000);
+        regions = assignmentManager.getRegionsOfTable(tableName);
+        if (isDone(regions)) break;
+        remaining = timeout - (System.currentTimeMillis() - startTime);
+      }
+      return isDone(regions);
+    }
+
+    private boolean isDone(final List<HRegionInfo> regions) {
+      return regions != null && regions.size() >= this.countOfRegionsInTable;
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
new file mode 100644
index 0000000..eb01a6a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java
@@ -0,0 +1,53 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.master.DeadServer;
+import org.apache.hadoop.hbase.master.MasterServices;
+
+/**
+ * Shutdown handler for the server hosting <code>-ROOT-</code>,
+ * <code>.META.</code>, or both.
+ */
+public class MetaServerShutdownHandler extends ServerShutdownHandler {
+  private final boolean carryingRoot;
+  private final boolean carryingMeta;
+  
+  public MetaServerShutdownHandler(final Server server,
+      final MasterServices services,
+      final DeadServer deadServers, final HServerInfo hsi,
+      final boolean carryingRoot, final boolean carryingMeta) {
+    super(server, services, deadServers, hsi, EventType.M_META_SERVER_SHUTDOWN);
+    this.carryingRoot = carryingRoot;
+    this.carryingMeta = carryingMeta;
+  }
+
+  @Override
+  boolean isCarryingRoot() {
+    return this.carryingRoot;
+  }
+
+  @Override
+  boolean isCarryingMeta() {
+    return this.carryingMeta;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java
new file mode 100644
index 0000000..6380520
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java
@@ -0,0 +1,52 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.master.MasterServices;
+
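+/**
+ * Handles modification of a table: updates the table descriptor carried by
+ * each of the table's regions in META and on the filesystem.
+ */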
+public class ModifyTableHandler extends TableEventHandler {
+  private final HTableDescriptor htd;
+
+  public ModifyTableHandler(final byte [] tableName,
+      final HTableDescriptor htd, final Server server,
+      final MasterServices masterServices) throws IOException {
+    super(EventType.C_M_MODIFY_TABLE, tableName, server, masterServices);
+    this.htd = htd;
+  }
+
+  @Override
+  protected void handleTableOperation(List<HRegionInfo> hris)
+  throws IOException {
+    for (HRegionInfo hri : hris) {
+      // Update region info in META
+      hri.setTableDesc(this.htd);
+      MetaEditor.updateRegionInfo(this.server.getCatalogTracker(), hri);
+      // Update region info in FS
+      this.masterServices.getMasterFileSystem().updateRegionInfo(hri);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java
new file mode 100644
index 0000000..0f0ae65
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java
@@ -0,0 +1,105 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Handles OPENED region event on Master.
+ */
+public class OpenedRegionHandler extends EventHandler implements TotesHRegionInfo {
+  private static final Log LOG = LogFactory.getLog(OpenedRegionHandler.class);
+  private final AssignmentManager assignmentManager;
+  private final HRegionInfo regionInfo;
+  private final HServerInfo serverInfo;
+  private final OpenedPriority priority;
+
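+  // Priority ordering: OPENED events for catalog regions (-ROOT-, .META.)
+  // are processed ahead of those for user regions.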
+  private enum OpenedPriority {
+    ROOT (1),
+    META (2),
+    USER (3);
+
+    private final int value;
+    OpenedPriority(int value) {
+      this.value = value;
+    }
+    public int getValue() {
+      return value;
+    }
+  };
+
+  public OpenedRegionHandler(Server server,
+      AssignmentManager assignmentManager, HRegionInfo regionInfo,
+      HServerInfo serverInfo) {
+    super(server, EventType.RS_ZK_REGION_OPENED);
+    this.assignmentManager = assignmentManager;
+    this.regionInfo = regionInfo;
+    this.serverInfo = serverInfo;
+    if(regionInfo.isRootRegion()) {
+      priority = OpenedPriority.ROOT;
+    } else if(regionInfo.isMetaRegion()) {
+      priority = OpenedPriority.META;
+    } else {
+      priority = OpenedPriority.USER;
+    }
+  }
+
+  @Override
+  public int getPriority() {
+    return priority.getValue();
+  }
+
+  @Override
+  public HRegionInfo getHRegionInfo() {
+    return this.regionInfo;
+  }
+
+  @Override
+  public void process() {
+    LOG.debug("Handling OPENED event for " + this.regionInfo.getEncodedName() +
+      "; deleting unassigned node");
+    // Remove region from in-memory transition and unassigned node from ZK
+    try {
+      ZKAssign.deleteOpenedNode(server.getZooKeeper(),
+          regionInfo.getEncodedName());
+    } catch (KeeperException e) {
+      server.abort("Error deleting OPENED node in ZK for transition ZK node (" +
+          regionInfo.getEncodedName() + ")", e);
+    }
+    this.assignmentManager.regionOnline(regionInfo, serverInfo);
+    if (this.assignmentManager.getZKTable().isDisablingOrDisabledTable(
+        regionInfo.getTableDesc().getNameAsString())) {
+      LOG.debug("Opened region " + regionInfo.getRegionNameAsString() + " but "
+          + "this table is disabled, triggering close of region");
+      assignmentManager.unassign(regionInfo);
+    } else {
+      LOG.debug("Opened region " + regionInfo.getRegionNameAsString() +
+          " on " + serverInfo.getServerName());
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
new file mode 100644
index 0000000..d306770
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java
@@ -0,0 +1,227 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.master.DeadServer;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.master.ServerManager;
+import org.apache.hadoop.hbase.master.AssignmentManager.RegionState;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Process server shutdown.
+ * Server-to-handle must be already in the deadservers lists.  See
+ * {@link ServerManager#expireServer(HServerInfo)}.
+ */
+public class ServerShutdownHandler extends EventHandler {
+  private static final Log LOG = LogFactory.getLog(ServerShutdownHandler.class);
+  private final HServerInfo hsi;
+  private final Server server;
+  private final MasterServices services;
+  private final DeadServer deadServers;
+
+  public ServerShutdownHandler(final Server server, final MasterServices services,
+      final DeadServer deadServers, final HServerInfo hsi) {
+    this(server, services, deadServers, hsi, EventType.M_SERVER_SHUTDOWN);
+  }
+
+  ServerShutdownHandler(final Server server, final MasterServices services,
+      final DeadServer deadServers, final HServerInfo hsi, EventType type) {
+    super(server, type);
+    this.hsi = hsi;
+    this.server = server;
+    this.services = services;
+    this.deadServers = deadServers;
+    if (!this.deadServers.contains(hsi.getServerName())) {
+      LOG.warn(hsi.getServerName() + " is NOT in deadservers; it should be!");
+    }
+  }
+
+  /**
+   * @return True if the server we are processing was carrying <code>-ROOT-</code>
+   */
+  boolean isCarryingRoot() {
+    return false;
+  }
+
+  /**
+   * @return True if the server we are processing was carrying <code>.META.</code>
+   */
+  boolean isCarryingMeta() {
+    return false;
+  }
+
+  @Override
+  public void process() throws IOException {
+    final String serverName = this.hsi.getServerName();
+
+    LOG.info("Splitting logs for " + serverName);
+    this.services.getMasterFileSystem().splitLog(serverName);
+
+    // Clean out anything in regions in transition.  Being conservative and
+    // doing after log splitting.  Could do some states before -- OPENING?
+    // OFFLINE? -- and then others after like CLOSING that depend on log
+    // splitting.
+    List<RegionState> regionsInTransition =
+      this.services.getAssignmentManager().processServerShutdown(this.hsi);
+
+    // Assign root and meta if we were carrying them.
+    if (isCarryingRoot()) { // -ROOT-
+      try {
+        this.services.getAssignmentManager().assignRoot();
+      } catch (KeeperException e) {
+        this.server.abort("In server shutdown processing, assigning root", e);
+        throw new IOException("Aborting", e);
+      }
+    }
+
+    // Carrying meta?
+    if (isCarryingMeta()) this.services.getAssignmentManager().assignMeta();
+
+    // Wait on meta to come online; we need it to progress.
+    // TODO: Best way to hold strictly here?  We should build this retry logic
+    //       into the MetaReader operations themselves.
+    NavigableMap<HRegionInfo, Result> hris = null;
+    while (!this.server.isStopped()) {
+      try {
+        this.server.getCatalogTracker().waitForMeta();
+        hris = MetaReader.getServerUserRegions(this.server.getCatalogTracker(),
+            this.hsi);
+        break;
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        throw new IOException("Interrupted", e);
+      } catch (IOException ioe) {
+        LOG.info("Received exception accessing META during server shutdown of " +
+            serverName + ", retrying META read");
+      }
+    }
+
+    // Skip regions that were in transition unless CLOSING or PENDING_CLOSE
+    for (RegionState rit : regionsInTransition) {
+      if (!rit.isClosing() && !rit.isPendingClose()) {
+        LOG.debug("Removed " + rit.getRegion().getRegionNameAsString() +
+          " from list of regions to assign because in RIT");
+        hris.remove(rit.getRegion());
+      }
+    }
+
+    LOG.info("Reassigning " + hris.size() + " region(s) that " + serverName +
+      " was carrying (skipping " + regionsInTransition.size() +
+      " regions(s) that are already in transition)");
+
+    // Iterate regions that were on this server and assign them
+    for (Map.Entry<HRegionInfo, Result> e: hris.entrySet()) {
+      if (processDeadRegion(e.getKey(), e.getValue(),
+          this.services.getAssignmentManager(),
+          this.server.getCatalogTracker())) {
+        this.services.getAssignmentManager().assign(e.getKey(), true);
+      }
+    }
+    this.deadServers.finish(serverName);
+    LOG.info("Finished processing of shutdown of " + serverName);
+  }
+
+  /**
+   * Process a dead region from a dead RS.  Checks if the region is disabled
+   * or if the region has a partially completed split.
+   * @param hri
+   * @param result
+   * @param assignmentManager
+   * @param catalogTracker
+   * @return Returns true if specified region should be assigned, false if not.
+   * @throws IOException
+   */
+  public static boolean processDeadRegion(HRegionInfo hri, Result result,
+      AssignmentManager assignmentManager, CatalogTracker catalogTracker)
+  throws IOException {
+    // If table is not disabled but the region is offlined,
+    boolean disabled = assignmentManager.getZKTable().isDisabledTable(
+        hri.getTableDesc().getNameAsString());
+    if (disabled) return false;
+    if (hri.isOffline() && hri.isSplit()) {
+      LOG.debug("Offlined and split region " + hri.getRegionNameAsString() +
+        "; checking daughter presence");
+      fixupDaughters(result, assignmentManager, catalogTracker);
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * Check that daughter regions are up in .META. and if not, add them.
+   * @param result The contents of the parent row in .META.
+   * @throws IOException
+   */
+  static void fixupDaughters(final Result result,
+      final AssignmentManager assignmentManager,
+      final CatalogTracker catalogTracker) throws IOException {
+    fixupDaughter(result, HConstants.SPLITA_QUALIFIER, assignmentManager,
+        catalogTracker);
+    fixupDaughter(result, HConstants.SPLITB_QUALIFIER, assignmentManager,
+        catalogTracker);
+  }
+
+  /**
+   * Check that an individual daughter is up in .META.; fix it up if it's not.
+   * @param result The contents of the parent row in .META.
+   * @param qualifier Which daughter to check for.
+   * @throws IOException
+   */
+  static void fixupDaughter(final Result result, final byte [] qualifier,
+      final AssignmentManager assignmentManager,
+      final CatalogTracker catalogTracker)
+  throws IOException {
+    byte [] bytes = result.getValue(HConstants.CATALOG_FAMILY, qualifier);
+    if (bytes == null || bytes.length <= 0) return;
+    HRegionInfo hri = Writables.getHRegionInfoOrNull(bytes);
+    if (hri == null) return;
+    Pair<HRegionInfo, HServerAddress> pair =
+      MetaReader.getRegion(catalogTracker, hri.getRegionName());
+    if (pair == null || pair.getFirst() == null) {
+      LOG.info("Fixup; missing daughter " + hri.getEncodedName());
+      MetaEditor.addDaughter(catalogTracker, hri, null);
+      assignmentManager.assign(hri, true);
+    } else {
+      LOG.debug("Daughter " + hri.getRegionNameAsString() + " present");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java
new file mode 100644
index 0000000..fcea483
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.InvalidFamilyOperationException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Handles adding a new family to an existing table.
+ */
+public class TableAddFamilyHandler extends TableEventHandler {
+
+  private final HColumnDescriptor familyDesc;
+
+  public TableAddFamilyHandler(byte[] tableName, HColumnDescriptor familyDesc,
+      Server server, final MasterServices masterServices) throws IOException {
+    super(EventType.C_M_ADD_FAMILY, tableName, server, masterServices);
+    this.familyDesc = familyDesc;
+  }
+
+  @Override
+  protected void handleTableOperation(List<HRegionInfo> hris)
+  throws IOException {
+    HTableDescriptor htd = hris.get(0).getTableDesc();
+    byte [] familyName = familyDesc.getName();
+    if(htd.hasFamily(familyName)) {
+      throw new InvalidFamilyOperationException(
+          "Family '" + Bytes.toString(familyName) + "' already exists so " +
+          "cannot be added");
+    }
+    for(HRegionInfo hri : hris) {
+      // Update the HTD
+      hri.getTableDesc().addFamily(familyDesc);
+      // Update region in META
+      MetaEditor.updateRegionInfo(this.server.getCatalogTracker(), hri);
+      // Update region info in FS
+      this.masterServices.getMasterFileSystem().updateRegionInfo(hri);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java
new file mode 100644
index 0000000..a963c6c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java
@@ -0,0 +1,69 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.InvalidFamilyOperationException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.master.MasterFileSystem;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Handles deleting a column family from an existing table.
+ */
+public class TableDeleteFamilyHandler extends TableEventHandler {
+
+  private final byte [] familyName;
+
+  public TableDeleteFamilyHandler(byte[] tableName, byte [] familyName,
+      Server server, final MasterServices masterServices) throws IOException {
+    super(EventType.C_M_DELETE_FAMILY, tableName, server, masterServices);
+    this.familyName = familyName;
+  }
+
+  @Override
+  protected void handleTableOperation(List<HRegionInfo> hris) throws IOException {
+    HTableDescriptor htd = hris.get(0).getTableDesc();
+    if(!htd.hasFamily(familyName)) {
+      throw new InvalidFamilyOperationException(
+          "Family '" + Bytes.toString(familyName) + "' does not exist so " +
+          "cannot be deleted");
+    }
+    for (HRegionInfo hri : hris) {
+      // Update the HTD
+      hri.getTableDesc().removeFamily(familyName);
+      // Update region in META
+      MetaEditor.updateRegionInfo(this.server.getCatalogTracker(), hri);
+      MasterFileSystem mfs = this.masterServices.getMasterFileSystem();
+      // Update region info in FS
+      mfs.updateRegionInfo(hri);
+      // Delete directory in FS
+      mfs.deleteFamily(hri, familyName);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java
new file mode 100644
index 0000000..09891aa
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java
@@ -0,0 +1,76 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Base class for performing operations against tables.
+ * Checks on whether the process can go forward are done in constructor rather
+ * than later on in {@link #process()}.  The idea is to fail fast rather than
+ * later down in an async invocation of {@link #process()} (which currently has
+ * no means of reporting back issues once started).
+ */
+public abstract class TableEventHandler extends EventHandler {
+  private static final Log LOG = LogFactory.getLog(TableEventHandler.class);
+  protected final MasterServices masterServices;
+  protected final byte [] tableName;
+  protected final String tableNameStr;
+
+  public TableEventHandler(EventType eventType, byte [] tableName, Server server,
+      MasterServices masterServices)
+  throws IOException {
+    super(server, eventType);
+    this.masterServices = masterServices;
+    this.tableName = tableName;
+    this.masterServices.checkTableModifiable(tableName);
+    this.tableNameStr = Bytes.toString(this.tableName);
+  }
+
+  @Override
+  public void process() {
+    try {
+      LOG.info("Handling table operation " + eventType + " on table " +
+          Bytes.toString(tableName));
+      List<HRegionInfo> hris =
+        MetaReader.getTableRegions(this.server.getCatalogTracker(),
+          tableName);
+      handleTableOperation(hris);
+    } catch (IOException e) {
+      LOG.error("Error manipulating table " + Bytes.toString(tableName), e);
+    } catch (KeeperException e) {
+      LOG.error("Error manipulating table " + Bytes.toString(tableName), e);
+    }
+  }
+
+  protected abstract void handleTableOperation(List<HRegionInfo> regions)
+  throws IOException, KeeperException;
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java
new file mode 100644
index 0000000..4029893
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java
@@ -0,0 +1,65 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.InvalidFamilyOperationException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Handles modifying an existing column family on a table.
+ */
+public class TableModifyFamilyHandler extends TableEventHandler {
+
+  private final HColumnDescriptor familyDesc;
+
+  public TableModifyFamilyHandler(byte[] tableName,
+      HColumnDescriptor familyDesc, Server server,
+      final MasterServices masterServices) throws IOException {
+    super(EventType.C_M_MODIFY_FAMILY, tableName, server, masterServices);
+    this.familyDesc = familyDesc;
+  }
+
+  @Override
+  protected void handleTableOperation(List<HRegionInfo> regions) throws IOException {
+    HTableDescriptor htd = regions.get(0).getTableDesc();
+    byte [] familyName = familyDesc.getName();
+    if(!htd.hasFamily(familyName)) {
+      throw new InvalidFamilyOperationException("Family '" +
+        Bytes.toString(familyName) + "' doesn't exists so cannot be modified");
+    }
+    for(HRegionInfo hri : regions) {
+      // Update the HTD
+      hri.getTableDesc().addFamily(familyDesc);
+      // Update region in META
+      MetaEditor.updateRegionInfo(this.server.getCatalogTracker(), hri);
+      // Update region info in FS
+      this.masterServices.getMasterFileSystem().updateRegionInfo(hri);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TotesHRegionInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TotesHRegionInfo.java
new file mode 100644
index 0000000..d08f649
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/handler/TotesHRegionInfo.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.handler;
+
+import org.apache.hadoop.hbase.executor.EventHandler;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/**
+ * Implementors tote an HRegionInfo instance.
+ * This is a marker interface that can be put on {@link EventHandler}s that
+ * have an {@link HRegionInfo}.
+ */
+public interface TotesHRegionInfo {
+  /**
+   * @return HRegionInfo instance.
+   */
+  public HRegionInfo getHRegionInfo();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java
new file mode 100644
index 0000000..9e4cf73
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterMetrics.java
@@ -0,0 +1,149 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.metrics;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.metrics.HBaseInfo;
+import org.apache.hadoop.hbase.metrics.MetricsRate;
+import org.apache.hadoop.hbase.metrics.PersistentMetricsTimeVaryingRate;
+import org.apache.hadoop.metrics.ContextFactory;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsLongValue;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+
+/**
+ * This class is for maintaining the various master statistics
+ * and publishing them through the metrics interfaces.
+ * <p>
+ * This class has a number of metrics variables that are publicly accessible;
+ * these variables (objects) have methods to update their values.
+ */
+public class MasterMetrics implements Updater {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  private final MetricsRecord metricsRecord;
+  private final MetricsRegistry registry = new MetricsRegistry();
+  private final MasterStatistics masterStatistics;
+
+  private long lastUpdate = System.currentTimeMillis();
+  private long lastExtUpdate = System.currentTimeMillis();
+  private long extendedPeriod = 0;
+  /*
+   * Count of requests to the cluster since last call to metrics update
+   */
+  private final MetricsRate cluster_requests =
+    new MetricsRate("cluster_requests", registry);
+
+  /** Time it takes to finish HLog.splitLog() */
+  final PersistentMetricsTimeVaryingRate splitTime =
+    new PersistentMetricsTimeVaryingRate("splitTime", registry);
+
+  /** Size of HLog files being split */
+  final PersistentMetricsTimeVaryingRate splitSize =
+    new PersistentMetricsTimeVaryingRate("splitSize", registry);
+
+  public MasterMetrics(final String name) {
+    MetricsContext context = MetricsUtil.getContext("hbase");
+    metricsRecord = MetricsUtil.createRecord(context, "master");
+    metricsRecord.setTag("Master", name);
+    context.registerUpdater(this);
+    JvmMetrics.init("Master", name);
+    HBaseInfo.init();
+
+    // expose the MBean for metrics
+    masterStatistics = new MasterStatistics(this.registry);
+
+    // get custom attributes
+    try {
+      Object m = 
+        ContextFactory.getFactory().getAttribute("hbase.extendedperiod");
+      if (m instanceof String) {
+        this.extendedPeriod = Long.parseLong((String) m)*1000;
+      }
+    } catch (IOException ioe) {
+      LOG.info("Couldn't load ContextFactory for Metrics config info");
+    }
+
+    LOG.info("Initialized");
+  }
+
+  public void shutdown() {
+    if (masterStatistics != null)
+      masterStatistics.shutdown();
+  }
+
+  /**
+   * Since this object is a registered updater, this method will be called
+   * periodically, e.g. every 5 seconds.
+   * @param unused
+   */
+  public void doUpdates(MetricsContext unused) {
+    synchronized (this) {
+      this.lastUpdate = System.currentTimeMillis();
+
+      // has the extended period for long-living stats elapsed?
+      if (this.extendedPeriod > 0 &&
+          this.lastUpdate - this.lastExtUpdate >= this.extendedPeriod) {
+        this.lastExtUpdate = this.lastUpdate;
+        this.splitTime.resetMinMaxAvg();
+        this.splitSize.resetMinMaxAvg();
+        this.resetAllMinMax();
+      }
+
+      this.cluster_requests.pushMetric(metricsRecord);
+      this.splitTime.pushMetric(metricsRecord);
+      this.splitSize.pushMetric(metricsRecord);
+    }
+    this.metricsRecord.update();
+  }
+
+  public void resetAllMinMax() {
+    // Nothing to do
+  }
+  
+  /**
+   * Record a single instance of a split
+   * @param time time that the split took
+   * @param size length of original HLogs that were split
+   */
+  public synchronized void addSplit(long time, long size) {
+    splitTime.inc(time);
+    splitSize.inc(size);
+  }
+
+  /**
+   * @return Count of requests.
+   */
+  public float getRequests() {
+    return this.cluster_requests.getPreviousIntervalValue();
+  }
+
+  /**
+   * @param inc How much to add to requests.
+   */
+  public void incrementRequests(final int inc) {
+    this.cluster_requests.inc(inc);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterStatistics.java b/0.90/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterStatistics.java
new file mode 100644
index 0000000..d885348
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/master/metrics/MasterStatistics.java
@@ -0,0 +1,42 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.metrics;
+
+import javax.management.ObjectName;
+
+import org.apache.hadoop.hbase.metrics.MetricsMBeanBase;
+import org.apache.hadoop.metrics.util.MBeanUtil;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+/**
+ * Exports the {@link MasterMetrics} statistics as an MBean
+ * for JMX.
+ */
+public class MasterStatistics extends MetricsMBeanBase {
+  private final ObjectName mbeanName;
+
+  public MasterStatistics(MetricsRegistry registry) {
+    super(registry, "MasterStatistics");
+    mbeanName = MBeanUtil.registerMBean("Master", "MasterStatistics", this);
+  }
+
+  public void shutdown() {
+    if (mbeanName != null)
+      MBeanUtil.unregisterMBean(mbeanName);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/metrics/HBaseInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/HBaseInfo.java
new file mode 100644
index 0000000..fb65a65
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/HBaseInfo.java
@@ -0,0 +1,96 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import org.apache.hadoop.hbase.metrics.MetricsMBeanBase;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.util.MBeanUtil;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+import javax.management.ObjectName;
+
+/**
+ * Exports HBase system information as an MBean for JMX observation.
+ */
+public class HBaseInfo {
+  protected static class HBaseInfoMBean extends MetricsMBeanBase {
+    private final ObjectName mbeanName;
+  
+    public HBaseInfoMBean(MetricsRegistry registry, String rsName) {
+      super(registry, "HBaseInfo");
+      mbeanName = MBeanUtil.registerMBean("HBase",
+          "Info", this);
+    }
+  
+    public void shutdown() {
+      if (mbeanName != null)
+        MBeanUtil.unregisterMBean(mbeanName);
+    }
+  }
+
+  protected final MetricsRecord mr;
+  protected final HBaseInfoMBean mbean;
+  protected MetricsRegistry registry = new MetricsRegistry();
+
+  private static HBaseInfo theInstance = null;
+  public synchronized static HBaseInfo init() {
+      if (theInstance == null) {
+        theInstance = new HBaseInfo();
+      }
+      return theInstance;
+  }
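+
+  // Illustrative usage (not part of the original source): callers obtain the
+  // singleton once at startup, which registers the JMX Info MBean:
+  //   HBaseInfo info = HBaseInfo.init();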
+  
+  // HBase jar info
+  private MetricsString date = new MetricsString("date", registry,
+      org.apache.hadoop.hbase.util.VersionInfo.getDate());
+  private MetricsString revision = new MetricsString("revision", registry, 
+      org.apache.hadoop.hbase.util.VersionInfo.getRevision());
+  private MetricsString url = new MetricsString("url", registry,
+      org.apache.hadoop.hbase.util.VersionInfo.getUrl());
+  private MetricsString user = new MetricsString("user", registry,
+      org.apache.hadoop.hbase.util.VersionInfo.getUser());
+  private MetricsString version = new MetricsString("version", registry,
+      org.apache.hadoop.hbase.util.VersionInfo.getVersion());
+
+  // Info on the HDFS jar that HBase has (aka: HDFS Client)
+  private MetricsString hdfsDate = new MetricsString("hdfsDate", registry,
+      org.apache.hadoop.util.VersionInfo.getDate());
+  private MetricsString hdfsRev = new MetricsString("hdfsRevision", registry,
+      org.apache.hadoop.util.VersionInfo.getRevision());
+  private MetricsString hdfsUrl = new MetricsString("hdfsUrl", registry,
+      org.apache.hadoop.util.VersionInfo.getUrl());
+  private MetricsString hdfsUser = new MetricsString("hdfsUser", registry,
+      org.apache.hadoop.util.VersionInfo.getUser());
+  private MetricsString hdfsVer = new MetricsString("hdfsVersion", registry,
+      org.apache.hadoop.util.VersionInfo.getVersion());
+
+  protected HBaseInfo() {
+    MetricsContext context = MetricsUtil.getContext("hbase");
+    mr = MetricsUtil.createRecord(context, "info");
+    String name = Thread.currentThread().getName();
+    mr.setTag("Info", name);
+
+    // export for JMX
+    mbean = new HBaseInfoMBean(this.registry, name);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsMBeanBase.java b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsMBeanBase.java
new file mode 100644
index 0000000..37fdfc2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsMBeanBase.java
@@ -0,0 +1,166 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import javax.management.AttributeNotFoundException;
+import javax.management.MBeanAttributeInfo;
+import javax.management.MBeanException;
+import javax.management.MBeanInfo;
+import javax.management.ReflectionException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.util.MetricsBase;
+import org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+/**
+ * Extends the Hadoop MetricsDynamicMBeanBase class to provide JMX support for
+ * custom HBase MetricsBase implementations.  MetricsDynamicMBeanBase ignores
+ * registered MetricsBase instance that are not instances of one of the
+ * org.apache.hadoop.metrics.util implementations.
+ *
+ */
+public class MetricsMBeanBase extends MetricsDynamicMBeanBase {
+
+  private static final Log LOG = LogFactory.getLog("org.apache.hadoop.hbase.metrics");
+
+  protected final MetricsRegistry registry;
+  protected final String description;
+  protected int registryLength;
+  /** HBase MetricsBase implementations that MetricsDynamicMBeanBase does
+   * not understand
+   */
+  protected Map<String,MetricsBase> extendedAttributes =
+      new HashMap<String,MetricsBase>();
+  protected MBeanInfo extendedInfo;
+
+  protected MetricsMBeanBase( MetricsRegistry mr, String description ) {
+    super(copyMinusHBaseMetrics(mr), description);
+    this.registry = mr;
+    this.description = description;
+    this.init();
+  }
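+
+  /*
+   * Illustrative subclass sketch (MasterStatistics, added in this same
+   * change, follows this pattern):
+   *
+   *   public class MasterStatistics extends MetricsMBeanBase {
+   *     public MasterStatistics(MetricsRegistry registry) {
+   *       super(registry, "MasterStatistics");
+   *       // register with MBeanUtil, keep the ObjectName for shutdown()
+   *     }
+   *   }
+   */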
+
+  /*
+   * @param mr MetricsRegistry.
+   * @return A copy of the passed MetricsRegistry minus the hbase metrics
+   */
+  private static MetricsRegistry copyMinusHBaseMetrics(final MetricsRegistry mr) {
+    MetricsRegistry copy = new MetricsRegistry();
+    for (MetricsBase metric : mr.getMetricsList()) {
+      if (metric instanceof MetricsRate || metric instanceof MetricsString) {
+        continue;
+      }
+      copy.add(metric.getName(), metric);
+    }
+    return copy;
+  }
+
+  protected void init() {
+    List<MBeanAttributeInfo> attributes = new ArrayList<MBeanAttributeInfo>();
+    MBeanInfo parentInfo = super.getMBeanInfo();
+    List<String> parentAttributes = new ArrayList<String>();
+    for (MBeanAttributeInfo attr : parentInfo.getAttributes()) {
+      attributes.add(attr);
+      parentAttributes.add(attr.getName());
+    }
+
+    this.registryLength = this.registry.getMetricsList().size();
+
+    for (MetricsBase metric : this.registry.getMetricsList()) {
+      if (metric.getName() == null || parentAttributes.contains(metric.getName()))
+        continue;
+
+      // add on custom HBase metric types
+      if (metric instanceof MetricsRate) {
+        attributes.add( new MBeanAttributeInfo(metric.getName(),
+            "java.lang.Float", metric.getDescription(), true, false, false) );
+        extendedAttributes.put(metric.getName(), metric);
+      } else if (metric instanceof MetricsString) {
+        attributes.add( new MBeanAttributeInfo(metric.getName(),
+            "java.lang.String", metric.getDescription(), true, false, false) );
+        extendedAttributes.put(metric.getName(), metric);
+        LOG.info("MetricsString added: " + metric.getName());
+      }
+      // else, it's probably a Hadoop metric already registered. Skip it.
+    }
+
+    LOG.info("new MBeanInfo");
+    this.extendedInfo = new MBeanInfo( this.getClass().getName(),
+        this.description, attributes.toArray( new MBeanAttributeInfo[0] ),
+        parentInfo.getConstructors(), parentInfo.getOperations(),
+        parentInfo.getNotifications() );
+  }
+
+  private void checkAndUpdateAttributes() {
+    if (this.registryLength != this.registry.getMetricsList().size())
+      this.init();
+  }
+
+  @Override
+  public Object getAttribute( String name )
+      throws AttributeNotFoundException, MBeanException,
+      ReflectionException {
+
+    if (name == null) {
+      throw new IllegalArgumentException("Attribute name is NULL");
+    }
+
+    /*
+     * Ugly.  Since the MetricsDynamicMBeanBase implementation is private,
+     * we need to first check the parent class for the attribute.
+     * In case the MetricsRegistry contents have changed, this will
+     * allow the parent to update its internal structures (which we rely on
+     * to update our own).
+     */
+    try {
+      return super.getAttribute(name);
+    } catch (AttributeNotFoundException ex) {
+
+      checkAndUpdateAttributes();
+
+      MetricsBase metric = this.extendedAttributes.get(name);
+      if (metric != null) {
+        if (metric instanceof MetricsRate) {
+          return ((MetricsRate) metric).getPreviousIntervalValue();
+        } else if (metric instanceof MetricsString) {
+          return ((MetricsString)metric).getValue();
+        } else {
+          LOG.warn( String.format("unknown metrics type %s for attribute %s",
+                        metric.getClass().getName(), name) );
+        }
+      }
+    }
+
+    throw new AttributeNotFoundException();
+  }
+
+  @Override
+  public MBeanInfo getMBeanInfo() {
+    return this.extendedInfo;
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsRate.java b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsRate.java
new file mode 100644
index 0000000..fc1dc36
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsRate.java
@@ -0,0 +1,82 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.util.MetricsBase;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Publishes a rate based on a counter: increment the counter each time an
+ * event occurs (e.g. an RPC call) and this publishes the per-second rate
+ * over the most recent interval.
+ */
+public class MetricsRate extends MetricsBase {
+  private static final Log LOG = LogFactory.getLog("org.apache.hadoop.hbase.metrics");
+
+  private int value;
+  private float prevRate;
+  private long ts;
+
+  public MetricsRate(final String name, final MetricsRegistry registry,
+      final String description) {
+    super(name, description);
+    this.value = 0;
+    this.prevRate = 0;
+    this.ts = System.currentTimeMillis();
+    registry.add(name, this);
+  }
+
+  public MetricsRate(final String name, final MetricsRegistry registry) {
+    this(name, registry, NO_DESCRIPTION);
+  }
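+
+  /*
+   * Illustrative usage sketch (not part of the original source; the variable
+   * names are hypothetical):
+   *
+   *   MetricsRate requests = new MetricsRate("requests", registry);
+   *   requests.inc();                        // on every event, e.g. each RPC
+   *   requests.pushMetric(metricsRecord);    // once per metrics period
+   *   float perSecond = requests.getPreviousIntervalValue();
+   */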
+
+  public synchronized void inc(final int incr) {
+    value += incr;
+  }
+
+  public synchronized void inc() {
+    value++;
+  }
+
+  private synchronized void intervalHeartBeat() {
+    long now = System.currentTimeMillis();
+    long diff = (now-ts)/1000;
+    if (diff == 0) diff = 1; // avoid divide-by-zero for sub-second intervals
+    this.prevRate = (float)value / diff;
+    this.value = 0;
+    this.ts = now;
+  }
+
+  @Override
+  public synchronized void pushMetric(final MetricsRecord mr) {
+    intervalHeartBeat();
+    try {
+      mr.setMetric(getName(), getPreviousIntervalValue());
+    } catch (Exception e) {
+      LOG.info("pushMetric failed for " + getName() + "\n" +
+          StringUtils.stringifyException(e));
+    }
+  }
+
+  public synchronized float getPreviousIntervalValue() {
+    return this.prevRate;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsString.java b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsString.java
new file mode 100644
index 0000000..2ee8066
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/MetricsString.java
@@ -0,0 +1,56 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.util.MetricsBase;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+/**
+ * Publishes a string to the metrics collector
+ */
+public class MetricsString extends MetricsBase {
+  private static final Log LOG = LogFactory.getLog("org.apache.hadoop.hbase.metrics");
+
+  private String value;
+
+  public MetricsString(final String name, final MetricsRegistry registry, 
+      final String value) {
+    super(name, NO_DESCRIPTION);
+    this.value = value;
+    registry.add(name, this);
+  }
+  public MetricsString(final String name, final String description, 
+      final MetricsRegistry registry, final String value) {
+    super(name, description);
+    this.value = value;
+    registry.add(name, this);
+  }
+  
+  public String getValue() {
+    return this.value;
+  }
+
+  @Override
+  public synchronized void pushMetric(final MetricsRecord mr) {
+    // NOOP
+    // MetricsMBeanBase.getAttribute is where we actually fill the data
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/metrics/PersistentMetricsTimeVaryingRate.java b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/PersistentMetricsTimeVaryingRate.java
new file mode 100644
index 0000000..cf2fc28
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/PersistentMetricsTimeVaryingRate.java
@@ -0,0 +1,138 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * This class extends MetricsTimeVaryingRate to let the metrics
+ * persist past a pushMetric() call
+ */
+public class PersistentMetricsTimeVaryingRate extends MetricsTimeVaryingRate {
+  protected static final Log LOG =
+    LogFactory.getLog("org.apache.hadoop.hbase.metrics");
+
+  protected boolean reset = false;
+  protected long lastOper = 0;
+  protected long totalOps = 0;
+
+  /**
+   * Constructor - create a new metric
+   * @param nam the name of the metrics to be used to publish the metric
+   * @param registry - where the metrics object will be registered
+   * @param description metrics description
+   */
+  public PersistentMetricsTimeVaryingRate(final String nam, 
+      final MetricsRegistry registry, 
+      final String description) {
+    super(nam, registry, description);
+  }
+
+  /**
+   * Constructor - create a new metric
+   * @param nam the name of the metrics to be used to publish the metric
+   * @param registry - where the metrics object will be registered
+   */
+  public PersistentMetricsTimeVaryingRate(final String nam, 
+      MetricsRegistry registry) {
+    this(nam, registry, NO_DESCRIPTION);
+  }
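+
+  /*
+   * Illustrative usage sketch (not part of the original source): this is the
+   * pattern MasterMetrics follows for its splitTime metric:
+   *
+   *   PersistentMetricsTimeVaryingRate splitTime =
+   *       new PersistentMetricsTimeVaryingRate("splitTime", registry);
+   *   splitTime.inc(elapsedMillis);          // record one operation
+   *   splitTime.pushMetric(metricsRecord);   // publish; min/max/avg persist
+   *   splitTime.resetMinMaxAvg();            // roll over on the extended period
+   */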
+
+  /**
+   * Push updated metrics to the mr.
+   * 
+   * Note this does NOT push to JMX
+   * (JMX gets the info via {@link #getPreviousIntervalAverageTime()} and
+   * {@link #getPreviousIntervalNumOps()}).
+   *
+   * @param mr owner of this metric
+   */
+  @Override
+  public synchronized void pushMetric(final MetricsRecord mr) {
+    // this will reset the currentInterval & num_ops += prevInterval()
+    super.pushMetric(mr);
+    // since we're retaining prevInterval(), we don't want to do the incr
+    // instead, we want to set that value because we have absolute ops
+    try {
+      mr.setMetric(getName() + "_num_ops", totalOps);
+    } catch (Exception e) {
+      LOG.info("pushMetric failed for " + getName() + "\n" +
+          StringUtils.stringifyException(e));
+    }
+    if (reset) {
+      // use the previous avg as our starting min/max/avg
+      super.inc(getPreviousIntervalAverageTime());
+      reset = false;
+    } else {
+      // maintain the stats that pushMetric() cleared
+      maintainStats();
+    }
+  }
+  
+  /**
+   * Increment the metrics for numOps operations
+   * @param numOps - number of operations
+   * @param time - time for numOps operations
+   */
+  @Override
+  public synchronized void inc(final int numOps, final long time) {
+    super.inc(numOps, time);
+    totalOps += numOps;
+  }
+  
+  /**
+   * Increment the metrics for numOps operations
+   * @param time - time for numOps operations
+   */
+  @Override
+  public synchronized void inc(final long time) {
+    super.inc(time);
+    ++totalOps;
+  }
+  
+  /**
+   * Roll over to a new interval.
+   * NOTE: does not reset numOps; it is an absolute value.
+   */
+  public synchronized void resetMinMaxAvg() {
+    reset = true;
+  }
+
+  /* MetricsTimeVaryingRate will reset every time pushMetric() is called.
+   * This is annoying for long-running stats that might not get a single 
+   * operation in the polling period.  This function ensures that values
+   * for those stat entries don't get reset.
+   */
+  protected void maintainStats() {
+    int curOps = this.getPreviousIntervalNumOps();
+    if (curOps > 0) {
+      long curTime = this.getPreviousIntervalAverageTime();
+      long totalTime = curTime * curOps;
+      if (curTime == 0 || totalTime / curTime == curOps) {
+        super.inc(curOps, totalTime);
+      } else {
+        LOG.info("Stats for " + this.getName() + " overflowed! resetting");
+      }
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/metrics/file/TimeStampingFileContext.java b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/file/TimeStampingFileContext.java
new file mode 100644
index 0000000..000e0d3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/metrics/file/TimeStampingFileContext.java
@@ -0,0 +1,111 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics.file;
+
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+import org.apache.hadoop.metrics.ContextFactory;
+import org.apache.hadoop.metrics.file.FileContext;
+import org.apache.hadoop.metrics.spi.OutputRecord;
+
+/**
+ * Add timestamp to {@link org.apache.hadoop.metrics.file.FileContext#emitRecord(String, String, OutputRecord)}.
+ */
+public class TimeStampingFileContext extends FileContext {
+  // Copies bunch of FileContext here because writer and file are private in
+  // superclass.
+  private File file = null;
+  private PrintWriter writer = null;
+  private final SimpleDateFormat sdf;
+
+  public TimeStampingFileContext() {
+    super();
+    this.sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
+  }
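+
+  // Illustrative wiring sketch (not part of this file; property names assumed
+  // from the standard Hadoop FileContext conventions): this context is
+  // selected via hadoop-metrics.properties, e.g.
+  //   hbase.class=org.apache.hadoop.hbase.metrics.file.TimeStampingFileContext
+  //   hbase.period=10
+  //   hbase.fileName=/tmp/metrics_hbase.log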
+
+  @Override
+  public void init(String contextName, ContextFactory factory) {
+    super.init(contextName, factory);
+    String fileName = getAttribute(FILE_NAME_PROPERTY);
+    if (fileName != null) {
+      file = new File(fileName);
+    }
+  }
+
+  @Override
+  public void startMonitoring() throws IOException {
+    if (file == null) {
+      writer = new PrintWriter(new BufferedOutputStream(System.out));
+    } else {
+      writer = new PrintWriter(new FileWriter(file, true));
+    }
+    super.startMonitoring();
+  }
+
+  @Override
+  public void stopMonitoring() {
+    super.stopMonitoring();
+    if (writer != null) {
+      writer.close();
+      writer = null;
+    }
+  }
+
+  private synchronized String iso8601() {
+    return this.sdf.format(new Date());
+  }
+
+  @Override
+  public void emitRecord(String contextName, String recordName,
+      OutputRecord outRec) {
+    writer.print(iso8601());
+    writer.print(" ");
+    writer.print(contextName);
+    writer.print(".");
+    writer.print(recordName);
+    String separator = ": ";
+    for (String tagName : outRec.getTagNames()) {
+      writer.print(separator);
+      separator = ", ";
+      writer.print(tagName);
+      writer.print("=");
+      writer.print(outRec.getTag(tagName));
+    }
+    for (String metricName : outRec.getMetricNames()) {
+      writer.print(separator);
+      separator = ", ";
+      writer.print(metricName);
+      writer.print("=");
+      writer.print(outRec.getMetric(metricName));
+    }
+    writer.println();
+  }
+
+  @Override
+  public void flush() {
+    writer.flush();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java
new file mode 100644
index 0000000..82894e2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ChangedReadersObserver.java
@@ -0,0 +1,35 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+
+/**
+ * If the set of MapFile.Readers in a Store changes, implementors are notified.
+ */
+public interface ChangedReadersObserver {
+  /**
+   * Notify observers.
+   * @throws IOException e
+   */
+  void updateReaders() throws IOException;
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnCount.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnCount.java
new file mode 100644
index 0000000..1be0280
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnCount.java
@@ -0,0 +1,121 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * Simple wrapper for a byte buffer and a counter.  Does not copy.
+ * <p>
+ * NOT thread-safe because it is not used in a multi-threaded context, yet.
+ */
+public class ColumnCount {
+  private final byte [] bytes;
+  private final int offset;
+  private final int length;
+  private int count;
+
+  /**
+   * Constructor
+   * @param column the qualifier to count the versions for
+   */
+  public ColumnCount(byte [] column) {
+    this(column, 0);
+  }
+
+  /**
+   * Constructor
+   * @param column the qualifier to count the versions for
+   * @param count initial count
+   */
+  public ColumnCount(byte [] column, int count) {
+    this(column, 0, column.length, count);
+  }
+
+  /**
+   * Constructor
+   * @param column the qualifier to count the versions for
+   * @param offset in the passed buffer where to start the qualifier from
+   * @param length of the qualifier
+   * @param count initial count
+   */
+  public ColumnCount(byte [] column, int offset, int length, int count) {
+    this.bytes = column;
+    this.offset = offset;
+    this.length = length;
+    this.count = count;
+  }
+
+  /**
+   * @return the buffer
+   */
+  public byte [] getBuffer(){
+    return this.bytes;
+  }
+
+  /**
+   * @return the offset
+   */
+  public int getOffset(){
+    return this.offset;
+  }
+
+  /**
+   * @return the length
+   */
+  public int getLength(){
+    return this.length;
+  }
+
+  /**
+   * Decrement the current version count
+   * @return current count
+   */
+  public int decrement() {
+    return --count;
+  }
+
+  /**
+   * Increment the current version count
+   * @return current count
+   */
+  public int increment() {
+    return ++count;
+  }
+
+  /**
+   * Set the current count to a new count
+   * @param count new count to set
+   */
+  public void setCount(int count) {
+    this.count = count;
+  }
+
+
+  /**
+   * Check whether more versions need to be fetched
+   * @param max maximum number of versions to fetch
+   * @return true if more versions are needed, false otherwise
+   */
+  public boolean needMore(int max) {
+    return this.count < max;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
new file mode 100644
index 0000000..78f946c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
@@ -0,0 +1,79 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * Implementing classes of this interface will be used for the tracking
+ * and enforcement of columns and numbers of versions during the course of a
+ * Get or Scan operation.
+ * <p>
+ * Currently there are two different types of Store/Family-level queries:
+ * explicit-column queries and wildcard queries.
+ * <ul><li>{@link ExplicitColumnTracker} is used when the query specifies
+ * one or more column qualifiers to return in the family.
+ * <p>
+ * This class is utilized by {@link ScanQueryMatcher} through two methods:
+ * <ul><li>{@link #checkColumn} is called when a Put satisfies all other
+ * conditions of the query.  This method returns a {@link org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode} to define
+ * what action should be taken.
+ * <li>{@link #update} is called at the end of every StoreFile or memstore.
+ * <p>
+ * This class is NOT thread-safe as queries are never multi-threaded
+ */
+public interface ColumnTracker {
+  /**
+   * Keeps track of the number of versions for the columns asked for
+   * @param bytes KeyValue buffer
+   * @param offset offset to the start of the qualifier
+   * @param length length of the qualifier
+   * @param timestamp timestamp of the key being checked
+   * @return The match code instance.
+   */
+  public ScanQueryMatcher.MatchCode checkColumn(byte [] bytes, int offset,
+      int length, long timestamp);
+
+  /**
+   * Updates internal variables in between files
+   */
+  public void update();
+
+  /**
+   * Resets the Matcher
+   */
+  public void reset();
+
+  /**
+   *
+   * @return <code>true</code> when done.
+   */
+  public boolean done();
+
+  /**
+   * Used by matcher and scan/get to get a hint of the next column
+   * to seek to after checkColumn() returns SKIP.  Returns the next interesting
+   * column we want, or NULL if there is none (wildcard scanner).
+   *
+   * Implementations aren't required to return anything useful unless the most recent
+   * call was to checkColumn() and the return code was SKIP.  This contract is
+   * rather implementation-specific, but that is the nature of the optimization.
+   *
+   * @return null, or a ColumnCount that we should seek to
+   */
+  public ColumnCount getColumnHint();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
new file mode 100644
index 0000000..2eeb19f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java
@@ -0,0 +1,218 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Compact region on request and then run split if appropriate
+ */
+public class CompactSplitThread extends Thread implements CompactionRequestor {
+  static final Log LOG = LogFactory.getLog(CompactSplitThread.class);
+  private final long frequency;
+  private final ReentrantLock lock = new ReentrantLock();
+
+  private final HRegionServer server;
+  private final Configuration conf;
+
+  private final PriorityCompactionQueue compactionQueue =
+    new PriorityCompactionQueue();
+
+  /* The default priority for user-specified compaction requests.
+   * The user gets top priority unless there are blocking compactions
+   * (priority <= 0).
+   */
+  public static final int PRIORITY_USER = 1;
+
+  /**
+   * Splitting should not take place if the total number of regions exceeds this.
+   * This is not a hard limit to the number of regions but it is a guideline to
+   * stop splitting after number of online regions is greater than this.
+   */
+  private int regionSplitLimit;
+
+  /** @param server */
+  public CompactSplitThread(HRegionServer server) {
+    super();
+    this.server = server;
+    this.conf = server.getConfiguration();
+    this.regionSplitLimit = conf.getInt("hbase.regionserver.regionSplitLimit",
+        Integer.MAX_VALUE);
+    this.frequency =
+      conf.getLong("hbase.regionserver.thread.splitcompactcheckfrequency",
+      20 * 1000);
+  }
+
+  @Override
+  public void run() {
+    while (!this.server.isStopped()) {
+      HRegion r = null;
+      try {
+        r = compactionQueue.poll(this.frequency, TimeUnit.MILLISECONDS);
+        if (r != null) {
+          lock.lock();
+          try {
+            if(!this.server.isStopped()) {
+              // Don't interrupt us while we are working
+              byte [] midKey = r.compactStores();
+              if (r.getLastCompactInfo() != null) {  // compaction aborted?
+                this.server.getMetrics().addCompaction(r.getLastCompactInfo());
+              }
+              if (shouldSplitRegion() && midKey != null &&
+                  !this.server.isStopped()) {
+                split(r, midKey);
+              }
+            }
+          } finally {
+            lock.unlock();
+          }
+        }
+      } catch (InterruptedException ex) {
+        continue;
+      } catch (IOException ex) {
+        LOG.error("Compaction/Split failed for region " +
+            r.getRegionNameAsString(),
+          RemoteExceptionHandler.checkIOException(ex));
+        if (!server.checkFileSystem()) {
+          break;
+        }
+      } catch (Exception ex) {
+        LOG.error("Compaction failed" +
+            (r != null ? (" for region " + r.getRegionNameAsString()) : ""),
+            ex);
+        if (!server.checkFileSystem()) {
+          break;
+        }
+      }
+    }
+    compactionQueue.clear();
+    LOG.info(getName() + " exiting");
+  }
+
+  public synchronized void requestCompaction(final HRegion r,
+      final String why) {
+    requestCompaction(r, false, why, r.getCompactPriority());
+  }
+
+  public synchronized void requestCompaction(final HRegion r,
+      final String why, int p) {
+    requestCompaction(r, false, why, p);
+  }
+
+  /**
+   * @param r HRegion store belongs to
+   * @param force Whether next compaction should be major
+   * @param why Why compaction requested -- used in debug messages
+   * @param priority Priority of this compaction request; lower values are more urgent
+   */
+  public synchronized void requestCompaction(final HRegion r,
+      final boolean force, final String why, int priority) {
+    if (this.server.isStopped()) {
+      return;
+    }
+    // tell the region to major-compact (and don't downgrade it)
+    if (force) {
+      r.setForceMajorCompaction(force);
+    }
+    if (compactionQueue.add(r, priority) && LOG.isDebugEnabled()) {
+      LOG.debug("Compaction " + (force? "(major) ": "") +
+        "requested for " + r.getRegionNameAsString() +
+        (why != null && !why.isEmpty()? " because " + why: "") +
+        "; priority=" + priority + ", compaction queue size=" + compactionQueue.size());
+    }
+  }
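+
+  /*
+   * Illustrative usage sketch (not part of the original source; the caller
+   * and reason string are hypothetical):
+   *
+   *   // queue a user-triggered compaction at the default user priority
+   *   compactSplitThread.requestCompaction(region, "user request",
+   *       CompactSplitThread.PRIORITY_USER);
+   */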
+
+  private void split(final HRegion parent, final byte [] midKey)
+  throws IOException {
+    final long startTime = System.currentTimeMillis();
+    SplitTransaction st = new SplitTransaction(parent, midKey);
+    // If prepare does not return true for some reason -- logged inside
+    // the prepare call -- we are not ready to split just now.  Just return.
+    if (!st.prepare()) return;
+    try {
+      st.execute(this.server, this.server);
+    } catch (IOException ioe) {
+      try {
+        LOG.info("Running rollback of failed split of " +
+          parent.getRegionNameAsString() + "; " + ioe.getMessage());
+        st.rollback(this.server);
+        LOG.info("Successful rollback of failed split of " +
+          parent.getRegionNameAsString());
+      } catch (RuntimeException e) {
+        // If failed rollback, kill this server to avoid having a hole in table.
+        LOG.info("Failed rollback of failed split of " +
+          parent.getRegionNameAsString() + " -- aborting server", e);
+        this.server.abort("Failed split");
+      }
+      return;
+    }
+
+    // Now tell the master about the new regions.  If we fail here, it's OK.
+    // The BaseScanner will fix it up.  And reporting splits to the master is
+    // going away.
+    // TODO: Verify this still holds in new master rewrite.
+    this.server.reportSplit(parent.getRegionInfo(), st.getFirstDaughter(),
+      st.getSecondDaughter());
+    LOG.info("Region split, META updated, and report to master. Parent=" +
+      parent.getRegionInfo().getRegionNameAsString() + ", new regions: " +
+      st.getFirstDaughter().getRegionNameAsString() + ", " +
+      st.getSecondDaughter().getRegionNameAsString() + ". Split took " +
+      StringUtils.formatTimeDiff(System.currentTimeMillis(), startTime));
+  }
+
+  /**
+   * Only interrupt once it's done with a run through the work loop.
+   */
+  void interruptIfNecessary() {
+    if (lock.tryLock()) {
+      try {
+        this.interrupt();
+      } finally {
+        lock.unlock();
+      }
+    }
+  }
+
+  /**
+   * Returns the current size of the queue of regions waiting to be
+   * compacted.
+   *
+   * @return The current size of the regions queue.
+   */
+  public int getCompactionQueueSize() {
+    return compactionQueue.size();
+  }
+
+  private boolean shouldSplitRegion() {
+    return (regionSplitLimit > server.getNumberOfOnlineRegions());
+  }
+
+  /**
+   * @return the regionSplitLimit
+   */
+  public int getRegionSplitLimit() {
+    return this.regionSplitLimit;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionRequestor.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionRequestor.java
new file mode 100644
index 0000000..f3be5e4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionRequestor.java
@@ -0,0 +1,35 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+public interface CompactionRequestor {
+  /**
+   * @param r Region to compact
+   * @param why Why compaction was requested -- used in debug messages
+   */
+  public void requestCompaction(final HRegion r, final String why);
+
+  /**
+   * @param r Region to compact
+   * @param why Why compaction was requested -- used in debug messages
+   * @param pri Priority of this compaction; the queue is a min-heap, so
+   *   lower values run first and values <= 0 are critical
+   */
+  public void requestCompaction(final HRegion r, final String why, int pri);
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/DebugPrint.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/DebugPrint.java
new file mode 100644
index 0000000..e1d69c7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/DebugPrint.java
@@ -0,0 +1,69 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.FileWriter;
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+public class DebugPrint {
+
+  private static final AtomicBoolean enabled = new AtomicBoolean(false);
+  private static final Object sync = new Object();
+  public static StringBuilder out = new StringBuilder();
+
+  static public void enable() {
+    enabled.set(true);
+  }
+  static public void disable() {
+    enabled.set(false);
+  }
+
+  static public void reset() {
+    synchronized (sync) {
+      enable(); // a reset implies the caller wants buffering enabled
+
+      out = new StringBuilder();
+    }
+  }
+  static public void dumpToFile(String file) throws IOException {
+    FileWriter f = new FileWriter(file);
+    synchronized (sync) {
+      f.write(out.toString());
+    }
+    f.close();
+  }
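+
+  /*
+   * Illustrative usage sketch (not part of the original source; the file path
+   * is hypothetical):
+   *
+   *   DebugPrint.enable();                    // buffer instead of stdout
+   *   DebugPrint.println("scanner state");    // appended to shared buffer
+   *   DebugPrint.dumpToFile("/tmp/debugprint.out");
+   *   DebugPrint.disable();
+   */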
+
+  public static void println(String m) {
+    if (!enabled.get()) {
+      System.out.println(m);
+      return;
+    }
+
+    synchronized (sync) {
+      String threadName = Thread.currentThread().getName();
+      out.append("<");
+      out.append(threadName);
+      out.append("> ");
+      out.append(m);
+      out.append("\n");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/DeleteTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/DeleteTracker.java
new file mode 100644
index 0000000..b425bf2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/DeleteTracker.java
@@ -0,0 +1,97 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * This interface is used for the tracking and enforcement of Deletes
+ * during the course of a Get or Scan operation.
+ * <p>
+ * This class is utilized through three methods:
+ * <ul><li>{@link #add} when encountering a Delete
+ * <li>{@link #isDeleted} when checking if a Put KeyValue has been deleted
+ * <li>{@link #update} when reaching the end of a StoreFile
+ */
+public interface DeleteTracker {
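+
+  /*
+   * Illustrative call sequence (not part of the original source):
+   *   tracker.add(buf, qOff, qLen, ts, type);          // on each Delete seen
+   *   boolean dropped = tracker.isDeleted(buf, qOff, qLen, ts); // on each Put
+   *   tracker.update();                                // end of each StoreFile
+   *   tracker.reset();                                 // between rows
+   */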
+
+  /**
+   * Add the specified KeyValue to the list of deletes to check against for
+   * this row operation.
+   * <p>
+   * This is called when a Delete is encountered in a StoreFile.
+   * @param buffer KeyValue buffer
+   * @param qualifierOffset column qualifier offset
+   * @param qualifierLength column qualifier length
+   * @param timestamp timestamp
+   * @param type delete type as byte
+   */
+  public void add(byte [] buffer, int qualifierOffset, int qualifierLength,
+      long timestamp, byte type);
+
+  /**
+   * Check if the specified KeyValue buffer has been deleted by a previously
+   * seen delete.
+   * @param buffer KeyValue buffer
+   * @param qualifierOffset column qualifier offset
+   * @param qualifierLength column qualifier length
+   * @param timestamp timestamp
+   * @return true if the specified KeyValue is deleted, false if not
+   */
+  public boolean isDeleted(byte [] buffer, int qualifierOffset,
+      int qualifierLength, long timestamp);
+
+  /**
+   * @return true if there are no current deletes, false otherwise
+   */
+  public boolean isEmpty();
+
+  /**
+   * Called at the end of every StoreFile.
+   * <p>
+   * Many optimized implementations of Trackers will require an update
+   * when the end of each StoreFile is reached.
+   */
+  public void update();
+
+  /**
+   * Called between rows.
+   * <p>
+   * This clears everything as if a new DeleteTracker was instantiated.
+   */
+  public void reset();
+
+
+  /**
+   * Return codes for comparison of two Deletes.
+   * <p>
+   * The codes tell the merging function what to do.
+   * <p>
+   * INCLUDE means add the specified Delete to the merged list.
+   * NEXT means move to the next element in the specified list(s).
+   */
+  enum DeleteCompare {
+    INCLUDE_OLD_NEXT_OLD,
+    INCLUDE_OLD_NEXT_BOTH,
+    INCLUDE_NEW_NEXT_NEW,
+    INCLUDE_NEW_NEXT_BOTH,
+    NEXT_OLD,
+    NEXT_NEW
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
new file mode 100644
index 0000000..1f05368
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
@@ -0,0 +1,230 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NavigableSet;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * This class is used for the tracking and enforcement of columns and numbers
+ * of versions during the course of a Get or Scan operation, when explicit
+ * column qualifiers have been asked for in the query.
+ *
+ * With a little magic (see {@link ScanQueryMatcher}), we can use this matcher
+ * for both scans and gets.  The main difference is 'next' and 'done' collapse
+ * for the scan case (since we see all columns in order), and we only reset
+ * between rows.
+ *
+ * <p>
+ * This class is utilized by {@link ScanQueryMatcher} through two methods:
+ * <ul><li>{@link #checkColumn} is called when a Put satisfies all other
+ * conditions of the query.  This method returns a {@link org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode} to define
+ * what action should be taken.
+ * <li>{@link #update} is called at the end of every StoreFile or memstore.
+ * <p>
+ * This class is NOT thread-safe as queries are never multi-threaded
+ */
+public class ExplicitColumnTracker implements ColumnTracker {
+
+  private final int maxVersions;
+  private final List<ColumnCount> columns;
+  private final List<ColumnCount> columnsToReuse;
+  private int index;
+  private ColumnCount column;
+  /** Keeps track of the latest timestamp included for current column.
+   * Used to eliminate duplicates. */
+  private long latestTSOfCurrentColumn;
+
+  /**
+   * Default constructor.
+   * @param columns columns specified by the user in the query
+   * @param maxVersions maximum versions to return per column
+   */
+  public ExplicitColumnTracker(NavigableSet<byte[]> columns, int maxVersions) {
+    this.maxVersions = maxVersions;
+    this.columns = new ArrayList<ColumnCount>(columns.size());
+    this.columnsToReuse = new ArrayList<ColumnCount>(columns.size());
+    for(byte [] column : columns) {
+      this.columnsToReuse.add(new ColumnCount(column,maxVersions));
+    }
+    reset();
+  }
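+
+  /*
+   * Illustrative call sequence (not part of the original source), as driven
+   * by ScanQueryMatcher:
+   *
+   *   ExplicitColumnTracker tracker =
+   *       new ExplicitColumnTracker(qualifiers, maxVersions);
+   *   // for each KeyValue that passes the other query checks:
+   *   ScanQueryMatcher.MatchCode code =
+   *       tracker.checkColumn(buf, qualOffset, qualLength, timestamp);
+   *   tracker.update();   // at the end of each StoreFile or memstore
+   *   tracker.reset();    // between rows
+   */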
+
+  /**
+   * Done when there are no more columns to match against.
+   */
+  public boolean done() {
+    return this.columns.size() == 0;
+  }
+
+  public ColumnCount getColumnHint() {
+    return this.column;
+  }
+
+  /**
+   * Checks against the parameters of the query and the columns which have
+   * already been processed by this query.
+   * @param bytes KeyValue buffer
+   * @param offset offset to the start of the qualifier
+   * @param length length of the qualifier
+   * @param timestamp timestamp of the key being checked
+   * @return MatchCode telling ScanQueryMatcher what action to take
+   */
+  public ScanQueryMatcher.MatchCode checkColumn(byte [] bytes, int offset,
+      int length, long timestamp) {
+    do {
+      // No more columns left, we are done with this query
+      if(this.columns.size() == 0) {
+        return ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW; // done_row
+      }
+
+      // No more columns to match against, done with storefile
+      if(this.column == null) {
+        return ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW; // done_row
+      }
+
+      // Compare specific column to current column
+      int ret = Bytes.compareTo(column.getBuffer(), column.getOffset(),
+          column.getLength(), bytes, offset, length);
+
+      // Column Matches. If it is not a duplicate key, decrement versions left
+      // and include.
+      if(ret == 0) {
+        //If column matches, check if it is a duplicate timestamp
+        if (sameAsPreviousTS(timestamp)) {
+          //If duplicate, skip this Key
+          return ScanQueryMatcher.MatchCode.SKIP;
+        }
+        if(this.column.decrement() == 0) {
+          // Done with versions for this column
+          this.columns.remove(this.index);
+          resetTS();
+          if(this.columns.size() == this.index) {
+            // Will not hit any more columns in this storefile
+            this.column = null;
+          } else {
+            this.column = this.columns.get(this.index);
+          }
+        } else {
+          setTS(timestamp);
+        }
+        return ScanQueryMatcher.MatchCode.INCLUDE;
+      }
+
+      resetTS();
+
+      if (ret > 0) {
+        // Specified column is smaller than the current, skip to next column.
+        return ScanQueryMatcher.MatchCode.SEEK_NEXT_COL;
+      }
+
+      // Specified column is bigger than current column
+      // Move down current column and check again
+      if(ret <= -1) {
+        if(++this.index >= this.columns.size()) {
+          // No more to match, do not include, done with storefile
+          return ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW; // done_row
+        }
+        // Keep looping, comparing against the next specified column.
+        this.column = this.columns.get(this.index);
+      }
+    } while(true);
+  }
+
+  /**
+   * Called at the end of every StoreFile or memstore.
+   */
+  public void update() {
+    if(this.columns.size() != 0) {
+      this.index = 0;
+      this.column = this.columns.get(this.index);
+    } else {
+      this.index = -1;
+      this.column = null;
+    }
+  }
+
+  // Called between every row.
+  public void reset() {
+    buildColumnList();
+    this.index = 0;
+    this.column = this.columns.get(this.index);
+    resetTS();
+  }
+
+  private void resetTS() {
+    latestTSOfCurrentColumn = HConstants.LATEST_TIMESTAMP;
+  }
+
+  private void setTS(long timestamp) {
+    latestTSOfCurrentColumn = timestamp;
+  }
+
+  private boolean sameAsPreviousTS(long timestamp) {
+    return timestamp == latestTSOfCurrentColumn;
+  }
+
+  private void buildColumnList() {
+    this.columns.clear();
+    this.columns.addAll(this.columnsToReuse);
+    for(ColumnCount col : this.columns) {
+      col.setCount(this.maxVersions);
+    }
+  }
+
+  /**
+   * This method is used to inform the column tracker that we are done with
+   * this column. We may get this information from external filters or the
+   * timestamp range, and we then need to pass it on to the tracker. It is
+   * required only in the case of ExplicitColumnTracker.
+   * @param bytes buffer containing the column qualifier
+   * @param offset offset to the start of the qualifier
+   * @param length length of the qualifier
+   */
+  public void doneWithColumn(byte [] bytes, int offset, int length) {
+    while (this.column != null) {
+      int compare = Bytes.compareTo(column.getBuffer(), column.getOffset(),
+          column.getLength(), bytes, offset, length);
+      if (compare == 0) {
+        this.columns.remove(this.index);
+        if (this.columns.size() == this.index) {
+          // Will not hit any more columns in this storefile
+          this.column = null;
+        } else {
+          this.column = this.columns.get(this.index);
+        }
+        return;
+      } else if ( compare <= -1) {
+        if(++this.index != this.columns.size()) {
+          this.column = this.columns.get(this.index);
+        } else {
+          this.column = null;
+        }
+      } else {
+        return;
+      }
+    }
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java
new file mode 100644
index 0000000..b843c91
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java
@@ -0,0 +1,33 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * Request a flush.
+ */
+public interface FlushRequester {
+  /**
+   * Tell the listener the cache needs to be flushed.
+   *
+   * @param region the HRegion requesting the cache flush
+   */
+  void requestFlush(HRegion region);
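+
+  /*
+   * A minimal implementation sketch; the class name and queue below are
+   * hypothetical and only illustrate one way a requester might hand regions
+   * off to a background flusher thread:
+   *
+   *   class QueueingFlushRequester implements FlushRequester {
+   *     private final java.util.concurrent.BlockingQueue<HRegion> queue =
+   *         new java.util.concurrent.LinkedBlockingQueue<HRegion>();
+   *     public void requestFlush(HRegion region) {
+   *       queue.offer(region);  // a worker thread would drain and flush
+   *     }
+   *   }
+   */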
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java
new file mode 100644
index 0000000..3a26bbb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java
@@ -0,0 +1,240 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * State and utility for processing {@link HRegion#getClosestRowBefore(byte[], byte[])}.
+ * Like {@link ScanDeleteTracker} but does not implement the
+ * {@link DeleteTracker} interface since state spans rows (there is no update
+ * nor reset method).
+ */
+class GetClosestRowBeforeTracker {
+  private final KeyValue targetkey;
+  // Any cell w/ a ts older than this is expired.
+  private final long oldestts;
+  private KeyValue candidate = null;
+  private final KVComparator kvcomparator;
+  // Flag for whether we're doing getclosest on a metaregion.
+  private final boolean metaregion;
+  // Offset and length into targetkey demarking table name (if in a metaregion).
+  private final int rowoffset;
+  private final int tablenamePlusDelimiterLength;
+
+  // Deletes keyed by row.  Comparator compares on row portion of KeyValue only.
+  private final NavigableMap<KeyValue, NavigableSet<KeyValue>> deletes;
+
+  /**
+   * @param c comparator to use
+   * @param kv Presumed to be the first KeyValue on the target row: i.e. empty
+   * column, maximum timestamp and a type of Type.Maximum
+   * @param ttl Time to live in ms for this Store
+   * @param metaregion True if this is .META. or -ROOT- region.
+   */
+  GetClosestRowBeforeTracker(final KVComparator c, final KeyValue kv,
+      final long ttl, final boolean metaregion) {
+    super();
+    this.metaregion = metaregion;
+    this.targetkey = kv;
+    // If we are in a metaregion, then our table name is the prefix on the
+    // targetkey.
+    this.rowoffset = kv.getRowOffset();
+    int l = -1;
+    if (metaregion) {
+      l = KeyValue.getDelimiter(kv.getBuffer(), rowoffset, kv.getRowLength(),
+        HRegionInfo.DELIMITER) - this.rowoffset;
+    }
+    this.tablenamePlusDelimiterLength = metaregion? l + 1: -1;
+    this.oldestts = System.currentTimeMillis() - ttl;
+    this.kvcomparator = c;
+    KeyValue.RowComparator rc = new KeyValue.RowComparator(this.kvcomparator);
+    this.deletes = new TreeMap<KeyValue, NavigableSet<KeyValue>>(rc);
+  }
+
+  /**
+   * @param kv
+   * @return True if this <code>kv</code> is expired.
+   */
+  boolean isExpired(final KeyValue kv) {
+    return Store.isExpired(kv, this.oldestts);
+  }
+
+  /*
+   * Add the specified KeyValue to the list of deletes.
+   * @param kv
+   */
+  private void addDelete(final KeyValue kv) {
+    NavigableSet<KeyValue> rowdeletes = this.deletes.get(kv);
+    if (rowdeletes == null) {
+      rowdeletes = new TreeSet<KeyValue>(this.kvcomparator);
+      this.deletes.put(kv, rowdeletes);
+    }
+    rowdeletes.add(kv);
+  }
+
+  /*
+   * Add the passed kv as the candidate if it is nearer the target than the
+   * previous candidate and has not been deleted.
+   * @param kv candidate KeyValue
+   * @return True if we updated the candidate.
+   */
+  private boolean addCandidate(final KeyValue kv) {
+    if (!isDeleted(kv) && isBetterCandidate(kv)) {
+      this.candidate = kv;
+      return true;
+    }
+    return false;
+  }
+
+  boolean isBetterCandidate(final KeyValue contender) {
+    return this.candidate == null ||
+      (this.kvcomparator.compareRows(this.candidate, contender) < 0 &&
+        this.kvcomparator.compareRows(contender, this.targetkey) <= 0);
+  }
+
+  /*
+   * Check if specified KeyValue buffer has been deleted by a previously
+   * seen delete.
+   * @param kv
+   * @return true if the specified KeyValue is deleted, false if not
+   */
+  private boolean isDeleted(final KeyValue kv) {
+    if (this.deletes.isEmpty()) return false;
+    NavigableSet<KeyValue> rowdeletes = this.deletes.get(kv);
+    if (rowdeletes == null || rowdeletes.isEmpty()) return false;
+    return isDeleted(kv, rowdeletes);
+  }
+
+  /**
+   * Check if the specified KeyValue buffer has been deleted by a previously
+   * seen delete.
+   * @param kv
+   * @param ds
+   * @return True if the specified KeyValue is deleted, false if not
+   */
+  public boolean isDeleted(final KeyValue kv, final NavigableSet<KeyValue> ds) {
+    if (ds == null || ds.isEmpty()) return false;
+    for (KeyValue d: ds) {
+      long kvts = kv.getTimestamp();
+      long dts = d.getTimestamp();
+      if (d.isDeleteFamily()) {
+        if (kvts <= dts) return true;
+        continue;
+      }
+      // Check column
+      int ret = Bytes.compareTo(kv.getBuffer(), kv.getQualifierOffset(),
+          kv.getQualifierLength(),
+        d.getBuffer(), d.getQualifierOffset(), d.getQualifierLength());
+      if (ret <= -1) {
+        // This delete is for a later column; the deletes are sorted, so no
+        // remaining delete can match this kv.
+        break;
+      } else if (ret >= 1) {
+        // This delete is for an earlier column; try the next delete.
+        continue;
+      }
+      // Check Timestamp
+      if (kvts > dts) return false;
+
+      // Check Type
+      switch (KeyValue.Type.codeToType(d.getType())) {
+        case Delete: return kvts == dts;
+        case DeleteColumn: return true;
+        default: continue;
+      }
+    }
+    return false;
+  }
+
+  /*
+   * Handle keys whose values hold deletes.
+   * Add the key to the set of deletes and then, if the current candidate
+   * might be covered by it, check for a match and clear the candidate.
+   * @param kv delete KeyValue
+   * @return True if we cleared <code>candidate</code> because it was deleted.
+   */
+  boolean handleDeletes(final KeyValue kv) {
+    addDelete(kv);
+    boolean deleted = false;
+    if (!hasCandidate()) return deleted;
+    if (isDeleted(this.candidate)) {
+      this.candidate = null;
+      deleted = true;
+    }
+    return deleted;
+  }
+
+  /**
+   * Do the right thing with the passed key: add it to the deletes or add it
+   * as a candidate.
+   * @param kv KeyValue to handle
+   * @return True if we added a candidate
+   */
+  boolean handle(final KeyValue kv) {
+    if (kv.isDelete()) {
+      handleDeletes(kv);
+      return false;
+    }
+    return addCandidate(kv);
+  }
+
+  /**
+   * @return True if has candidate
+   */
+  public boolean hasCandidate() {
+    return this.candidate != null;
+  }
+
+  /**
+   * @return Best candidate or null.
+   */
+  public KeyValue getCandidate() {
+    return this.candidate;
+  }
+
+  public KeyValue getTargetKey() {
+    return this.targetkey;
+  }
+
+  /**
+   * @param kv Current kv
+   * @param firstOnRow First KeyValue on the row.
+   * @return True if we went too far, past the target key.
+   */
+  boolean isTooFar(final KeyValue kv, final KeyValue firstOnRow) {
+    return this.kvcomparator.compareRows(kv, firstOnRow) > 0;
+  }
+
+  boolean isTargetTable(final KeyValue kv) {
+    if (!metaregion) return true;
+    // Compare the table-name prefix of the rows, including the delimiter.
+    // This saves having to calculate where the table name ends in the candidate kv.
+    return Bytes.compareTo(this.targetkey.getBuffer(), this.rowoffset,
+        this.tablenamePlusDelimiterLength,
+      kv.getBuffer(), kv.getRowOffset(), this.tablenamePlusDelimiterLength) == 0;
+  }
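+
+  /*
+   * Illustrative call pattern only; the Store-side scan that feeds this
+   * tracker is not shown in this class, and "comparator", "firstOnTargetRow",
+   * "ttl" and "kvs" below are hypothetical variables:
+   *
+   *   GetClosestRowBeforeTracker t =
+   *       new GetClosestRowBeforeTracker(comparator, firstOnTargetRow, ttl, false);
+   *   for (KeyValue kv : kvs) {
+   *     if (t.isExpired(kv) || !t.isTargetTable(kv)) continue;
+   *     t.handle(kv);   // records a delete or, if live and nearer, a candidate
+   *   }
+   *   KeyValue best = t.hasCandidate() ? t.getCandidate() : null;
+   */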
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
new file mode 100644
index 0000000..a111c41
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -0,0 +1,3372 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.io.UnsupportedEncodingException;
+import java.lang.reflect.Constructor;
+import java.text.ParseException;
+import java.util.AbstractList;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.NavigableSet;
+import java.util.Random;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.DroppedSnapshotException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.RowLock;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.IncompatibleFilterException;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.hbase.util.CompressionTest;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.StringUtils;
+
+import com.google.common.collect.Lists;
+
+/**
+ * HRegion stores data for a certain region of a table.  It stores all columns
+ * for each row. A given table consists of one or more HRegions.
+ *
+ * <p>We maintain multiple HStores for a single HRegion.
+ *
+ * <p>A Store is a set of rows with some column data; together,
+ * they make up all the data for the rows.
+ *
+ * <p>Each HRegion has a 'startKey' and 'endKey'.
+ * <p>The first is inclusive, the second is exclusive (except for
+ * the final region).  The endKey of region 0 is the same as
+ * startKey for region 1 (if it exists).  The startKey for the
+ * first region is null. The endKey for the final region is null.
+ *
+ * <p>Locking at the HRegion level serves only one purpose: preventing the
+ * region from being closed (and consequently split) while other operations
+ * are ongoing. Each row level operation obtains both a row lock and a region
+ * read lock for the duration of the operation. While a scanner is being
+ * constructed, getScanner holds a read lock. If the scanner is successfully
+ * constructed, it holds a read lock until it is closed. A close takes out a
+ * write lock and consequently will block for ongoing operations and will block
+ * new operations from starting while the close is in progress.
+ *
+ * <p>An HRegion is defined by its table and its key extent.
+ *
+ * <p>It consists of at least one Store.  The number of Stores should be
+ * configurable, so that data which is accessed together is stored in the same
+ * Store.  Right now, we approximate that by building a single Store for
+ * each column family.  (This config info will be communicated via the
+ * tabledesc.)
+ *
+ * <p>The HTableDescriptor contains metainfo about the HRegion's table.
+ * regionName is a unique identifier for this HRegion. [startKey, endKey)
+ * defines the keyspace for this HRegion.
+ */
+public class HRegion implements HeapSize { // , Writable
+  public static final Log LOG = LogFactory.getLog(HRegion.class);
+  static final String MERGEDIR = "merges";
+
+  final AtomicBoolean closed = new AtomicBoolean(false);
+  /* Closing can take some time; use the closing flag if there is stuff we don't
+   * want to do while in closing state; e.g. like offer this region up to the
+   * master as a region to close if the carrying regionserver is overloaded.
+   * Once set, it is never cleared.
+   */
+  final AtomicBoolean closing = new AtomicBoolean(false);
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Members
+  //////////////////////////////////////////////////////////////////////////////
+
+  private final Set<byte[]> lockedRows =
+    new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+  private final Map<Integer, byte []> lockIds =
+    new HashMap<Integer, byte []>();
+  private int lockIdGenerator = 1;
+  static private Random rand = new Random();
+
+  protected final Map<byte [], Store> stores =
+    new ConcurrentSkipListMap<byte [], Store>(Bytes.BYTES_RAWCOMPARATOR);
+
+  //These variables are just used for getting data out of the region, to test on
+  //client side
+  // private int numStores = 0;
+  // private int [] storeSize = null;
+  // private byte [] name = null;
+
+  final AtomicLong memstoreSize = new AtomicLong(0);
+
+  /**
+   * The directory for the table this region is part of.
+   * This directory contains the directory for this region.
+   */
+  final Path tableDir;
+
+  final HLog log;
+  final FileSystem fs;
+  final Configuration conf;
+  final HRegionInfo regionInfo;
+  final Path regiondir;
+  KeyValue.KVComparator comparator;
+
+  /*
+   * Set this when scheduling compaction if want the next compaction to be a
+   * major compaction.  Cleared each time through compaction code.
+   */
+  private volatile boolean forceMajorCompaction = false;
+  private Pair<Long,Long> lastCompactInfo = null;
+
+  // Used to ensure only one thread closes region at a time.
+  private final Object closeLock = new Object();
+
+  /*
+   * Data structure of write state flags used coordinating flushes,
+   * compactions and closes.
+   */
+  static class WriteState {
+    // Set while a memstore flush is happening.
+    volatile boolean flushing = false;
+    // Set when a flush has been requested.
+    volatile boolean flushRequested = false;
+    // Set while a compaction is running.
+    volatile boolean compacting = false;
+    // Cleared in close(); once false, we can no longer compact or flush.
+    volatile boolean writesEnabled = true;
+    // Set if region is read-only
+    volatile boolean readOnly = false;
+
+    /**
+     * Set flags that make this region read-only.
+     *
+     * @param onOff flip value for region r/o setting
+     */
+    synchronized void setReadOnly(final boolean onOff) {
+      this.writesEnabled = !onOff;
+      this.readOnly = onOff;
+    }
+
+    boolean isReadOnly() {
+      return this.readOnly;
+    }
+
+    boolean isFlushRequested() {
+      return this.flushRequested;
+    }
+  }
+
+  final WriteState writestate = new WriteState();
+
+  final long memstoreFlushSize;
+  private volatile long lastFlushTime;
+  private List<Pair<Long,Long>> recentFlushes = new ArrayList<Pair<Long,Long>>();
+  final FlushRequester flushRequester;
+  private final long blockingMemStoreSize;
+  final long threadWakeFrequency;
+  // Used to guard closes
+  final ReentrantReadWriteLock lock =
+    new ReentrantReadWriteLock();
+
+  // Stop updates lock
+  private final ReentrantReadWriteLock updatesLock =
+    new ReentrantReadWriteLock();
+  private boolean splitRequest;
+
+  private final ReadWriteConsistencyControl rwcc =
+      new ReadWriteConsistencyControl();
+
+  /**
+   * Name of the region info file that resides just under the region directory.
+   */
+  public final static String REGIONINFO_FILE = ".regioninfo";
+
+  /**
+   * Should only be used for testing purposes
+   */
+  public HRegion(){
+    this.tableDir = null;
+    this.blockingMemStoreSize = 0L;
+    this.conf = null;
+    this.flushRequester = null;
+    this.fs = null;
+    this.memstoreFlushSize = 0L;
+    this.log = null;
+    this.regiondir = null;
+    this.regionInfo = null;
+    this.threadWakeFrequency = 0L;
+  }
+
+  /**
+   * HRegion constructor.  This constructor should only be used for testing and
+   * extensions.  Instances of HRegion should be instantiated with the
+   * {@link HRegion#newHRegion(Path, HLog, FileSystem, Configuration, org.apache.hadoop.hbase.HRegionInfo, FlushRequester)} method.
+   *
+   *
+   * @param tableDir qualified path of directory where region should be located,
+   * usually the table directory.
+   * @param log The HLog is the outbound log for any updates to the HRegion
+   * (There's a single HLog for all the HRegions on a single HRegionServer.)
+   * The log file is a logfile from the previous execution that's
+   * custom-computed for this HRegion. The HRegionServer computes and sorts the
+   * appropriate log info for this HRegion. If there is a previous log file
+   * (implying that the HRegion has been written-to before), then read it from
+   * the supplied path.
+   * @param fs is the filesystem.
+   * @param conf is global configuration settings.
+   * @param regionInfo - HRegionInfo that describes the region
+   * @param flushRequester an object that implements {@link FlushRequester} or null
+   *
+   * @see HRegion#newHRegion(Path, HLog, FileSystem, Configuration, org.apache.hadoop.hbase.HRegionInfo, FlushRequester)
+   */
+  public HRegion(Path tableDir, HLog log, FileSystem fs, Configuration conf,
+      HRegionInfo regionInfo, FlushRequester flushRequester) {
+    this.tableDir = tableDir;
+    this.comparator = regionInfo.getComparator();
+    this.log = log;
+    this.fs = fs;
+    this.conf = conf;
+    this.regionInfo = regionInfo;
+    this.flushRequester = flushRequester;
+    this.threadWakeFrequency = conf.getLong(HConstants.THREAD_WAKE_FREQUENCY,
+        10 * 1000);
+    String encodedNameStr = this.regionInfo.getEncodedName();
+    this.regiondir = getRegionDir(this.tableDir, encodedNameStr);
+    long flushSize = regionInfo.getTableDesc().getMemStoreFlushSize();
+    if (flushSize == HTableDescriptor.DEFAULT_MEMSTORE_FLUSH_SIZE) {
+      flushSize = conf.getLong("hbase.hregion.memstore.flush.size",
+                      HTableDescriptor.DEFAULT_MEMSTORE_FLUSH_SIZE);
+    }
+    this.memstoreFlushSize = flushSize;
+    this.blockingMemStoreSize = this.memstoreFlushSize *
+      conf.getLong("hbase.hregion.memstore.block.multiplier", 2);
+    if (LOG.isDebugEnabled()) {
+      // Write out region name as string and its encoded name.
+      LOG.debug("Instantiated " + this);
+    }
+  }
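+
+  /*
+   * A minimal construction sketch, e.g. for tests; the tableDir, wal, fs,
+   * conf and info variables are assumed to already exist, and newHRegion(...)
+   * remains the preferred entry point outside of tests:
+   *
+   *   HRegion region = new HRegion(tableDir, wal, fs, conf, info, null);
+   *   long nextSeqId = region.initialize();  // opens stores, replays recovered edits
+   *   // ... use the region ...
+   *   region.close();
+   */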
+
+  /**
+   * Initialize this region.
+   * @return What the next sequence (edit) id should be.
+   * @throws IOException e
+   */
+  public long initialize() throws IOException {
+    return initialize(null);
+  }
+
+  /**
+   * Initialize this region.
+   *
+   * @param reporter Tickle every so often if initialize is taking a while.
+   * @return What the next sequence (edit) id should be.
+   * @throws IOException e
+   */
+  public long initialize(final Progressable reporter)
+  throws IOException {
+    // A region can be reopened if failed a split; reset flags
+    this.closing.set(false);
+    this.closed.set(false);
+
+    // Write HRI to a file in case we need to recover .META.
+    checkRegioninfoOnFilesystem();
+
+    // Remove temporary data left over from old regions
+    cleanupTmpDir();
+
+    // Load in all the HStores.  Get maximum seqid.
+    long maxSeqId = -1;
+    for (HColumnDescriptor c : this.regionInfo.getTableDesc().getFamilies()) {
+      Store store = instantiateHStore(this.tableDir, c);
+      this.stores.put(c.getName(), store);
+      long storeSeqId = store.getMaxSequenceId();
+      if (storeSeqId > maxSeqId) {
+        maxSeqId = storeSeqId;
+      }
+    }
+    // Recover any edits if available.
+    maxSeqId = replayRecoveredEditsIfAny(this.regiondir, maxSeqId, reporter);
+
+    // Get rid of any splits or merges that were lost in-progress.  Clean out
+    // these directories here on open.  We may be opening a region that was
+    // being split but we crashed in the middle of it all.
+    SplitTransaction.cleanupAnySplitDetritus(this);
+    FSUtils.deleteDirectory(this.fs, new Path(regiondir, MERGEDIR));
+
+    this.writestate.setReadOnly(this.regionInfo.getTableDesc().isReadOnly());
+
+    this.writestate.compacting = false;
+    this.lastFlushTime = EnvironmentEdgeManager.currentTimeMillis();
+    // Use maximum of log sequenceid or that which was found in stores
+    // (particularly if no recovered edits, seqid will be -1).
+    long nextSeqid = maxSeqId + 1;
+    LOG.info("Onlined " + this.toString() + "; next sequenceid=" + nextSeqid);
+    return nextSeqid;
+  }
+
+  /*
+   * Move any passed HStore files into place (if any).  Used to pick up split
+   * files and any merges from splits and merges dirs.
+   * @param initialFiles
+   * @throws IOException
+   */
+  static void moveInitialFilesIntoPlace(final FileSystem fs,
+    final Path initialFiles, final Path regiondir)
+  throws IOException {
+    if (initialFiles != null && fs.exists(initialFiles)) {
+      if (!fs.rename(initialFiles, regiondir)) {
+        LOG.warn("Unable to rename " + initialFiles + " to " + regiondir);
+      }
+    }
+  }
+
+  /**
+   * @return True if this region has references.
+   */
+  public boolean hasReferences() {
+    for (Store store : this.stores.values()) {
+      for (StoreFile sf : store.getStorefiles()) {
+        // Found a reference, return.
+        if (sf.isReference()) return true;
+      }
+    }
+    return false;
+  }
+
+  /*
+   * Write out an info file under the region directory.  Useful for recovering
+   * mangled regions.
+   * @throws IOException
+   */
+  private void checkRegioninfoOnFilesystem() throws IOException {
+    // The file name starts with a '.' so it doesn't clash with a store/family
+    // directory name; a clash is still possible but assumed to be unlikely.
+    Path regioninfo = new Path(this.regiondir, REGIONINFO_FILE);
+    if (this.fs.exists(regioninfo) &&
+        this.fs.getFileStatus(regioninfo).getLen() > 0) {
+      return;
+    }
+    FSDataOutputStream out = this.fs.create(regioninfo, true);
+    try {
+      this.regionInfo.write(out);
+      out.write('\n');
+      out.write('\n');
+      out.write(Bytes.toBytes(this.regionInfo.toString()));
+    } finally {
+      out.close();
+    }
+  }
+
+  /** @return a HRegionInfo object for this region */
+  public HRegionInfo getRegionInfo() {
+    return this.regionInfo;
+  }
+
+  /** @return true if region is closed */
+  public boolean isClosed() {
+    return this.closed.get();
+  }
+
+  /**
+   * @return True if closing process has started.
+   */
+  public boolean isClosing() {
+    return this.closing.get();
+  }
+
+  boolean areWritesEnabled() {
+    synchronized(this.writestate) {
+      return this.writestate.writesEnabled;
+    }
+  }
+
+  public ReadWriteConsistencyControl getRWCC() {
+    return rwcc;
+  }
+
+  /**
+   * Close down this HRegion.  Flush the cache, shut down each HStore, don't
+   * service any more calls.
+   *
+   * <p>This method could take some time to execute, so don't call it from a
+   * time-sensitive thread.
+   *
+   * @return List of all the storage files that the HRegion's component
+   * Stores make use of.  It's a list of StoreFile objects.  Can be null if
+   * we are already closed or it is judged that we should not close now.
+   *
+   * @throws IOException e
+   */
+  public List<StoreFile> close() throws IOException {
+    return close(false);
+  }
+
+  /**
+   * Close down this HRegion.  Flush the cache unless abort parameter is true,
+   * Shut down each HStore, don't service any more calls.
+   *
+   * This method could take some time to execute, so don't call it from a
+   * time-sensitive thread.
+   *
+   * @param abort true if server is aborting (only during testing)
+   * @return List of all the storage files that the HRegion's component
+   * Stores make use of.  It's a list of StoreFile objects.  Can be null if
+   * we are not to close at this time or we are already closed.
+   *
+   * @throws IOException e
+   */
+  public List<StoreFile> close(final boolean abort) throws IOException {
+    // Only allow one thread to close at a time. Serialize them so dual
+    // threads attempting to close will run up against each other.
+    synchronized (closeLock) {
+      return doClose(abort);
+    }
+  }
+
+  private List<StoreFile> doClose(final boolean abort)
+  throws IOException {
+    if (isClosed()) {
+      LOG.warn("Region " + this + " already closed");
+      return null;
+    }
+    boolean wasFlushing = false;
+    synchronized (writestate) {
+      // Disable compacting and flushing by background threads for this
+      // region.
+      writestate.writesEnabled = false;
+      wasFlushing = writestate.flushing;
+      LOG.debug("Closing " + this + ": disabling compactions & flushes");
+      while (writestate.compacting || writestate.flushing) {
+        LOG.debug("waiting for" +
+          (writestate.compacting ? " compaction" : "") +
+          (writestate.flushing ?
+            (writestate.compacting ? "," : "") + " cache flush" :
+              "") + " to complete for region " + this);
+        try {
+          writestate.wait();
+        } catch (InterruptedException iex) {
+          // continue
+        }
+      }
+    }
+    // If we were not just flushing, is it worth doing a preflush...one
+    // that will clear out the bulk of the memstore before we put up
+    // the close flag?
+    if (!abort && !wasFlushing && worthPreFlushing()) {
+      LOG.info("Running close preflush of " + this.getRegionNameAsString());
+      internalFlushcache();
+    }
+    this.closing.set(true);
+    lock.writeLock().lock();
+    try {
+      if (this.isClosed()) {
+        // SplitTransaction handles the null
+        return null;
+      }
+      LOG.debug("Updates disabled for region " + this);
+      // Don't flush the cache if we are aborting
+      if (!abort) {
+        internalFlushcache();
+      }
+
+      List<StoreFile> result = new ArrayList<StoreFile>();
+      for (Store store : stores.values()) {
+        result.addAll(store.close());
+      }
+      this.closed.set(true);
+      LOG.info("Closed " + this);
+      return result;
+    } finally {
+      lock.writeLock().unlock();
+    }
+  }
+
+   /**
+    * @return True if it's worth doing a flush before we put up the close flag.
+    */
+  private boolean worthPreFlushing() {
+    return this.memstoreSize.get() >
+      this.conf.getLong("hbase.hregion.preclose.flush.size", 1024 * 1024 * 5);
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // HRegion accessors
+  //////////////////////////////////////////////////////////////////////////////
+
+  /** @return start key for region */
+  public byte [] getStartKey() {
+    return this.regionInfo.getStartKey();
+  }
+
+  /** @return end key for region */
+  public byte [] getEndKey() {
+    return this.regionInfo.getEndKey();
+  }
+
+  /** @return region id */
+  public long getRegionId() {
+    return this.regionInfo.getRegionId();
+  }
+
+  /** @return region name */
+  public byte [] getRegionName() {
+    return this.regionInfo.getRegionName();
+  }
+
+  /** @return region name as string for logging */
+  public String getRegionNameAsString() {
+    return this.regionInfo.getRegionNameAsString();
+  }
+
+  /** @return HTableDescriptor for this region */
+  public HTableDescriptor getTableDesc() {
+    return this.regionInfo.getTableDesc();
+  }
+
+  /** @return HLog in use for this region */
+  public HLog getLog() {
+    return this.log;
+  }
+
+  /** @return Configuration object */
+  public Configuration getConf() {
+    return this.conf;
+  }
+
+  /** @return region directory Path */
+  public Path getRegionDir() {
+    return this.regiondir;
+  }
+
+  /**
+   * Computes the Path of the HRegion
+   *
+   * @param tabledir qualified path for table
+   * @param name ENCODED region name
+   * @return Path of HRegion directory
+   */
+  public static Path getRegionDir(final Path tabledir, final String name) {
+    return new Path(tabledir, name);
+  }
+
+  /** @return FileSystem being used by this region */
+  public FileSystem getFilesystem() {
+    return this.fs;
+  }
+
+  /** @return info about the last compaction: <duration in seconds, total size compacted> */
+  public Pair<Long,Long> getLastCompactInfo() {
+    return this.lastCompactInfo;
+  }
+
+  /** @return the last time the region was flushed */
+  public long getLastFlushTime() {
+    return this.lastFlushTime;
+  }
+
+  /** @return info about recent flushes: <duration in seconds, memstore size flushed> */
+  public List<Pair<Long,Long>> getRecentFlushInfo() {
+    this.lock.readLock().lock();
+    List<Pair<Long,Long>> ret = this.recentFlushes;
+    this.recentFlushes = new ArrayList<Pair<Long,Long>>();
+    this.lock.readLock().unlock();
+    return ret;
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // HRegion maintenance.
+  //
+  // These methods are meant to be called periodically by the HRegionServer for
+  // upkeep.
+  //////////////////////////////////////////////////////////////////////////////
+
+  /** @return size of the largest HStore. */
+  public long getLargestHStoreSize() {
+    long size = 0;
+    for (Store h: stores.values()) {
+      long storeSize = h.getSize();
+      if (storeSize > size) {
+        size = storeSize;
+      }
+    }
+    return size;
+  }
+
+  /*
+   * Do preparation for pending compaction.
+   * @throws IOException
+   */
+  void doRegionCompactionPrep() throws IOException {
+  }
+
+  /*
+   * Removes the temporary directory for this Store.
+   */
+  private void cleanupTmpDir() throws IOException {
+    FSUtils.deleteDirectory(this.fs, getTmpDir());
+  }
+
+  /**
+   * Get the temporary directory for this region. This directory
+   * will have its contents removed when the region is reopened.
+   */
+  Path getTmpDir() {
+    return new Path(getRegionDir(), ".tmp");
+  }
+
+  void setForceMajorCompaction(final boolean b) {
+    this.forceMajorCompaction = b;
+  }
+
+  boolean getForceMajorCompaction() {
+    return this.forceMajorCompaction;
+  }
+
+  /**
+   * Called by compaction thread and after region is opened to compact the
+   * HStores if necessary.
+   *
+   * <p>This operation could block for a long time, so don't call it from a
+   * time-sensitive thread.
+   *
+   * Note that no locking is necessary at this level because compaction only
+   * conflicts with a region split, and that cannot happen because the region
+   * server does them sequentially and not in parallel.
+   *
+   * @return mid key if split is needed
+   * @throws IOException e
+   */
+  public byte [] compactStores() throws IOException {
+    boolean majorCompaction = this.forceMajorCompaction;
+    this.forceMajorCompaction = false;
+    return compactStores(majorCompaction);
+  }
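+
+  /*
+   * Typical use from maintenance code (sketch only; the compaction thread in
+   * this codebase may wire this up differently):
+   *
+   *   region.setForceMajorCompaction(true);   // optional: force a major compaction
+   *   byte [] splitRow = region.compactStores();
+   *   if (splitRow != null) {
+   *     // the region is large enough that a split at splitRow is suggested
+   *   }
+   */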
+
+  /*
+   * Called by compaction thread and after region is opened to compact the
+   * HStores if necessary.
+   *
+   * <p>This operation could block for a long time, so don't call it from a
+   * time-sensitive thread.
+   *
+   * Note that no locking is necessary at this level because compaction only
+   * conflicts with a region split, and that cannot happen because the region
+   * server does them sequentially and not in parallel.
+   *
+   * @param majorCompaction True to force a major compaction regardless of thresholds
+   * @return split row if split is needed
+   * @throws IOException e
+   */
+  byte [] compactStores(final boolean majorCompaction)
+  throws IOException {
+    if (this.closing.get()) {
+      LOG.debug("Skipping compaction on " + this + " because closing");
+      return null;
+    }
+    lock.readLock().lock();
+    this.lastCompactInfo = null;
+    try {
+      if (this.closed.get()) {
+        LOG.debug("Skipping compaction on " + this + " because closed");
+        return null;
+      }
+      byte [] splitRow = null;
+      if (this.closed.get()) {
+        return splitRow;
+      }
+      try {
+        synchronized (writestate) {
+          if (!writestate.compacting && writestate.writesEnabled) {
+            writestate.compacting = true;
+          } else {
+            LOG.info("NOT compacting region " + this +
+                ": compacting=" + writestate.compacting + ", writesEnabled=" +
+                writestate.writesEnabled);
+              return splitRow;
+          }
+        }
+        LOG.info("Starting" + (majorCompaction? " major " : " ") +
+            "compaction on region " + this);
+        long startTime = EnvironmentEdgeManager.currentTimeMillis();
+        doRegionCompactionPrep();
+        long lastCompactSize = 0;
+        long maxSize = -1;
+        boolean completed = false;
+        try {
+          for (Store store: stores.values()) {
+            final Store.StoreSize ss = store.compact(majorCompaction);
+            lastCompactSize += store.getLastCompactSize();
+            if (ss != null && ss.getSize() > maxSize) {
+              maxSize = ss.getSize();
+              splitRow = ss.getSplitRow();
+            }
+          }
+          completed = true;
+        } catch (InterruptedIOException iioe) {
+          LOG.info("compaction interrupted by user: ", iioe);
+        } finally {
+          long now = EnvironmentEdgeManager.currentTimeMillis();
+          LOG.info(((completed) ? "completed" : "aborted")
+              + " compaction on region " + this
+              + " after " + StringUtils.formatTimeDiff(now, startTime));
+          if (completed) {
+            this.lastCompactInfo =
+              new Pair<Long,Long>((now - startTime) / 1000, lastCompactSize);
+          }
+        }
+      } finally {
+        synchronized (writestate) {
+          writestate.compacting = false;
+          writestate.notifyAll();
+        }
+      }
+      return splitRow;
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Flush the cache.
+   *
+   * When this method is called the cache will be flushed unless:
+   * <ol>
+   *   <li>the cache is empty</li>
+   *   <li>the region is closed.</li>
+   *   <li>a flush is already in progress</li>
+   *   <li>writes are disabled</li>
+   * </ol>
+   *
+   * <p>This method may block for some time, so it should not be called from a
+   * time-sensitive thread.
+   *
+   * @return true if the flush found that the region needs compacting; false
+   * if the flush was skipped or no compaction is needed
+   *
+   * @throws IOException general io exceptions
+   * @throws DroppedSnapshotException Thrown when replay of hlog is required
+   * because a Snapshot was not properly persisted.
+   */
+  public boolean flushcache() throws IOException {
+    // fail-fast instead of waiting on the lock
+    if (this.closing.get()) {
+      LOG.debug("Skipping flush on " + this + " because closing");
+      return false;
+    }
+    lock.readLock().lock();
+    try {
+      if (this.closed.get()) {
+        LOG.debug("Skipping flush on " + this + " because closed");
+        return false;
+      }
+      try {
+        synchronized (writestate) {
+          if (!writestate.flushing && writestate.writesEnabled) {
+            this.writestate.flushing = true;
+          } else {
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("NOT flushing memstore for region " + this +
+                  ", flushing=" +
+                  writestate.flushing + ", writesEnabled=" +
+                  writestate.writesEnabled);
+            }
+            return false;
+          }
+        }
+        return internalFlushcache();
+      } finally {
+        synchronized (writestate) {
+          writestate.flushing = false;
+          this.writestate.flushRequested = false;
+          writestate.notifyAll();
+        }
+      }
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
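+
+  /*
+   * Sketch of how a caller might drive a flush-then-maybe-compact sequence,
+   * assuming (as internalFlushcache documents) that the return value signals
+   * whether a compaction was requested by the flush:
+   *
+   *   boolean needsCompaction = region.flushcache();
+   *   if (needsCompaction) {
+   *     region.compactStores();
+   *   }
+   */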
+
+  /**
+   * Flush the memstore.
+   *
+   * Flushing the memstore is a little tricky. We have a lot of updates in the
+   * memstore, all of which have also been written to the log. We need to
+   * write those updates in the memstore out to disk, while being able to
+   * process reads/writes as much as possible during the flush operation. Also,
+   * the log has to state clearly the point in time at which the memstore was
+   * flushed. (That way, during recovery, we know when we can rely on the
+   * on-disk flushed structures and when we have to recover the memstore from
+   * the log.)
+   *
+   * <p>So, we have a three-step process:
+   *
+   * <ul><li>A. Flush the memstore to the on-disk stores, noting the current
+   * sequence ID for the log.</li>
+   *
+   * <li>B. Write a FLUSHCACHE-COMPLETE message to the log, using the sequence
+   * ID that was current at the time of memstore-flush.</li>
+   *
+   * <li>C. Get rid of the memstore structures that are now redundant, as
+   * they've been flushed to the on-disk HStores.</li>
+   * </ul>
+   * <p>This method is protected, but can be accessed via several public
+   * routes.
+   *
+   * <p> This method may block for some time.
+   *
+   * @return true if the region needs compacting
+   *
+   * @throws IOException general io exceptions
+   * @throws DroppedSnapshotException Thrown when replay of hlog is required
+   * because a Snapshot was not properly persisted.
+   */
+  protected boolean internalFlushcache() throws IOException {
+    return internalFlushcache(this.log, -1);
+  }
+
+  /**
+   * @param wal Null if we're NOT to go via hlog/wal.
+   * @param myseqid The seqid to use if <code>wal</code> is null writing out
+   * flush file.
+   * @return true if the region needs compacting
+   * @throws IOException
+   * @see #internalFlushcache()
+   */
+  protected boolean internalFlushcache(final HLog wal, final long myseqid)
+  throws IOException {
+    final long startTime = EnvironmentEdgeManager.currentTimeMillis();
+    // Record the latest flush time.
+    this.lastFlushTime = startTime;
+    // If nothing to flush, return and avoid logging start/stop flush.
+    if (this.memstoreSize.get() <= 0) {
+      return false;
+    }
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Started memstore flush for " + this +
+        ", current region memstore size " +
+        StringUtils.humanReadableInt(this.memstoreSize.get()) +
+        ((wal != null)? "": "; wal is null, using passed sequenceid=" + myseqid));
+    }
+
+    // Stop updates while we snapshot the memstore of all stores. We only have
+    // to do this for a moment.  Its quick.  The subsequent sequence id that
+    // goes into the HLog after we've flushed all these snapshots also goes
+    // into the info file that sits beside the flushed files.
+    // We also set the memstore size to zero here before we allow updates
+    // again so its value will represent the size of the updates received
+    // during the flush
+    long sequenceId = -1L;
+    long completeSequenceId = -1L;
+
+    // We have to take a write lock during snapshot, or else a write could
+    // end up in both snapshot and memstore (makes it difficult to do atomic
+    // rows then)
+    this.updatesLock.writeLock().lock();
+    final long currentMemStoreSize = this.memstoreSize.get();
+    List<StoreFlusher> storeFlushers = new ArrayList<StoreFlusher>(stores.size());
+    try {
+      sequenceId = (wal == null)? myseqid: wal.startCacheFlush();
+      completeSequenceId = this.getCompleteCacheFlushSequenceId(sequenceId);
+
+      for (Store s : stores.values()) {
+        storeFlushers.add(s.getStoreFlusher(completeSequenceId));
+      }
+
+      // prepare flush (take a snapshot)
+      for (StoreFlusher flusher : storeFlushers) {
+        flusher.prepare();
+      }
+    } finally {
+      this.updatesLock.writeLock().unlock();
+    }
+
+    LOG.debug("Finished snapshotting, commencing flushing stores");
+
+    // Any failure from here on out will be catastrophic requiring server
+    // restart so hlog content can be replayed and put back into the memstore.
+    // Otherwise, the snapshot content, while backed up in the hlog, will not
+    // be part of the currently running server's state.
+    boolean compactionRequested = false;
+    try {
+      // A.  Flush memstore to all the HStores.
+      // Keep running vector of all store files that includes both old and the
+      // just-made new flush store file.
+
+      for (StoreFlusher flusher : storeFlushers) {
+        flusher.flushCache();
+      }
+      // Switch snapshot (in memstore) -> new hfile (thus causing
+      // all the store scanners to reset/reseek).
+      for (StoreFlusher flusher : storeFlushers) {
+        boolean needsCompaction = flusher.commit();
+        if (needsCompaction) {
+          compactionRequested = true;
+        }
+      }
+      storeFlushers.clear();
+
+      // Set down the memstore size by amount of flush.
+      this.memstoreSize.addAndGet(-currentMemStoreSize);
+    } catch (Throwable t) {
+      // An exception here means that the snapshot was not persisted.
+      // The hlog needs to be replayed so its content is restored to memstore.
+      // Currently, only a server restart will do this.
+      // We used to only catch IOEs but its possible that we'd get other
+      // exceptions -- e.g. HBASE-659 was about an NPE -- so now we catch
+      // all and sundry.
+      if (wal != null) wal.abortCacheFlush();
+      DroppedSnapshotException dse = new DroppedSnapshotException("region: " +
+          Bytes.toStringBinary(getRegionName()));
+      dse.initCause(t);
+      throw dse;
+    }
+
+    // If we get to here, the HStores have been written. If we get an
+    // error in completeCacheFlush it will release the lock it is holding
+
+    // B.  Write a FLUSHCACHE-COMPLETE message to the log.
+    //     This tells future readers that the HStores were emitted correctly,
+    //     and that all updates to the log for this regionName that have lower
+    //     log-sequence-ids can be safely ignored.
+    if (wal != null) {
+      wal.completeCacheFlush(this.regionInfo.getEncodedNameAsBytes(),
+        regionInfo.getTableDesc().getName(), completeSequenceId,
+        this.getRegionInfo().isMetaRegion());
+    }
+
+    // C. Finally notify anyone waiting on memstore to clear:
+    // e.g. checkResources().
+    synchronized (this) {
+      notifyAll(); // FindBugs NN_NAKED_NOTIFY
+    }
+
+    long time = EnvironmentEdgeManager.currentTimeMillis() - startTime;
+    if (LOG.isDebugEnabled()) {
+      LOG.info("Finished memstore flush of ~" +
+        StringUtils.humanReadableInt(currentMemStoreSize) + " for region " +
+        this + " in " + time + "ms, sequenceid=" + sequenceId +
+        ", compaction requested=" + compactionRequested +
+        ((wal == null)? "; wal=null": ""));
+    }
+    this.recentFlushes.add(new Pair<Long,Long>(time/1000,currentMemStoreSize));
+
+    return compactionRequested;
+  }
+
+   /**
+   * Get the sequence number to be associated with this cache flush. Used by
+   * TransactionalRegion to not complete pending transactions.
+   *
+   *
+   * @param currentSequenceId
+   * @return sequence id to complete the cache flush with
+   */
+  protected long getCompleteCacheFlushSequenceId(long currentSequenceId) {
+    return currentSequenceId;
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // get() methods for client use.
+  //////////////////////////////////////////////////////////////////////////////
+  /**
+   * Return all the data for the row that matches <i>row</i> exactly,
+   * or the one that immediately precedes it.
+   *
+   * @param row row key
+   * @return map of values
+   * @throws IOException
+   */
+  Result getClosestRowBefore(final byte [] row)
+  throws IOException{
+    return getClosestRowBefore(row, HConstants.CATALOG_FAMILY);
+  }
+
+  /**
+   * Return all the data for the row that matches <i>row</i> exactly,
+   * or the one that immediately precedes it.
+   *
+   * @param row row key
+   * @param family column family to find on
+   * @return map of values
+   * @throws IOException read exceptions
+   */
+  public Result getClosestRowBefore(final byte [] row, final byte [] family)
+  throws IOException {
+    // Look in the passed family's store and determine the closest key at or
+    // before the passed row; the data may be sparse.
+    KeyValue key = null;
+    checkRow(row);
+    startRegionOperation();
+    try {
+      Store store = getStore(family);
+      KeyValue kv = new KeyValue(row, HConstants.LATEST_TIMESTAMP);
+      // get the closest key. (HStore.getRowKeyAtOrBefore can return null)
+      key = store.getRowKeyAtOrBefore(kv);
+      if (key == null) {
+        return null;
+      }
+      Get get = new Get(key.getRow());
+      get.addFamily(family);
+      return get(get, null);
+    } finally {
+      closeRegionOperation();
+    }
+  }
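+
+  /*
+   * Example (sketch): look up the row at or just before a key in the given
+   * family; the row key and "info" family below are hypothetical:
+   *
+   *   Result r = region.getClosestRowBefore(Bytes.toBytes("row-0042"),
+   *       Bytes.toBytes("info"));
+   *   if (r != null && !r.isEmpty()) {
+   *     // r holds the data for the closest preceding (or matching) row
+   *   }
+   */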
+
+  /**
+   * Return a scanner that iterates over the HRegion, returning the indicated
+   * columns and rows specified by the {@link Scan}.
+   * <p>
+   * This scanner must be closed by the caller.
+   *
+   * @param scan configured {@link Scan}
+   * @return InternalScanner
+   * @throws IOException read exceptions
+   */
+  public InternalScanner getScanner(Scan scan)
+  throws IOException {
+   return getScanner(scan, null);
+  }
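+
+  /*
+   * Usage sketch; this assumes InternalScanner exposes next(List<KeyValue>)
+   * and close() as elsewhere in this codebase, and that the scanner is always
+   * closed by the caller.  The "f" family is hypothetical:
+   *
+   *   Scan scan = new Scan();
+   *   scan.addFamily(Bytes.toBytes("f"));
+   *   InternalScanner scanner = region.getScanner(scan);
+   *   try {
+   *     List<KeyValue> kvs = new ArrayList<KeyValue>();
+   *     boolean more;
+   *     do {
+   *       kvs.clear();
+   *       more = scanner.next(kvs);
+   *       // process kvs
+   *     } while (more);
+   *   } finally {
+   *     scanner.close();
+   *   }
+   */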
+
+  protected InternalScanner getScanner(Scan scan, List<KeyValueScanner> additionalScanners) throws IOException {
+    startRegionOperation();
+    try {
+      // Verify families are all valid
+      if(scan.hasFamilies()) {
+        for(byte [] family : scan.getFamilyMap().keySet()) {
+          checkFamily(family);
+        }
+      } else { // Adding all families to scanner
+        for(byte[] family: regionInfo.getTableDesc().getFamiliesKeys()){
+          scan.addFamily(family);
+        }
+      }
+      return instantiateInternalScanner(scan, additionalScanners);
+
+    } finally {
+      closeRegionOperation();
+    }
+  }
+
+  protected InternalScanner instantiateInternalScanner(Scan scan, List<KeyValueScanner> additionalScanners) throws IOException {
+    return new RegionScanner(scan, additionalScanners);
+  }
+
+  /*
+   * @param delete The passed delete is modified by this method. WARNING!
+   */
+  private void prepareDelete(Delete delete) throws IOException {
+    // Check to see if this is a deleteRow insert
+    if(delete.getFamilyMap().isEmpty()){
+      for(byte [] family : regionInfo.getTableDesc().getFamiliesKeys()){
+        // Don't eat the timestamp
+        delete.deleteFamily(family, delete.getTimeStamp());
+      }
+    } else {
+      for(byte [] family : delete.getFamilyMap().keySet()) {
+        if(family == null) {
+          throw new NoSuchColumnFamilyException("Empty family is invalid");
+        }
+        checkFamily(family);
+      }
+    }
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // set() methods for client use.
+  //////////////////////////////////////////////////////////////////////////////
+  /**
+   * @param delete delete object
+   * @param lockid existing lock id, or null to grab a new lock
+   * @param writeToWAL whether to append the edit to the write ahead log
+   * @throws IOException read exceptions
+   */
+  public void delete(Delete delete, Integer lockid, boolean writeToWAL)
+  throws IOException {
+    checkReadOnly();
+    checkResources();
+    Integer lid = null;
+    startRegionOperation();
+    try {
+      byte [] row = delete.getRow();
+      // If we did not pass an existing row lock, obtain a new one
+      lid = getLock(lockid, row, true);
+
+      // All edits for the given row (across all column families) must happen atomically.
+      prepareDelete(delete);
+      delete(delete.getFamilyMap(), writeToWAL);
+
+    } finally {
+      if(lockid == null) releaseRowLock(lid);
+      closeRegionOperation();
+    }
+  }
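+
+  /*
+   * Example (sketch): delete an entire row, letting prepareDelete expand the
+   * empty family map into a delete of every family; passing a null lockid
+   * means a new row lock is taken for the duration of the call.  "row" is a
+   * hypothetical row key:
+   *
+   *   Delete d = new Delete(row);
+   *   region.delete(d, null, true);   // true: write the delete to the WAL
+   */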
+
+
+  /**
+   * @param familyMap map of family to edits for the given family.
+   * @param writeToWAL
+   * @throws IOException
+   */
+  public void delete(Map<byte[], List<KeyValue>> familyMap, boolean writeToWAL)
+  throws IOException {
+    long now = EnvironmentEdgeManager.currentTimeMillis();
+    byte [] byteNow = Bytes.toBytes(now);
+    boolean flush = false;
+
+    updatesLock.readLock().lock();
+
+    try {
+
+      for (Map.Entry<byte[], List<KeyValue>> e : familyMap.entrySet()) {
+
+        byte[] family = e.getKey();
+        List<KeyValue> kvs = e.getValue();
+        Map<byte[], Integer> kvCount = new TreeMap<byte[], Integer>(Bytes.BYTES_COMPARATOR);
+
+        for (KeyValue kv: kvs) {
+          //  Check if time is LATEST, change to time of most recent addition if so
+          //  This is expensive.
+          if (kv.isLatestTimestamp() && kv.isDeleteType()) {
+            byte[] qual = kv.getQualifier();
+            if (qual == null) qual = HConstants.EMPTY_BYTE_ARRAY;
+
+            Integer count = kvCount.get(qual);
+            if (count == null) {
+              kvCount.put(qual, 1);
+            } else {
+              kvCount.put(qual, count + 1);
+            }
+            count = kvCount.get(qual);
+
+            Get get = new Get(kv.getRow());
+            get.setMaxVersions(count);
+            get.addColumn(family, qual);
+
+            List<KeyValue> result = get(get);
+
+            if (result.size() < count) {
+              // Nothing to delete
+              kv.updateLatestStamp(byteNow);
+              continue;
+            }
+            if (result.size() > count) {
+              throw new RuntimeException("Unexpected size: " + result.size());
+            }
+            KeyValue getkv = result.get(count - 1);
+            Bytes.putBytes(kv.getBuffer(), kv.getTimestampOffset(),
+                getkv.getBuffer(), getkv.getTimestampOffset(), Bytes.SIZEOF_LONG);
+          } else {
+            kv.updateLatestStamp(byteNow);
+          }
+        }
+      }
+
+      if (writeToWAL) {
+        // write/sync to WAL should happen before we touch memstore.
+        //
+        // If order is reversed, i.e. we write to memstore first, and
+        // for some reason fail to write/sync to commit log, the memstore
+        // will contain uncommitted transactions.
+        //
+        // bunch up all edits across all column families into a
+        // single WALEdit.
+        WALEdit walEdit = new WALEdit();
+        addFamilyMapToWALEdit(familyMap, walEdit);
+        this.log.append(regionInfo, regionInfo.getTableDesc().getName(),
+            walEdit, now);
+      }
+
+      // Now make changes to the memstore.
+      long addedSize = applyFamilyMapToMemstore(familyMap);
+      flush = isFlushSize(memstoreSize.addAndGet(addedSize));
+    } finally {
+      this.updatesLock.readLock().unlock();
+    }
+
+    if (flush) {
+      // Request a cache flush.  Do it outside update lock.
+      requestFlush();
+    }
+  }
+
+  /**
+   * @param put
+   * @throws IOException
+   */
+  public void put(Put put) throws IOException {
+    this.put(put, null, put.getWriteToWAL());
+  }
+
+  /**
+   * @param put
+   * @param writeToWAL
+   * @throws IOException
+   */
+  public void put(Put put, boolean writeToWAL) throws IOException {
+    this.put(put, null, writeToWAL);
+  }
+
+  /**
+   * @param put
+   * @param lockid
+   * @throws IOException
+   */
+  public void put(Put put, Integer lockid) throws IOException {
+    this.put(put, lockid, put.getWriteToWAL());
+  }
+
+  /**
+   * @param put
+   * @param lockid
+   * @param writeToWAL
+   * @throws IOException
+   */
+  public void put(Put put, Integer lockid, boolean writeToWAL)
+  throws IOException {
+    checkReadOnly();
+
+    // Do a rough check that we have resources to accept a write.  The check is
+    // 'rough' in that between the resource check and the call to obtain a
+    // read lock, resources may run out.  For now, the thought is that this
+    // will be extremely rare; we'll deal with it when it happens.
+    checkResources();
+    startRegionOperation();
+    try {
+      // We obtain a per-row lock, so other clients will block while one client
+      // performs an update. The row lock is released in the finally block
+      // below, or by the HRegionServer if its lease on the lock expires.
+      byte [] row = put.getRow();
+      // If we did not pass an existing row lock, obtain a new one
+      Integer lid = getLock(lockid, row, true);
+
+      try {
+        // All edits for the given row (across all column families) must happen atomically.
+        put(put.getFamilyMap(), writeToWAL);
+      } finally {
+        if(lockid == null) releaseRowLock(lid);
+      }
+    } finally {
+      closeRegionOperation();
+    }
+  }
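+
+  // Illustrative sketch of driving the put() overloads above.  Assumes an
+  // online HRegion named 'region' and an existing column family 'cf'; the
+  // names are placeholders only.
+  //
+  //   Put p = new Put(Bytes.toBytes("row1"));
+  //   p.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("val"));
+  //   region.put(p);              // honors the Put's own writeToWAL flag
+  //   region.put(p, false);       // same edit, but skips the WAL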
+
+  /**
+   * Struct-like class that tracks the progress of a batch operation,
+   * accumulating status codes and tracking the index at which processing
+   * is proceeding.
+   */
+  private static class BatchOperationInProgress<T> {
+    T[] operations;
+    OperationStatusCode[] retCodes;
+    int nextIndexToProcess = 0;
+
+    public BatchOperationInProgress(T[] operations) {
+      this.operations = operations;
+      retCodes = new OperationStatusCode[operations.length];
+      Arrays.fill(retCodes, OperationStatusCode.NOT_RUN);
+    }
+
+    public boolean isDone() {
+      return nextIndexToProcess == operations.length;
+    }
+  }
+
+  /**
+   * Perform a batch put with no pre-specified locks
+   * @see HRegion#put(Pair[])
+   */
+  public OperationStatusCode[] put(Put[] puts) throws IOException {
+    @SuppressWarnings("unchecked")
+    Pair<Put, Integer> putsAndLocks[] = new Pair[puts.length];
+
+    for (int i = 0; i < puts.length; i++) {
+      putsAndLocks[i] = new Pair<Put, Integer>(puts[i], null);
+    }
+    return put(putsAndLocks);
+  }
+
+  /**
+   * Perform a batch of puts.
+   * @param putsAndLocks the list of puts paired with their requested lock IDs.
+   * @throws IOException
+   */
+  public OperationStatusCode[] put(Pair<Put, Integer>[] putsAndLocks) throws IOException {
+    BatchOperationInProgress<Pair<Put, Integer>> batchOp =
+      new BatchOperationInProgress<Pair<Put,Integer>>(putsAndLocks);
+
+    while (!batchOp.isDone()) {
+      checkReadOnly();
+      checkResources();
+
+      long newSize;
+      startRegionOperation();
+      try {
+        long addedSize = doMiniBatchPut(batchOp);
+        newSize = memstoreSize.addAndGet(addedSize);
+      } finally {
+        closeRegionOperation();
+      }
+      if (isFlushSize(newSize)) {
+        requestFlush();
+      }
+    }
+    return batchOp.retCodes;
+  }
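+
+  // Illustrative sketch of consuming the batch result above.  Assumes an
+  // online HRegion named 'region' and a Put[] named 'puts' (placeholders).
+  //
+  //   OperationStatusCode[] codes = region.put(puts);
+  //   for (int i = 0; i < codes.length; i++) {
+  //     if (codes[i] != OperationStatusCode.SUCCESS) {
+  //       // e.g. BAD_FAMILY or FAILURE; caller decides whether to retry puts[i]
+  //     }
+  //   }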
+
+  private long doMiniBatchPut(BatchOperationInProgress<Pair<Put, Integer>> batchOp) throws IOException {
+    long now = EnvironmentEdgeManager.currentTimeMillis();
+    byte[] byteNow = Bytes.toBytes(now);
+
+    /** Keep track of the locks we hold so we can release them in finally clause */
+    List<Integer> acquiredLocks = Lists.newArrayListWithCapacity(batchOp.operations.length);
+    // We try to set up a batch in the range [firstIndex,lastIndexExclusive)
+    int firstIndex = batchOp.nextIndexToProcess;
+    int lastIndexExclusive = firstIndex;
+    boolean success = false;
+    try {
+      // ------------------------------------
+      // STEP 1. Try to acquire as many locks as we can, and ensure
+      // we acquire at least one.
+      // ----------------------------------
+      int numReadyToWrite = 0;
+      while (lastIndexExclusive < batchOp.operations.length) {
+        Pair<Put, Integer> nextPair = batchOp.operations[lastIndexExclusive];
+        Put put = nextPair.getFirst();
+        Integer providedLockId = nextPair.getSecond();
+
+        // Check the families in the put. If bad, skip this one.
+        try {
+          checkFamilies(put.getFamilyMap().keySet());
+        } catch (NoSuchColumnFamilyException nscf) {
+          LOG.warn("No such column family in batch put", nscf);
+          batchOp.retCodes[lastIndexExclusive] = OperationStatusCode.BAD_FAMILY;
+          lastIndexExclusive++;
+          continue;
+        }
+
+        // If we haven't got any rows in our batch, we should block to
+        // get the next one.
+        boolean shouldBlock = numReadyToWrite == 0;
+        Integer acquiredLockId = getLock(providedLockId, put.getRow(), shouldBlock);
+        if (acquiredLockId == null) {
+          // We failed to grab another lock
+          assert !shouldBlock : "Should never fail to get lock when blocking";
+          break; // stop acquiring more rows for this batch
+        }
+        if (providedLockId == null) {
+          acquiredLocks.add(acquiredLockId);
+        }
+        lastIndexExclusive++;
+        numReadyToWrite++;
+      }
+      // Nothing to put -- perhaps every operation hit an exception above,
+      // such as NoSuchColumnFamilyException.
+      if (numReadyToWrite <= 0) return 0L;
+
+      // We've now grabbed as many puts off the list as we can
+
+      // ------------------------------------
+      // STEP 2. Update any LATEST_TIMESTAMP timestamps
+      // ----------------------------------
+      for (int i = firstIndex; i < lastIndexExclusive; i++) {
+        updateKVTimestamps(
+            batchOp.operations[i].getFirst().getFamilyMap().values(),
+            byteNow);
+      }
+
+      // ------------------------------------
+      // STEP 3. Write to WAL
+      // ----------------------------------
+      WALEdit walEdit = new WALEdit();
+      for (int i = firstIndex; i < lastIndexExclusive; i++) {
+        // Skip puts that were determined to be invalid during preprocessing
+        if (batchOp.retCodes[i] != OperationStatusCode.NOT_RUN) continue;
+
+        Put p = batchOp.operations[i].getFirst();
+        if (!p.getWriteToWAL()) continue;
+        addFamilyMapToWALEdit(p.getFamilyMap(), walEdit);
+      }
+
+      // Append the edit to WAL
+      this.log.append(regionInfo, regionInfo.getTableDesc().getName(),
+          walEdit, now);
+
+      // ------------------------------------
+      // STEP 4. Write back to memstore
+      // ----------------------------------
+      long addedSize = 0;
+      for (int i = firstIndex; i < lastIndexExclusive; i++) {
+        if (batchOp.retCodes[i] != OperationStatusCode.NOT_RUN) continue;
+
+        Put p = batchOp.operations[i].getFirst();
+        addedSize += applyFamilyMapToMemstore(p.getFamilyMap());
+        batchOp.retCodes[i] = OperationStatusCode.SUCCESS;
+      }
+      success = true;
+      return addedSize;
+    } finally {
+      for (Integer toRelease : acquiredLocks) {
+        releaseRowLock(toRelease);
+      }
+      if (!success) {
+        for (int i = firstIndex; i < lastIndexExclusive; i++) {
+          if (batchOp.retCodes[i] == OperationStatusCode.NOT_RUN) {
+            batchOp.retCodes[i] = OperationStatusCode.FAILURE;
+          }
+        }
+      }
+      batchOp.nextIndexToProcess = lastIndexExclusive;
+    }
+  }
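+
+  // Note on doMiniBatchPut above: every WAL-enabled edit in the mini-batch is
+  // bundled into a single WALEdit and appended once, so the log append cost is
+  // paid per mini-batch rather than per Put; puts flagged BAD_FAMILY during
+  // preprocessing are excluded from both the WAL and the memstore.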
+
+  //TODO, Think that gets/puts and deletes should be refactored a bit so that
+  //the getting of the lock happens before, so that you would just pass it into
+  //the methods. So in the case of checkAndMutate you could just do lockRow,
+  //get, put, unlockRow or something
+  /**
+   *
+   * @param row
+   * @param family
+   * @param qualifier
+   * @param expectedValue
+   * @param lockId
+   * @param writeToWAL
+   * @throws IOException
+   * @return true if the new put was executed, false otherwise
+   */
+  public boolean checkAndMutate(byte [] row, byte [] family, byte [] qualifier,
+      byte [] expectedValue, Writable w, Integer lockId, boolean writeToWAL)
+  throws IOException{
+    checkReadOnly();
+    //TODO, add check for value length or maybe even better move this to the
+    //client if this becomes a global setting
+    checkResources();
+    boolean isPut = w instanceof Put;
+    if (!isPut && !(w instanceof Delete))
+      throw new IOException("Action must be Put or Delete");
+
+    startRegionOperation();
+    try {
+      RowLock lock = isPut ? ((Put)w).getRowLock() : ((Delete)w).getRowLock();
+      Get get = new Get(row, lock);
+      checkFamily(family);
+      get.addColumn(family, qualifier);
+
+      // Lock row
+      Integer lid = getLock(lockId, get.getRow(), true);
+      List<KeyValue> result = new ArrayList<KeyValue>();
+      try {
+        result = get(get);
+
+        boolean matches = false;
+        if (result.size() == 0 &&
+            (expectedValue == null || expectedValue.length == 0)) {
+          matches = true;
+        } else if (result.size() == 1) {
+          //Compare the expected value with the actual value
+          byte [] actualValue = result.get(0).getValue();
+          matches = Bytes.equals(expectedValue, actualValue);
+        }
+        //If matches put the new put or delete the new delete
+        if (matches) {
+          // All edits for the given row (across all column families) must happen atomically.
+          if (isPut) {
+            put(((Put)w).getFamilyMap(), writeToWAL);
+          } else {
+            Delete d = (Delete)w;
+            prepareDelete(d);
+            delete(d.getFamilyMap(), writeToWAL);
+          }
+          return true;
+        }
+        return false;
+      } finally {
+        if(lockId == null) releaseRowLock(lid);
+      }
+    } finally {
+      closeRegionOperation();
+    }
+  }
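+
+  // Illustrative check-and-put sketch against the method above.  Assumes an
+  // online HRegion named 'region' and byte[] row/family/qualifier values
+  // (all placeholders).
+  //
+  //   Put p = new Put(row);
+  //   p.add(family, qualifier, Bytes.toBytes("new-value"));
+  //   boolean applied = region.checkAndMutate(row, family, qualifier,
+  //       Bytes.toBytes("expected-value"), p, null, true);
+  //   // 'applied' is false when the stored value did not match the expectation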
+
+
+  /**
+   * Replaces any KV timestamps set to {@link HConstants#LATEST_TIMESTAMP}
+   * with the provided current timestamp.
+   */
+  private void updateKVTimestamps(
+      final Iterable<List<KeyValue>> keyLists, final byte[] now) {
+    for (List<KeyValue> keys: keyLists) {
+      if (keys == null) continue;
+      for (KeyValue key : keys) {
+        key.updateLatestStamp(now);
+      }
+    }
+  }
+
+  /*
+   * Check whether we have the resources to accept an update.
+   *
+   * Here we synchronize on HRegion, a broad-scoped lock.  That is appropriate
+   * given that we are deciding in here whether this region is able to take on
+   * writes.  At the time of writing, this is the only method with a
+   * synchronize block, apart from the synchronize on 'this' inside
+   * internalFlushCache used to send the notify.
+   */
+  private void checkResources() {
+
+    // If catalog region, do not impose resource constraints or block updates.
+    if (this.getRegionInfo().isMetaRegion()) return;
+
+    boolean blocked = false;
+    while (this.memstoreSize.get() > this.blockingMemStoreSize) {
+      requestFlush();
+      if (!blocked) {
+        LOG.info("Blocking updates for '" + Thread.currentThread().getName() +
+          "' on region " + Bytes.toStringBinary(getRegionName()) +
+          ": memstore size " +
+          StringUtils.humanReadableInt(this.memstoreSize.get()) +
+          " is >= than blocking " +
+          StringUtils.humanReadableInt(this.blockingMemStoreSize) + " size");
+      }
+      blocked = true;
+      synchronized(this) {
+        try {
+          wait(threadWakeFrequency);
+        } catch (InterruptedException e) {
+          // continue;
+        }
+      }
+    }
+    if (blocked) {
+      LOG.info("Unblocking updates for region " + this + " '"
+          + Thread.currentThread().getName() + "'");
+    }
+  }
+
+  /**
+   * @throws IOException Throws exception if region is in read-only mode.
+   */
+  protected void checkReadOnly() throws IOException {
+    if (this.writestate.isReadOnly()) {
+      throw new IOException("region is read only");
+    }
+  }
+
+  /**
+   * Add updates first to the hlog and then add values to memstore.
+   * Warning: Assumption is caller has lock on passed in row.
+   * @param family the column family to which the edits belong
+   * @param edits Cell updates by column
+   * @throws IOException
+   */
+  private void put(final byte [] family, final List<KeyValue> edits)
+  throws IOException {
+    Map<byte[], List<KeyValue>> familyMap = new HashMap<byte[], List<KeyValue>>();
+    familyMap.put(family, edits);
+    this.put(familyMap, true);
+  }
+
+  /**
+   * Add updates first to the hlog (if writeToWal) and then add values to memstore.
+   * Warning: Assumption is caller has lock on passed in row.
+   * @param familyMap map of family to edits for the given family.
+   * @param writeToWAL if true, then we should write to the log
+   * @throws IOException
+   */
+  private void put(final Map<byte [], List<KeyValue>> familyMap,
+      boolean writeToWAL) throws IOException {
+    long now = EnvironmentEdgeManager.currentTimeMillis();
+    byte[] byteNow = Bytes.toBytes(now);
+    boolean flush = false;
+    this.updatesLock.readLock().lock();
+    try {
+      checkFamilies(familyMap.keySet());
+      updateKVTimestamps(familyMap.values(), byteNow);
+      // write/sync to WAL should happen before we touch memstore.
+      //
+      // If order is reversed, i.e. we write to memstore first, and
+      // for some reason fail to write/sync to commit log, the memstore
+      // will contain uncommitted transactions.
+      if (writeToWAL) {
+        WALEdit walEdit = new WALEdit();
+        addFamilyMapToWALEdit(familyMap, walEdit);
+        this.log.append(regionInfo, regionInfo.getTableDesc().getName(),
+           walEdit, now);
+      }
+
+      long addedSize = applyFamilyMapToMemstore(familyMap);
+      flush = isFlushSize(memstoreSize.addAndGet(addedSize));
+    } finally {
+      this.updatesLock.readLock().unlock();
+    }
+    if (flush) {
+      // Request a cache flush.  Do it outside update lock.
+      requestFlush();
+    }
+  }
+
+  /**
+   * Atomically apply the given map of family->edits to the memstore.
+   * This handles the consistency control on its own, but the caller
+   * should already have locked updatesLock.readLock(). This also does
+   * <b>not</b> check the families for validity.
+   *
+   * @return the additional memory usage of the memstore caused by the
+   * new entries.
+   */
+  private long applyFamilyMapToMemstore(Map<byte[], List<KeyValue>> familyMap) {
+    ReadWriteConsistencyControl.WriteEntry w = null;
+    long size = 0;
+    try {
+      w = rwcc.beginMemstoreInsert();
+
+      for (Map.Entry<byte[], List<KeyValue>> e : familyMap.entrySet()) {
+        byte[] family = e.getKey();
+        List<KeyValue> edits = e.getValue();
+
+        Store store = getStore(family);
+        for (KeyValue kv: edits) {
+          kv.setMemstoreTS(w.getWriteNumber());
+          size += store.add(kv);
+        }
+      }
+    } finally {
+      rwcc.completeMemstoreInsert(w);
+    }
+    return size;
+  }
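+
+  // Note on the method above: tagging each KeyValue with the RWCC write number
+  // keeps the new edits invisible to concurrent readers (whose read point is
+  // older) until completeMemstoreInsert() advances the read point, which is
+  // what makes a multi-family row mutation appear atomically to scanners.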
+
+  /**
+   * Check the collection of families for validity.
+   * @throws NoSuchColumnFamilyException if a family does not exist.
+   */
+  private void checkFamilies(Collection<byte[]> families)
+  throws NoSuchColumnFamilyException {
+    for (byte[] family : families) {
+      checkFamily(family);
+    }
+  }
+
+  /**
+   * Append the given map of family->edits to a WALEdit data structure.
+   * This does not write to the HLog itself.
+   * @param familyMap map of family->edits
+   * @param walEdit the destination entry to append into
+   */
+  private void addFamilyMapToWALEdit(Map<byte[], List<KeyValue>> familyMap,
+      WALEdit walEdit) {
+    for (List<KeyValue> edits : familyMap.values()) {
+      for (KeyValue kv : edits) {
+        walEdit.add(kv);
+      }
+    }
+  }
+
+  private void requestFlush() {
+    if (this.flushRequester == null) {
+      return;
+    }
+    synchronized (writestate) {
+      if (this.writestate.isFlushRequested()) {
+        return;
+      }
+      writestate.flushRequested = true;
+    }
+    // Make request outside of synchronize block; HBASE-818.
+    this.flushRequester.requestFlush(this);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Flush requested on " + this);
+    }
+  }
+
+  /*
+   * @param size
+   * @return True if size is over the flush threshold
+   */
+  private boolean isFlushSize(final long size) {
+    return size > this.memstoreFlushSize;
+  }
+
+  /**
+   * Read the edit logs put under this region by the WAL log splitting process.  Put
+   * the recovered edits back up into this region.
+   *
+   * <p>We can ignore any log message that has a sequence ID that's equal to or
+   * lower than minSeqId.  (Because we know such log messages are already
+   * reflected in the HFiles.)
+   *
+   * <p>While this is running we are putting pressure on memory yet we are
+   * outside of our usual accounting because we are not yet an onlined region
+   * (this stuff is being run as part of Region initialization).  This means
+   * that if we're up against global memory limits, we'll not be flagged to flush
+   * because we are not online. We can't be flushed by the usual mechanisms anyway;
+   * we're not yet online so our relative sequenceids are not yet aligned with
+   * HLog sequenceids -- not till we come up online, post processing of split
+   * edits.
+   *
+   * <p>But to help relieve memory pressure, we at least manage our own heap
+   * size, flushing if we are in excess of per-region limits.  When flushing,
+   * though, we have to be careful to avoid using the regionserver/hlog
+   * sequenceid.  It runs on a different timeline from what is going on here in
+   * this region context, so if we crashed while replaying these edits but in
+   * the midst had flushed using the regionserver log's sequenceid (which is in
+   * excess of what this region and its split editlogs are at), then we could
+   * miss edits the next time we go to recover.  So we have to flush inline,
+   * using seqids that make sense only in this single-region context -- until
+   * we come online.
+   *
+   * @param regiondir
+   * @param minSeqId Any edit found in split editlogs needs to be in excess of
+   * this minSeqId to be applied, else it is skipped.
+   * @param reporter
+   * @return the sequence id of the last edit added to this region out of the
+   * recovered edits log or <code>minSeqId</code> if nothing added from editlogs.
+   * @throws UnsupportedEncodingException
+   * @throws IOException
+   */
+  protected long replayRecoveredEditsIfAny(final Path regiondir,
+      final long minSeqId, final Progressable reporter)
+  throws UnsupportedEncodingException, IOException {
+    long seqid = minSeqId;
+    NavigableSet<Path> files = HLog.getSplitEditFilesSorted(this.fs, regiondir);
+    if (files == null || files.isEmpty()) return seqid;
+    for (Path edits: files) {
+      if (edits == null || !this.fs.exists(edits)) {
+        LOG.warn("Null or non-existent edits file: " + edits);
+        continue;
+      }
+      if (isZeroLengthThenDelete(this.fs, edits)) continue;
+      try {
+        seqid = replayRecoveredEdits(edits, seqid, reporter);
+      } catch (IOException e) {
+        boolean skipErrors = conf.getBoolean("hbase.skip.errors", false);
+        if (skipErrors) {
+          Path p = HLog.moveAsideBadEditsFile(fs, edits);
+          LOG.error("hbase.skip.errors=true so continuing. Renamed " + edits +
+            " as " + p, e);
+        } else {
+          throw e;
+        }
+      }
+    }
+    if (seqid > minSeqId) {
+      // Then we added some edits to memory. Flush and cleanup split edit files.
+      internalFlushcache(null, seqid);
+    }
+    // Now delete the content of recovered edits.  We're done w/ them.
+    for (Path file: files) {
+      if (!this.fs.delete(file, false)) {
+        LOG.error("Failed delete of " + file);
+      } else {
+        LOG.debug("Deleted recovered.edits file=" + file);
+      }
+    }
+    return seqid;
+  }
+
+  /*
+   * @param edits File of recovered edits.
+   * @param minSeqId Minimum sequenceid found in a store file.  Edits in log
+   * must be larger than this to be replayed.
+   * @param reporter
+   * @return the sequence id of the last edit added to this region out of the
+   * recovered edits log or <code>minSeqId</code> if nothing added from editlogs.
+   * @throws IOException
+   */
+  private long replayRecoveredEdits(final Path edits,
+      final long minSeqId, final Progressable reporter)
+    throws IOException {
+    LOG.info("Replaying edits from " + edits + "; minSequenceid=" + minSeqId);
+    HLog.Reader reader = HLog.getReader(this.fs, edits, conf);
+    try {
+    long currentEditSeqId = minSeqId;
+    long firstSeqIdInLog = -1;
+    long skippedEdits = 0;
+    long editsCount = 0;
+    HLog.Entry entry;
+    Store store = null;
+
+    try {
+      // How many edits to apply before we send a progress report.
+      int interval = this.conf.getInt("hbase.hstore.report.interval.edits", 2000);
+      while ((entry = reader.next()) != null) {
+        HLogKey key = entry.getKey();
+        WALEdit val = entry.getEdit();
+        if (firstSeqIdInLog == -1) {
+          firstSeqIdInLog = key.getLogSeqNum();
+        }
+        // Now, figure if we should skip this edit.
+        if (key.getLogSeqNum() <= currentEditSeqId) {
+          skippedEdits++;
+          continue;
+        }
+        currentEditSeqId = key.getLogSeqNum();
+        boolean flush = false;
+        for (KeyValue kv: val.getKeyValues()) {
+          // Check this edit is for me. Also, guard against writing the special
+          // METACOLUMN info such as HBASE::CACHEFLUSH entries
+          if (kv.matchingFamily(HLog.METAFAMILY) ||
+              !Bytes.equals(key.getEncodedRegionName(), this.regionInfo.getEncodedNameAsBytes())) {
+            skippedEdits++;
+            continue;
+          }
+          // Figure which store the edit is meant for.
+          if (store == null || !kv.matchingFamily(store.getFamily().getName())) {
+            store = this.stores.get(kv.getFamily());
+          }
+          if (store == null) {
+            // This should never happen.  Perhaps schema was changed between
+            // crash and redeploy?
+            LOG.warn("No family for " + kv);
+            skippedEdits++;
+            continue;
+          }
+          // Once we are over the limit, restoreEdit will keep returning true to
+          // flush -- but don't flush until we've played all the kvs that make up
+          // the WALEdit.
+          flush = restoreEdit(store, kv);
+          editsCount++;
+        }
+        if (flush) internalFlushcache(null, currentEditSeqId);
+
+        // Every 'interval' edits, tell the reporter we're making progress.
+        // Have seen 60k edits taking 3minutes to complete.
+        if (reporter != null && (editsCount % interval) == 0) {
+          reporter.progress();
+        }
+      }
+    } catch (EOFException eof) {
+      Path p = HLog.moveAsideBadEditsFile(fs, edits);
+      LOG.warn("Encountered EOF. Most likely due to Master failure during " +
+          "log spliting, so we have this data in another edit.  " +
+          "Continuing, but renaming " + edits + " as " + p, eof);
+    } catch (IOException ioe) {
+      // If the IOE resulted from bad file format,
+      // then this problem is idempotent and retrying won't help
+      if (ioe.getCause() instanceof ParseException) {
+        Path p = HLog.moveAsideBadEditsFile(fs, edits);
+        LOG.warn("File corruption encountered!  " +
+            "Continuing, but renaming " + edits + " as " + p, ioe);
+      } else {
+        // other IO errors may be transient (bad network connection,
+        // checksum exception on one datanode, etc).  throw & retry
+        throw ioe;
+      }
+    }
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Applied " + editsCount + ", skipped " + skippedEdits +
+          ", firstSequenceidInLog=" + firstSeqIdInLog +
+          ", maxSequenceidInLog=" + currentEditSeqId);
+    }
+    return currentEditSeqId;
+    } finally {
+      reader.close();
+    }
+  }
+
+  /**
+   * Used by tests
+   * @param s Store to add the edit to.
+   * @param kv KeyValue to add.
+   * @return True if we should flush.
+   */
+  protected boolean restoreEdit(final Store s, final KeyValue kv) {
+    return isFlushSize(this.memstoreSize.addAndGet(s.add(kv)));
+  }
+
+  /*
+   * @param fs
+   * @param p File to check.
+   * @return True if file was zero-length (and if so, we'll delete it in here).
+   * @throws IOException
+   */
+  private static boolean isZeroLengthThenDelete(final FileSystem fs, final Path p)
+  throws IOException {
+    FileStatus stat = fs.getFileStatus(p);
+    if (stat.getLen() > 0) return false;
+    LOG.warn("File " + p + " is zero-length, deleting.");
+    fs.delete(p, false);
+    return true;
+  }
+
+  protected Store instantiateHStore(Path tableDir, HColumnDescriptor c)
+  throws IOException {
+    return new Store(tableDir, this, c, this.fs, this.conf);
+  }
+
+  /**
+   * Return HStore instance.
+   * Use with caution.  Exposed for use of fixup utilities.
+   * @param column Name of column family hosted by this region.
+   * @return Store that goes with the family on passed <code>column</code>.
+   * TODO: Make this lookup faster.
+   */
+  public Store getStore(final byte [] column) {
+    return this.stores.get(column);
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Support code
+  //////////////////////////////////////////////////////////////////////////////
+
+  /** Make sure this is a valid row for the HRegion */
+  private void checkRow(final byte [] row) throws IOException {
+    if(!rowIsInRange(regionInfo, row)) {
+      throw new WrongRegionException("Requested row out of range for " +
+          "HRegion " + this + ", startKey='" +
+          Bytes.toStringBinary(regionInfo.getStartKey()) + "', getEndKey()='" +
+          Bytes.toStringBinary(regionInfo.getEndKey()) + "', row='" +
+          Bytes.toStringBinary(row) + "'");
+    }
+  }
+
+  /**
+   * Obtain a lock on the given row.  Blocks until success.
+   *
+   * I know it's strange to have two mappings:
+   * <pre>
+   *   ROWS  ==> LOCKS
+   * </pre>
+   * as well as
+   * <pre>
+   *   LOCKS ==> ROWS
+   * </pre>
+   *
+   * But it acts as a guard on the client; a miswritten client just can't
+   * submit the name of a row and start writing to it; it must know the correct
+   * lockid, which matches the lock list in memory.
+   *
+   * <p>It would be more memory-efficient to assume a correctly-written client,
+   * which maybe we'll do in the future.
+   *
+   * @param row Name of row to lock.
+   * @throws IOException
+   * @return The id of the held lock.
+   */
+  public Integer obtainRowLock(final byte [] row) throws IOException {
+    startRegionOperation();
+    try {
+      return internalObtainRowLock(row, true);
+    } finally {
+      closeRegionOperation();
+    }
+  }
+
+  /**
+   * Tries to obtain a row lock on the given row, but does not block if the
+   * row lock is not available. If the lock is not available, returns false.
+   * Otherwise behaves the same as the above method.
+   * @see HRegion#obtainRowLock(byte[])
+   */
+  public Integer tryObtainRowLock(final byte[] row) throws IOException {
+    startRegionOperation();
+    try {
+      return internalObtainRowLock(row, false);
+    } finally {
+      closeRegionOperation();
+    }
+  }
+
+  /**
+   * Obtains or tries to obtain the given row lock.
+   * @param waitForLock if true, will block until the lock is available.
+   *        Otherwise, just tries to obtain the lock and returns
+   *        null if unavailable.
+   */
+  private Integer internalObtainRowLock(final byte[] row, boolean waitForLock)
+  throws IOException {
+    checkRow(row);
+    startRegionOperation();
+    try {
+      synchronized (lockedRows) {
+        while (lockedRows.contains(row)) {
+          if (!waitForLock) {
+            return null;
+          }
+          try {
+            lockedRows.wait();
+          } catch (InterruptedException ie) {
+            // Empty
+          }
+        }
+        // generate a new lockid. Attempt to insert the new [lockid, row].
+        // if this lockid already exists in the map then revert and retry
+        // We could have first done a lockIds.get, and if it does not exist only
+        // then do a lockIds.put, but the hope is that the lockIds.put will
+        // mostly return null the first time itself because there won't be
+        // too many lockId collisions.
+        byte [] prev = null;
+        Integer lockId = null;
+        do {
+          lockId = new Integer(lockIdGenerator++);
+          prev = lockIds.put(lockId, row);
+          if (prev != null) {
+            lockIds.put(lockId, prev);    // revert old value
+            lockIdGenerator = rand.nextInt(); // generate new start point
+          }
+        } while (prev != null);
+
+        lockedRows.add(row);
+        lockedRows.notifyAll();
+        return lockId;
+      }
+    } finally {
+      closeRegionOperation();
+    }
+  }
+
+  /**
+   * Used by unit tests.
+   * @param lockid
+   * @return Row that goes with <code>lockid</code>
+   */
+  byte [] getRowFromLock(final Integer lockid) {
+    synchronized (lockedRows) {
+      return lockIds.get(lockid);
+    }
+  }
+
+  /**
+   * Release the row lock!
+   * @param lockid  The lock ID to release.
+   */
+  void releaseRowLock(final Integer lockid) {
+    synchronized (lockedRows) {
+      byte[] row = lockIds.remove(lockid);
+      lockedRows.remove(row);
+      lockedRows.notifyAll();
+    }
+  }
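+
+  // Illustrative sketch of the expected lock discipline for in-package callers
+  // of obtainRowLock()/releaseRowLock().  Assumes an online HRegion named
+  // 'region' and a byte[] 'row' (placeholders).
+  //
+  //   Integer lockid = region.obtainRowLock(row);   // blocks until acquired
+  //   try {
+  //     region.put(somePut, lockid);                // reuse the held lock
+  //   } finally {
+  //     region.releaseRowLock(lockid);              // always release
+  //   }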
+
+  /**
+   * See if row is currently locked.
+   * @param lockid
+   * @return boolean
+   */
+  boolean isRowLocked(final Integer lockid) {
+    synchronized (lockedRows) {
+      if (lockIds.get(lockid) != null) {
+        return true;
+      }
+      return false;
+    }
+  }
+
+  /**
+   * Returns existing row lock if found, otherwise
+   * obtains a new row lock and returns it.
+   * @param lockid lock id requested by the user, or null if the user does not already hold a lock
+   * @param row the row to lock
+   * @param waitForLock if true, will block until the lock is available, otherwise will
+   * simply return null if it could not acquire the lock.
+   * @return lockid or null if waitForLock is false and the lock was unavailable.
+   */
+  private Integer getLock(Integer lockid, byte [] row, boolean waitForLock)
+  throws IOException {
+    Integer lid = null;
+    if (lockid == null) {
+      lid = internalObtainRowLock(row, waitForLock);
+    } else {
+      if (!isRowLocked(lockid)) {
+        throw new IOException("Invalid row lock");
+      }
+      lid = lockid;
+    }
+    return lid;
+  }
+
+  public void bulkLoadHFile(String hfilePath, byte[] familyName)
+  throws IOException {
+    startRegionOperation();
+    try {
+      Store store = getStore(familyName);
+      if (store == null) {
+        throw new DoNotRetryIOException(
+            "No such column family " + Bytes.toStringBinary(familyName));
+      }
+      store.bulkLoadHFile(hfilePath);
+    } finally {
+      closeRegionOperation();
+    }
+  }
+
+
+  @Override
+  public boolean equals(Object o) {
+    if (!(o instanceof HRegion)) {
+      return false;
+    }
+    return this.hashCode() == ((HRegion)o).hashCode();
+  }
+
+  @Override
+  public int hashCode() {
+    return Bytes.hashCode(this.regionInfo.getRegionName());
+  }
+
+  @Override
+  public String toString() {
+    return this.regionInfo.getRegionNameAsString();
+  }
+
+  /** @return Path of the directory this region lives under (usually the table directory) */
+  public Path getTableDir() {
+    return this.tableDir;
+  }
+
+  /**
+   * RegionScanner is an iterator through a bunch of rows in an HRegion.
+   * <p>
+   * It is used to combine scanners from multiple Stores (aka column families).
+   */
+  class RegionScanner implements InternalScanner {
+    // Package local for testability
+    KeyValueHeap storeHeap = null;
+    private final byte [] stopRow;
+    private Filter filter;
+    private List<KeyValue> results = new ArrayList<KeyValue>();
+    private int batch;
+    private int isScan;
+    private boolean filterClosed = false;
+    private long readPt;
+
+    public HRegionInfo getRegionName() {
+      return regionInfo;
+    }
+    RegionScanner(Scan scan, List<KeyValueScanner> additionalScanners) throws IOException {
+      //DebugPrint.println("HRegionScanner.<init>");
+      this.filter = scan.getFilter();
+      this.batch = scan.getBatch();
+      if (Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW)) {
+        this.stopRow = null;
+      } else {
+        this.stopRow = scan.getStopRow();
+      }
+      // If we are doing a get, we want [startRow,endRow]; normally a scan
+      // is [startRow,endRow), and if startRow=endRow we would get nothing.
+      this.isScan = scan.isGetScan() ? -1 : 0;
+
+      this.readPt = ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+
+      List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
+      if (additionalScanners != null) {
+        scanners.addAll(additionalScanners);
+      }
+
+      for (Map.Entry<byte[], NavigableSet<byte[]>> entry :
+          scan.getFamilyMap().entrySet()) {
+        Store store = stores.get(entry.getKey());
+        scanners.add(store.getScanner(scan, entry.getValue()));
+      }
+      this.storeHeap = new KeyValueHeap(scanners, comparator);
+    }
+
+    RegionScanner(Scan scan) throws IOException {
+      this(scan, null);
+    }
+
+    /**
+     * Reset the filter, if any, so it is ready for the next row.
+     */
+    protected void resetFilters() {
+      if (filter != null) {
+        filter.reset();
+      }
+    }
+
+    public synchronized boolean next(List<KeyValue> outResults, int limit)
+        throws IOException {
+      if (this.filterClosed) {
+        throw new UnknownScannerException("Scanner was closed (timed out?) " +
+            "after we renewed it. Could be caused by a very slow scanner " +
+            "or a lengthy garbage collection");
+      }
+      startRegionOperation();
+      try {
+
+        // This could be a new thread from the last time we called next().
+        ReadWriteConsistencyControl.setThreadReadPoint(this.readPt);
+
+        results.clear();
+        boolean returnResult = nextInternal(limit);
+
+        outResults.addAll(results);
+        resetFilters();
+        if (isFilterDone()) {
+          return false;
+        }
+        return returnResult;
+      } finally {
+        closeRegionOperation();
+      }
+    }
+
+    public synchronized boolean next(List<KeyValue> outResults)
+        throws IOException {
+      // apply the batching limit by default
+      return next(outResults, batch);
+    }
+
+    /*
+     * @return True if a filter rules that the scanner is done.
+     */
+    synchronized boolean isFilterDone() {
+      return this.filter != null && this.filter.filterAllRemaining();
+    }
+
+    private boolean nextInternal(int limit) throws IOException {
+      while (true) {
+        byte [] currentRow = peekRow();
+        if (isStopRow(currentRow)) {
+          if (filter != null && filter.hasFilterRow()) {
+            filter.filterRow(results);
+          }
+          if (filter != null && filter.filterRow()) {
+            results.clear();
+          }
+
+          return false;
+        } else if (filterRowKey(currentRow)) {
+          nextRow(currentRow);
+        } else {
+          byte [] nextRow;
+          do {
+            this.storeHeap.next(results, limit - results.size());
+            if (limit > 0 && results.size() == limit) {
+              if (this.filter != null && filter.hasFilterRow()) {
+                throw new IncompatibleFilterException(
+                    "Filter with filterRow(List<KeyValue>) incompatible with scan with limit!");
+              }
+              // Yes, we expect more rows, but we are also limited in how many we can return.
+              return true;
+            }
+          } while (Bytes.equals(currentRow, nextRow = peekRow()));
+
+          final boolean stopRow = isStopRow(nextRow);
+
+          // now that we have an entire row, let's run it through the filters:
+
+          // first filter with the filterRow(List)
+          if (filter != null && filter.hasFilterRow()) {
+            filter.filterRow(results);
+          }
+
+          if (results.isEmpty() || filterRow()) {
+            // this seems like a redundant step - we already consumed the row,
+            // so there are no leftovers.
+            // the reasons for calling this method are:
+            // 1. reset the filters.
+            // 2. provide a hook to fast forward the row (used by subclasses)
+            nextRow(currentRow);
+
+            // This row was totally filtered out; if this is NOT the last row,
+            // we should continue on.
+
+            if (!stopRow) continue;
+          }
+          return !stopRow;
+        }
+      }
+    }
+
+    private boolean filterRow() {
+      return filter != null
+          && filter.filterRow();
+    }
+    private boolean filterRowKey(byte[] row) {
+      return filter != null
+          && filter.filterRowKey(row, 0, row.length);
+    }
+
+    protected void nextRow(byte [] currentRow) throws IOException {
+      while (Bytes.equals(currentRow, peekRow())) {
+        this.storeHeap.next(MOCKED_LIST);
+      }
+      results.clear();
+      resetFilters();
+    }
+
+    private byte[] peekRow() {
+      KeyValue kv = this.storeHeap.peek();
+      return kv == null ? null : kv.getRow();
+    }
+
+    private boolean isStopRow(byte [] currentRow) {
+      return currentRow == null ||
+          (stopRow != null &&
+          comparator.compareRows(stopRow, 0, stopRow.length,
+              currentRow, 0, currentRow.length) <= isScan);
+    }
+
+    public synchronized void close() {
+      if (storeHeap != null) {
+        storeHeap.close();
+        storeHeap = null;
+      }
+      this.filterClosed = true;
+    }
+  }
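+
+  // Illustrative sketch of iterating a region with the scanner above, via
+  // HRegion's getScanner(Scan) (used elsewhere in this class).  'region' is a
+  // placeholder for an online HRegion.
+  //
+  //   InternalScanner scanner = region.getScanner(new Scan());
+  //   try {
+  //     List<KeyValue> kvs = new ArrayList<KeyValue>();
+  //     boolean more;
+  //     do {
+  //       kvs.clear();
+  //       more = scanner.next(kvs);   // one row (or batch) per call
+  //       // process kvs ...
+  //     } while (more);
+  //   } finally {
+  //     scanner.close();
+  //   }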
+
+  // Utility methods
+  /**
+   * A utility method to create new instances of HRegion based on the
+   * {@link HConstants#REGION_IMPL} configuration property.
+   * @param tableDir qualified path of directory where region should be located,
+   * usually the table directory.
+   * @param log The HLog is the outbound log for any updates to the HRegion
+   * (There's a single HLog for all the HRegions on a single HRegionServer.)
+   * The log file is a logfile from the previous execution that's
+   * custom-computed for this HRegion. The HRegionServer computes and sorts the
+   * appropriate log info for this HRegion. If there is a previous log file
+   * (implying that the HRegion has been written-to before), then read it from
+   * the supplied path.
+   * @param fs is the filesystem.
+   * @param conf is global configuration settings.
+   * @param regionInfo HRegionInfo that describes the region.
+   * @param flushListener an object that implements FlushRequester, used to
+   * request cache flushes.  Can be null.
+   * @return the new instance
+   */
+  public static HRegion newHRegion(Path tableDir, HLog log, FileSystem fs, Configuration conf,
+                                   HRegionInfo regionInfo, FlushRequester flushListener) {
+    try {
+      @SuppressWarnings("unchecked")
+      Class<? extends HRegion> regionClass =
+          (Class<? extends HRegion>) conf.getClass(HConstants.REGION_IMPL, HRegion.class);
+
+      Constructor<? extends HRegion> c =
+          regionClass.getConstructor(Path.class, HLog.class, FileSystem.class,
+              Configuration.class, HRegionInfo.class, FlushRequester.class);
+
+      return c.newInstance(tableDir, log, fs, conf, regionInfo, flushListener);
+    } catch (Throwable e) {
+      // todo: what should I throw here?
+      throw new IllegalStateException("Could not instantiate a region instance.", e);
+    }
+  }
+
+  /**
+   * Convenience method creating new HRegions. Used by createTable and by the
+   * bootstrap code in the HMaster constructor.
+   * Note, this method creates an {@link HLog} for the created region. It
+   * needs to be closed explicitly.  Use {@link HRegion#getLog()} to get
+   * access.
+   * @param info Info for region to create.
+   * @param rootDir Root directory for HBase instance
+   * @param conf
+   * @return new HRegion
+   *
+   * @throws IOException
+   */
+  public static HRegion createHRegion(final HRegionInfo info, final Path rootDir,
+    final Configuration conf)
+  throws IOException {
+    Path tableDir =
+      HTableDescriptor.getTableDir(rootDir, info.getTableDesc().getName());
+    Path regionDir = HRegion.getRegionDir(tableDir, info.getEncodedName());
+    FileSystem fs = FileSystem.get(conf);
+    fs.mkdirs(regionDir);
+    HRegion region = HRegion.newHRegion(tableDir,
+      new HLog(fs, new Path(regionDir, HConstants.HREGION_LOGDIR_NAME),
+          new Path(regionDir, HConstants.HREGION_OLDLOGDIR_NAME), conf),
+      fs, conf, info, null);
+    region.initialize();
+    return region;
+  }
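+
+  // Illustrative sketch of bootstrapping a brand-new region with the helper
+  // above; per the javadoc, the HLog it creates must be closed by the caller.
+  // 'desc', 'startKey', 'endKey', 'rootDir' and 'conf' are placeholders.
+  //
+  //   HRegionInfo info = new HRegionInfo(desc, startKey, endKey);
+  //   HRegion r = HRegion.createHRegion(info, rootDir, conf);
+  //   try {
+  //     // use r ...
+  //   } finally {
+  //     r.close();
+  //     r.getLog().close();
+  //   }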
+
+  /**
+   * Open a Region.
+   * @param info Info for region to be opened.
+   * @param wal HLog for region to use. This method will call
+   * HLog#setSequenceNumber(long) passing the result of the call to
+   * HRegion#getMinSequenceId() to ensure the log id is properly kept
+   * up.  HRegionStore does this every time it opens a new region.
+   * @param conf
+   * @return new HRegion
+   *
+   * @throws IOException
+   */
+  public static HRegion openHRegion(final HRegionInfo info, final HLog wal,
+      final Configuration conf)
+  throws IOException {
+    return openHRegion(info, wal, conf, null, null);
+  }
+
+  /**
+   * Open a Region.
+   * @param info Info for region to be opened.
+   * @param wal HLog for region to use. This method will call
+   * HLog#setSequenceNumber(long) passing the result of the call to
+   * HRegion#getMinSequenceId() to ensure the log id is properly kept
+   * up.  HRegionStore does this every time it opens a new region.
+   * @param conf
+   * @param flusher An interface we can request flushes against.
+   * @param reporter An interface we can report progress against.
+   * @return new HRegion
+   *
+   * @throws IOException
+   */
+  public static HRegion openHRegion(final HRegionInfo info, final HLog wal,
+    final Configuration conf, final FlushRequester flusher,
+    final Progressable reporter)
+  throws IOException {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Opening region: " + info);
+    }
+    if (info == null) {
+      throw new NullPointerException("Passed region info is null");
+    }
+    Path dir = HTableDescriptor.getTableDir(FSUtils.getRootDir(conf),
+      info.getTableDesc().getName());
+    HRegion r = HRegion.newHRegion(dir, wal, FileSystem.get(conf), conf, info,
+      flusher);
+    return r.openHRegion(reporter);
+  }
+
+  /**
+   * Open HRegion.
+   * Calls initialize and sets sequenceid.
+   * @param reporter
+   * @return Returns <code>this</code>
+   * @throws IOException
+   */
+  protected HRegion openHRegion(final Progressable reporter)
+  throws IOException {
+    checkCompressionCodecs();
+
+    long seqid = initialize(reporter);
+    if (this.log != null) {
+      this.log.setSequenceNumber(seqid);
+    }
+    return this;
+  }
+
+  private void checkCompressionCodecs() throws IOException {
+    for (HColumnDescriptor fam: regionInfo.getTableDesc().getColumnFamilies()) {
+      CompressionTest.testCompression(fam.getCompression());
+      CompressionTest.testCompression(fam.getCompactionCompression());
+    }
+  }
+
+  /**
+   * Inserts a new region's meta information into the passed
+   * <code>meta</code> region. Used by the HMaster bootstrap code to add a
+   * new table to the ROOT table.
+   *
+   * @param meta META HRegion to be updated
+   * @param r HRegion to add to <code>meta</code>
+   *
+   * @throws IOException
+   */
+  public static void addRegionToMETA(HRegion meta, HRegion r)
+  throws IOException {
+    meta.checkResources();
+    // The row key is the region name
+    byte[] row = r.getRegionName();
+    Integer lid = meta.obtainRowLock(row);
+    try {
+      final List<KeyValue> edits = new ArrayList<KeyValue>(1);
+      edits.add(new KeyValue(row, HConstants.CATALOG_FAMILY,
+          HConstants.REGIONINFO_QUALIFIER,
+          EnvironmentEdgeManager.currentTimeMillis(),
+          Writables.getBytes(r.getRegionInfo())));
+      meta.put(HConstants.CATALOG_FAMILY, edits);
+    } finally {
+      meta.releaseRowLock(lid);
+    }
+  }
+
+  /**
+   * Deletes all the files for a HRegion
+   *
+   * @param fs the file system object
+   * @param rootdir qualified path of HBase root directory
+   * @param info HRegionInfo for region to be deleted
+   * @throws IOException
+   */
+  public static void deleteRegion(FileSystem fs, Path rootdir, HRegionInfo info)
+  throws IOException {
+    deleteRegion(fs, HRegion.getRegionDir(rootdir, info));
+  }
+
+  private static void deleteRegion(FileSystem fs, Path regiondir)
+  throws IOException {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("DELETING region " + regiondir.toString());
+    }
+    if (!fs.delete(regiondir, true)) {
+      LOG.warn("Failed delete of " + regiondir);
+    }
+  }
+
+  /**
+   * Computes the Path of the HRegion
+   *
+   * @param rootdir qualified path of HBase root directory
+   * @param info HRegionInfo for the region
+   * @return qualified path of region directory
+   */
+  public static Path getRegionDir(final Path rootdir, final HRegionInfo info) {
+    return new Path(
+      HTableDescriptor.getTableDir(rootdir, info.getTableDesc().getName()),
+                                   info.getEncodedName());
+  }
+
+  /**
+   * Determines if the specified row is within the row range specified by the
+   * specified HRegionInfo
+   *
+   * @param info HRegionInfo that specifies the row range
+   * @param row row to be checked
+   * @return true if the row is within the range specified by the HRegionInfo
+   */
+  public static boolean rowIsInRange(HRegionInfo info, final byte [] row) {
+    return ((info.getStartKey().length == 0) ||
+        (Bytes.compareTo(info.getStartKey(), row) <= 0)) &&
+        ((info.getEndKey().length == 0) ||
+            (Bytes.compareTo(info.getEndKey(), row) > 0));
+  }
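+
+  // Example of the range semantics above: for a region with startKey "b" and
+  // endKey "d", rows "b" and "c" are in range (start key inclusive) while "d"
+  // is not (end key exclusive); an empty start or end key means the region is
+  // unbounded on that side.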
+
+  /**
+   * Make the directories for a specific column family
+   *
+   * @param fs the file system
+   * @param tabledir base directory where region will live (usually the table dir)
+   * @param hri
+   * @param colFamily the column family
+   * @throws IOException
+   */
+  public static void makeColumnFamilyDirs(FileSystem fs, Path tabledir,
+    final HRegionInfo hri, byte [] colFamily)
+  throws IOException {
+    Path dir = Store.getStoreHomedir(tabledir, hri.getEncodedName(), colFamily);
+    if (!fs.mkdirs(dir)) {
+      LOG.warn("Failed to create " + dir);
+    }
+  }
+
+  /**
+   * Merge two HRegions.  The regions must be adjacent and must not overlap.
+   *
+   * @param srcA
+   * @param srcB
+   * @return new merged HRegion
+   * @throws IOException
+   */
+  public static HRegion mergeAdjacent(final HRegion srcA, final HRegion srcB)
+  throws IOException {
+    HRegion a = srcA;
+    HRegion b = srcB;
+
+    // Make sure that srcA comes first; important for key-ordering during
+    // write of the merged file.
+    if (srcA.getStartKey() == null) {
+      if (srcB.getStartKey() == null) {
+        throw new IOException("Cannot merge two regions with null start key");
+      }
+      // A's start key is null but B's isn't. Assume A comes before B
+    } else if ((srcB.getStartKey() == null) ||
+      (Bytes.compareTo(srcA.getStartKey(), srcB.getStartKey()) > 0)) {
+      a = srcB;
+      b = srcA;
+    }
+
+    if (!(Bytes.compareTo(a.getEndKey(), b.getStartKey()) == 0)) {
+      throw new IOException("Cannot merge non-adjacent regions");
+    }
+    return merge(a, b);
+  }
+
+  /**
+   * Merge two regions whether they are adjacent or not.
+   *
+   * @param a region a
+   * @param b region b
+   * @return new merged region
+   * @throws IOException
+   */
+  public static HRegion merge(HRegion a, HRegion b) throws IOException {
+    if (!a.getRegionInfo().getTableDesc().getNameAsString().equals(
+        b.getRegionInfo().getTableDesc().getNameAsString())) {
+      throw new IOException("Regions do not belong to the same table");
+    }
+
+    FileSystem fs = a.getFilesystem();
+
+    // Make sure each region's cache is empty
+
+    a.flushcache();
+    b.flushcache();
+
+    // Compact each region so we only have one store file per family
+
+    a.compactStores(true);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Files for region: " + a);
+      listPaths(fs, a.getRegionDir());
+    }
+    b.compactStores(true);
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Files for region: " + b);
+      listPaths(fs, b.getRegionDir());
+    }
+
+    Configuration conf = a.getConf();
+    HTableDescriptor tabledesc = a.getTableDesc();
+    HLog log = a.getLog();
+    Path tableDir = a.getTableDir();
+    // Presume both are of same region type -- i.e. both user or catalog
+    // table regions.  This way we can use the comparator.
+    final byte[] startKey =
+      (a.comparator.matchingRows(a.getStartKey(), 0, a.getStartKey().length,
+           HConstants.EMPTY_BYTE_ARRAY, 0, HConstants.EMPTY_BYTE_ARRAY.length)
+       || b.comparator.matchingRows(b.getStartKey(), 0,
+              b.getStartKey().length, HConstants.EMPTY_BYTE_ARRAY, 0,
+              HConstants.EMPTY_BYTE_ARRAY.length))
+      ? HConstants.EMPTY_BYTE_ARRAY
+      : (a.comparator.compareRows(a.getStartKey(), 0, a.getStartKey().length,
+             b.getStartKey(), 0, b.getStartKey().length) <= 0
+         ? a.getStartKey()
+         : b.getStartKey());
+    final byte[] endKey =
+      (a.comparator.matchingRows(a.getEndKey(), 0, a.getEndKey().length,
+           HConstants.EMPTY_BYTE_ARRAY, 0, HConstants.EMPTY_BYTE_ARRAY.length)
+       || a.comparator.matchingRows(b.getEndKey(), 0, b.getEndKey().length,
+              HConstants.EMPTY_BYTE_ARRAY, 0,
+              HConstants.EMPTY_BYTE_ARRAY.length))
+      ? HConstants.EMPTY_BYTE_ARRAY
+      : (a.comparator.compareRows(a.getEndKey(), 0, a.getEndKey().length,
+             b.getEndKey(), 0, b.getEndKey().length) <= 0
+         ? b.getEndKey()
+         : a.getEndKey());
+
+    HRegionInfo newRegionInfo = new HRegionInfo(tabledesc, startKey, endKey);
+    LOG.info("Creating new region " + newRegionInfo.toString());
+    String encodedName = newRegionInfo.getEncodedName();
+    Path newRegionDir = HRegion.getRegionDir(a.getTableDir(), encodedName);
+    if(fs.exists(newRegionDir)) {
+      throw new IOException("Cannot merge; target file collision at " +
+          newRegionDir);
+    }
+    fs.mkdirs(newRegionDir);
+
+    LOG.info("starting merge of regions: " + a + " and " + b +
+      " into new region " + newRegionInfo.toString() +
+        " with start key <" + Bytes.toString(startKey) + "> and end key <" +
+        Bytes.toString(endKey) + ">");
+
+    // Move HStoreFiles under new region directory
+    Map<byte [], List<StoreFile>> byFamily =
+      new TreeMap<byte [], List<StoreFile>>(Bytes.BYTES_COMPARATOR);
+    byFamily = filesByFamily(byFamily, a.close());
+    byFamily = filesByFamily(byFamily, b.close());
+    for (Map.Entry<byte [], List<StoreFile>> es : byFamily.entrySet()) {
+      byte [] colFamily = es.getKey();
+      makeColumnFamilyDirs(fs, tableDir, newRegionInfo, colFamily);
+      // Because we compacted the source regions we should have no more than two
+      // HStoreFiles per family and there will be no reference store files
+      List<StoreFile> srcFiles = es.getValue();
+      if (srcFiles.size() == 2) {
+        long seqA = srcFiles.get(0).getMaxSequenceId();
+        long seqB = srcFiles.get(1).getMaxSequenceId();
+        if (seqA == seqB) {
+          // Can't have same sequenceid since on open of a store, this is what
+          // distinguishes the files (see how the store's map of files is
+          // keyed by sequenceid).
+          throw new IOException("Files have same sequenceid: " + seqA);
+        }
+      }
+      for (StoreFile hsf: srcFiles) {
+        StoreFile.rename(fs, hsf.getPath(),
+          StoreFile.getUniqueFile(fs, Store.getStoreHomedir(tableDir,
+            newRegionInfo.getEncodedName(), colFamily)));
+      }
+    }
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Files for new region");
+      listPaths(fs, newRegionDir);
+    }
+    HRegion dstRegion = HRegion.newHRegion(tableDir, log, fs, conf, newRegionInfo, null);
+    dstRegion.initialize();
+    dstRegion.compactStores();
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Files for new region");
+      listPaths(fs, dstRegion.getRegionDir());
+    }
+    deleteRegion(fs, a.getRegionDir());
+    deleteRegion(fs, b.getRegionDir());
+
+    LOG.info("merge completed. New region is " + dstRegion);
+
+    return dstRegion;
+  }
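+
+  // Illustrative sketch: merging two open regions of the same table with the
+  // helpers above (mergeAdjacent enforces adjacency; merge does not).  Both
+  // source regions are flushed, compacted and closed as part of the merge.
+  //
+  //   HRegion merged = HRegion.mergeAdjacent(regionA, regionB);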
+
+  /*
+   * Fills a map with a vector of store files keyed by column family.
+   * @param byFamily Map to fill.
+   * @param storeFiles Store files to process.
+   * @return Returns <code>byFamily</code>
+   */
+  private static Map<byte [], List<StoreFile>> filesByFamily(
+      Map<byte [], List<StoreFile>> byFamily, List<StoreFile> storeFiles) {
+    for (StoreFile src: storeFiles) {
+      byte [] family = src.getFamily();
+      List<StoreFile> v = byFamily.get(family);
+      if (v == null) {
+        v = new ArrayList<StoreFile>();
+        byFamily.put(family, v);
+      }
+      v.add(src);
+    }
+    return byFamily;
+  }
+
+  /**
+   * @return True if the region needs a major compaction.
+   * @throws IOException
+   */
+  boolean isMajorCompaction() throws IOException {
+    for (Store store: this.stores.values()) {
+      if (store.isMajorCompaction()) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  /*
+   * List the files under the specified directory
+   *
+   * @param fs
+   * @param dir
+   * @throws IOException
+   */
+  private static void listPaths(FileSystem fs, Path dir) throws IOException {
+    if (LOG.isDebugEnabled()) {
+      FileStatus[] stats = fs.listStatus(dir);
+      if (stats == null || stats.length == 0) {
+        return;
+      }
+      for (int i = 0; i < stats.length; i++) {
+        String path = stats[i].getPath().toString();
+        if (stats[i].isDir()) {
+          LOG.debug("d " + path);
+          listPaths(fs, stats[i].getPath());
+        } else {
+          LOG.debug("f " + path + " size=" + stats[i].getLen());
+        }
+      }
+    }
+  }
+
+
+  //
+  // HBASE-880
+  //
+  /**
+   * @param get get object
+   * @param lockid existing lock id, or null for no previous lock
+   * @return result
+   * @throws IOException read exceptions
+   */
+  public Result get(final Get get, final Integer lockid) throws IOException {
+    // Verify families are all valid
+    if (get.hasFamilies()) {
+      for (byte [] family: get.familySet()) {
+        checkFamily(family);
+      }
+    } else { // Adding all families to scanner
+      for (byte[] family: regionInfo.getTableDesc().getFamiliesKeys()) {
+        get.addFamily(family);
+      }
+    }
+    List<KeyValue> result = get(get);
+
+    return new Result(result);
+  }
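+
+  // Illustrative read sketch against the method above.  Assumes an online
+  // HRegion named 'region' and an existing column 'cf:qual' (placeholders).
+  //
+  //   Get g = new Get(Bytes.toBytes("row1"));
+  //   g.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"));
+  //   Result r = region.get(g, null);   // null lockid: no pre-held row lock
+  //   byte[] value = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("qual"));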
+
+  /**
+   * An optimized version of {@link #get(Get)} that checks MemStore first for
+   * the specified query.
+   * <p>
+   * This is intended for use by increment operations where we have the
+   * guarantee that versions are never inserted out-of-order so if a value
+   * exists in MemStore it is the latest value.
+   * <p>
+   * It only makes sense to use this method without a TimeRange and maxVersions
+   * equal to 1.
+   * @param get
+   * @return result
+   * @throws IOException
+   */
+  private List<KeyValue> getLastIncrement(final Get get) throws IOException {
+    InternalScan iscan = new InternalScan(get);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+
+    // memstore scan
+    iscan.checkOnlyMemStore();
+    InternalScanner scanner = null;
+    try {
+      scanner = getScanner(iscan);
+      scanner.next(results);
+    } finally {
+      if (scanner != null)
+        scanner.close();
+    }
+
+    // count how many columns we're looking for
+    int expected = 0;
+    Map<byte[], NavigableSet<byte[]>> familyMap = get.getFamilyMap();
+    for (NavigableSet<byte[]> qfs : familyMap.values()) {
+      expected += qfs.size();
+    }
+
+    // found everything we were looking for, done
+    if (results.size() == expected) {
+      return results;
+    }
+
+    // still have more columns to find
+    if (results != null && !results.isEmpty()) {
+      // subtract what was found in memstore
+      for (KeyValue kv : results) {
+        byte [] family = kv.getFamily();
+        NavigableSet<byte[]> qfs = familyMap.get(family);
+        qfs.remove(kv.getQualifier());
+        if (qfs.isEmpty()) familyMap.remove(family);
+        expected--;
+      }
+      // make a new get for just what is left
+      Get newGet = new Get(get.getRow());
+      for (Map.Entry<byte[], NavigableSet<byte[]>> f : familyMap.entrySet()) {
+        byte [] family = f.getKey();
+        for (byte [] qualifier : f.getValue()) {
+          newGet.addColumn(family, qualifier);
+        }
+      }
+      newGet.setTimeRange(get.getTimeRange().getMin(),
+          get.getTimeRange().getMax());
+      iscan = new InternalScan(newGet);
+    }
+
+    // check store files for what is left
+    List<KeyValue> fileResults = new ArrayList<KeyValue>();
+    iscan.checkOnlyStoreFiles();
+    scanner = null;
+    try {
+      scanner = getScanner(iscan);
+      scanner.next(fileResults);
+    } finally {
+      if (scanner != null)
+        scanner.close();
+    }
+
+    // combine and return
+    results.addAll(fileResults);
+    return results;
+  }
+
+  /*
+   * Do a get based on the get parameter.
+   */
+  private List<KeyValue> get(final Get get) throws IOException {
+    Scan scan = new Scan(get);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+
+    InternalScanner scanner = null;
+    try {
+      scanner = getScanner(scan);
+      scanner.next(results);
+    } finally {
+      if (scanner != null)
+        scanner.close();
+    }
+    return results;
+  }
+
+  /**
+   * Perform one or more increment operations on a row.
+   * <p>
+   * Increments are performed under a row lock, but reads do not take locks,
+   * so concurrent gets and scans may observe this operation partially complete.
+   * @param increment increments to apply, grouped by family and qualifier
+   * @param lockid existing row lock id, or null to acquire a new lock
+   * @param writeToWAL whether to append the increments to the WAL
+   * @return new keyvalues after increment
+   * @throws IOException
+   */
+  public Result increment(Increment increment, Integer lockid,
+      boolean writeToWAL)
+  throws IOException {
+    // TODO: Use RWCC to make this set of increments atomic to reads
+    byte [] row = increment.getRow();
+    checkRow(row);
+    TimeRange tr = increment.getTimeRange();
+    boolean flush = false;
+    WALEdit walEdits = null;
+    List<KeyValue> allKVs = new ArrayList<KeyValue>(increment.numColumns());
+    List<KeyValue> kvs = new ArrayList<KeyValue>(increment.numColumns());
+    long now = EnvironmentEdgeManager.currentTimeMillis();
+    long size = 0;
+
+    // Lock row
+    startRegionOperation();
+    try {
+      Integer lid = getLock(lockid, row, true);
+      try {
+        // Process each family
+        for (Map.Entry<byte [], NavigableMap<byte [], Long>> family :
+          increment.getFamilyMap().entrySet()) {
+
+          Store store = stores.get(family.getKey());
+
+          // Get previous values for all columns in this family
+          Get get = new Get(row);
+          for (Map.Entry<byte [], Long> column : family.getValue().entrySet()) {
+            get.addColumn(family.getKey(), column.getKey());
+          }
+          get.setTimeRange(tr.getMin(), tr.getMax());
+          List<KeyValue> results = getLastIncrement(get);
+
+          // Iterate the input columns and update existing values if they were
+          // found, otherwise add new column initialized to the increment amount
+          int idx = 0;
+          for (Map.Entry<byte [], Long> column : family.getValue().entrySet()) {
+            long amount = column.getValue();
+            if (idx < results.size() &&
+                results.get(idx).matchingQualifier(column.getKey())) {
+              amount += Bytes.toLong(results.get(idx).getValue());
+              idx++;
+            }
+
+            // Append new incremented KeyValue to list
+            KeyValue newKV = new KeyValue(row, family.getKey(), column.getKey(),
+                now, Bytes.toBytes(amount));
+            kvs.add(newKV);
+
+            // Append update to WAL
+            if (writeToWAL) {
+              if (walEdits == null) {
+                walEdits = new WALEdit();
+              }
+              walEdits.add(newKV);
+            }
+          }
+
+          // Write the KVs for this family into the store
+          size += store.upsert(kvs);
+          allKVs.addAll(kvs);
+          kvs.clear();
+        }
+
+        // Actually write to WAL now
+        if (writeToWAL) {
+          this.log.append(regionInfo, regionInfo.getTableDesc().getName(),
+            walEdits, now);
+        }
+
+        size = this.memstoreSize.addAndGet(size);
+        flush = isFlushSize(size);
+      } finally {
+        releaseRowLock(lid);
+      }
+    } finally {
+      closeRegionOperation();
+    }
+
+    if (flush) {
+      // Request a cache flush.  Do it outside update lock.
+      requestFlush();
+    }
+
+    return new Result(allKVs);
+  }
+
+  /**
+   * Increments the value of a single column under the row lock.
+   * @param row row to update
+   * @param family column family of the cell
+   * @param qualifier column qualifier of the cell
+   * @param amount amount to add to the current value
+   * @param writeToWAL whether to append the update to the WAL
+   * @return The new value.
+   * @throws IOException
+   */
+  public long incrementColumnValue(byte [] row, byte [] family,
+      byte [] qualifier, long amount, boolean writeToWAL)
+  throws IOException {
+    checkRow(row);
+    boolean flush = false;
+    // Lock row
+    long result = amount;
+    startRegionOperation();
+    try {
+      Integer lid = obtainRowLock(row);
+      try {
+        Store store = stores.get(family);
+
+        // Get the old value:
+        Get get = new Get(row);
+        get.addColumn(family, qualifier);
+
+        List<KeyValue> results = getLastIncrement(get);
+
+        if (!results.isEmpty()) {
+          KeyValue kv = results.get(0);
+          byte [] buffer = kv.getBuffer();
+          int valueOffset = kv.getValueOffset();
+          result += Bytes.toLong(buffer, valueOffset, Bytes.SIZEOF_LONG);
+        }
+
+        // build the KeyValue now:
+        KeyValue newKv = new KeyValue(row, family,
+            qualifier, EnvironmentEdgeManager.currentTimeMillis(),
+            Bytes.toBytes(result));
+
+        // now log it:
+        if (writeToWAL) {
+          long now = EnvironmentEdgeManager.currentTimeMillis();
+          WALEdit walEdit = new WALEdit();
+          walEdit.add(newKv);
+          this.log.append(regionInfo, regionInfo.getTableDesc().getName(),
+            walEdit, now);
+        }
+
+        // Now apply the increment to the store; the store sets the timestamp
+        // appropriately depending on whether a value already exists in the
+        // memstore.  updateColumnValue returns the resulting change in
+        // memstore size.
+        long size = store.updateColumnValue(row, family, qualifier, result);
+
+        size = this.memstoreSize.addAndGet(size);
+        flush = isFlushSize(size);
+      } finally {
+        releaseRowLock(lid);
+      }
+    } finally {
+      closeRegionOperation();
+    }
+
+    if (flush) {
+      // Request a cache flush.  Do it outside update lock.
+      requestFlush();
+    }
+
+    return result;
+  }
+
+
+  //
+  // New HBASE-880 Helpers
+  //
+
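+  /*
+   * Sanity check that the passed family exists in this region's table
+   * descriptor; throws NoSuchColumnFamilyException otherwise.
+   */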
+  private void checkFamily(final byte [] family)
+  throws NoSuchColumnFamilyException {
+    if(!regionInfo.getTableDesc().hasFamily(family)) {
+      throw new NoSuchColumnFamilyException("Column family " +
+          Bytes.toString(family) + " does not exist in region " + this
+            + " in table " + regionInfo.getTableDesc());
+    }
+  }
+
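+  // Rough heap accounting used by heapSize() below: FIXED_OVERHEAD covers this
+  // object's header, primitive fields and object references; DEEP_OVERHEAD adds
+  // the retained size of the nested bookkeeping structures (maps, locks,
+  // atomics) that every HRegion instance carries.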
+  public static final long FIXED_OVERHEAD = ClassSize.align(
+      (4 * Bytes.SIZEOF_LONG) + Bytes.SIZEOF_BOOLEAN +
+      (21 * ClassSize.REFERENCE) + ClassSize.OBJECT + Bytes.SIZEOF_INT);
+
+  public static final long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD +
+      (ClassSize.OBJECT * 2) + (2 * ClassSize.ATOMIC_BOOLEAN) +
+      ClassSize.ATOMIC_LONG + ClassSize.ATOMIC_INTEGER +
+
+      // Using TreeMap for TreeSet
+      ClassSize.TREEMAP +
+
+      // Using TreeMap for HashMap
+      ClassSize.TREEMAP +
+
+      ClassSize.CONCURRENT_SKIPLISTMAP + ClassSize.CONCURRENT_SKIPLISTMAP_ENTRY +
+      ClassSize.align(ClassSize.OBJECT +
+        (5 * Bytes.SIZEOF_BOOLEAN)) +
+        (3 * ClassSize.REENTRANT_LOCK));
+
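+  /**
+   * @return Approximate heap footprint of this region in bytes: the fixed
+   * per-region overhead plus the heap size reported by each Store.
+   */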
+  public long heapSize() {
+    long heapSize = DEEP_OVERHEAD;
+    for(Store store : this.stores.values()) {
+      heapSize += store.heapSize();
+    }
+    return heapSize;
+  }
+
+  /*
+   * This method calls System.exit.
+   * @param message Message to print out.  May be null.
+   */
+  private static void printUsageAndExit(final String message) {
+    if (message != null && message.length() > 0) System.out.println(message);
+    System.out.println("Usage: HRegion CATLALOG_TABLE_DIR [major_compact]");
+    System.out.println("Options:");
+    System.out.println(" major_compact  Pass this option to major compact " +
+      "passed region.");
+    System.out.println("Default outputs scan of passed region.");
+    System.exit(1);
+  }
+
+  /*
+   * Process a catalog table: either major compact it or dump its content.
+   * @param fs filesystem
+   * @param p path to the table directory
+   * @param log WAL to use
+   * @param c configuration
+   * @param majorCompact if true, major compact the region; otherwise scan and
+   *   log its content
+   * @throws IOException
+   */
+  private static void processTable(final FileSystem fs, final Path p,
+      final HLog log, final Configuration c,
+      final boolean majorCompact)
+  throws IOException {
+    HRegion region = null;
+    String rootStr = Bytes.toString(HConstants.ROOT_TABLE_NAME);
+    String metaStr = Bytes.toString(HConstants.META_TABLE_NAME);
+    // Currently expects tables have one region only.
+    if (p.getName().startsWith(rootStr)) {
+      region = HRegion.newHRegion(p, log, fs, c, HRegionInfo.ROOT_REGIONINFO, null);
+    } else if (p.getName().startsWith(metaStr)) {
+      region = HRegion.newHRegion(p, log, fs, c, HRegionInfo.FIRST_META_REGIONINFO,
+          null);
+    } else {
+      throw new IOException("Not a known catalog table: " + p.toString());
+    }
+    try {
+      region.initialize();
+      if (majorCompact) {
+        region.compactStores(true);
+      } else {
+        // Default behavior
+        Scan scan = new Scan();
+        // scan.addFamily(HConstants.CATALOG_FAMILY);
+        InternalScanner scanner = region.getScanner(scan);
+        try {
+          List<KeyValue> kvs = new ArrayList<KeyValue>();
+          boolean done = false;
+          do {
+            kvs.clear();
+            done = scanner.next(kvs);
+            if (kvs.size() > 0) LOG.info(kvs);
+          } while (done);
+        } finally {
+          scanner.close();
+        }
+        // System.out.println(region.getClosestRowBefore(Bytes.toBytes("GeneratedCSVContent2,E3652782193BC8D66A0BA1629D0FAAAB,9993372036854775807")));
+      }
+    } finally {
+      region.close();
+    }
+  }
+
+  /**
+   * For internal use in forcing splits ahead of file size limit.
+   * @param b new value for the split-request flag
+   * @return previous value of the flag
+   */
+  public boolean shouldSplit(boolean b) {
+    boolean old = this.splitRequest;
+    this.splitRequest = b;
+    return old;
+  }
+
+  /**
+   * Give the region a chance to prepare before it is split.
+   */
+  protected void prepareToSplit() {
+    // nothing
+  }
+
+  /**
+   * @return The priority that this region should have in the compaction queue
+   */
+  public int getCompactPriority() {
+    int count = Integer.MAX_VALUE;
+    for(Store store : stores.values()) {
+      count = Math.min(count, store.getCompactPriority());
+    }
+    return count;
+  }
+
+  /**
+   * Checks every store to see if one has too many
+   * store files
+   * @return true if any store has too many store files
+   */
+  public boolean hasTooManyStoreFiles() {
+    for(Store store : stores.values()) {
+      if(store.hasTooManyStoreFiles()) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  /**
+   * This method needs to be called before any public call that reads or
+   * modifies data. It has to be called just before a try block, and
+   * #closeRegionOperation must then be called in that try's finally block.
+   * Acquires a read lock and checks if the region is closing or closed.
+   * @throws NotServingRegionException when the region is closing or closed
+   */
+  private void startRegionOperation() throws NotServingRegionException {
+    if (this.closing.get()) {
+      throw new NotServingRegionException(regionInfo.getRegionNameAsString() +
+          " is closing");
+    }
+    lock.readLock().lock();
+    if (this.closed.get()) {
+      lock.readLock().unlock();
+      throw new NotServingRegionException(regionInfo.getRegionNameAsString() +
+          " is closed");
+    }
+  }
+
+  /**
+   * Releases the read lock taken by #startRegionOperation. This needs to be
+   * called in the finally block corresponding to the try block of
+   * #startRegionOperation.
+   */
+  private void closeRegionOperation(){
+    lock.readLock().unlock();
+  }
+
+  /**
+   * A mocked list implementation - discards all updates.
+   */
+  private static final List<KeyValue> MOCKED_LIST = new AbstractList<KeyValue>() {
+
+    @Override
+    public void add(int index, KeyValue element) {
+      // do nothing
+    }
+
+    @Override
+    public boolean addAll(int index, Collection<? extends KeyValue> c) {
+      return false; // this list is never changed as a result of an update
+    }
+
+    @Override
+    public KeyValue get(int index) {
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public int size() {
+      return 0;
+    }
+  };
+
+
+  /**
+   * Facility for dumping and compacting catalog tables.
+   * Only handles catalog tables since these are the only tables whose schema
+   * we know for sure.  For usage run:
+   * <pre>
+   *   ./bin/hbase org.apache.hadoop.hbase.regionserver.HRegion
+   * </pre>
+   * @param args
+   * @throws IOException
+   */
+  public static void main(String[] args) throws IOException {
+    if (args.length < 1) {
+      printUsageAndExit(null);
+    }
+    boolean majorCompact = false;
+    if (args.length > 1) {
+      if (!args[1].toLowerCase().startsWith("major")) {
+        printUsageAndExit("ERROR: Unrecognized option <" + args[1] + ">");
+      }
+      majorCompact = true;
+    }
+    final Path tableDir = new Path(args[0]);
+    final Configuration c = HBaseConfiguration.create();
+    final FileSystem fs = FileSystem.get(c);
+    final Path logdir = new Path(c.get("hbase.tmp.dir"),
+        "hlog" + tableDir.getName()
+        + EnvironmentEdgeManager.currentTimeMillis());
+    final Path oldLogDir = new Path(c.get("hbase.tmp.dir"),
+        HConstants.HREGION_OLDLOGDIR_NAME);
+    final HLog log = new HLog(fs, logdir, oldLogDir, c);
+    try {
+      processTable(fs, tableDir, log, c, majorCompact);
+     } finally {
+       log.close();
+       BlockCache bc = StoreFile.getBlockCache(c);
+       if (bc != null) bc.shutdown();
+     }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
new file mode 100644
index 0000000..d7147b5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -0,0 +1,2679 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.lang.Thread.UncaughtExceptionHandler;
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryUsage;
+import java.lang.reflect.Constructor;
+import java.net.BindException;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.LinkedBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import com.google.common.collect.Lists;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Chore;
+import org.apache.hadoop.hbase.ClockOutOfSyncException;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MasterAddressTracker;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.UnknownRowLockException;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.YouAreDeadException;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.catalog.RootLocationEditor;
+import org.apache.hadoop.hbase.client.Action;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.MultiAction;
+import org.apache.hadoop.hbase.client.MultiPut;
+import org.apache.hadoop.hbase.client.MultiPutResponse;
+import org.apache.hadoop.hbase.client.MultiResponse;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.executor.ExecutorService.ExecutorType;
+import org.apache.hadoop.hbase.io.hfile.LruBlockCache;
+import org.apache.hadoop.hbase.io.hfile.LruBlockCache.CacheStats;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCErrorHandler;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HBaseServer;
+import org.apache.hadoop.hbase.ipc.HMasterRegionInterface;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.ipc.ServerNotRunningException;
+import org.apache.hadoop.hbase.regionserver.Leases.LeaseStillHeldException;
+import org.apache.hadoop.hbase.regionserver.handler.CloseMetaHandler;
+import org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler;
+import org.apache.hadoop.hbase.regionserver.handler.CloseRootHandler;
+import org.apache.hadoop.hbase.regionserver.handler.OpenMetaHandler;
+import org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler;
+import org.apache.hadoop.hbase.regionserver.handler.OpenRootHandler;
+import org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.WALObserver;
+import org.apache.hadoop.hbase.replication.regionserver.Replication;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.CompressionTest;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.InfoServer;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Sleeper;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.ClusterStatusTracker;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.net.DNS;
+import org.apache.zookeeper.KeeperException;
+
+import com.google.common.base.Function;
+
+/**
+ * HRegionServer makes a set of HRegions available to clients. It checks in with
+ * the HMaster. There are many HRegionServers in a single HBase deployment.
+ */
+public class HRegionServer implements HRegionInterface, HBaseRPCErrorHandler,
+    Runnable, RegionServerServices, Server {
+  public static final Log LOG = LogFactory.getLog(HRegionServer.class);
+
+  // Set when a report to the master comes back with a message asking us to
+  // shut down. Also set by a call to stop when debugging or running unit tests
+  // of HRegionServer in isolation.
+  protected volatile boolean stopped = false;
+
+  // A state before we go into stopped state.  At this stage we're closing user
+  // space regions.
+  private boolean stopping = false;
+
+  // Go down hard. Used if file system becomes unavailable and also in
+  // debugging and unit tests.
+  protected volatile boolean abortRequested;
+
+  private volatile boolean killed = false;
+
+  // If false, the file system has become unavailable
+  protected volatile boolean fsOk;
+
+  protected HServerInfo serverInfo;
+  protected final Configuration conf;
+
+  private final HConnection connection;
+  protected final AtomicBoolean haveRootRegion = new AtomicBoolean(false);
+  private FileSystem fs;
+  private Path rootDir;
+  private final Random rand = new Random();
+
+  /**
+   * Map of regions currently being served by this region server. Key is the
+   * encoded region name.  All access should be synchronized.
+   */
+  protected final Map<String, HRegion> onlineRegions =
+    new HashMap<String, HRegion>();
+
+  protected final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
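+  // Messages queued here are piggybacked onto the next report to the master.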
+  private final LinkedBlockingQueue<HMsg> outboundMsgs = new LinkedBlockingQueue<HMsg>();
+
+  final int numRetries;
+  protected final int threadWakeFrequency;
+  private final int msgInterval;
+
+  protected final int numRegionsToReport;
+
+  private final long maxScannerResultSize;
+
+  // Remote HMaster
+  private HMasterRegionInterface hbaseMaster;
+
+  // Server to handle client requests. Default access so can be accessed by
+  // unit tests.
+  HBaseServer server;
+
+  // Leases
+  private Leases leases;
+
+  // Request counter
+  private volatile AtomicInteger requestCount = new AtomicInteger();
+
+  // Info server. Default access so can be used by unit tests. REGIONSERVER
+  // is name of the webapp and the attribute name used stuffing this instance
+  // into web context.
+  InfoServer infoServer;
+
+  /** region server process name */
+  public static final String REGIONSERVER = "regionserver";
+
+  /*
+   * Space is reserved in HRS constructor and then released when aborting to
+   * recover from an OOME. See HBASE-706. TODO: Make this percentage of the heap
+   * or a minimum.
+   */
+  private final LinkedList<byte[]> reservedSpace = new LinkedList<byte[]>();
+
+  private RegionServerMetrics metrics;
+
+  // Compactions
+  CompactSplitThread compactSplitThread;
+
+  // Cache flushing
+  MemStoreFlusher cacheFlusher;
+
+  /*
+   * Check for major compactions.
+   */
+  Chore majorCompactionChecker;
+
+  // HLog and HLog roller. log is protected rather than private to avoid
+  // eclipse warning when accessed by inner classes
+  protected volatile HLog hlog;
+  LogRoller hlogRoller;
+
+  // flag set after we're done setting up server threads (used for testing)
+  protected volatile boolean isOnline;
+
+  final Map<String, InternalScanner> scanners = new ConcurrentHashMap<String, InternalScanner>();
+
+  // zookeeper connection and watcher
+  private ZooKeeperWatcher zooKeeper;
+
+  // master address manager and watcher
+  private MasterAddressTracker masterAddressManager;
+
+  // catalog tracker
+  private CatalogTracker catalogTracker;
+
+  // Cluster Status Tracker
+  private ClusterStatusTracker clusterStatusTracker;
+
+  // A sleeper that sleeps for msgInterval.
+  private final Sleeper sleeper;
+
+  private final int rpcTimeout;
+
+  // The main region server thread.
+  @SuppressWarnings("unused")
+  private Thread regionServerThread;
+
+  // Instance of the hbase executor service.
+  private ExecutorService service;
+
+  // Replication services. If no replication, this handler will be null.
+  private Replication replicationHandler;
+
+  /**
+   * Starts an HRegionServer at the default location
+   *
+   * @param conf
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  public HRegionServer(Configuration conf) throws IOException, InterruptedException {
+    this.fsOk = true;
+    this.conf = conf;
+    this.connection = HConnectionManager.getConnection(conf);
+    this.isOnline = false;
+
+    // check to see if the codec list is available:
+    String [] codecs = conf.getStrings("hbase.regionserver.codecs",
+        (String[])null);
+    if (codecs != null) {
+      for (String codec : codecs) {
+        if (!CompressionTest.testCompression(codec)) {
+          throw new IOException("Compression codec " + codec +
+              " not supported, aborting RS construction");
+        }
+      }
+    }
+
+    // Config'ed params
+    this.numRetries = conf.getInt("hbase.client.retries.number", 10);
+    this.threadWakeFrequency = conf.getInt(HConstants.THREAD_WAKE_FREQUENCY,
+        10 * 1000);
+    this.msgInterval = conf.getInt("hbase.regionserver.msginterval", 3 * 1000);
+
+    sleeper = new Sleeper(this.msgInterval, this);
+
+    this.maxScannerResultSize = conf.getLong(
+        HConstants.HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE_KEY,
+        HConstants.DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE);
+
+    this.numRegionsToReport = conf.getInt(
+        "hbase.regionserver.numregionstoreport", 10);
+
+    this.rpcTimeout = conf.getInt(
+        HConstants.HBASE_RPC_TIMEOUT_KEY,
+        HConstants.DEFAULT_HBASE_RPC_TIMEOUT);
+
+    this.abortRequested = false;
+    this.stopped = false;
+
+    // Server to handle client requests
+    String machineName = DNS.getDefaultHost(conf.get(
+        "hbase.regionserver.dns.interface", "default"), conf.get(
+        "hbase.regionserver.dns.nameserver", "default"));
+    String addressStr = machineName + ":" +
+      conf.get(HConstants.REGIONSERVER_PORT,
+        Integer.toString(HConstants.DEFAULT_REGIONSERVER_PORT));
+    HServerAddress address = new HServerAddress(addressStr);
+    this.server = HBaseRPC.getServer(this,
+        new Class<?>[]{HRegionInterface.class, HBaseRPCErrorHandler.class,
+        OnlineRegions.class},
+        address.getBindAddress(),
+      address.getPort(), conf.getInt("hbase.regionserver.handler.count", 10),
+        conf.getInt("hbase.regionserver.metahandler.count", 10),
+        false, conf, QOS_THRESHOLD);
+    this.server.setErrorHandler(this);
+    this.server.setQosFunction(new QosFunction());
+
+    // HServerInfo can be amended by master.  See below in reportForDuty.
+    this.serverInfo = new HServerInfo(new HServerAddress(new InetSocketAddress(
+        address.getBindAddress(), this.server.getListenerAddress().getPort())),
+        System.currentTimeMillis(), this.conf.getInt(
+            "hbase.regionserver.info.port", 60030), machineName);
+    if (this.serverInfo.getServerAddress() == null) {
+      throw new NullPointerException("Server address cannot be null; "
+          + "hbase-958 debugging");
+    }
+  }
+
+  private static final int NORMAL_QOS = 0;
+  private static final int QOS_THRESHOLD = 10;  // the line between low and high qos
+  private static final int HIGH_QOS = 100;
+
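+  /**
+   * Maps an incoming RPC to a priority: scanner calls against catalog regions,
+   * a small fixed set of methods, and any call or MultiAction whose target is
+   * a catalog region get HIGH_QOS; everything else gets NORMAL_QOS.
+   */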
+  class QosFunction implements Function<Writable,Integer> {
+    public boolean isMetaRegion(byte[] regionName) {
+      HRegion region;
+      try {
+        region = getRegion(regionName);
+      } catch (NotServingRegionException ignored) {
+        return false;
+      }
+      return region.getRegionInfo().isMetaRegion();
+    }
+
+    @Override
+    public Integer apply(Writable from) {
+      if (!(from instanceof HBaseRPC.Invocation)) return NORMAL_QOS;
+
+      HBaseRPC.Invocation inv = (HBaseRPC.Invocation) from;
+      String methodName = inv.getMethodName();
+
+      // scanner methods...
+      if (methodName.equals("next") || methodName.equals("close")) {
+        // translate!
+        Long scannerId;
+        try {
+          scannerId = (Long) inv.getParameters()[0];
+        } catch (ClassCastException ignored) {
+          // LOG.debug("Low priority: " + from);
+          return NORMAL_QOS; // doh.
+        }
+        String scannerIdString = Long.toString(scannerId);
+        InternalScanner scanner = scanners.get(scannerIdString);
+        if (scanner instanceof HRegion.RegionScanner) {
+          HRegion.RegionScanner rs = (HRegion.RegionScanner) scanner;
+          HRegionInfo regionName = rs.getRegionName();
+          if (regionName.isMetaRegion()) {
+            // LOG.debug("High priority scanner request: " + scannerId);
+            return HIGH_QOS;
+          }
+        }
+      } else if (methodName.equals("getHServerInfo")
+          || methodName.equals("getRegionsAssignment")
+          || methodName.equals("unlockRow")
+          || methodName.equals("getProtocolVersion")
+          || methodName.equals("getClosestRowBefore")) {
+        // LOG.debug("High priority method: " + methodName);
+        return HIGH_QOS;
+      } else if (inv.getParameterClasses().length == 0) {
+       // Just let it through.  This is getOnlineRegions, etc.
+      } else if (inv.getParameterClasses()[0] == byte[].class) {
+        // first arg is byte array, so assume this is a regionname:
+        if (isMetaRegion((byte[]) inv.getParameters()[0])) {
+          // LOG.debug("High priority with method: " + methodName +
+          // " and region: "
+          // + Bytes.toString((byte[]) inv.getParameters()[0]));
+          return HIGH_QOS;
+        }
+      } else if (inv.getParameterClasses()[0] == MultiAction.class) {
+        MultiAction ma = (MultiAction) inv.getParameters()[0];
+        Set<byte[]> regions = ma.getRegions();
+        // If any single one of the actions touches a meta region, the whole
+        // multi gets pinged high priority. This is a dangerous hack because
+        // clients can get their multi action tagged high QOS just by tossing
+        // in a Get(.META.) when this regionserver hosts META/-ROOT-.
+        for (byte[] region : regions) {
+          if (isMetaRegion(region)) {
+            // LOG.debug("High priority multi with region: " +
+            // Bytes.toString(region));
+            return HIGH_QOS; // short circuit for the win.
+          }
+        }
+      }
+      // LOG.debug("Low priority: " + from.toString());
+      return NORMAL_QOS;
+    }
+  }
+
+  /**
+   * Creates all of the state that needs to be reconstructed in case we are
+   * doing a restart. This is shared between the constructor and restart(). Both
+   * call it.
+   *
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  private void initialize() throws IOException, InterruptedException {
+    try {
+      initializeZooKeeper();
+      initializeThreads();
+      int nbBlocks = conf.getInt("hbase.regionserver.nbreservationblocks", 4);
+      for (int i = 0; i < nbBlocks; i++) {
+        reservedSpace.add(new byte[HConstants.DEFAULT_SIZE_RESERVATION_BLOCK]);
+      }
+    } catch (Throwable t) {
+      // Call stop if error or process will stick around for ever since server
+      // puts up non-daemon threads.
+      LOG.error("Stopping HRS because failed initialize", t);
+      this.server.stop();
+    }
+  }
+
+  /**
+   * Bring up the connection to the zk ensemble, then wait until a master is
+   * available for this cluster, and after that wait until the cluster 'up'
+   * flag has been set (this is the order in which the master does things).
+   * Finally, put up a catalog tracker.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  private void initializeZooKeeper() throws IOException, InterruptedException {
+    // Open connection to zookeeper and set primary watcher
+    zooKeeper = new ZooKeeperWatcher(conf, REGIONSERVER + ":" +
+      serverInfo.getServerAddress().getPort(), this);
+
+    // Create the master address manager, register with zk, and start it.  Then
+    // block until a master is available.  No point in starting up if no master
+    // running.
+    this.masterAddressManager = new MasterAddressTracker(this.zooKeeper, this);
+    this.masterAddressManager.start();
+    blockAndCheckIfStopped(this.masterAddressManager);
+
+    // Wait on cluster being up.  Master will set this flag up in zookeeper
+    // when ready.
+    this.clusterStatusTracker = new ClusterStatusTracker(this.zooKeeper, this);
+    this.clusterStatusTracker.start();
+    blockAndCheckIfStopped(this.clusterStatusTracker);
+
+    // Create the catalog tracker and start it;
+    this.catalogTracker = new CatalogTracker(this.zooKeeper, this.connection,
+      this, this.conf.getInt("hbase.regionserver.catalog.timeout", Integer.MAX_VALUE));
+    catalogTracker.start();
+  }
+
+  /**
+   * Utility method to wait indefinitely for a znode to become available while
+   * checking whether the region server has been shut down
+   * @param tracker znode tracker to use
+   * @throws IOException any IO exception; also thrown if the RS is stopped
+   *   while waiting
+   * @throws InterruptedException
+   */
+  private void blockAndCheckIfStopped(ZooKeeperNodeTracker tracker)
+      throws IOException, InterruptedException {
+    while (tracker.blockUntilAvailable(this.msgInterval) == null) {
+      if (this.stopped) {
+        throw new IOException("Received the shutdown message while waiting.");
+      }
+    }
+  }
+
+  /**
+   * @return False if cluster shutdown in progress
+   */
+  private boolean isClusterUp() {
+    return this.clusterStatusTracker.isClusterUp();
+  }
+
+  private void initializeThreads() throws IOException {
+
+    // Cache flushing thread.
+    this.cacheFlusher = new MemStoreFlusher(conf, this);
+
+    // Compaction thread
+    this.compactSplitThread = new CompactSplitThread(this);
+
+    // Background thread to check for major compactions; needed if region
+    // has not gotten updates in a while. Make it run at a lesser frequency.
+    int multiplier = this.conf.getInt(HConstants.THREAD_WAKE_FREQUENCY
+        + ".multiplier", 1000);
+    this.majorCompactionChecker = new MajorCompactionChecker(this,
+        this.threadWakeFrequency * multiplier, this);
+
+    this.leases = new Leases((int) conf.getLong(
+        HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY,
+        HConstants.DEFAULT_HBASE_REGIONSERVER_LEASE_PERIOD),
+        this.threadWakeFrequency);
+  }
+
+  /**
+   * The HRegionServer sticks in this loop until closed. It repeatedly checks in
+   * with the HMaster, sending heartbeats & reports, and receiving HRegion
+   * load/unload instructions.
+   */
+  public void run() {
+
+    try {
+      // Initialize threads and wait for a master
+      initialize();
+    } catch (Exception e) {
+      abort("Fatal exception during initialization", e);
+    }
+
+    this.regionServerThread = Thread.currentThread();
+    try {
+      while (!this.stopped) {
+        if (tryReportForDuty()) break;
+      }
+      long lastMsg = 0;
+      List<HMsg> outboundMessages = new ArrayList<HMsg>();
+      // The main run loop.
+      for (int tries = 0; !this.stopped && isHealthy();) {
+        if (!isClusterUp()) {
+          if (isOnlineRegionsEmpty()) {
+            stop("Exiting; cluster shutdown set and not carrying any regions");
+          } else if (!this.stopping) {
+            this.stopping = true;
+            closeUserRegions(this.abortRequested);
+          } else if (this.stopping && LOG.isDebugEnabled()) {
+            LOG.debug("Waiting on " + getOnlineRegionsAsPrintableString());
+          }
+        }
+        long now = System.currentTimeMillis();
+        // Drop into the send loop if msgInterval has elapsed or if something
+        // to send. If we fail talking to the master, then we'll sleep below
+        // on poll of the outboundMsgs blockingqueue.
+        if ((now - lastMsg) >= msgInterval || !outboundMessages.isEmpty()) {
+          try {
+            doMetrics();
+            tryRegionServerReport(outboundMessages);
+            lastMsg = System.currentTimeMillis();
+            // Reset tries count if we had a successful transaction.
+            tries = 0;
+            if (this.stopped) continue;
+          } catch (Exception e) { // FindBugs REC_CATCH_EXCEPTION
+            // Two special exceptions could be printed out here,
+            // PleaseHoldException and YouAreDeadException
+            if (e instanceof IOException) {
+              e = RemoteExceptionHandler.checkIOException((IOException) e);
+            }
+            if (e instanceof YouAreDeadException) {
+              // This will be caught and handled as a fatal error below
+              throw e;
+            }
+            tries++;
+            if (tries > 0 && (tries % this.numRetries) == 0) {
+              // Check filesystem every so often.
+              checkFileSystem();
+            }
+            if (this.stopped) {
+              continue;
+            }
+            LOG.warn("Attempt=" + tries, e);
+            // No point retrying immediately; this is probably connection to
+            // master issue. Doing below will cause us to sleep.
+            lastMsg = System.currentTimeMillis();
+          }
+        }
+        now = System.currentTimeMillis();
+        HMsg msg = this.outboundMsgs.poll((msgInterval - (now - lastMsg)), TimeUnit.MILLISECONDS);
+        if (msg != null) outboundMessages.add(msg);
+      } // for
+    } catch (Throwable t) {
+      if (!checkOOME(t)) {
+        abort("Unhandled exception: " + t.getMessage(), t);
+      }
+    }
+    this.leases.closeAfterLeasesExpire();
+    this.server.stop();
+    if (this.infoServer != null) {
+      LOG.info("Stopping infoServer");
+      try {
+        this.infoServer.stop();
+      } catch (Exception e) {
+        e.printStackTrace();
+      }
+    }
+    // Send cache a shutdown.
+    LruBlockCache c = (LruBlockCache) StoreFile.getBlockCache(this.conf);
+    if (c != null) {
+      c.shutdown();
+    }
+
+    // Send interrupts to wake up threads if sleeping so they notice shutdown.
+    // TODO: Should we check they are alive? If OOME could have exited already
+    if (this.cacheFlusher != null) this.cacheFlusher.interruptIfNecessary();
+    if (this.compactSplitThread != null) this.compactSplitThread.interruptIfNecessary();
+    if (this.hlogRoller != null) this.hlogRoller.interruptIfNecessary();
+    if (this.majorCompactionChecker != null) this.majorCompactionChecker.interrupt();
+
+    if (this.killed) {
+      // Just skip out w/o closing regions.
+    } else if (abortRequested) {
+      if (this.fsOk) {
+        closeAllRegions(abortRequested); // Don't leave any open file handles
+        closeWAL(false);
+      }
+      LOG.info("aborting server at: " + this.serverInfo.getServerName());
+    } else {
+      closeAllRegions(abortRequested);
+      closeWAL(true);
+      closeAllScanners();
+      LOG.info("stopping server at: " + this.serverInfo.getServerName());
+    }
+    // Interrupt catalog tracker here in case any regions being opened out in
+    // handlers are stuck waiting on meta or root.
+    if (this.catalogTracker != null) this.catalogTracker.stop();
+    if (this.fsOk) waitOnAllRegionsToClose();
+
+    // Make sure the proxy is down.
+    if (this.hbaseMaster != null) {
+      HBaseRPC.stopProxy(this.hbaseMaster);
+      this.hbaseMaster = null;
+    }
+    this.leases.close();
+    HConnectionManager.deleteConnection(conf, true);
+    this.zooKeeper.close();
+    if (!killed) {
+      join();
+    }
+    LOG.info(Thread.currentThread().getName() + " exiting");
+  }
+
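+  /*
+   * @return Comma-separated encoded names of the regions currently online;
+   * used above when logging what the server is still waiting on at shutdown.
+   */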
+  String getOnlineRegionsAsPrintableString() {
+    StringBuilder sb = new StringBuilder();
+    synchronized (this.onlineRegions) {
+      for (HRegion r: this.onlineRegions.values()) {
+        if (sb.length() > 0) sb.append(", ");
+        sb.append(r.getRegionInfo().getEncodedName());
+      }
+    }
+    return sb.toString();
+  }
+
+  /**
+   * Wait on regions close.
+   */
+  private void waitOnAllRegionsToClose() {
+    // Wait till all regions are closed before going out.
+    int lastCount = -1;
+    while (!isOnlineRegionsEmpty()) {
+      int count = getNumberOfOnlineRegions();
+      // Only print a message if the count of regions has changed.
+      if (count != lastCount) {
+        lastCount = count;
+        LOG.info("Waiting on " + count + " regions to close");
+        // Only print out regions still closing if a small number else will
+        // swamp the log.
+        if (count < 10 && LOG.isDebugEnabled()) {
+          synchronized (this.onlineRegions) {
+            LOG.debug(this.onlineRegions);
+          }
+        }
+      }
+      Threads.sleep(1000);
+    }
+  }
+
+  List<HMsg> tryRegionServerReport(final List<HMsg> outboundMessages)
+  throws IOException {
+    this.serverInfo.setLoad(buildServerLoad());
+    this.requestCount.set(0);
+    addOutboundMsgs(outboundMessages);
+    HMsg [] msgs = null;
+    while (!this.stopped) {
+      try {
+        msgs = this.hbaseMaster.regionServerReport(this.serverInfo,
+          outboundMessages.toArray(HMsg.EMPTY_HMSG_ARRAY),
+          getMostLoadedRegions());
+        break;
+      } catch (IOException ioe) {
+        if (ioe instanceof RemoteException) {
+          ioe = ((RemoteException)ioe).unwrapRemoteException();
+        }
+        if (ioe instanceof YouAreDeadException) {
+          // This will be caught and handled as a fatal error in run()
+          throw ioe;
+        }
+        // Couldn't connect to the master, get location from zk and reconnect
+        // Method blocks until new master is found or we are stopped
+        getMaster();
+      }
+    }
+    updateOutboundMsgs(outboundMessages);
+    outboundMessages.clear();
+
+    for (int i = 0; !this.stopped && msgs != null && i < msgs.length; i++) {
+      LOG.info(msgs[i].toString());
+      // Intercept stop regionserver messages
+      if (msgs[i].getType().equals(HMsg.Type.STOP_REGIONSERVER)) {
+        stop("Received " + msgs[i]);
+        continue;
+      }
+      LOG.warn("NOT PROCESSING " + msgs[i] + " -- WHY IS MASTER SENDING IT TO US?");
+    }
+    return outboundMessages;
+  }
+
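+  /*
+   * Snapshot this server's current load: the request count since the last
+   * report, heap usage in MB, and a RegionLoad entry for every online region.
+   */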
+  private HServerLoad buildServerLoad() {
+    MemoryUsage memory = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
+    HServerLoad hsl = new HServerLoad(requestCount.get(),
+      (int)(memory.getUsed() / 1024 / 1024),
+      (int) (memory.getMax() / 1024 / 1024));
+    synchronized (this.onlineRegions) {
+      for (HRegion r : this.onlineRegions.values()) {
+        hsl.addRegionInfo(createRegionLoad(r));
+      }
+    }
+    return hsl;
+  }
+
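+  /*
+   * Close the WAL; when <code>delete</code> is true (the clean, non-abort
+   * shutdown path above) the log files are removed as well.
+   */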
+  private void closeWAL(final boolean delete) {
+    try {
+      if (this.hlog != null) {
+        if (delete) {
+          hlog.closeAndDelete();
+        } else {
+          hlog.close();
+        }
+      }
+    } catch (Throwable e) {
+      LOG.error("Close and delete failed", RemoteExceptionHandler.checkThrowable(e));
+    }
+  }
+
+  private void closeAllScanners() {
+    // Close any outstanding scanners. Means they'll get an UnknownScanner
+    // exception next time they come in.
+    for (Map.Entry<String, InternalScanner> e : this.scanners.entrySet()) {
+      try {
+        e.getValue().close();
+      } catch (IOException ioe) {
+        LOG.warn("Closing scanner " + e.getKey(), ioe);
+      }
+    }
+  }
+
+  /*
+   * Add to the passed <code>msgs</code> the messages queued up to pass to the
+   * master.
+   *
+   * @param msgs Current outbound message List; we'll add queued messages to it.
+   */
+  private void addOutboundMsgs(final List<HMsg> msgs) {
+    if (msgs.isEmpty()) {
+      this.outboundMsgs.drainTo(msgs);
+      return;
+    }
+    OUTER: for (HMsg m : this.outboundMsgs) {
+      for (HMsg mm : msgs) {
+        // Be careful don't add duplicates.
+        if (mm.equals(m)) {
+          continue OUTER;
+        }
+      }
+      msgs.add(m);
+    }
+  }
+
+  /*
+   * Remove from this.outboundMsgs those messages we sent the master.
+   *
+   * @param msgs Messages we sent the master.
+   */
+  private void updateOutboundMsgs(final List<HMsg> msgs) {
+    if (msgs.isEmpty()) {
+      return;
+    }
+    for (HMsg m : this.outboundMsgs) {
+      for (HMsg mm : msgs) {
+        if (mm.equals(m)) {
+          this.outboundMsgs.remove(m);
+          break;
+        }
+      }
+    }
+  }
+
+  /*
+   * Run init. Sets up hlog and starts up all server threads.
+   *
+   * @param c Extra configuration.
+   */
+  protected void handleReportForDutyResponse(final MapWritable c) throws IOException {
+    try {
+      for (Map.Entry<Writable, Writable> e : c.entrySet()) {
+
+        String key = e.getKey().toString();
+        // Use the address the master passed us
+        if (key.equals("hbase.regionserver.address")) {
+          HServerAddress hsa = (HServerAddress) e.getValue();
+          LOG.info("Master passed us address to use. Was="
+            + this.serverInfo.getServerAddress() + ", Now=" + hsa.toString());
+          this.serverInfo.setServerAddress(hsa);
+          continue;
+        }
+        String value = e.getValue().toString();
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Config from master: " + key + "=" + value);
+        }
+        this.conf.set(key, value);
+      }
+      // hack! Maps DFSClient => RegionServer for logs.  HDFS made this
+      // config param for task trackers, but we can piggyback off of it.
+      if (this.conf.get("mapred.task.id") == null) {
+        this.conf.set("mapred.task.id",
+            "hb_rs_" + this.serverInfo.getServerName() + "_" +
+            System.currentTimeMillis());
+      }
+
+      // Master sent us hbase.rootdir to use. Should be fully qualified
+      // path with file system specification included. Set 'fs.defaultFS'
+      // to match the filesystem on hbase.rootdir else underlying hadoop hdfs
+      // accessors will be going against wrong filesystem (unless all is set
+      // to defaults).
+      this.conf.set("fs.defaultFS", this.conf.get("hbase.rootdir"));
+      // Get fs instance used by this RS
+      this.fs = FileSystem.get(this.conf);
+      this.rootDir = new Path(this.conf.get(HConstants.HBASE_DIR));
+      this.hlog = setupWALAndReplication();
+      // Init in here rather than in constructor after thread name has been set
+      this.metrics = new RegionServerMetrics();
+      startServiceThreads();
+      LOG.info("Serving as " + this.serverInfo.getServerName() +
+        ", RPC listening on " + this.server.getListenerAddress() +
+        ", sessionid=0x" +
+        Long.toHexString(this.zooKeeper.getZooKeeper().getSessionId()));
+      isOnline = true;
+    } catch (Throwable e) {
+      this.isOnline = false;
+      stop("Failed initialization");
+      throw convertThrowableToIOE(cleanup(e, "Failed init"),
+          "Region server startup failed");
+    }
+  }
+
+  /*
+   * @param r Region to get RegionLoad for.
+   *
+   * @return RegionLoad instance.
+   */
+  private HServerLoad.RegionLoad createRegionLoad(final HRegion r) {
+    byte[] name = r.getRegionName();
+    int stores = 0;
+    int storefiles = 0;
+    int storefileSizeMB = 0;
+    int memstoreSizeMB = (int) (r.memstoreSize.get() / 1024 / 1024);
+    int storefileIndexSizeMB = 0;
+    synchronized (r.stores) {
+      stores += r.stores.size();
+      for (Store store : r.stores.values()) {
+        storefiles += store.getStorefilesCount();
+        storefileSizeMB += (int) (store.getStorefilesSize() / 1024 / 1024);
+        storefileIndexSizeMB += (int) (store.getStorefilesIndexSize() / 1024 / 1024);
+      }
+    }
+    return new HServerLoad.RegionLoad(name, stores, storefiles,
+        storefileSizeMB, memstoreSizeMB, storefileIndexSizeMB);
+  }
+
+  /**
+   * @param encodedRegionName encoded name of an online region
+   * @return An instance of RegionLoad.
+   */
+  public HServerLoad.RegionLoad createRegionLoad(final String encodedRegionName) {
+    HRegion r = null;
+    synchronized (this.onlineRegions) {
+      r = this.onlineRegions.get(encodedRegionName);
+    }
+    return createRegionLoad(r);
+  }
+
+  /*
+   * Cleanup after Throwable caught invoking method. Converts <code>t</code> to
+   * IOE if it isn't already.
+   *
+   * @param t Throwable
+   *
+   * @return Throwable converted to an IOE; methods can only let out IOEs.
+   */
+  private Throwable cleanup(final Throwable t) {
+    return cleanup(t, null);
+  }
+
+  /*
+   * Cleanup after Throwable caught invoking method. Converts <code>t</code> to
+   * IOE if it isn't already.
+   *
+   * @param t Throwable
+   *
+   * @param msg Message to log in error. Can be null.
+   *
+   * @return Throwable converted to an IOE; methods can only let out IOEs.
+   */
+  private Throwable cleanup(final Throwable t, final String msg) {
+    // Don't log as error if NSRE; NSRE is 'normal' operation.
+    if (t instanceof NotServingRegionException) {
+      LOG.debug("NotServingRegionException; " +  t.getMessage());
+      return t;
+    }
+    if (msg == null) {
+      LOG.error("", RemoteExceptionHandler.checkThrowable(t));
+    } else {
+      LOG.error(msg, RemoteExceptionHandler.checkThrowable(t));
+    }
+    if (!checkOOME(t)) {
+      checkFileSystem();
+    }
+    return t;
+  }
+
+  /*
+   * @param t
+   *
+   * @return Make <code>t</code> an IOE if it isn't already.
+   */
+  private IOException convertThrowableToIOE(final Throwable t) {
+    return convertThrowableToIOE(t, null);
+  }
+
+  /*
+   * @param t
+   *
+   * @param msg Message to put in new IOE if passed <code>t</code> is not an IOE
+   *
+   * @return Make <code>t</code> an IOE if it isn't already.
+   */
+  private IOException convertThrowableToIOE(final Throwable t, final String msg) {
+    return (t instanceof IOException ? (IOException) t : msg == null
+        || msg.length() == 0 ? new IOException(t) : new IOException(msg, t));
+  }
+
+  /*
+   * Check if an OOME and if so, call abort.
+   *
+   * @param e
+   *
+   * @return True if we OOME'd and are aborting.
+   */
+  public boolean checkOOME(final Throwable e) {
+    boolean stop = false;
+    if (e instanceof OutOfMemoryError
+        || (e.getCause() != null && e.getCause() instanceof OutOfMemoryError)
+        || (e.getMessage() != null && e.getMessage().contains(
+            "java.lang.OutOfMemoryError"))) {
+      abort("OutOfMemoryError, aborting", e);
+      stop = true;
+    }
+    return stop;
+  }
+
+  /**
+   * Checks to see if the file system is still accessible. If not, sets
+   * abortRequested and stopRequested
+   *
+   * @return false if file system is not available
+   */
+  protected boolean checkFileSystem() {
+    if (this.fsOk && this.fs != null) {
+      try {
+        FSUtils.checkFileSystemAvailable(this.fs);
+      } catch (IOException e) {
+        abort("File System not available", e);
+        this.fsOk = false;
+      }
+    }
+    return this.fsOk;
+  }
+
+  /*
+   * Inner class that periodically checks whether any region needs a major
+   * compaction.
+   */
+  private static class MajorCompactionChecker extends Chore {
+    private final HRegionServer instance;
+
+    MajorCompactionChecker(final HRegionServer h, final int sleepTime,
+        final Stoppable stopper) {
+      super("MajorCompactionChecker", sleepTime, h);
+      this.instance = h;
+      LOG.info("Runs every " + sleepTime + "ms");
+    }
+
+    @Override
+    protected void chore() {
+      synchronized (this.instance.onlineRegions) {
+        for (HRegion r : this.instance.onlineRegions.values()) {
+          try {
+            if (r != null && r.isMajorCompaction()) {
+              // Queue a compaction. Will recognize if major is needed.
+              this.instance.compactSplitThread.requestCompaction(r, getName()
+                + " requests major compaction");
+            }
+          } catch (IOException e) {
+            LOG.warn("Failed major compaction check on " + r, e);
+          }
+        }
+      }
+    }
+  }
+
+  /**
+   * Report the status of the server. A server is online once all the startup is
+   * completed (setting up filesystem, starting service threads, etc.). This
+   * method is designed mostly to be useful in tests.
+   *
+   * @return true if online, false if not.
+   */
+  public boolean isOnline() {
+    return isOnline;
+  }
+
+  /**
+   * Set up the WAL and replication if enabled.
+   * Replication setup is done in here because it needs to be hooked up to the WAL.
+   * @return A WAL instance.
+   * @throws IOException
+   */
+  private HLog setupWALAndReplication() throws IOException {
+    final Path oldLogDir = new Path(rootDir, HConstants.HREGION_OLDLOGDIR_NAME);
+    Path logdir = new Path(rootDir, HLog.getHLogDirectoryName(this.serverInfo));
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("logdir=" + logdir);
+    }
+    if (this.fs.exists(logdir)) {
+      throw new RegionServerRunningException("Region server already "
+          + "running at " + this.serverInfo.getServerName()
+          + " because logdir " + logdir.toString() + " exists");
+    }
+
+    // Instantiate replication manager if replication enabled.  Pass it the
+    // log directories.
+    try {
+      this.replicationHandler = Replication.isReplication(this.conf)?
+        new Replication(this, this.fs, logdir, oldLogDir): null;
+    } catch (KeeperException e) {
+      throw new IOException("Failed replication handler create", e);
+    }
+    return instantiateHLog(logdir, oldLogDir);
+  }
+
+  /**
+   * Called by {@link #setupWALAndReplication()} to create the WAL instance.
+   * @param logdir
+   * @param oldLogDir
+   * @return WAL instance.
+   * @throws IOException
+   */
+  protected HLog instantiateHLog(Path logdir, Path oldLogDir) throws IOException {
+    return new HLog(this.fs, logdir, oldLogDir, this.conf,
+      getWALActionListeners(), this.serverInfo.getServerAddress().toString());
+  }
+
+  /**
+   * Called by {@link #instantiateHLog(Path, Path)} when setting up the WAL
+   * instance.
+   * Add any {@link WALObserver}s you want inserted before WAL startup.
+   * @return List of WALActionsListener that will be passed in to
+   * {@link HLog} on construction.
+   */
+  protected List<WALObserver> getWALActionListeners() {
+    List<WALObserver> listeners = new ArrayList<WALObserver>();
+    // Log roller.
+    this.hlogRoller = new LogRoller(this, this);
+    listeners.add(this.hlogRoller);
+    if (this.replicationHandler != null) {
+      // Replication handler is an implementation of WALActionsListener.
+      listeners.add(this.replicationHandler);
+    }
+    return listeners;
+  }
+
+  protected LogRoller getLogRoller() {
+    return hlogRoller;
+  }
+
+  /*
+   * Update metrics, catching and logging any failure so a metrics problem
+   * never takes down the server.
+   */
+  protected void doMetrics() {
+    try {
+      metrics();
+    } catch (Throwable e) {
+      LOG.warn("Failed metrics", e);
+    }
+  }
+
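+  /*
+   * Recompute region server metrics: region/store/storefile counts, memstore
+   * and storefile index sizes, compaction queue length, and block cache stats.
+   */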
+  protected void metrics() {
+    this.metrics.regions.set(this.onlineRegions.size());
+    this.metrics.incrementRequests(this.requestCount.get());
+    // Is this too expensive every three seconds getting a lock on onlineRegions
+    // and then per store carried? Can I make metrics be sloppier and avoid
+    // the synchronizations?
+    int stores = 0;
+    int storefiles = 0;
+    long memstoreSize = 0;
+    long storefileIndexSize = 0;
+    synchronized (this.onlineRegions) {
+      for (Map.Entry<String, HRegion> e : this.onlineRegions.entrySet()) {
+        HRegion r = e.getValue();
+        memstoreSize += r.memstoreSize.get();
+        synchronized (r.stores) {
+          stores += r.stores.size();
+          for (Map.Entry<byte[], Store> ee : r.stores.entrySet()) {
+            Store store = ee.getValue();
+            storefiles += store.getStorefilesCount();
+            storefileIndexSize += store.getStorefilesIndexSize();
+          }
+        }
+      }
+    }
+    this.metrics.stores.set(stores);
+    this.metrics.storefiles.set(storefiles);
+    this.metrics.memstoreSizeMB.set((int) (memstoreSize / (1024 * 1024)));
+    this.metrics.storefileIndexSizeMB
+        .set((int) (storefileIndexSize / (1024 * 1024)));
+    this.metrics.compactionQueueSize.set(compactSplitThread
+        .getCompactionQueueSize());
+
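+    // Block cache metrics. The block cache is a single, process-wide cache
+    // shared by all stores on this regionserver.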
+    LruBlockCache lruBlockCache = (LruBlockCache) StoreFile.getBlockCache(conf);
+    if (lruBlockCache != null) {
+      this.metrics.blockCacheCount.set(lruBlockCache.size());
+      this.metrics.blockCacheFree.set(lruBlockCache.getFreeSize());
+      this.metrics.blockCacheSize.set(lruBlockCache.getCurrentSize());
+      CacheStats cacheStats = lruBlockCache.getStats();
+      this.metrics.blockCacheHitCount.set(cacheStats.getHitCount());
+      this.metrics.blockCacheMissCount.set(cacheStats.getMissCount());
+      this.metrics.blockCacheEvictedCount.set(lruBlockCache.getEvictedCount());
+      double ratio = lruBlockCache.getStats().getHitRatio();
+      int percent = (int) (ratio * 100);
+      this.metrics.blockCacheHitRatio.set(percent);
+      ratio = lruBlockCache.getStats().getHitCachingRatio();
+      percent = (int) (ratio * 100);
+      this.metrics.blockCacheHitCachingRatio.set(percent);
+    }
+  }
+
+  /**
+   * @return Region server metrics instance.
+   */
+  public RegionServerMetrics getMetrics() {
+    return this.metrics;
+  }
+
+  /*
+   * Start maintenance threads, Server, Worker and lease checker threads.
+   * Install an UncaughtExceptionHandler that calls abort on the RegionServer if
+   * we get an unhandled exception. We cannot set the handler on all threads:
+   * Server's internal Listener thread is off limits. For Server, on an OOME it
+   * waits a while then retries; meantime, a flush or a compaction that tries to
+   * run should hit the same critical condition and the shutdown will run. On
+   * its way out, this server will shut down Server. Leases is somewhere in
+   * between: it runs an internal thread that, though it inherits from Chore,
+   * keeps its own stop mechanism and so needs to be stopped by this hosting
+   * server. Worker logs the exception and exits.
+   */
+  private void startServiceThreads() throws IOException {
+    String n = Thread.currentThread().getName();
+    UncaughtExceptionHandler handler = new UncaughtExceptionHandler() {
+      public void uncaughtException(Thread t, Throwable e) {
+        abort("Uncaught exception in service thread " + t.getName(), e);
+      }
+    };
+
+    // Start executor services
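+    // Region open/close events are handled off the RPC threads by these
+    // executors; root and meta get dedicated single-thread pools so they are
+    // never queued behind user-region work.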
+    this.service = new ExecutorService(getServerName());
+    this.service.startExecutorService(ExecutorType.RS_OPEN_REGION,
+      conf.getInt("hbase.regionserver.executor.openregion.threads", 3));
+    this.service.startExecutorService(ExecutorType.RS_OPEN_ROOT,
+      conf.getInt("hbase.regionserver.executor.openroot.threads", 1));
+    this.service.startExecutorService(ExecutorType.RS_OPEN_META,
+      conf.getInt("hbase.regionserver.executor.openmeta.threads", 1));
+    this.service.startExecutorService(ExecutorType.RS_CLOSE_REGION,
+      conf.getInt("hbase.regionserver.executor.closeregion.threads", 3));
+    this.service.startExecutorService(ExecutorType.RS_CLOSE_ROOT,
+      conf.getInt("hbase.regionserver.executor.closeroot.threads", 1));
+    this.service.startExecutorService(ExecutorType.RS_CLOSE_META,
+      conf.getInt("hbase.regionserver.executor.closemeta.threads", 1));
+
+    Threads.setDaemonThreadRunning(this.hlogRoller, n + ".logRoller", handler);
+    Threads.setDaemonThreadRunning(this.cacheFlusher, n + ".cacheFlusher",
+        handler);
+    Threads.setDaemonThreadRunning(this.compactSplitThread, n + ".compactor",
+        handler);
+    Threads.setDaemonThreadRunning(this.majorCompactionChecker, n
+        + ".majorCompactionChecker", handler);
+
+    // Leases is not a Thread. Internally it runs a daemon thread. If it gets
+    // an unhandled exception, it will just exit.
+    this.leases.setName(n + ".leaseChecker");
+    this.leases.start();
+    // Put up info server.
+    int port = this.conf.getInt("hbase.regionserver.info.port", 60030);
+    // -1 is for disabling info server
+    if (port >= 0) {
+      String addr = this.conf.get("hbase.regionserver.info.bindAddress",
+          "0.0.0.0");
+      // check if auto port bind enabled
+      boolean auto = this.conf.getBoolean("hbase.regionserver.info.port.auto",
+          false);
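+      // Bind the info server; if the port is in use and auto-binding is
+      // enabled, keep incrementing the port until a bind succeeds.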
+      while (true) {
+        try {
+          this.infoServer = new InfoServer("regionserver", addr, port, false);
+          this.infoServer.setAttribute("regionserver", this);
+          this.infoServer.start();
+          break;
+        } catch (BindException e) {
+          if (!auto) {
+            // auto bind disabled; rethrow the BindException
+            throw e;
+          }
+          // auto bind enabled, try to use another port
+          LOG.info("Failed binding http info server to port: " + port);
+          port++;
+          // update HRS server info port.
+          this.serverInfo = new HServerInfo(this.serverInfo.getServerAddress(),
+            this.serverInfo.getStartCode(), port,
+            this.serverInfo.getHostname());
+        }
+      }
+    }
+
+    if (this.replicationHandler != null) {
+      this.replicationHandler.startReplicationServices();
+    }
+
+    // Start Server.  This service is like leases in that it internally runs
+    // a thread.
+    this.server.start();
+  }
+
+  /*
+   * Verify that server is healthy
+   */
+  private boolean isHealthy() {
+    if (!fsOk) {
+      // File system problem
+      return false;
+    }
+    // Verify that all threads are alive
+    if (!(leases.isAlive() && compactSplitThread.isAlive()
+        && cacheFlusher.isAlive() && hlogRoller.isAlive()
+        && this.majorCompactionChecker.isAlive())) {
+      stop("One or more threads are no longer alive -- stop");
+      return false;
+    }
+    return true;
+  }
+
+  @Override
+  public HLog getWAL() {
+    return this.hlog;
+  }
+
+  @Override
+  public CatalogTracker getCatalogTracker() {
+    return this.catalogTracker;
+  }
+
+  @Override
+  public void stop(final String msg) {
+    this.stopped = true;
+    LOG.info("STOPPED: " + msg);
+    synchronized (this) {
+      // Wakes run() if it is sleeping
+      notifyAll(); // FindBugs NN_NAKED_NOTIFY
+    }
+  }
+
+  @Override
+  public void postOpenDeployTasks(final HRegion r, final CatalogTracker ct,
+      final boolean daughter)
+  throws KeeperException, IOException {
+    // Do checks to see if we need to compact (references or too many files)
+    if (r.hasReferences() || r.hasTooManyStoreFiles()) {
+      getCompactionRequester().requestCompaction(r,
+        r.hasReferences()? "Region has references on open" :
+          "Region has too many store files");
+    }
+
+    // Add to online regions if all above was successful.
+    addToOnlineRegions(r);
+
+    // Update ZK, ROOT or META
+    if (r.getRegionInfo().isRootRegion()) {
+      RootLocationEditor.setRootLocation(getZooKeeper(),
+        getServerInfo().getServerAddress());
+    } else if (r.getRegionInfo().isMetaRegion()) {
+      MetaEditor.updateMetaLocation(ct, r.getRegionInfo(), getServerInfo());
+    } else {
+      if (daughter) {
+        // If daughter of a split, update whole row, not just location.
+        MetaEditor.addDaughter(ct, r.getRegionInfo(), getServerInfo());
+      } else {
+        MetaEditor.updateRegionLocation(ct, r.getRegionInfo(), getServerInfo());
+      }
+    }
+  }
+
+  /**
+   * Cause the server to exit without closing the regions it is serving or the
+   * log it is using, and without notifying the master. Used in unit testing and
+   * on catastrophic events such as HDFS being yanked out from under HBase, or
+   * hitting an OOME.
+   *
+   * @param reason
+   *          the reason we are aborting
+   * @param cause
+   *          the exception that caused the abort, or null
+   */
+  public void abort(String reason, Throwable cause) {
+    if (cause != null) {
+      LOG.fatal("ABORTING region server " + this + ": " + reason, cause);
+    } else {
+      LOG.fatal("ABORTING region server " + this + ": " + reason);
+    }
+    this.abortRequested = true;
+    this.reservedSpace.clear();
+    if (this.metrics != null) {
+      LOG.info("Dump of metrics: " + this.metrics);
+    }
+    stop(reason);
+  }
+
+  /**
+   * @see HRegionServer#abort(String, Throwable)
+   */
+  public void abort(String reason) {
+    abort(reason, null);
+  }
+
+  /*
+   * Simulate a kill -9 of this server. Exits w/o closing regions or cleaning up
+   * logs, but it does close the socket in case we want to bring up a server on
+   * the old hostname+port immediately.
+   */
+  protected void kill() {
+    this.killed = true;
+    abort("Simulated kill");
+  }
+
+  /**
+   * Wait on all threads to finish. Presumption is that all closes and stops
+   * have already been called.
+   */
+  protected void join() {
+    Threads.shutdown(this.majorCompactionChecker);
+    Threads.shutdown(this.cacheFlusher);
+    Threads.shutdown(this.compactSplitThread);
+    Threads.shutdown(this.hlogRoller);
+    this.service.shutdown();
+    if (this.replicationHandler != null) {
+      this.replicationHandler.join();
+    }
+  }
+
+  /**
+   * Get the current master from ZooKeeper and open the RPC connection to it.
+   *
+   * Method will block until a master is available. You can break from this
+   * block by requesting the server stop.
+   *
+   * @return master address, or null if server has been stopped
+   */
+  private HServerAddress getMaster() {
+    HServerAddress masterAddress = null;
+    while ((masterAddress = masterAddressManager.getMasterAddress()) == null) {
+      if (stopped) {
+        return null;
+      }
+      LOG.debug("No master found, will retry");
+      sleeper.sleep();
+    }
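+    // We have a master address; now set up the RPC proxy to it, retrying
+    // until it answers or until this server is asked to stop.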
+    HMasterRegionInterface master = null;
+    while (!stopped && master == null) {
+      try {
+        // Do initial RPC setup. The final argument indicates that the RPC
+        // should retry indefinitely.
+        master = (HMasterRegionInterface) HBaseRPC.waitForProxy(
+            HMasterRegionInterface.class, HBaseRPCProtocolVersion.versionID,
+            masterAddress.getInetSocketAddress(), this.conf, -1,
+            this.rpcTimeout, this.rpcTimeout);
+      } catch (IOException e) {
+        e = e instanceof RemoteException ?
+            ((RemoteException)e).unwrapRemoteException() : e;
+        if (e instanceof ServerNotRunningException) {
+          LOG.info("Master isn't available yet, retrying");
+        } else {
+          LOG.warn("Unable to connect to master. Retrying. Error was:", e);
+        }
+        sleeper.sleep();
+      }
+    }
+    LOG.info("Connected to master at " + masterAddress);
+    this.hbaseMaster = master;
+    return masterAddress;
+  }
+
+  /**
+   * @return True if successfully invoked {@link #reportForDuty()}
+   * @throws IOException
+   */
+  private boolean tryReportForDuty() throws IOException {
+    MapWritable w = reportForDuty();
+    if (w != null) {
+      handleReportForDutyResponse(w);
+      return true;
+    }
+    LOG.warn("No response on reportForDuty. Sleeping and then retrying.");
+    sleeper.sleep();
+    return false;
+  }
+
+  /*
+   * Let the master know we're here. Run initialization using parameters passed
+   * to us by the master.
+   */
+  private MapWritable reportForDuty() throws IOException {
+    HServerAddress masterAddress = null;
+    while (!stopped && (masterAddress = getMaster()) == null) {
+      sleeper.sleep();
+      LOG.warn("Unable to get master for initialization");
+    }
+
+    MapWritable result = null;
+    long lastMsg = 0;
+    while (!stopped) {
+      try {
+        this.requestCount.set(0);
+        lastMsg = System.currentTimeMillis();
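+        // Put up our ephemeral node under the rs znode so the master can
+        // track this server, then report in over RPC.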
+        ZKUtil.setAddressAndWatch(zooKeeper,
+          ZKUtil.joinZNode(zooKeeper.rsZNode, ZKUtil.getNodeName(serverInfo)),
+          this.serverInfo.getServerAddress());
+        this.serverInfo.setLoad(buildServerLoad());
+        LOG.info("Telling master at " + masterAddress + " that we are up");
+        result = this.hbaseMaster.regionServerStartup(this.serverInfo,
+            EnvironmentEdgeManager.currentTimeMillis());
+        break;
+      } catch (RemoteException e) {
+        IOException ioe = e.unwrapRemoteException();
+        if (ioe instanceof ClockOutOfSyncException) {
+          LOG.fatal("Master rejected startup because clock is out of sync",
+              ioe);
+          // Re-throw IOE will cause RS to abort
+          throw ioe;
+        } else {
+          LOG.warn("remote error telling master we are up", e);
+        }
+      } catch (IOException e) {
+        LOG.warn("error telling master we are up", e);
+      } catch (KeeperException e) {
+        LOG.warn("error putting up ephemeral node in zookeeper", e);
+      }
+      sleeper.sleep(lastMsg);
+    }
+    return result;
+  }
+
+  /**
+   * Add to the outbound message buffer
+   *
+   * When a region splits, we need to tell the master that there are two new
+   * regions that need to be assigned.
+   *
+   * We do not need to inform the master about the old region, because we've
+   * updated the meta or root regions, and the master will pick that up on its
+   * next rescan of the root or meta tables.
+   */
+  void reportSplit(HRegionInfo oldRegion, HRegionInfo newRegionA,
+      HRegionInfo newRegionB) {
+    this.outboundMsgs.add(new HMsg(
+        HMsg.Type.REGION_SPLIT, oldRegion, newRegionA,
+        newRegionB, Bytes.toBytes("Daughters; "
+            + newRegionA.getRegionNameAsString() + ", "
+            + newRegionB.getRegionNameAsString())));
+  }
+
+  /**
+   * Closes all regions.  Called on our way out.
+   * Assumes that it is not possible for new regions to be added to onlineRegions
+   * while this method runs.
+   */
+  protected void closeAllRegions(final boolean abort) {
+    closeUserRegions(abort);
+    // Only root and meta should remain.  Are we carrying root or meta?
+    HRegion meta = null;
+    HRegion root = null;
+    this.lock.writeLock().lock();
+    try {
+      synchronized (this.onlineRegions) {
+        for (Map.Entry<String, HRegion> e: onlineRegions.entrySet()) {
+          HRegionInfo hri = e.getValue().getRegionInfo();
+          if (hri.isRootRegion()) {
+            root = e.getValue();
+          } else if (hri.isMetaRegion()) {
+            meta = e.getValue();
+          }
+          if (meta != null && root != null) break;
+        }
+      }
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+    if (meta != null) closeRegion(meta.getRegionInfo(), abort, false);
+    if (root != null) closeRegion(root.getRegionInfo(), abort, false);
+  }
+
+  /**
+   * Schedule closes on all user regions.
+   * @param abort Whether we're running an abort.
+   */
+  void closeUserRegions(final boolean abort) {
+    this.lock.writeLock().lock();
+    try {
+      synchronized (this.onlineRegions) {
+        for (Map.Entry<String, HRegion> e: this.onlineRegions.entrySet()) {
+          HRegion r = e.getValue();
+          if (!r.getRegionInfo().isMetaRegion()) {
+            // Don't update zk with this close transition; pass false.
+            closeRegion(r.getRegionInfo(), abort, false);
+          }
+        }
+      }
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+  }
+
+  @Override
+  public HRegionInfo getRegionInfo(final byte[] regionName)
+  throws NotServingRegionException {
+    requestCount.incrementAndGet();
+    return getRegion(regionName).getRegionInfo();
+  }
+
+  public Result getClosestRowBefore(final byte[] regionName, final byte[] row,
+      final byte[] family) throws IOException {
+    checkOpen();
+    requestCount.incrementAndGet();
+    try {
+      // locate the region we're operating on
+      HRegion region = getRegion(regionName);
+      // ask the region for all the data
+
+      Result r = region.getClosestRowBefore(row, family);
+      return r;
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  /** {@inheritDoc} */
+  public Result get(byte[] regionName, Get get) throws IOException {
+    checkOpen();
+    requestCount.incrementAndGet();
+    try {
+      HRegion region = getRegion(regionName);
+      return region.get(get, getLockFromId(get.getLockId()));
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  public boolean exists(byte[] regionName, Get get) throws IOException {
+    checkOpen();
+    requestCount.incrementAndGet();
+    try {
+      HRegion region = getRegion(regionName);
+      Result r = region.get(get, getLockFromId(get.getLockId()));
+      return r != null && !r.isEmpty();
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  public void put(final byte[] regionName, final Put put) throws IOException {
+    if (put.getRow() == null) {
+      throw new IllegalArgumentException("update has null row");
+    }
+
+    checkOpen();
+    this.requestCount.incrementAndGet();
+    HRegion region = getRegion(regionName);
+    try {
+      if (!region.getRegionInfo().isMetaTable()) {
+        this.cacheFlusher.reclaimMemStoreMemory();
+      }
+      boolean writeToWAL = put.getWriteToWAL();
+      region.put(put, getLockFromId(put.getLockId()), writeToWAL);
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  public int put(final byte[] regionName, final List<Put> puts)
+      throws IOException {
+    checkOpen();
+    HRegion region = null;
+    try {
+      region = getRegion(regionName);
+      if (!region.getRegionInfo().isMetaTable()) {
+        this.cacheFlusher.reclaimMemStoreMemory();
+      }
+
+      @SuppressWarnings("unchecked")
+      Pair<Put, Integer>[] putsWithLocks = new Pair[puts.size()];
+
+      int i = 0;
+      for (Put p : puts) {
+        Integer lock = getLockFromId(p.getLockId());
+        putsWithLocks[i++] = new Pair<Put, Integer>(p, lock);
+      }
+
+      this.requestCount.addAndGet(puts.size());
+      OperationStatusCode[] codes = region.put(putsWithLocks);
+      for (i = 0; i < codes.length; i++) {
+        if (codes[i] != OperationStatusCode.SUCCESS) {
+          return i;
+        }
+      }
+      return -1;
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  private boolean checkAndMutate(final byte[] regionName, final byte[] row,
+      final byte[] family, final byte[] qualifier, final byte[] value,
+      final Writable w, Integer lock) throws IOException {
+    checkOpen();
+    this.requestCount.incrementAndGet();
+    HRegion region = getRegion(regionName);
+    try {
+      if (!region.getRegionInfo().isMetaTable()) {
+        this.cacheFlusher.reclaimMemStoreMemory();
+      }
+      return region
+          .checkAndMutate(row, family, qualifier, value, w, lock, true);
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  /**
+   * Atomically checks that the current value of the given cell matches the
+   * expected value and, if so, applies the put.
+   * @param regionName region to operate on
+   * @param row row to check
+   * @param family column family of the cell to check
+   * @param qualifier column qualifier of the cell to check
+   * @param value
+   *          the expected value
+   * @param put put to apply if the check succeeds
+   * @throws IOException
+   * @return true if the new put was executed, false otherwise
+   */
+  public boolean checkAndPut(final byte[] regionName, final byte[] row,
+      final byte[] family, final byte[] qualifier, final byte[] value,
+      final Put put) throws IOException {
+    return checkAndMutate(regionName, row, family, qualifier, value, put,
+        getLockFromId(put.getLockId()));
+  }
+
+  /**
+   * Atomically checks that the current value of the given cell matches the
+   * expected value and, if so, applies the delete.
+   * @param regionName region to operate on
+   * @param row row to check
+   * @param family column family of the cell to check
+   * @param qualifier column qualifier of the cell to check
+   * @param value
+   *          the expected value
+   * @param delete delete to apply if the check succeeds
+   * @throws IOException
+   * @return true if the new delete was executed, false otherwise
+   */
+  public boolean checkAndDelete(final byte[] regionName, final byte[] row,
+      final byte[] family, final byte[] qualifier, final byte[] value,
+      final Delete delete) throws IOException {
+    return checkAndMutate(regionName, row, family, qualifier, value, delete,
+        getLockFromId(delete.getLockId()));
+  }
+
+  //
+  // remote scanner interface
+  //
+
+  public long openScanner(byte[] regionName, Scan scan) throws IOException {
+    checkOpen();
+    NullPointerException npe = null;
+    if (regionName == null) {
+      npe = new NullPointerException("regionName is null");
+    } else if (scan == null) {
+      npe = new NullPointerException("scan is null");
+    }
+    if (npe != null) {
+      throw new IOException("Invalid arguments to openScanner", npe);
+    }
+    requestCount.incrementAndGet();
+    try {
+      HRegion r = getRegion(regionName);
+      return addScanner(r.getScanner(scan));
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t, "Failed openScanner"));
+    }
+  }
+
+  protected long addScanner(InternalScanner s) throws LeaseStillHeldException {
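+    // Scanner ids are random longs; each open scanner is guarded by a lease so
+    // that an abandoned scanner is eventually closed by ScannerListener.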
+    long scannerId = -1L;
+    scannerId = rand.nextLong();
+    String scannerName = String.valueOf(scannerId);
+    scanners.put(scannerName, s);
+    this.leases.createLease(scannerName, new ScannerListener(scannerName));
+    return scannerId;
+  }
+
+  public Result next(final long scannerId) throws IOException {
+    Result[] res = next(scannerId, 1);
+    if (res == null || res.length == 0) {
+      return null;
+    }
+    return res[0];
+  }
+
+  public Result[] next(final long scannerId, int nbRows) throws IOException {
+    try {
+      String scannerName = String.valueOf(scannerId);
+      InternalScanner s = this.scanners.get(scannerName);
+      if (s == null) {
+        throw new UnknownScannerException("Name: " + scannerName);
+      }
+      try {
+        checkOpen();
+      } catch (IOException e) {
+        // If checkOpen failed, server not running or filesystem gone,
+        // cancel this lease; filesystem is gone or we're closing or something.
+        this.leases.cancelLease(scannerName);
+        throw e;
+      }
+      this.leases.renewLease(scannerName);
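+      // Gather up to nbRows rows, stopping early once the accumulated
+      // KeyValue heap size exceeds the configured maximum scanner result size.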
+      List<Result> results = new ArrayList<Result>(nbRows);
+      long currentScanResultSize = 0;
+      List<KeyValue> values = new ArrayList<KeyValue>();
+      for (int i = 0; i < nbRows
+          && currentScanResultSize < maxScannerResultSize; i++) {
+        requestCount.incrementAndGet();
+        // Collect values to be returned here
+        boolean moreRows = s.next(values);
+        if (!values.isEmpty()) {
+          for (KeyValue kv : values) {
+            currentScanResultSize += kv.heapSize();
+          }
+          results.add(new Result(values));
+        }
+        if (!moreRows) {
+          break;
+        }
+        values.clear();
+      }
+      // Below is an ugly hack where we cast the InternalScanner to an
+      // HRegion.RegionScanner. The alternative is to change the InternalScanner
+      // interface, but it is used everywhere whereas we just need one bit of
+      // info from HRegion.RegionScanner: whether its filter, if any, is done
+      // with the scan and wants to tell the client to stop. That is signalled
+      // by returning a null result.
+      return ((HRegion.RegionScanner) s).isFilterDone() && results.isEmpty() ? null
+          : results.toArray(new Result[0]);
+    } catch (Throwable t) {
+      if (t instanceof NotServingRegionException) {
+        String scannerName = String.valueOf(scannerId);
+        this.scanners.remove(scannerName);
+      }
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  public void close(final long scannerId) throws IOException {
+    try {
+      checkOpen();
+      requestCount.incrementAndGet();
+      String scannerName = String.valueOf(scannerId);
+      InternalScanner s = scanners.remove(scannerName);
+      if (s != null) {
+        s.close();
+        this.leases.cancelLease(scannerName);
+      }
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  /**
+   * Instantiated as a scanner lease. If the lease times out, the scanner is
+   * closed
+   */
+  private class ScannerListener implements LeaseListener {
+    private final String scannerName;
+
+    ScannerListener(final String n) {
+      this.scannerName = n;
+    }
+
+    public void leaseExpired() {
+      LOG.info("Scanner " + this.scannerName + " lease expired");
+      InternalScanner s = scanners.remove(this.scannerName);
+      if (s != null) {
+        try {
+          s.close();
+        } catch (IOException e) {
+          LOG.error("Closing scanner", e);
+        }
+      }
+    }
+  }
+
+  //
+  // Methods that do the actual work for the remote API
+  //
+  public void delete(final byte[] regionName, final Delete delete)
+      throws IOException {
+    checkOpen();
+    try {
+      boolean writeToWAL = true;
+      this.requestCount.incrementAndGet();
+      HRegion region = getRegion(regionName);
+      if (!region.getRegionInfo().isMetaTable()) {
+        this.cacheFlusher.reclaimMemStoreMemory();
+      }
+      Integer lid = getLockFromId(delete.getLockId());
+      region.delete(delete, lid, writeToWAL);
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  public int delete(final byte[] regionName, final List<Delete> deletes)
+      throws IOException {
+    // Count of Deletes processed.
+    int i = 0;
+    checkOpen();
+    HRegion region = null;
+    try {
+      boolean writeToWAL = true;
+      region = getRegion(regionName);
+      if (!region.getRegionInfo().isMetaTable()) {
+        this.cacheFlusher.reclaimMemStoreMemory();
+      }
+      int size = deletes.size();
+      Integer[] locks = new Integer[size];
+      for (Delete delete : deletes) {
+        this.requestCount.incrementAndGet();
+        locks[i] = getLockFromId(delete.getLockId());
+        region.delete(delete, locks[i], writeToWAL);
+        i++;
+      }
+    } catch (WrongRegionException ex) {
+      LOG.debug("Batch deletes: " + i, ex);
+      return i;
+    } catch (NotServingRegionException ex) {
+      return i;
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+    return -1;
+  }
+
+  public long lockRow(byte[] regionName, byte[] row) throws IOException {
+    checkOpen();
+    NullPointerException npe = null;
+    if (regionName == null) {
+      npe = new NullPointerException("regionName is null");
+    } else if (row == null) {
+      npe = new NullPointerException("row to lock is null");
+    }
+    if (npe != null) {
+      IOException io = new IOException("Invalid arguments to lockRow");
+      io.initCause(npe);
+      throw io;
+    }
+    requestCount.incrementAndGet();
+    try {
+      HRegion region = getRegion(regionName);
+      Integer r = region.obtainRowLock(row);
+      long lockId = addRowLock(r, region);
+      LOG.debug("Row lock " + lockId + " explicitly acquired by client");
+      return lockId;
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t, "Error obtaining row lock (fsOk: "
+          + this.fsOk + ")"));
+    }
+  }
+
+  protected long addRowLock(Integer r, HRegion region)
+      throws LeaseStillHeldException {
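+    // As with scanners, each explicit row lock is guarded by a lease so an
+    // abandoned lock is eventually released by RowLockListener.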
+    long lockId = -1L;
+    lockId = rand.nextLong();
+    String lockName = String.valueOf(lockId);
+    rowlocks.put(lockName, r);
+    this.leases.createLease(lockName, new RowLockListener(lockName, region));
+    return lockId;
+  }
+
+  /**
+   * Method to get the Integer lock identifier used internally from the long
+   * lock identifier used by the client.
+   *
+   * @param lockId
+   *          long row lock identifier from client
+   * @return intId Integer row lock used internally in HRegion
+   * @throws IOException
+   *           Thrown if this is not a valid client lock id.
+   */
+  Integer getLockFromId(long lockId) throws IOException {
+    if (lockId == -1L) {
+      return null;
+    }
+    String lockName = String.valueOf(lockId);
+    Integer rl = rowlocks.get(lockName);
+    if (rl == null) {
+      throw new UnknownRowLockException("Invalid row lock");
+    }
+    this.leases.renewLease(lockName);
+    return rl;
+  }
+
+  public void unlockRow(byte[] regionName, long lockId) throws IOException {
+    checkOpen();
+    NullPointerException npe = null;
+    if (regionName == null) {
+      npe = new NullPointerException("regionName is null");
+    } else if (lockId == -1L) {
+      npe = new NullPointerException("lockId is null");
+    }
+    if (npe != null) {
+      IOException io = new IOException("Invalid arguments to unlockRow");
+      io.initCause(npe);
+      throw io;
+    }
+    requestCount.incrementAndGet();
+    try {
+      HRegion region = getRegion(regionName);
+      String lockName = String.valueOf(lockId);
+      Integer r = rowlocks.remove(lockName);
+      if (r == null) {
+        throw new UnknownRowLockException(lockName);
+      }
+      region.releaseRowLock(r);
+      this.leases.cancelLease(lockName);
+      LOG.debug("Row lock " + lockId
+          + " has been explicitly released by client");
+    } catch (Throwable t) {
+      throw convertThrowableToIOE(cleanup(t));
+    }
+  }
+
+  @Override
+  public void bulkLoadHFile(String hfilePath, byte[] regionName,
+      byte[] familyName) throws IOException {
+    HRegion region = getRegion(regionName);
+    region.bulkLoadHFile(hfilePath, familyName);
+  }
+
+  Map<String, Integer> rowlocks = new ConcurrentHashMap<String, Integer>();
+
+  /**
+   * Instantiated as a row lock lease. If the lease times out, the row lock is
+   * released
+   */
+  private class RowLockListener implements LeaseListener {
+    private final String lockName;
+    private final HRegion region;
+
+    RowLockListener(final String lockName, final HRegion region) {
+      this.lockName = lockName;
+      this.region = region;
+    }
+
+    public void leaseExpired() {
+      LOG.info("Row Lock " + this.lockName + " lease expired");
+      Integer r = rowlocks.remove(this.lockName);
+      if (r != null) {
+        region.releaseRowLock(r);
+      }
+    }
+  }
+
+  // Region open/close direct RPCs
+
+  @Override
+  public void openRegion(HRegionInfo region)
+  throws RegionServerStoppedException {
+    LOG.info("Received request to open region: " +
+      region.getRegionNameAsString());
+    if (this.stopped) throw new RegionServerStoppedException();
+    if (region.isRootRegion()) {
+      this.service.submit(new OpenRootHandler(this, this, region));
+    } else if(region.isMetaRegion()) {
+      this.service.submit(new OpenMetaHandler(this, this, region));
+    } else {
+      this.service.submit(new OpenRegionHandler(this, this, region));
+    }
+  }
+
+  @Override
+  public void openRegions(List<HRegionInfo> regions)
+  throws RegionServerStoppedException {
+    LOG.info("Received request to open " + regions.size() + " region(s)");
+    for (HRegionInfo region: regions) openRegion(region);
+  }
+
+  @Override
+  public boolean closeRegion(HRegionInfo region)
+  throws NotServingRegionException {
+    return closeRegion(region, true);
+  }
+
+  @Override
+  public boolean closeRegion(HRegionInfo region, final boolean zk)
+  throws NotServingRegionException {
+    LOG.info("Received close region: " + region.getRegionNameAsString());
+    synchronized (this.onlineRegions) {
+      boolean hasit = this.onlineRegions.containsKey(region.getEncodedName());
+      if (!hasit) {
+        LOG.warn("Received close for region we are not serving; " +
+          region.getEncodedName());
+        throw new NotServingRegionException("Received close for "
+          + region.getRegionNameAsString() + " but we are not serving it");
+      }
+    }
+    return closeRegion(region, false, zk);
+  }
+
+  /**
+   * @param region Region to close
+   * @param abort True if we are aborting
+   * @param zk True if we are to update zk about the region close; if the close
+   * was orchestrated by master, then update zk.  If the close is being run by
+   * the regionserver because it is going down, don't update zk.
+   * @return True if closed a region.
+   */
+  protected boolean closeRegion(HRegionInfo region, final boolean abort,
+      final boolean zk) {
+    CloseRegionHandler crh = null;
+    if (region.isRootRegion()) {
+      crh = new CloseRootHandler(this, this, region, abort, zk);
+    } else if (region.isMetaRegion()) {
+      crh = new CloseMetaHandler(this, this, region, abort, zk);
+    } else {
+      crh = new CloseRegionHandler(this, this, region, abort, zk);
+    }
+    this.service.submit(crh);
+    return true;
+  }
+
+  // Manual remote region administration RPCs
+
+  @Override
+  public void flushRegion(HRegionInfo regionInfo)
+      throws NotServingRegionException, IOException {
+    LOG.info("Flushing " + regionInfo.getRegionNameAsString());
+    HRegion region = getRegion(regionInfo.getRegionName());
+    region.flushcache();
+  }
+
+  @Override
+  public void splitRegion(HRegionInfo regionInfo)
+      throws NotServingRegionException, IOException {
+    HRegion region = getRegion(regionInfo.getRegionName());
+    region.flushcache();
+    region.shouldSplit(true);
+    // force a compaction, split will be side-effect
+    // TODO: flush/compact/split refactor will make it trivial to do this
+    // sync/async (and won't require us to do a compaction to split!)
+    compactSplitThread.requestCompaction(region, "User-triggered split",
+        CompactSplitThread.PRIORITY_USER);
+  }
+
+  @Override
+  public void compactRegion(HRegionInfo regionInfo, boolean major)
+      throws NotServingRegionException, IOException {
+    HRegion region = getRegion(regionInfo.getRegionName());
+    compactSplitThread.requestCompaction(region, major, "User-triggered "
+        + (major ? "major " : "") + "compaction",
+        CompactSplitThread.PRIORITY_USER);
+  }
+
+  /** @return the info server */
+  public InfoServer getInfoServer() {
+    return infoServer;
+  }
+
+  /**
+   * @return true if a stop has been requested.
+   */
+  public boolean isStopped() {
+    return this.stopped;
+  }
+
+  @Override
+  public boolean isStopping() {
+    return this.stopping;
+  }
+
+  /**
+   *
+   * @return the configuration
+   */
+  public Configuration getConfiguration() {
+    return conf;
+  }
+
+  /** @return the write lock for the server */
+  ReentrantReadWriteLock.WriteLock getWriteLock() {
+    return lock.writeLock();
+  }
+
+  @Override
+  public List<HRegionInfo> getOnlineRegions() {
+    List<HRegionInfo> list = new ArrayList<HRegionInfo>();
+    synchronized(this.onlineRegions) {
+      for (Map.Entry<String,HRegion> e: this.onlineRegions.entrySet()) {
+        list.add(e.getValue().getRegionInfo());
+      }
+    }
+    Collections.sort(list);
+    return list;
+  }
+
+  public int getNumberOfOnlineRegions() {
+    int size = -1;
+    synchronized (this.onlineRegions) {
+      size = this.onlineRegions.size();
+    }
+    return size;
+  }
+
+  boolean isOnlineRegionsEmpty() {
+    synchronized (this.onlineRegions) {
+      return this.onlineRegions.isEmpty();
+    }
+  }
+
+  /**
+   * For tests and web ui.
+   * This method will only work if HRegionServer is in the same JVM as the
+   * client; HRegion cannot be serialized to cross an RPC.
+   * @see #getOnlineRegions()
+   */
+  public Collection<HRegion> getOnlineRegionsLocalContext() {
+    synchronized (this.onlineRegions) {
+      Collection<HRegion> regions = this.onlineRegions.values();
+      return Collections.unmodifiableCollection(regions);
+    }
+  }
+
+  @Override
+  public void addToOnlineRegions(HRegion region) {
+    lock.writeLock().lock();
+    try {
+      synchronized (this.onlineRegions) {
+        this.onlineRegions.put(region.getRegionInfo().getEncodedName(), region);
+      }
+    } finally {
+      lock.writeLock().unlock();
+    }
+  }
+
+  @Override
+  public boolean removeFromOnlineRegions(final String encodedName) {
+    this.lock.writeLock().lock();
+    HRegion toReturn = null;
+    try {
+      synchronized (this.onlineRegions) {
+        toReturn = this.onlineRegions.remove(encodedName);
+      }
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+    return toReturn != null;
+  }
+
+  /**
+   * @return A new Map of online regions sorted by region size with the first
+   *         entry being the biggest.
+   */
+  public SortedMap<Long, HRegion> getCopyOfOnlineRegionsSortedBySize() {
+    // we'll sort the regions in reverse
+    SortedMap<Long, HRegion> sortedRegions = new TreeMap<Long, HRegion>(
+        new Comparator<Long>() {
+          public int compare(Long a, Long b) {
+            return -1 * a.compareTo(b);
+          }
+        });
+    // Copy over all regions. Regions are sorted by size with biggest first.
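+    // Note: the map is keyed by memstore size, so regions that happen to have
+    // identical sizes will overwrite one another here.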
+    synchronized (this.onlineRegions) {
+      for (HRegion region : this.onlineRegions.values()) {
+        sortedRegions.put(Long.valueOf(region.memstoreSize.get()), region);
+      }
+    }
+    return sortedRegions;
+  }
+
+  @Override
+  public HRegion getFromOnlineRegions(final String encodedRegionName) {
+    HRegion r = null;
+    synchronized (this.onlineRegions) {
+      r = this.onlineRegions.get(encodedRegionName);
+    }
+    return r;
+  }
+
+  /**
+   * @param regionName
+   * @return HRegion for the passed binary <code>regionName</code> or null if
+   *         named region is not member of the online regions.
+   */
+  public HRegion getOnlineRegion(final byte[] regionName) {
+    return getFromOnlineRegions(HRegionInfo.encodeRegionName(regionName));
+  }
+
+  /** @return the request count */
+  public AtomicInteger getRequestCount() {
+    return this.requestCount;
+  }
+
+  /** @return reference to FlushRequester */
+  public FlushRequester getFlushRequester() {
+    return this.cacheFlusher;
+  }
+
+  /**
+   * Protected utility method for safely obtaining an HRegion handle.
+   *
+   * @param regionName
+   *          Name of online {@link HRegion} to return
+   * @return {@link HRegion} for <code>regionName</code>
+   * @throws NotServingRegionException
+   */
+  protected HRegion getRegion(final byte[] regionName)
+      throws NotServingRegionException {
+    HRegion region = null;
+    this.lock.readLock().lock();
+    try {
+      region = getOnlineRegion(regionName);
+      if (region == null) {
+        throw new NotServingRegionException("Region is not online: " +
+          Bytes.toStringBinary(regionName));
+      }
+      return region;
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Get the top N most loaded regions this server is serving so we can tell the
+   * master which regions it can reallocate if we're overloaded. TODO: actually
+   * calculate which regions are most loaded. (Right now, we're just grabbing
+   * the first N regions being served regardless of load.)
+   */
+  protected HRegionInfo[] getMostLoadedRegions() {
+    ArrayList<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+    synchronized (onlineRegions) {
+      for (HRegion r : onlineRegions.values()) {
+        if (r.isClosed() || r.isClosing()) {
+          continue;
+        }
+        if (regions.size() < numRegionsToReport) {
+          regions.add(r.getRegionInfo());
+        } else {
+          break;
+        }
+      }
+    }
+    return regions.toArray(new HRegionInfo[regions.size()]);
+  }
+
+  /**
+   * Called to verify that this server is up and running.
+   *
+   * @throws IOException
+   */
+  protected void checkOpen() throws IOException {
+    if (this.stopped || this.abortRequested) {
+      throw new IOException("Server not running"
+          + (this.abortRequested ? ", aborting" : ""));
+    }
+    if (!fsOk) {
+      throw new IOException("File system not available");
+    }
+  }
+
+  /**
+   * @return Returns list of non-closed regions hosted on this server. If no
+   *         regions to check, returns an empty list.
+   */
+  protected Set<HRegion> getRegionsToCheck() {
+    HashSet<HRegion> regionsToCheck = new HashSet<HRegion>();
+    // TODO: is this locking necessary?
+    lock.readLock().lock();
+    try {
+      synchronized (this.onlineRegions) {
+        regionsToCheck.addAll(this.onlineRegions.values());
+      }
+    } finally {
+      lock.readLock().unlock();
+    }
+    // Purge closed regions.
+    for (final Iterator<HRegion> i = regionsToCheck.iterator(); i.hasNext();) {
+      HRegion r = i.next();
+      if (r.isClosed()) {
+        i.remove();
+      }
+    }
+    return regionsToCheck;
+  }
+
+  public long getProtocolVersion(final String protocol, final long clientVersion)
+      throws IOException {
+    if (protocol.equals(HRegionInterface.class.getName())) {
+      return HBaseRPCProtocolVersion.versionID;
+    }
+    throw new IOException("Unknown protocol to name node: " + protocol);
+  }
+
+  /**
+   * @return Queue to which you can add outbound messages.
+   */
+  protected LinkedBlockingQueue<HMsg> getOutboundMsgs() {
+    return this.outboundMsgs;
+  }
+
+  /**
+   * Return the total size of all memstores in every region.
+   *
+   * @return memstore size in bytes
+   */
+  public long getGlobalMemStoreSize() {
+    long total = 0;
+    synchronized (onlineRegions) {
+      for (HRegion region : onlineRegions.values()) {
+        total += region.memstoreSize.get();
+      }
+    }
+    return total;
+  }
+
+  /**
+   * @return Return the leases.
+   */
+  protected Leases getLeases() {
+    return leases;
+  }
+
+  /**
+   * @return Return the rootDir.
+   */
+  protected Path getRootDir() {
+    return rootDir;
+  }
+
+  /**
+   * @return Return the fs.
+   */
+  protected FileSystem getFileSystem() {
+    return fs;
+  }
+
+  /**
+   * @return Info on port this server has bound to, etc.
+   */
+  public HServerInfo getServerInfo() {
+    return this.serverInfo;
+  }
+
+
+  @Override
+  public Result increment(byte[] regionName, Increment increment)
+  throws IOException {
+    checkOpen();
+    if (regionName == null) {
+      throw new IOException("Invalid arguments to increment " +
+      "regionName is null");
+    }
+    requestCount.incrementAndGet();
+    try {
+      HRegion region = getRegion(regionName);
+      return region.increment(increment, getLockFromId(increment.getLockId()),
+          increment.getWriteToWAL());
+    } catch (IOException e) {
+      checkFileSystem();
+      throw e;
+    }
+  }
+
+  /** {@inheritDoc} */
+  public long incrementColumnValue(byte[] regionName, byte[] row,
+      byte[] family, byte[] qualifier, long amount, boolean writeToWAL)
+      throws IOException {
+    checkOpen();
+
+    if (regionName == null) {
+      throw new IOException("Invalid arguments to incrementColumnValue "
+          + "regionName is null");
+    }
+    requestCount.incrementAndGet();
+    try {
+      HRegion region = getRegion(regionName);
+      long retval = region.incrementColumnValue(row, family, qualifier, amount,
+          writeToWAL);
+
+      return retval;
+    } catch (IOException e) {
+      checkFileSystem();
+      throw e;
+    }
+  }
+
+  public HRegionInfo[] getRegionsAssignment() throws IOException {
+    synchronized (this.onlineRegions) {
+      HRegionInfo [] regions = new HRegionInfo[getNumberOfOnlineRegions()];
+      Iterator<HRegion> ite = onlineRegions.values().iterator();
+      for (int i = 0; ite.hasNext(); i++) {
+        regions[i] = ite.next().getRegionInfo();
+      }
+      return regions;
+    }
+  }
+
+  /** {@inheritDoc} */
+  public HServerInfo getHServerInfo() throws IOException {
+    return serverInfo;
+  }
+
+  @SuppressWarnings("unchecked")
+  @Override
+  public MultiResponse multi(MultiAction multi) throws IOException {
+
+    MultiResponse response = new MultiResponse();
+
+    for (Map.Entry<byte[], List<Action>> e : multi.actions.entrySet()) {
+      byte[] regionName = e.getKey();
+      List<Action> actionsForRegion = e.getValue();
+      // sort based on the row id - this helps in the case where we reach the
+      // end of a region, so that we don't have to try the rest of the
+      // actions in the list.
+      Collections.sort(actionsForRegion);
+      Row action;
+      List<Action> puts = new ArrayList<Action>();
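+      // Gets and Deletes are executed inline below; Puts are collected here and
+      // then issued as a single batch per region for efficiency.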
+      for (Action a : actionsForRegion) {
+        action = a.getAction();
+        int originalIndex = a.getOriginalIndex();
+
+        try {
+          if (action instanceof Delete) {
+            delete(regionName, (Delete) action);
+            response.add(regionName, originalIndex, new Result());
+          } else if (action instanceof Get) {
+            response.add(regionName, originalIndex, get(regionName, (Get) action));
+          } else if (action instanceof Put) {
+            puts.add(a);  // won't throw.
+          } else {
+            LOG.debug("Error: invalid Action, row must be a Get, Delete or Put.");
+            throw new DoNotRetryIOException("Invalid Action, row must be a Get, Delete or Put.");
+          }
+        } catch (IOException ex) {
+          response.add(regionName, originalIndex, ex);
+        }
+      }
+
+      // We do the puts with region.put so we can get the batching efficiency
+      // we need. All this data munging doesn't seem great, but at least
+      // we aren't copying bytes or anything.
+      if (!puts.isEmpty()) {
+        try {
+          HRegion region = getRegion(regionName);
+
+          if (!region.getRegionInfo().isMetaTable()) {
+            this.cacheFlusher.reclaimMemStoreMemory();
+          }
+
+          List<Pair<Put,Integer>> putsWithLocks =
+              Lists.newArrayListWithCapacity(puts.size());
+          for (Action a : puts) {
+            Put p = (Put) a.getAction();
+
+            Integer lock;
+            try {
+              lock = getLockFromId(p.getLockId());
+            } catch (UnknownRowLockException ex) {
+              response.add(regionName, a.getOriginalIndex(), ex);
+              continue;
+            }
+            putsWithLocks.add(new Pair<Put, Integer>(p, lock));
+          }
+
+          this.requestCount.addAndGet(puts.size());
+
+          OperationStatusCode[] codes =
+              region.put(putsWithLocks.toArray(new Pair[]{}));
+
+          for (int i = 0; i < codes.length; i++) {
+            OperationStatusCode code = codes[i];
+
+            Action theAction = puts.get(i);
+            Object result = null;
+
+            if (code == OperationStatusCode.SUCCESS) {
+              result = new Result();
+            } else if (code == OperationStatusCode.BAD_FAMILY) {
+              result = new NoSuchColumnFamilyException();
+            }
+            // FAILURE && NOT_RUN becomes null, aka: need to run again.
+
+            response.add(regionName, theAction.getOriginalIndex(), result);
+          }
+        } catch (IOException ioe) {
+          // fail all the puts with the ioe in question.
+          for (Action a: puts) {
+            response.add(regionName, a.getOriginalIndex(), ioe);
+          }
+        }
+      }
+    }
+    return response;
+  }
+
+  /**
+   * @deprecated Use HRegionServer.multi(MultiAction action) instead
+   */
+  @Override
+  public MultiPutResponse multiPut(MultiPut puts) throws IOException {
+    MultiPutResponse resp = new MultiPutResponse();
+
+    // do each region as its own batch.
+    for (Map.Entry<byte[], List<Put>> e : puts.puts.entrySet()) {
+      int result = put(e.getKey(), e.getValue());
+      resp.addResult(e.getKey(), result);
+
+      e.getValue().clear(); // clear some RAM
+    }
+
+    return resp;
+  }
+
+  public String toString() {
+    return this.serverInfo.toString();
+  }
+
+  /**
+   * Interval at which threads should run
+   *
+   * @return the interval
+   */
+  public int getThreadWakeFrequency() {
+    return threadWakeFrequency;
+  }
+
+  @Override
+  public ZooKeeperWatcher getZooKeeper() {
+    return zooKeeper;
+  }
+
+  @Override
+  public String getServerName() {
+    return serverInfo.getServerName();
+  }
+
+  @Override
+  public CompactionRequestor getCompactionRequester() {
+    return this.compactSplitThread;
+  }
+
+  //
+  // Main program and support routines
+  //
+
+  /**
+   * @param hrs
+   * @return Thread the RegionServer is running in correctly named.
+   * @throws IOException
+   */
+  public static Thread startRegionServer(final HRegionServer hrs)
+      throws IOException {
+    return startRegionServer(hrs, "regionserver"
+        + hrs.getServerInfo().getServerAddress().getPort());
+  }
+
+  /**
+   * @param hrs
+   * @param name
+   * @return Thread the RegionServer is running in correctly named.
+   * @throws IOException
+   */
+  public static Thread startRegionServer(final HRegionServer hrs,
+      final String name) throws IOException {
+    Thread t = new Thread(hrs);
+    t.setName(name);
+    t.start();
+    // Install shutdown hook that will catch signals and run an orderly shutdown
+    // of the hrs.
+    ShutdownHook.install(hrs.getConfiguration(), FileSystem.get(hrs
+        .getConfiguration()), hrs, t);
+    return t;
+  }
+
+  /**
+   * Utility for constructing an instance of the passed HRegionServer class.
+   *
+   * @param regionServerClass
+   * @param conf2
+   * @return HRegionServer instance.
+   */
+  public static HRegionServer constructRegionServer(
+      Class<? extends HRegionServer> regionServerClass,
+      final Configuration conf2) {
+    try {
+      Constructor<? extends HRegionServer> c = regionServerClass
+          .getConstructor(Configuration.class);
+      return c.newInstance(conf2);
+    } catch (Exception e) {
+      throw new RuntimeException("Failed construction of " + "Regionserver: "
+          + regionServerClass.toString(), e);
+    }
+  }
+
+  @Override
+  public void replicateLogEntries(final HLog.Entry[] entries)
+  throws IOException {
+    if (this.replicationHandler == null) return;
+    this.replicationHandler.replicateLogEntries(entries);
+  }
+
+
+  /**
+   * @see org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine
+   */
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    @SuppressWarnings("unchecked")
+    Class<? extends HRegionServer> regionServerClass = (Class<? extends HRegionServer>) conf
+        .getClass(HConstants.REGION_SERVER_IMPL, HRegionServer.class);
+
+    new HRegionServerCommandLine(regionServerClass).doMain(args);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServerCommandLine.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServerCommandLine.java
new file mode 100644
index 0000000..71b9985
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServerCommandLine.java
@@ -0,0 +1,87 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.util.ServerCommandLine;
+
+/**
+ * Class responsible for parsing the command line and starting the
+ * RegionServer.
+ */
+public class HRegionServerCommandLine extends ServerCommandLine {
+  private static final Log LOG = LogFactory.getLog(HRegionServerCommandLine.class);
+
+  private final Class<? extends HRegionServer> regionServerClass;
+
+  private static final String USAGE =
+    "Usage: HRegionServer [-D conf.param=value] start";
+
+  public HRegionServerCommandLine(Class<? extends HRegionServer> clazz) {
+    this.regionServerClass = clazz;
+  }
+
+  protected String getUsage() {
+    return USAGE;
+  }
+
+  private int start() throws Exception {
+    Configuration conf = getConf();
+
+    // If 'local', don't start a region server here. Defer to
+    // LocalHBaseCluster. It manages 'local' clusters.
+    if (LocalHBaseCluster.isLocal(conf)) {
+      LOG.warn("Not starting a distinct region server because "
+               + HConstants.CLUSTER_DISTRIBUTED + " is false");
+    } else {
+      logJVMInfo();
+      HRegionServer hrs = HRegionServer.constructRegionServer(regionServerClass, conf);
+      HRegionServer.startRegionServer(hrs);
+    }
+    return 0;
+  }
+
+  public int run(String args[]) throws Exception {
+    if (args.length != 1) {
+      usage(null);
+      return -1;
+    }
+
+    String cmd = args[0];
+
+    if ("start".equals(cmd)) {
+      return start();
+    } else if ("stop".equals(cmd)) {
+      System.err.println(
+        "To shutdown the regionserver run " +
+        "bin/hbase-daemon.sh stop regionserver or send a kill signal to" +
+        "the regionserver pid");
+      return -1;
+    } else {
+      usage("Unknown command: " + args[0]);
+      return -1;
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java
new file mode 100644
index 0000000..db2e02d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Scan;
+
+/**
+ * Special internal-only scanner, currently used for increment operations to
+ * allow additional server-side arguments for Scan operations.
+ * <p>
+ * Rather than adding new options/parameters to the public Scan API, this new
+ * class has been created.
+ * <p>
+ * Supports adding an option to only read from the MemStore with
+ * {@link #checkOnlyMemStore()} or to only read from StoreFiles with
+ * {@link #checkOnlyStoreFiles()}.
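+ * <p>
+ * A minimal, hypothetical usage sketch (the variable name is illustrative):
+ * <pre>
+ *   InternalScan iscan = new InternalScan(get);
+ *   iscan.checkOnlyMemStore();   // or iscan.checkOnlyStoreFiles()
+ * </pre>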
+ */
+class InternalScan extends Scan {
+  private boolean memOnly = false;
+  private boolean filesOnly = false;
+
+  /**
+   * @param get get to model scan after
+   */
+  public InternalScan(Get get) {
+    super(get);
+  }
+
+  /**
+   * StoreFiles will not be scanned. Only MemStore will be scanned.
+   */
+  public void checkOnlyMemStore() {
+    memOnly = true;
+    filesOnly = false;
+  }
+
+  /**
+   * MemStore will not be scanned. Only StoreFiles will be scanned.
+   */
+  public void checkOnlyStoreFiles() {
+    memOnly = false;
+    filesOnly = true;
+  }
+
+  /**
+   * Returns true if only the MemStore should be checked.  False if not.
+   * @return true to only check MemStore
+   */
+  public boolean isCheckOnlyMemStore() {
+    return (memOnly);
+  }
+
+  /**
+   * Returns true if only StoreFiles should be checked.  False if not.
+   * @return true if only check StoreFiles
+   */
+  public boolean isCheckOnlyStoreFiles() {
+    return (filesOnly);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java
new file mode 100644
index 0000000..0f5f36c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java
@@ -0,0 +1,66 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Internal scanners differ from client-side scanners in that they operate on
+ * HStoreKeys and byte[] instead of RowResults. This is because they are
+ * actually closer to how the data is physically stored, and therefore it is
+ * more convenient to interact with them that way. It is also much easier to
+ * merge the results across SortedMaps than RowResults.
+ *
+ * <p>Additionally, we need to be able to determine if the scanner is doing
+ * wildcard column matches (when only a column family is specified or if a
+ * column regex is specified) or if multiple members of the same column family
+ * were specified. If so, we need to ignore the timestamp to ensure that we get
+ * all the family members, as they may have been last updated at different
+ * times.
+ */
+public interface InternalScanner extends Closeable {
+  /**
+   * Grab the next row's worth of values.
+   * @param results return output array
+   * @return true if more rows exist after this one, false if scanner is done
+   * @throws IOException e
+   */
+  public boolean next(List<KeyValue> results) throws IOException;
+
+  /**
+   * Grab the next row's worth of values with a limit on the number of values
+   * to return.
+   * @param result return output array
+   * @param limit limit on row count to get
+   * @return true if more rows exist after this one, false if scanner is done
+   * @throws IOException e
+   */
+  public boolean next(List<KeyValue> result, int limit) throws IOException;
+
+  /**
+   * Closes the scanner and releases any resources it has allocated
+   * @throws IOException
+   */
+  public void close() throws IOException;
+}
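A hedged sketch of how the InternalScanner contract above is typically consumed (illustrative only; the helper name is invented): next(List) fills the list with one row's worth of KeyValues and returns whether more rows remain.

package org.apache.hadoop.hbase.regionserver;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;

public class InternalScannerExample {
  /** Drain a scanner row by row and return the total number of KeyValues seen. */
  public static long countKeyValues(InternalScanner scanner) throws IOException {
    long count = 0;
    List<KeyValue> row = new ArrayList<KeyValue>();
    boolean moreRows;
    do {
      moreRows = scanner.next(row); // one row's worth of KeyValues per call
      count += row.size();
      row.clear();
    } while (moreRows);
    scanner.close();
    return count;
  }
}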
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueHeap.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueHeap.java
new file mode 100644
index 0000000..9d9895c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueHeap.java
@@ -0,0 +1,276 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+
+import java.io.IOException;
+import java.util.Comparator;
+import java.util.List;
+import java.util.PriorityQueue;
+
+/**
+ * Implements a heap merge across any number of KeyValueScanners.
+ * <p>
+ * Implements KeyValueScanner itself.
+ * <p>
+ * This class is used at the Region level to merge across Stores
+ * and at the Store level to merge across the memstore and StoreFiles.
+ * <p>
+ * In the Region case, we also need InternalScanner.next(List), so this class
+ * also implements InternalScanner.  WARNING: As is, if you try to use this
+ * as an InternalScanner at the Store level, you will get runtime exceptions.
+ */
+public class KeyValueHeap implements KeyValueScanner, InternalScanner {
+  private PriorityQueue<KeyValueScanner> heap = null;
+  private KeyValueScanner current = null;
+  private KVScannerComparator comparator;
+
+  /**
+   * Constructor.  This KeyValueHeap will handle closing of passed in
+   * KeyValueScanners.
+   * @param scanners
+   * @param comparator
+   */
+  public KeyValueHeap(List<? extends KeyValueScanner> scanners,
+      KVComparator comparator) {
+    this.comparator = new KVScannerComparator(comparator);
+    if (!scanners.isEmpty()) {
+      this.heap = new PriorityQueue<KeyValueScanner>(scanners.size(),
+          this.comparator);
+      for (KeyValueScanner scanner : scanners) {
+        if (scanner.peek() != null) {
+          this.heap.add(scanner);
+        } else {
+          scanner.close();
+        }
+      }
+      this.current = heap.poll();
+    }
+  }
+
+  public KeyValue peek() {
+    if (this.current == null) {
+      return null;
+    }
+    return this.current.peek();
+  }
+
+  public KeyValue next()  throws IOException {
+    if(this.current == null) {
+      return null;
+    }
+    KeyValue kvReturn = this.current.next();
+    KeyValue kvNext = this.current.peek();
+    if (kvNext == null) {
+      this.current.close();
+      this.current = this.heap.poll();
+    } else {
+      KeyValueScanner topScanner = this.heap.peek();
+      if (topScanner == null ||
+          this.comparator.compare(kvNext, topScanner.peek()) >= 0) {
+        this.heap.add(this.current);
+        this.current = this.heap.poll();
+      }
+    }
+    return kvReturn;
+  }
+
+  /**
+   * Gets the next row of keys from the top-most scanner.
+   * <p>
+   * This method takes care of updating the heap.
+   * <p>
+   * This can ONLY be called when you are using Scanners that implement
+   * InternalScanner as well as KeyValueScanner (a {@link StoreScanner}).
+   * @param result
+   * @param limit
+   * @return true if there are more keys, false if all scanners are done
+   */
+  public boolean next(List<KeyValue> result, int limit) throws IOException {
+    if (this.current == null) {
+      return false;
+    }
+    InternalScanner currentAsInternal = (InternalScanner)this.current;
+    boolean mayContainsMoreRows = currentAsInternal.next(result, limit);
+    KeyValue pee = this.current.peek();
+    /*
+     * By definition, any InternalScanner must return false only when it has no
+     * further rows to be fetched. So, we can close a scanner if it returns
+     * false. All existing implementations seem to be fine with this. It is much
+     * more efficient to close scanners which are not needed than keep them in
+     * the heap. This is also required for certain optimizations.
+     */
+    if (pee == null || !mayContainsMoreRows) {
+      this.current.close();
+    } else {
+      this.heap.add(this.current);
+    }
+    this.current = this.heap.poll();
+    return (this.current != null);
+  }
+
+  /**
+   * Gets the next row of keys from the top-most scanner.
+   * <p>
+   * This method takes care of updating the heap.
+   * <p>
+   * This can ONLY be called when you are using Scanners that implement
+   * InternalScanner as well as KeyValueScanner (a {@link StoreScanner}).
+   * @param result
+   * @return true if there are more keys, false if all scanners are done
+   */
+  public boolean next(List<KeyValue> result) throws IOException {
+    return next(result, -1);
+  }
+
+  private static class KVScannerComparator implements Comparator<KeyValueScanner> {
+    private KVComparator kvComparator;
+    /**
+     * Constructor
+     * @param kvComparator
+     */
+    public KVScannerComparator(KVComparator kvComparator) {
+      this.kvComparator = kvComparator;
+    }
+    public int compare(KeyValueScanner left, KeyValueScanner right) {
+      int comparison = compare(left.peek(), right.peek());
+      if (comparison != 0) {
+        return comparison;
+      } else {
+        // Since both the keys are exactly the same, we break the tie in favor
+        // of the key which came latest.
+        long leftSequenceID = left.getSequenceID();
+        long rightSequenceID = right.getSequenceID();
+        if (leftSequenceID > rightSequenceID) {
+          return -1;
+        } else if (leftSequenceID < rightSequenceID) {
+          return 1;
+        } else {
+          return 0;
+        }
+      }
+    }
+    /**
+     * Compares two KeyValue
+     * @param left
+     * @param right
+     * @return less than 0 if left is smaller, 0 if equal, etc.
+     */
+    public int compare(KeyValue left, KeyValue right) {
+      return this.kvComparator.compare(left, right);
+    }
+    /**
+     * @return KVComparator
+     */
+    public KVComparator getComparator() {
+      return this.kvComparator;
+    }
+  }
+
+  public void close() {
+    if (this.current != null) {
+      this.current.close();
+    }
+    if (this.heap != null) {
+      KeyValueScanner scanner;
+      while ((scanner = this.heap.poll()) != null) {
+        scanner.close();
+      }
+    }
+  }
+
+  /**
+   * Seeks all scanners at or after the specified seek key.  If we bailed out
+   * of a row early, we may otherwise skip values that were never reached.
+   * Rather than iterating down, we want to give the scanners the opportunity
+   * to re-seek.
+   * <p>
+   * As individual scanners may run past their ends, those scanners are
+   * automatically closed and removed from the heap.
+   * @param seekKey KeyValue to seek at or after
+   * @return true if KeyValues exist at or after specified key, false if not
+   * @throws IOException
+   */
+  public boolean seek(KeyValue seekKey) throws IOException {
+    if (this.current == null) {
+      return false;
+    }
+    this.heap.add(this.current);
+    this.current = null;
+
+    KeyValueScanner scanner;
+    while((scanner = this.heap.poll()) != null) {
+      KeyValue topKey = scanner.peek();
+      if(comparator.getComparator().compare(seekKey, topKey) <= 0) { // Correct?
+        // Top KeyValue is at-or-after Seek KeyValue
+        this.current = scanner;
+        return true;
+      }
+      if(!scanner.seek(seekKey)) {
+        scanner.close();
+      } else {
+        this.heap.add(scanner);
+      }
+    }
+    // Heap is returning empty, scanner is done
+    return false;
+  }
+
+  public boolean reseek(KeyValue seekKey) throws IOException {
+    // This method is identical to seek(KeyValue) except that
+    // scanner.seek(seekKey) is replaced with scanner.reseek(seekKey).
+    if (this.current == null) {
+      return false;
+    }
+    this.heap.add(this.current);
+    this.current = null;
+
+    KeyValueScanner scanner;
+    while ((scanner = this.heap.poll()) != null) {
+      KeyValue topKey = scanner.peek();
+      if (comparator.getComparator().compare(seekKey, topKey) <= 0) {
+        // Top KeyValue is at-or-after Seek KeyValue
+        this.current = scanner;
+        return true;
+      }
+      if (!scanner.reseek(seekKey)) {
+        scanner.close();
+      } else {
+        this.heap.add(scanner);
+      }
+    }
+    // Heap is returning empty, scanner is done
+    return false;
+  }
+
+  /**
+   * @return the current Heap
+   */
+  public PriorityQueue<KeyValueScanner> getHeap() {
+    return this.heap;
+  }
+
+  @Override
+  public long getSequenceID() {
+    return 0;
+  }
+}
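A small sketch of the merge KeyValueHeap performs (illustrative, not part of the patch; KeyValue.COMPARATOR is the stock comparator shipped with KeyValue): build a heap over several scanners and pull the globally smallest KeyValue.

package org.apache.hadoop.hbase.regionserver;

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;

public class KeyValueHeapExample {
  /** Return the smallest KeyValue across all scanners, or null if all are empty. */
  public static KeyValue firstAcross(List<? extends KeyValueScanner> scanners)
      throws IOException {
    KeyValueHeap heap = new KeyValueHeap(scanners, KeyValue.COMPARATOR);
    try {
      return heap.next();  // the heap keeps the scanner with the lowest peek() on top
    } finally {
      heap.close();        // closes every scanner still held by the heap
    }
  }
}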
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java
new file mode 100644
index 0000000..6cdada7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueScanner.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+/**
+ * Scanner that returns the next KeyValue.
+ */
+public interface KeyValueScanner {
+  /**
+   * Look at the next KeyValue in this scanner, but do not iterate scanner.
+   * @return the next KeyValue
+   */
+  public KeyValue peek();
+
+  /**
+   * Return the next KeyValue in this scanner, iterating the scanner
+   * @return the next KeyValue
+   */
+  public KeyValue next() throws IOException;
+
+  /**
+   * Seek the scanner at or after the specified KeyValue.
+   * @param key seek value
+   * @return true if scanner has values left, false if end of scanner
+   */
+  public boolean seek(KeyValue key) throws IOException;
+
+  /**
+   * Reseek the scanner at or after the specified KeyValue.
+   * This method is guaranteed to seek at or after the required key only if the
+   * key comes after the current position of the scanner. It should not be used
+   * to seek to a key which may come before the current position.
+   * @param key seek value (should be non-null)
+   * @return true if scanner has values left, false if end of scanner
+   */
+  public boolean reseek(KeyValue key) throws IOException;
+
+  /**
+   * Get the sequence id associated with this KeyValueScanner. This is required
+   * for comparing multiple files to find out which one has the latest data.
+   * The default implementation for this would be to return 0. A file having
+   * lower sequence id will be considered to be the older one.
+   */
+  public long getSequenceID();
+
+  /**
+   * Close the KeyValue scanner.
+   */
+  public void close();
+}
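To make the KeyValueScanner contract concrete, here is a hypothetical in-memory implementation backed by a SortedSet (a sketch only; the real implementations are the MemStore and StoreFile scanners).

package org.apache.hadoop.hbase.regionserver;

import java.io.IOException;
import java.util.Iterator;
import java.util.SortedSet;

import org.apache.hadoop.hbase.KeyValue;

/** Hypothetical scanner over an in-memory sorted set of KeyValues. */
class SortedSetKeyValueScanner implements KeyValueScanner {
  private final SortedSet<KeyValue> kvs;
  private Iterator<KeyValue> iter;
  private KeyValue current;

  SortedSetKeyValueScanner(SortedSet<KeyValue> kvs) {
    this.kvs = kvs;
    this.iter = kvs.iterator();
    this.current = iter.hasNext() ? iter.next() : null;
  }

  public KeyValue peek() {
    return current;
  }

  public KeyValue next() throws IOException {
    KeyValue ret = current;
    current = iter.hasNext() ? iter.next() : null;
    return ret;
  }

  public boolean seek(KeyValue key) throws IOException {
    // Position at the first KeyValue at or after the seek key.
    iter = kvs.tailSet(key).iterator();
    current = iter.hasNext() ? iter.next() : null;
    return current != null;
  }

  public boolean reseek(KeyValue key) throws IOException {
    return seek(key); // correct, though it does not exploit the current position
  }

  public long getSequenceID() {
    return 0; // in-memory data; a lower sequence id is considered older
  }

  public void close() {
    current = null;
  }
}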
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueSkipListSet.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueSkipListSet.java
new file mode 100644
index 0000000..95111b4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueSkipListSet.java
@@ -0,0 +1,205 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.SortedSet;
+import java.util.concurrent.ConcurrentNavigableMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+
+/**
+ * A {@link java.util.Set} of {@link KeyValue}s implemented on top of a
+ * {@link java.util.concurrent.ConcurrentSkipListMap}.  Works like a
+ * {@link java.util.concurrent.ConcurrentSkipListSet} in all but one regard:
+ * An add will overwrite the existing entry if there already is one for the
+ * added key.  In other words, where CSLS "Adds the specified element to this
+ * set if it is not already present", this implementation adds the specified
+ * element to this set EVEN if it is already present, overwriting what was
+ * there previously.  The call to add returns true if there was no entry for
+ * the key in the backing map, or false if there was an entry with the same
+ * key (though the value may be different).
+ * <p>Otherwise, it has the same attributes as ConcurrentSkipListSet: it is
+ * tolerant of concurrent gets and sets and won't throw
+ * ConcurrentModificationException when iterating.
+ */
+class KeyValueSkipListSet implements NavigableSet<KeyValue> {
+  private final ConcurrentNavigableMap<KeyValue, KeyValue> delegatee;
+
+  KeyValueSkipListSet(final KeyValue.KVComparator c) {
+    this.delegatee = new ConcurrentSkipListMap<KeyValue, KeyValue>(c);
+  }
+
+  KeyValueSkipListSet(final ConcurrentNavigableMap<KeyValue, KeyValue> m) {
+    this.delegatee = m;
+  }
+
+  /**
+   * Iterator that maps Iterator calls to return the value component of the
+   * passed-in Map.Entry Iterator.
+   */
+  static class MapEntryIterator implements Iterator<KeyValue> {
+    private final Iterator<Map.Entry<KeyValue, KeyValue>> iterator;
+
+    MapEntryIterator(final Iterator<Map.Entry<KeyValue, KeyValue>> i) {
+      this.iterator = i;
+    }
+
+    public boolean hasNext() {
+      return this.iterator.hasNext();
+    }
+
+    public KeyValue next() {
+      return this.iterator.next().getValue();
+    }
+
+    public void remove() {
+      this.iterator.remove();
+    }
+  }
+
+  public KeyValue ceiling(KeyValue e) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public Iterator<KeyValue> descendingIterator() {
+    return new MapEntryIterator(this.delegatee.descendingMap().entrySet().
+      iterator());
+  }
+
+  public NavigableSet<KeyValue> descendingSet() {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public KeyValue floor(KeyValue e) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public SortedSet<KeyValue> headSet(final KeyValue toElement) {
+    return headSet(toElement, false);
+  }
+
+  public NavigableSet<KeyValue> headSet(final KeyValue toElement,
+      boolean inclusive) {
+    return new KeyValueSkipListSet(this.delegatee.headMap(toElement, inclusive));
+  }
+
+  public KeyValue higher(KeyValue e) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public Iterator<KeyValue> iterator() {
+    return new MapEntryIterator(this.delegatee.entrySet().iterator());
+  }
+
+  public KeyValue lower(KeyValue e) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public KeyValue pollFirst() {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public KeyValue pollLast() {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public SortedSet<KeyValue> subSet(KeyValue fromElement, KeyValue toElement) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public NavigableSet<KeyValue> subSet(KeyValue fromElement,
+      boolean fromInclusive, KeyValue toElement, boolean toInclusive) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public SortedSet<KeyValue> tailSet(KeyValue fromElement) {
+    return tailSet(fromElement, true);
+  }
+
+  public NavigableSet<KeyValue> tailSet(KeyValue fromElement, boolean inclusive) {
+    return new KeyValueSkipListSet(this.delegatee.tailMap(fromElement, inclusive));
+  }
+
+  public Comparator<? super KeyValue> comparator() {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public KeyValue first() {
+    return this.delegatee.get(this.delegatee.firstKey());
+  }
+
+  public KeyValue last() {
+    return this.delegatee.get(this.delegatee.lastKey());
+  }
+
+  public boolean add(KeyValue e) {
+    return this.delegatee.put(e, e) == null;
+  }
+
+  public boolean addAll(Collection<? extends KeyValue> c) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public void clear() {
+    this.delegatee.clear();
+  }
+
+  public boolean contains(Object o) {
+    //noinspection SuspiciousMethodCalls
+    return this.delegatee.containsKey(o);
+  }
+
+  public boolean containsAll(Collection<?> c) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public boolean isEmpty() {
+    return this.delegatee.isEmpty();
+  }
+
+  public boolean remove(Object o) {
+    return this.delegatee.remove(o) != null;
+  }
+
+  public boolean removeAll(Collection<?> c) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public boolean retainAll(Collection<?> c) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public int size() {
+    return this.delegatee.size();
+  }
+
+  public Object[] toArray() {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+
+  public <T> T[] toArray(T[] a) {
+    throw new UnsupportedOperationException("Not implemented");
+  }
+}
\ No newline at end of file
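A short sketch of the overwrite semantics described in the KeyValueSkipListSet class comment (illustrative only; it assumes same-package access since the class and its constructors are package-private).

package org.apache.hadoop.hbase.regionserver;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyValueSkipListSetExample {
  public static void main(String[] args) {
    KeyValueSkipListSet set = new KeyValueSkipListSet(KeyValue.COMPARATOR);
    KeyValue first = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("f"),
        Bytes.toBytes("q"), 1L, Bytes.toBytes("v1"));
    KeyValue second = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("f"),
        Bytes.toBytes("q"), 1L, Bytes.toBytes("v2"));
    System.out.println(set.add(first));   // true: no previous entry for this key
    System.out.println(set.add(second));  // false: same key, old value overwritten
    // The surviving entry carries the value from the second add.
    System.out.println(Bytes.toString(set.first().getValue()));  // "v2"
  }
}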
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java
new file mode 100644
index 0000000..cafbb28
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Reports a problem with a lease
+ */
+public class LeaseException extends DoNotRetryIOException {
+
+  private static final long serialVersionUID = 8179703995292418650L;
+
+  /** default constructor */
+  public LeaseException() {
+    super();
+  }
+
+  /**
+   * @param message
+   */
+  public LeaseException(String message) {
+    super(message);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseListener.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseListener.java
new file mode 100644
index 0000000..a843736
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseListener.java
@@ -0,0 +1,34 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+
+/**
+ * LeaseListener is an interface meant to be implemented by users of the Leases
+ * class.
+ *
+ * It receives events from the Leases class about the status of its accompanying
+ * lease.  Users of the Leases class can provide a LeaseListener implementation
+ * to, for example, clean up resources after a lease has expired.
+ */
+public interface LeaseListener {
+  /** When a lease expires, this method is called. */
+  public void leaseExpired();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/Leases.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/Leases.java
new file mode 100644
index 0000000..5a4b275
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/Leases.java
@@ -0,0 +1,282 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.util.ConcurrentModificationException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.Delayed;
+import java.util.concurrent.DelayQueue;
+import java.util.concurrent.TimeUnit;
+
+import java.io.IOException;
+
+/**
+ * Leases
+ *
+ * There are several server classes in HBase that need to track external
+ * clients that occasionally send heartbeats.
+ *
+ * <p>These external clients hold resources in the server class.
+ * Those resources need to be released if the external client fails to send a
+ * heartbeat after some interval of time passes.
+ *
+ * <p>The Leases class is a general reusable class for this kind of pattern.
+ * An instance of the Leases class will create a thread to do its dirty work.
+ * You should close() the instance if you want to clean up the thread properly.
+ *
+ * <p>
+ * NOTE: This class extends Thread rather than Chore because the sleep time
+ * can be interrupted when there is something to do, rather than the Chore
+ * sleep time which is invariant.
+ */
+public class Leases extends Thread {
+  private static final Log LOG = LogFactory.getLog(Leases.class.getName());
+  private final int leasePeriod;
+  private final int leaseCheckFrequency;
+  private volatile DelayQueue<Lease> leaseQueue = new DelayQueue<Lease>();
+  protected final Map<String, Lease> leases = new HashMap<String, Lease>();
+  private volatile boolean stopRequested = false;
+
+  /**
+   * Creates a lease monitor
+   *
+   * @param leasePeriod - length of time (milliseconds) that the lease is valid
+   * @param leaseCheckFrequency - how often the lease should be checked
+   * (milliseconds)
+   */
+  public Leases(final int leasePeriod, final int leaseCheckFrequency) {
+    this.leasePeriod = leasePeriod;
+    this.leaseCheckFrequency = leaseCheckFrequency;
+    setDaemon(true);
+  }
+
+  /**
+   * @see java.lang.Thread#run()
+   */
+  @Override
+  public void run() {
+    while (!stopRequested || (stopRequested && leaseQueue.size() > 0) ) {
+      Lease lease = null;
+      try {
+        lease = leaseQueue.poll(leaseCheckFrequency, TimeUnit.MILLISECONDS);
+      } catch (InterruptedException e) {
+        continue;
+      } catch (ConcurrentModificationException e) {
+        continue;
+      } catch (Throwable e) {
+        LOG.fatal("Unexpected exception killed leases thread", e);
+        break;
+      }
+      if (lease == null) {
+        continue;
+      }
+      // A lease expired.  Run the expired code before removing from queue
+      // since its presence in queue is used to see if lease exists still.
+      if (lease.getListener() == null) {
+        LOG.error("lease listener is null for lease " + lease.getLeaseName());
+      } else {
+        lease.getListener().leaseExpired();
+      }
+      synchronized (leaseQueue) {
+        leases.remove(lease.getLeaseName());
+      }
+    }
+    close();
+  }
+
+  /**
+   * Shuts down this lease instance when all outstanding leases expire.
+   * Like {@link #close()}, but rather than abruptly ending all leases, it
+   * waits first for extant leases to finish.  Use this method if the lease
+   * holders could lose data, leak locks, etc.  Presumes the client has shut
+   * down allocation of new leases.
+   */
+  public void closeAfterLeasesExpire() {
+    this.stopRequested = true;
+  }
+
+  /**
+   * Shut down this Leases instance.  All pending leases will be destroyed,
+   * without any cancellation calls.
+   */
+  public void close() {
+    LOG.info(Thread.currentThread().getName() + " closing leases");
+    this.stopRequested = true;
+    synchronized (leaseQueue) {
+      leaseQueue.clear();
+      leases.clear();
+      leaseQueue.notifyAll();
+    }
+    LOG.info(Thread.currentThread().getName() + " closed leases");
+  }
+
+  /**
+   * Obtain a lease
+   *
+   * @param leaseName name of the lease
+   * @param listener listener that will process lease expirations
+   * @throws LeaseStillHeldException
+   */
+  public void createLease(String leaseName, final LeaseListener listener)
+  throws LeaseStillHeldException {
+    if (stopRequested) {
+      return;
+    }
+    Lease lease = new Lease(leaseName, listener,
+        System.currentTimeMillis() + leasePeriod);
+    synchronized (leaseQueue) {
+      if (leases.containsKey(leaseName)) {
+        throw new LeaseStillHeldException(leaseName);
+      }
+      leases.put(leaseName, lease);
+      leaseQueue.add(lease);
+    }
+  }
+
+  /**
+   * Thrown if we are asked to create a lease but a lease on the passed name
+   * already exists.
+   */
+  @SuppressWarnings("serial")
+  public static class LeaseStillHeldException extends IOException {
+    private final String leaseName;
+
+    /**
+     * @param name
+     */
+    public LeaseStillHeldException(final String name) {
+      this.leaseName = name;
+    }
+
+    /** @return name of lease */
+    public String getName() {
+      return this.leaseName;
+    }
+  }
+
+  /**
+   * Renew a lease
+   *
+   * @param leaseName name of lease
+   * @throws LeaseException
+   */
+  public void renewLease(final String leaseName) throws LeaseException {
+    synchronized (leaseQueue) {
+      Lease lease = leases.get(leaseName);
+      // We need to check to see if the remove is successful as the poll in the run()
+      // method could have completed between the get and the remove which will result
+      // in a corrupt leaseQueue.
+      if (lease == null || !leaseQueue.remove(lease)) {
+        throw new LeaseException("lease '" + leaseName +
+                "' does not exist or has already expired");
+      }
+      lease.setExpirationTime(System.currentTimeMillis() + leasePeriod);
+      leaseQueue.add(lease);
+    }
+  }
+
+  /**
+   * Client explicitly cancels a lease.
+   *
+   * @param leaseName name of lease
+   * @throws LeaseException
+   */
+  public void cancelLease(final String leaseName) throws LeaseException {
+    synchronized (leaseQueue) {
+      Lease lease = leases.remove(leaseName);
+      if (lease == null) {
+        throw new LeaseException("lease '" + leaseName + "' does not exist");
+      }
+      leaseQueue.remove(lease);
+    }
+  }
+
+  /** This class tracks a single Lease. */
+  private static class Lease implements Delayed {
+    private final String leaseName;
+    private final LeaseListener listener;
+    private long expirationTime;
+
+    Lease(final String leaseName, LeaseListener listener, long expirationTime) {
+      this.leaseName = leaseName;
+      this.listener = listener;
+      this.expirationTime = expirationTime;
+    }
+
+    /** @return the lease name */
+    public String getLeaseName() {
+      return leaseName;
+    }
+
+    /** @return listener */
+    public LeaseListener getListener() {
+      return this.listener;
+    }
+
+    @Override
+    public boolean equals(Object obj) {
+      if (this == obj) {
+        return true;
+      }
+      if (obj == null) {
+        return false;
+      }
+      if (getClass() != obj.getClass()) {
+        return false;
+      }
+      return this.hashCode() == ((Lease) obj).hashCode();
+    }
+
+    @Override
+    public int hashCode() {
+      return this.leaseName.hashCode();
+    }
+
+    public long getDelay(TimeUnit unit) {
+      return unit.convert(this.expirationTime - System.currentTimeMillis(),
+          TimeUnit.MILLISECONDS);
+    }
+
+    public int compareTo(Delayed o) {
+      long delta = this.getDelay(TimeUnit.MILLISECONDS) -
+        o.getDelay(TimeUnit.MILLISECONDS);
+
+      return this.equals(o) ? 0 : (delta > 0 ? 1 : -1);
+    }
+
+    /** @param expirationTime the expirationTime to set */
+    public void setExpirationTime(long expirationTime) {
+      this.expirationTime = expirationTime;
+    }
+
+    /**
+     * Get the expiration time for that lease
+     * @return expiration time
+     */
+    public long getExpirationTime() {
+      return this.expirationTime;
+    }
+
+  }
+}
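A minimal sketch of the lease lifecycle using the Leases and LeaseListener classes above (illustrative; the lease name and timings are made up): start the monitor thread, register a lease with a listener, renew it on client heartbeats, and cancel or close when done.

package org.apache.hadoop.hbase.regionserver;

public class LeasesExample {
  public static void main(String[] args) throws Exception {
    // 60 second lease period, checked every second.
    Leases leases = new Leases(60 * 1000, 1000);
    leases.start();
    leases.createLease("scanner-1234", new LeaseListener() {
      public void leaseExpired() {
        System.out.println("lease scanner-1234 expired; releasing its resources");
      }
    });
    // A client heartbeat renews the lease; an explicit client close cancels it.
    leases.renewLease("scanner-1234");
    leases.cancelLease("scanner-1234");
    leases.close();
  }
}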
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
new file mode 100644
index 0000000..9ccf248
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java
@@ -0,0 +1,172 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.regionserver.wal.WALObserver;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Runs periodically to determine if the HLog should be rolled.
+ *
+ * NOTE: This class extends Thread rather than Chore because the sleep time
+ * can be interrupted when there is something to do, rather than the Chore
+ * sleep time which is invariant.
+ */
+class LogRoller extends Thread implements WALObserver {
+  static final Log LOG = LogFactory.getLog(LogRoller.class);
+  private final ReentrantLock rollLock = new ReentrantLock();
+  private final AtomicBoolean rollLog = new AtomicBoolean(false);
+  private final Server server;
+  private final RegionServerServices services;
+  private volatile long lastrolltime = System.currentTimeMillis();
+  // Period to roll log.
+  private final long rollperiod;
+  private final int threadWakeFrequency;
+
+  /** @param server */
+  public LogRoller(final Server server, final RegionServerServices services) {
+    super();
+    this.server = server;
+    this.services = services;
+    this.rollperiod = this.server.getConfiguration().
+      getLong("hbase.regionserver.logroll.period", 3600000);
+    this.threadWakeFrequency = this.server.getConfiguration().
+      getInt(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+  }
+
+  @Override
+  public void run() {
+    while (!server.isStopped()) {
+      long now = System.currentTimeMillis();
+      boolean periodic = false;
+      if (!rollLog.get()) {
+        periodic = (now - this.lastrolltime) > this.rollperiod;
+        if (!periodic) {
+          synchronized (rollLog) {
+            try {
+              rollLog.wait(this.threadWakeFrequency);
+            } catch (InterruptedException e) {
+              // Fall through
+            }
+          }
+          continue;
+        }
+        // Time for periodic roll
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Hlog roll period " + this.rollperiod + "ms elapsed");
+        }
+      }
+      rollLock.lock(); // FindBugs UL_UNRELEASED_LOCK_EXCEPTION_PATH
+      try {
+        this.lastrolltime = now;
+        // This is array of actual region names.
+        byte [][] regionsToFlush = this.services.getWAL().rollWriter();
+        if (regionsToFlush != null) {
+          for (byte [] r: regionsToFlush) scheduleFlush(r);
+        }
+      } catch (FailedLogCloseException e) {
+        server.abort("Failed log close in log roller", e);
+      } catch (java.net.ConnectException e) {
+        server.abort("Failed log close in log roller", e);
+      } catch (IOException ex) {
+        // Abort if we get here.  We probably won't recover an IOE. HBASE-1132
+        server.abort("IOE in log roller",
+          RemoteExceptionHandler.checkIOException(ex));
+      } catch (Exception ex) {
+        LOG.error("Log rolling failed", ex);
+        server.abort("Log rolling failed", ex);
+      } finally {
+        rollLog.set(false);
+        rollLock.unlock();
+      }
+    }
+    LOG.info("LogRoller exiting.");
+  }
+
+  /**
+   * @param region Encoded name of region to flush.
+   */
+  private void scheduleFlush(final byte [] region) {
+    boolean scheduled = false;
+    HRegion r = this.services.getFromOnlineRegions(Bytes.toString(region));
+    FlushRequester requester = null;
+    if (r != null) {
+      requester = this.services.getFlushRequester();
+      if (requester != null) {
+        requester.requestFlush(r);
+        scheduled = true;
+      }
+    }
+    if (!scheduled) {
+    LOG.warn("Failed to schedule flush of " +
+      Bytes.toString(region) + "r=" + r + ", requester=" + requester);
+    }
+  }
+
+  public void logRollRequested() {
+    synchronized (rollLog) {
+      rollLog.set(true);
+      rollLog.notifyAll();
+    }
+  }
+
+  /**
+   * Called by the region server to wake up this thread if it is sleeping.
+   * It is sleeping if rollLock is not held.
+   */
+  public void interruptIfNecessary() {
+    try {
+      rollLock.lock();
+      this.interrupt();
+    } finally {
+      rollLock.unlock();
+    }
+  }
+
+  @Override
+  public void logRolled(Path newFile) {
+    // Not interested
+  }
+
+  @Override
+  public void visitLogEntryBeforeWrite(HRegionInfo info, HLogKey logKey,
+      WALEdit logEdit) {
+    // Not interested.
+  }
+
+  @Override
+  public void logCloseRequested() {
+    // not interested
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LruHashMap.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LruHashMap.java
new file mode 100644
index 0000000..161ae18
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/LruHashMap.java
@@ -0,0 +1,1099 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+/**
+ * The LruHashMap is a memory-aware HashMap with a configurable maximum
+ * memory footprint.
+ * <p>
+ * It maintains an ordered list of all entries in the map ordered by
+ * access time.  When space needs to be freed because the maximum has been
+ * reached, or the application has asked to free memory, entries will be
+ * evicted according to an LRU (least-recently-used) algorithm.  That is,
+ * those entries which have not been accessed the longest will be evicted
+ * first.
+ * <p>
+ * Both the Key and Value Objects used for this class must extend
+ * <code>HeapSize</code> in order to track heap usage.
+ * <p>
+ * This class contains internal synchronization and is thread-safe.
+ */
+public class LruHashMap<K extends HeapSize, V extends HeapSize>
+implements HeapSize, Map<K,V> {
+
+  static final Log LOG = LogFactory.getLog(LruHashMap.class);
+
+  /** The default size (in bytes) of the LRU */
+  private static final long DEFAULT_MAX_MEM_USAGE = 50000;
+  /** The default capacity of the hash table */
+  private static final int DEFAULT_INITIAL_CAPACITY = 16;
+  /** The maximum capacity of the hash table */
+  private static final int MAXIMUM_CAPACITY = 1 << 30;
+  /** The default load factor to use */
+  private static final float DEFAULT_LOAD_FACTOR = 0.75f;
+
+  /** Memory overhead of this Object (for HeapSize) */
+  private static final int OVERHEAD = 5 * Bytes.SIZEOF_LONG +
+    2 * Bytes.SIZEOF_INT + 2 * Bytes.SIZEOF_FLOAT + 3 * ClassSize.REFERENCE +
+    1 * ClassSize.ARRAY;
+
+  /** Load factor allowed (usually 75%) */
+  private final float loadFactor;
+  /** Number of key/vals in the map */
+  private int size;
+  /** Size at which we grow hash */
+  private int threshold;
+  /** Entries in the map */
+  private Entry [] entries;
+
+  /** Pointer to least recently used entry */
+  private Entry<K,V> headPtr;
+  /** Pointer to most recently used entry */
+  private Entry<K,V> tailPtr;
+
+  /** Maximum memory usage of this map */
+  private long memTotal = 0;
+  /** Amount of available memory */
+  private long memFree = 0;
+
+  /** Number of successful (found) get() calls */
+  private long hitCount = 0;
+  /** Number of unsuccessful (not found) get() calls */
+  private long missCount = 0;
+
+  /**
+   * Constructs a new, empty map with the specified initial capacity,
+   * load factor, and maximum memory usage.
+   *
+   * @param initialCapacity the initial capacity
+   * @param loadFactor the load factor
+   * @param maxMemUsage the maximum total memory usage
+   * @throws IllegalArgumentException if the initial capacity is less than one
+   * @throws IllegalArgumentException if the initial capacity is greater than
+   * the maximum capacity
+   * @throws IllegalArgumentException if the load factor is <= 0
+   * @throws IllegalArgumentException if the max memory usage is too small
+   * to support the base overhead
+   */
+  public LruHashMap(int initialCapacity, float loadFactor,
+  long maxMemUsage) {
+    if (initialCapacity < 1) {
+      throw new IllegalArgumentException("Initial capacity must be > 0");
+    }
+    if (initialCapacity > MAXIMUM_CAPACITY) {
+      throw new IllegalArgumentException("Initial capacity is too large");
+    }
+    if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
+      throw new IllegalArgumentException("Load factor must be > 0");
+    }
+    if (maxMemUsage <= (OVERHEAD + initialCapacity * ClassSize.REFERENCE)) {
+      throw new IllegalArgumentException("Max memory usage too small to " +
+      "support base overhead");
+    }
+
+    /** Find a power of 2 >= initialCapacity */
+    int capacity = calculateCapacity(initialCapacity);
+    this.loadFactor = loadFactor;
+    this.threshold = calculateThreshold(capacity,loadFactor);
+    this.entries = new Entry[capacity];
+    this.memFree = maxMemUsage;
+    this.memTotal = maxMemUsage;
+    init();
+  }
+
+  /**
+   * Constructs a new, empty map with the specified initial capacity and
+   * load factor, and default maximum memory usage.
+   *
+   * @param initialCapacity the initial capacity
+   * @param loadFactor the load factor
+   * @throws IllegalArgumentException if the initial capacity is less than one
+   * @throws IllegalArgumentException if the initial capacity is greater than
+   * the maximum capacity
+   * @throws IllegalArgumentException if the load factor is <= 0
+   */
+  public LruHashMap(int initialCapacity, float loadFactor) {
+    this(initialCapacity, loadFactor, DEFAULT_MAX_MEM_USAGE);
+  }
+
+  /**
+   * Constructs a new, empty map with the specified initial capacity and
+   * with the default load factor and maximum memory usage.
+   *
+   * @param initialCapacity the initial capacity
+   * @throws IllegalArgumentException if the initial capacity is less than one
+   * @throws IllegalArgumentException if the initial capacity is greater than
+   * the maximum capacity
+   */
+  public LruHashMap(int initialCapacity) {
+    this(initialCapacity, DEFAULT_LOAD_FACTOR, DEFAULT_MAX_MEM_USAGE);
+  }
+
+  /**
+   * Constructs a new, empty map with the specified maximum memory usage
+   * and with default initial capacity and load factor.
+   *
+   * @param maxMemUsage the maximum total memory usage
+   * @throws IllegalArgumentException if the max memory usage is too small
+   * to support the base overhead
+   */
+  public LruHashMap(long maxMemUsage) {
+    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR,
+    maxMemUsage);
+  }
+
+  /**
+   * Constructs a new, empty map with the default initial capacity,
+   * load factor and maximum memory usage.
+   */
+  public LruHashMap() {
+    this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR,
+    DEFAULT_MAX_MEM_USAGE);
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Get the currently available memory for this LRU in bytes.
+   * This is (maxAllowed - currentlyUsed).
+   *
+   * @return currently available bytes
+   */
+  public long getMemFree() {
+    return memFree;
+  }
+
+  /**
+   * Get the maximum memory allowed for this LRU in bytes.
+   *
+   * @return maximum allowed bytes
+   */
+  public long getMemMax() {
+    return memTotal;
+  }
+
+  /**
+   * Get the currently used memory for this LRU in bytes.
+   *
+   * @return currently used memory in bytes
+   */
+  public long getMemUsed() {
+    return (memTotal - memFree); // FindBugs IS2_INCONSISTENT_SYNC
+  }
+
+  /**
+   * Get the number of hits to the map.  This is the number of times
+   * a call to get() returns a matched key.
+   *
+   * @return number of hits
+   */
+  public long getHitCount() {
+    return hitCount;
+  }
+
+  /**
+   * Get the number of misses to the map.  This is the number of times
+   * a call to get() returns null.
+   *
+   * @return number of misses
+   */
+  public long getMissCount() {
+    return missCount; // FindBugs IS2_INCONSISTENT_SYNC
+  }
+
+  /**
+   * Get the hit ratio.  This is the number of hits divided by the
+   * total number of requests.
+   *
+   * @return hit ratio (double between 0 and 1)
+   */
+  public double getHitRatio() {
+    return (double)((double)hitCount/
+      ((double)(hitCount+missCount)));
+  }
+
+  /**
+   * Free the requested amount of memory from the LRU map.
+   *
+   * This will do LRU eviction from the map until at least as much
+   * memory as requested is freed.  This does not affect the maximum
+   * memory usage parameter.
+   *
+   * @param requestedAmount memory to free from LRU in bytes
+   * @return actual amount of memory freed in bytes
+   */
+  public synchronized long freeMemory(long requestedAmount) throws Exception {
+    if(requestedAmount > (getMemUsed() - getMinimumUsage())) {
+      return clearAll();
+    }
+    long freedMemory = 0;
+    while(freedMemory < requestedAmount) {
+      freedMemory += evictFromLru();
+    }
+    return freedMemory;
+  }
+
+  /**
+   * The total memory usage of this map
+   *
+   * @return memory usage of map in bytes
+   */
+  public long heapSize() {
+    return (memTotal - memFree);
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Retrieves the value associated with the specified key.
+   *
+   * If an entry is found, it is updated in the LRU as the most recently
+   * used (last to be evicted) entry in the map.
+   *
+   * @param key the key
+   * @return the associated value, or null if none found
+   * @throws NullPointerException if key is null
+   */
+  public synchronized V get(Object key) {
+    checkKey((K)key);
+    int hash = hash(key);
+    int i = hashIndex(hash, entries.length);
+    Entry<K,V> e = entries[i];
+    while (true) {
+      if (e == null) {
+        missCount++;
+        return null;
+      }
+      if (e.hash == hash && isEqual(key, e.key))  {
+        // Hit!  Update position in LRU
+        hitCount++;
+        updateLru(e);
+        return e.value;
+      }
+      e = e.next;
+    }
+  }
+
+  /**
+   * Insert a key-value mapping into the map.
+   *
+   * Entry will be inserted as the most recently used.
+   *
+   * Both the key and value are required to be Objects and must
+   * implement the HeapSize interface.
+   *
+   * @param key the key
+   * @param value the value
+   * @return the value that was previously mapped to this key, null if none
+   * @throws UnsupportedOperationException if either objects do not
+   * implement HeapSize
+   * @throws NullPointerException if the key or value is null
+   */
+  public synchronized V put(K key, V value) {
+    checkKey(key);
+    checkValue(value);
+    int hash = hash(key);
+    int i = hashIndex(hash, entries.length);
+
+    // For old values
+    for (Entry<K,V> e = entries[i]; e != null; e = e.next) {
+      if (e.hash == hash && isEqual(key, e.key)) {
+        V oldValue = e.value;
+        long memChange = e.replaceValue(value);
+        checkAndFreeMemory(memChange);
+        // If replacing an old value for this key, update in LRU
+        updateLru(e);
+        return oldValue;
+      }
+    }
+    long memChange = addEntry(hash, key, value, i);
+    checkAndFreeMemory(memChange);
+    return null;
+  }
+
+  /**
+   * Deletes the mapping for the specified key if it exists.
+   *
+   * @param key the key of the entry to be removed from the map
+   * @return the value associated with the specified key, or null
+   * if no mapping exists.
+   */
+  public synchronized V remove(Object key) {
+    Entry<K,V> e = removeEntryForKey((K)key);
+    if(e == null) return null;
+    // Add freed memory back to available
+    memFree += e.heapSize();
+    return e.value;
+  }
+
+  /**
+   * Gets the size (number of entries) of the map.
+   *
+   * @return size of the map
+   */
+  public int size() {
+    return size;
+  }
+
+  /**
+   * Checks whether the map is currently empty.
+   *
+   * @return true if size of map is zero
+   */
+  public boolean isEmpty() {
+    return size == 0;
+  }
+
+  /**
+   * Clears all entries from the map.
+   *
+   * This frees all entries, tracking memory usage along the way.
+   * All references to entries are removed so they can be GC'd.
+   */
+  public synchronized void clear() {
+    memFree += clearAll();
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Checks whether there is a value in the map for the specified key.
+   *
+   * Does not affect the LRU.
+   *
+   * @param key the key to check
+   * @return true if the map contains a value for this key, false if not
+   * @throws NullPointerException if the key is null
+   */
+  public synchronized boolean containsKey(Object key) {
+    checkKey((K)key);
+    int hash = hash(key);
+    int i = hashIndex(hash, entries.length);
+    Entry e = entries[i];
+    while (e != null) {
+      if (e.hash == hash && isEqual(key, e.key))
+          return true;
+      e = e.next;
+    }
+    return false;
+  }
+
+  /**
+   * Checks whether this is a mapping which contains the specified value.
+   *
+   * Does not affect the LRU.  This is an inefficient operation.
+   *
+   * @param value the value to check
+   * @return true if the map contains an entry for this value, false
+   * if not
+   * @throws NullPointerException if the value is null
+   */
+  public synchronized boolean containsValue(Object value) {
+    checkValue((V)value);
+    Entry[] tab = entries;
+    for (int i = 0; i < tab.length ; i++)
+      for (Entry e = tab[i] ; e != null ; e = e.next)
+          if (value.equals(e.value))
+            return true;
+    return false;
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Enforces key constraints.  Null keys are not permitted and key must
+   * implement HeapSize.  It should not be necessary to verify the second
+   * constraint here because it is enforced by the class's generic type bound
+   * (K extends HeapSize).
+   *
+   * Can add other constraints in the future.
+   *
+   * @param key the key
+   * @throws NullPointerException if the key is null
+   * @throws UnsupportedOperationException if the key class does not
+   * implement the HeapSize interface
+   */
+  private void checkKey(K key) {
+    if(key == null) {
+      throw new NullPointerException("null keys are not allowed");
+    }
+  }
+
+  /**
+   * Enforces value constraints.  Null values are not permitted and value must
+   * implement HeapSize.  It should not be necessary to verify the second
+   * constraint here because it is enforced by the class's generic type bound
+   * (V extends HeapSize).
+   *
+   * Can add other constraints in the future.
+   *
+   * @param value the value
+   * @throws NullPointerException if the value is null
+   * @throws UnsupportedOperationException if the value class does not
+   * implement the HeapSize interface
+   */
+  private void checkValue(V value) {
+    if(value == null) {
+      throw new NullPointerException("null values are not allowed");
+    }
+  }
+
+  /**
+   * Returns the minimum memory usage of the base map structure.
+   *
+   * @return baseline memory overhead of object in bytes
+   */
+  private long getMinimumUsage() {
+    return OVERHEAD + (entries.length * ClassSize.REFERENCE);
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Evicts and frees based on LRU until at least as much memory as requested
+   * is available.
+   *
+   * @param memNeeded the amount of memory needed in bytes
+   */
+  private void checkAndFreeMemory(long memNeeded) {
+    while(memFree < memNeeded) {
+      evictFromLru();
+    }
+    memFree -= memNeeded;
+  }
+
+  /**
+   * Evicts based on LRU.  This removes all references and updates available
+   * memory.
+   *
+   * @return amount of memory freed in bytes
+   */
+  private long evictFromLru() {
+    long freed = headPtr.heapSize();
+    memFree += freed;
+    removeEntry(headPtr);
+    return freed;
+  }
+
+  /**
+   * Moves the specified entry to the most recently used slot of the
+   * LRU.  This is called whenever an entry is fetched.
+   *
+   * @param e entry that was accessed
+   */
+  private void updateLru(Entry<K,V> e) {
+    Entry<K,V> prev = e.getPrevPtr();
+    Entry<K,V> next = e.getNextPtr();
+    if(next != null) {
+      if(prev != null) {
+        prev.setNextPtr(next);
+        next.setPrevPtr(prev);
+      } else {
+        headPtr = next;
+        headPtr.setPrevPtr(null);
+      }
+      e.setNextPtr(null);
+      e.setPrevPtr(tailPtr);
+      tailPtr.setNextPtr(e);
+      tailPtr = e;
+    }
+  }
+
+  /**
+   * Removes the specified entry from the map and LRU structure.
+   *
+   * @param entry entry to be removed
+   */
+  private void removeEntry(Entry<K,V> entry) {
+    K k = entry.key;
+    int hash = entry.hash;
+    int i = hashIndex(hash, entries.length);
+    Entry<K,V> prev = entries[i];
+    Entry<K,V> e = prev;
+
+    while (e != null) {
+      Entry<K,V> next = e.next;
+      if (e.hash == hash && isEqual(k, e.key)) {
+          size--;
+          if (prev == e) {
+            entries[i] = next;
+          } else {
+            prev.next = next;
+          }
+
+          Entry<K,V> prevPtr = e.getPrevPtr();
+          Entry<K,V> nextPtr = e.getNextPtr();
+
+          if(prevPtr != null && nextPtr != null) {
+            prevPtr.setNextPtr(nextPtr);
+            nextPtr.setPrevPtr(prevPtr);
+          } else if(prevPtr != null) {
+            tailPtr = prevPtr;
+            prevPtr.setNextPtr(null);
+          } else if(nextPtr != null) {
+            headPtr = nextPtr;
+            nextPtr.setPrevPtr(null);
+          }
+
+          return;
+      }
+      prev = e;
+      e = next;
+    }
+  }
+
+  /**
+   * Removes and returns the entry associated with the specified
+   * key.
+   *
+   * @param key key of the entry to be deleted
+   * @return entry that was removed, or null if none found
+   */
+  private Entry<K,V> removeEntryForKey(K key) {
+    int hash = hash(key);
+    int i = hashIndex(hash, entries.length);
+    Entry<K,V> prev = entries[i];
+    Entry<K,V> e = prev;
+
+    while (e != null) {
+      Entry<K,V> next = e.next;
+      if (e.hash == hash && isEqual(key, e.key)) {
+          size--;
+          if (prev == e) {
+            entries[i] = next;
+          } else {
+            prev.next = next;
+          }
+
+          // Updating LRU
+          Entry<K,V> prevPtr = e.getPrevPtr();
+          Entry<K,V> nextPtr = e.getNextPtr();
+          if(prevPtr != null && nextPtr != null) {
+            prevPtr.setNextPtr(nextPtr);
+            nextPtr.setPrevPtr(prevPtr);
+          } else if(prevPtr != null) {
+            tailPtr = prevPtr;
+            prevPtr.setNextPtr(null);
+          } else if(nextPtr != null) {
+            headPtr = nextPtr;
+            nextPtr.setPrevPtr(null);
+          }
+
+          return e;
+      }
+      prev = e;
+      e = next;
+    }
+
+    return e;
+  }
+
+  /**
+   * Adds a new entry with the specified key, value, hash code, and
+   * bucket index to the map.
+   *
+   * Also puts it in the bottom (most-recent) slot of the list and
+   * checks to see if we need to grow the array.
+   *
+   * @param hash hash value of key
+   * @param key the key
+   * @param value the value
+   * @param bucketIndex index into hash array to store this entry
+   * @return the amount of heap size used to store the new entry
+   */
+  private long addEntry(int hash, K key, V value, int bucketIndex) {
+    Entry<K,V> e = entries[bucketIndex];
+    Entry<K,V> newE = new Entry<K,V>(hash, key, value, e, tailPtr);
+    entries[bucketIndex] = newE;
+    // add as most recently used in lru
+    if (size == 0) {
+      headPtr = newE;
+      tailPtr = newE;
+    } else {
+      newE.setPrevPtr(tailPtr);
+      tailPtr.setNextPtr(newE);
+      tailPtr = newE;
+    }
+    // Grow table if we are past the threshold now
+    if (size++ >= threshold) {
+      growTable(2 * entries.length);
+    }
+    return newE.heapSize();
+  }
+
+  /**
+   * Clears all the entries in the map.  Tracks the amount of memory being
+   * freed along the way and returns the total.
+   *
+   * Cleans up all references to allow old entries to be GC'd.
+   *
+   * @return total memory freed in bytes
+   */
+  private long clearAll() {
+    Entry cur;
+    long freedMemory = 0;
+    for(int i=0; i<entries.length; i++) {
+      cur = entries[i];
+      while(cur != null) {
+        freedMemory += cur.heapSize();
+        cur = cur.next;
+      }
+      entries[i] = null;
+    }
+    headPtr = null;
+    tailPtr = null;
+    size = 0;
+    return freedMemory;
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Recreates the entire contents of the hashmap into a new array
+   * with double the capacity.  This method is called when the number of
+   * keys in the map reaches the current threshold.
+   *
+   * @param newCapacity the new size of the hash entries
+   */
+  private void growTable(int newCapacity) {
+    Entry [] oldTable = entries;
+    int oldCapacity = oldTable.length;
+
+    // Do not allow growing the table beyond the max capacity
+    if (oldCapacity == MAXIMUM_CAPACITY) {
+      threshold = Integer.MAX_VALUE;
+      return;
+    }
+
+    // Determine how much additional space will be required to grow the array
+    long requiredSpace = (newCapacity - oldCapacity) * ClassSize.REFERENCE;
+
+    // Verify/enforce we have sufficient memory to grow
+    checkAndFreeMemory(requiredSpace);
+
+    Entry [] newTable = new Entry[newCapacity];
+
+    // Transfer existing entries to new hash table
+    for(int i=0; i < oldCapacity; i++) {
+      Entry<K,V> entry = oldTable[i];
+      if(entry != null) {
+        // Set to null for GC
+        oldTable[i] = null;
+        do {
+          Entry<K,V> next = entry.next;
+          int idx = hashIndex(entry.hash, newCapacity);
+          entry.next = newTable[idx];
+          newTable[idx] = entry;
+          entry = next;
+        } while(entry != null);
+      }
+    }
+
+    entries = newTable;
+    threshold = (int)(newCapacity * loadFactor);
+  }
+
+  /**
+   * Gets the hash code for the specified key.
+   * This implementation uses the additional hashing routine
+   * from JDK 1.4.
+   *
+   * @param key the key to get a hash value for
+   * @return the hash value
+   */
+  private int hash(Object key) {
+    int h = key.hashCode();
+    h += ~(h << 9);
+    h ^=  (h >>> 14);
+    h +=  (h << 4);
+    h ^=  (h >>> 10);
+    return h;
+  }
+
+  /**
+   * Compares two objects for equality.  Method uses equals method and
+   * assumes neither value is null.
+   *
+   * @param x the first value
+   * @param y the second value
+   * @return true if equal
+   */
+  private boolean isEqual(Object x, Object y) {
+    return (x == y || x.equals(y));
+  }
+
+  /**
+   * Determines the index into the current hash table for the specified
+   * hashValue.  Assumes <code>length</code> is a power of two, so masking
+   * with <code>length - 1</code> maps the hash onto a valid bucket index.
+   *
+   * @param hashValue the hash value
+   * @param length the current number of hash buckets
+   * @return the index of the current hash array to use
+   */
+  private int hashIndex(int hashValue, int length) {
+    return hashValue & (length - 1);
+  }
+
+  /**
+   * Calculates the capacity of the array backing the hash
+   * by normalizing capacity to a power of 2 and enforcing
+   * capacity limits.
+   *
+   * @param proposedCapacity the proposed capacity
+   * @return the normalized capacity
+   */
+  private int calculateCapacity(int proposedCapacity) {
+    int newCapacity = 1;
+    if(proposedCapacity > MAXIMUM_CAPACITY) {
+      newCapacity = MAXIMUM_CAPACITY;
+    } else {
+      while(newCapacity < proposedCapacity) {
+        newCapacity <<= 1;
+      }
+      if(newCapacity > MAXIMUM_CAPACITY) {
+        newCapacity = MAXIMUM_CAPACITY;
+      }
+    }
+    return newCapacity;
+  }
+
+  /**
+   * Calculates the threshold of the map given the capacity and load
+   * factor.  Once the number of entries in the map grows to the
+   * threshold we will double the size of the array.
+   *
+   * @param capacity the size of the array
+   * @param factor the load factor of the hash
+   */
+  private int calculateThreshold(int capacity, float factor) {
+    return (int)(capacity * factor);
+  }
+
+  /**
+   * Set the initial heap usage of this class.  Includes class variable
+   * overhead and the entry array.
+   */
+  private void init() {
+    memFree -= OVERHEAD;
+    memFree -= (entries.length * ClassSize.REFERENCE);
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Debugging function that returns a List sorted by access time.
+   *
+   * The order is oldest to newest (first in list is next to be evicted).
+   *
+   * @return Sorted list of entries
+   */
+  public List<Entry<K,V>> entryLruList() {
+    List<Entry<K,V>> entryList = new ArrayList<Entry<K,V>>();
+    Entry<K,V> entry = headPtr;
+    while(entry != null) {
+      entryList.add(entry);
+      entry = entry.getNextPtr();
+    }
+    return entryList;
+  }
+
+  /**
+   * Debugging function that returns a Set of all entries in the hash table.
+   *
+   * @return Set of entries in hash
+   */
+  public Set<Entry<K,V>> entryTableSet() {
+    Set<Entry<K,V>> entrySet = new HashSet<Entry<K,V>>();
+    Entry [] table = entries; // FindBugs IS2_INCONSISTENT_SYNC
+    for(int i=0;i<table.length;i++) {
+      for(Entry e = table[i]; e != null; e = e.next) {
+        entrySet.add(e);
+      }
+    }
+    return entrySet;
+  }
+
+  /**
+   * Get the head of the linked list (least recently used).
+   *
+   * @return head of linked list
+   */
+  public Entry getHeadPtr() {
+    return headPtr;
+  }
+
+  /**
+   * Get the tail of the linked list (most recently used).
+   *
+   * @return tail of linked list
+   */
+  public Entry getTailPtr() {
+    return tailPtr;
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * To keep this class lean, some of the methods that are part of a Map
+   * implementation are not supported.  These are primarily the methods that
+   * expose Sets and Iterators over the map, which would require significant
+   * overhead and code complexity to support and are unnecessary for the
+   * requirements of this class.
+   */
+
+  /**
+   * Intentionally unimplemented.
+   */
+  public Set<Map.Entry<K,V>> entrySet() {
+    throw new UnsupportedOperationException(
+    "entrySet() is intentionally unimplemented");
+  }
+
+  /**
+   * Intentionally unimplemented.
+   */
+  public boolean equals(Object o) {
+    throw new UnsupportedOperationException(
+    "equals(Object) is intentionally unimplemented");
+  }
+
+  /**
+   * Intentionally unimplemented.
+   */
+  public int hashCode() {
+    throw new UnsupportedOperationException(
+    "hashCode() is intentionally unimplemented");
+  }
+
+  /**
+   * Intentionally unimplemented.
+   */
+  public Set<K> keySet() {
+    throw new UnsupportedOperationException(
+    "keySet() is intentionally unimplemented");
+  }
+
+  /**
+   * Intentionally unimplemented.
+   */
+  public void putAll(Map<? extends K, ? extends V> m) {
+    throw new UnsupportedOperationException(
+    "putAll() is intentionally unimplemented");
+  }
+
+  /**
+   * Intentionally unimplemented.
+   */
+  public Collection<V> values() {
+    throw new UnsupportedOperationException(
+    "values() is intentionally unimplemented");
+  }
+
+  //--------------------------------------------------------------------------
+  /**
+   * Entry to store key/value mappings.
+   * <p>
+   * Contains previous and next pointers for the doubly linked-list which is
+   * used for LRU eviction.
+   * <p>
+   * Instantiations of this class are memory aware.  Both the key and value
+   * classes used must also implement <code>HeapSize</code>.
+   */
+  protected static class Entry<K extends HeapSize, V extends HeapSize>
+  implements Map.Entry<K,V>, HeapSize {
+    /** The baseline overhead memory usage of this class */
+    static final int OVERHEAD = 1 * Bytes.SIZEOF_LONG +
+      5 * ClassSize.REFERENCE + 2 * Bytes.SIZEOF_INT;
+
+    /** The key */
+    protected final K key;
+    /** The value */
+    protected V value;
+    /** The hash value for this entries key */
+    protected final int hash;
+    /** The next entry in the hash chain (for collisions) */
+    protected Entry<K,V> next;
+
+    /** The previous entry in the LRU list (towards LRU) */
+    protected Entry<K,V> prevPtr;
+    /** The next entry in the LRU list (towards MRU) */
+    protected Entry<K,V> nextPtr;
+
+    /** The precomputed heap size of this entry */
+    protected long heapSize;
+
+    /**
+     * Create a new entry.
+     *
+     * @param h the hash value of the key
+     * @param k the key
+     * @param v the value
+     * @param nextChainPtr the next entry in the hash chain, null if none
+     * @param prevLruPtr the previous entry in the LRU
+     */
+    Entry(int h, K k, V v, Entry<K,V> nextChainPtr, Entry<K,V> prevLruPtr) {
+      value = v;
+      next = nextChainPtr;
+      key = k;
+      hash = h;
+      prevPtr = prevLruPtr;
+      nextPtr = null;
+      // Pre-compute heap size
+      heapSize = OVERHEAD + k.heapSize() + v.heapSize();
+    }
+
+    /**
+     * Get the key of this entry.
+     *
+     * @return the key associated with this entry
+     */
+    public K getKey() {
+      return key;
+    }
+
+    /**
+     * Get the value of this entry.
+     *
+     * @return the value currently associated with this entry
+     */
+    public V getValue() {
+      return value;
+    }
+
+    /**
+     * Set the value of this entry.
+     *
+     * It is not recommended to use this method when changing the value.
+     * Rather, using <code>replaceValue</code> will return the difference
+     * in heap usage between the previous and current values.
+     *
+     * @param newValue the new value to associate with this entry
+     * @return the value previously associated with this entry
+     */
+    public V setValue(V newValue) {
+      V oldValue = value;
+      value = newValue;
+      return oldValue;
+    }
+
+    /**
+     * Replace the value of this entry.
+     *
+     * Computes and returns the difference in heap size when changing
+     * the value associated with this entry.
+     *
+     * @param newValue the new value to associate with this entry
+     * @return the change in heap usage of this entry in bytes
+     */
+    protected long replaceValue(V newValue) {
+      long sizeDiff = newValue.heapSize() - value.heapSize();
+      value = newValue;
+      heapSize += sizeDiff;
+      return sizeDiff;
+    }
+
+    /**
+     * Returns true if the specified entry has the same key and the
+     * same value as this entry.
+     *
+     * @param o entry to test against current
+     * @return true if entries have equal key and value, false if not
+     */
+    public boolean equals(Object o) {
+      if (!(o instanceof Map.Entry))
+          return false;
+      Map.Entry e = (Map.Entry)o;
+      Object k1 = getKey();
+      Object k2 = e.getKey();
+      if (k1 == k2 || (k1 != null && k1.equals(k2))) {
+          Object v1 = getValue();
+          Object v2 = e.getValue();
+          if (v1 == v2 || (v1 != null && v1.equals(v2)))
+            return true;
+      }
+      return false;
+    }
+
+    /**
+     * Returns the hash code of the entry by xor'ing the hash values
+     * of the key and value of this entry.
+     *
+     * @return hash value of this entry
+     */
+    public int hashCode() {
+      return (key.hashCode() ^ value.hashCode());
+    }
+
+    /**
+     * Returns String representation of the entry in form "key=value"
+     *
+     * @return string value of entry
+     */
+    public String toString() {
+      return getKey() + "=" + getValue();
+    }
+
+    //------------------------------------------------------------------------
+    /**
+     * Sets the previous pointer for the entry in the LRU.
+     * @param prevPtr previous entry
+     */
+    protected void setPrevPtr(Entry<K,V> prevPtr){
+      this.prevPtr = prevPtr;
+    }
+
+    /**
+     * Returns the previous pointer for the entry in the LRU.
+     * @return previous entry
+     */
+    protected Entry<K,V> getPrevPtr(){
+      return prevPtr;
+    }
+
+    /**
+     * Sets the next pointer for the entry in the LRU.
+     * @param nextPtr next entry
+     */
+    protected void setNextPtr(Entry<K,V> nextPtr){
+      this.nextPtr = nextPtr;
+    }
+
+    /**
+     * Returns the next pointer for the entry in the LRU.
+     * @return next entry
+     */
+    protected Entry<K,V> getNextPtr(){
+      return nextPtr;
+    }
+
+    /**
+     * Returns the pre-computed and "deep" size of the Entry
+     * @return size of the entry in bytes
+     */
+    public long heapSize() {
+      return heapSize;
+    }
+  }
+}
+
+
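The map class above maintains its own doubly linked LRU list and a running free-memory counter so that every insert can evict from the LRU head until the new entry fits. As a rough, self-contained illustration of that policy only (not the class in this patch), the sketch below leans on java.util.LinkedHashMap in access order and evicts least-recently-used entries until a byte budget is satisfied; the class name, the per-entry byte parameter, and the budget handling are invented for the example.

    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Toy byte-budgeted LRU map; illustrative only, not the patch's implementation. */
    public class ToyLruByBytes<K, V> {
      private final long maxBytes;                       // total byte budget
      private long usedBytes;                            // bytes currently accounted for
      private final Map<K, Long> sizes = new HashMap<K, Long>();
      // access-order LinkedHashMap iterates least-recently-used entries first
      private final LinkedHashMap<K, V> map = new LinkedHashMap<K, V>(16, 0.75f, true);

      public ToyLruByBytes(long maxBytes) { this.maxBytes = maxBytes; }

      public synchronized V get(K key) {
        return map.get(key);                             // a hit moves the entry to the MRU end
      }

      public synchronized void put(K key, V value, long entryBytes) {
        removeIfPresent(key);                            // replacing releases the old entry's size
        while (usedBytes + entryBytes > maxBytes && !map.isEmpty()) {
          removeIfPresent(map.keySet().iterator().next()); // evict the LRU head
        }
        map.put(key, value);
        sizes.put(key, entryBytes);
        usedBytes += entryBytes;
      }

      private void removeIfPresent(K key) {
        if (map.remove(key) != null) {
          usedBytes -= sizes.remove(key);
        }
      }
    }

With a 1024-byte budget and 300-byte entries, for example, only the three most recently touched keys survive repeated puts.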
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
new file mode 100644
index 0000000..53ec17c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
@@ -0,0 +1,813 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.lang.management.ManagementFactory;
+import java.lang.management.RuntimeMXBean;
+import java.rmi.UnexpectedException;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.SortedSet;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+
+/**
+ * The MemStore holds in-memory modifications to the Store.  Modifications
+ * are {@link KeyValue}s.  When asked to flush, current memstore is moved
+ * to snapshot and is cleared.  We continue to serve edits out of new memstore
+ * and backing snapshot until flusher reports in that the flush succeeded. At
+ * this point we let the snapshot go.
+ * TODO: Adjust size of the memstore when we remove items because they have
+ * been deleted.
+ * TODO: With new KVSLS, need to make sure we update HeapSize with difference
+ * in KV size.
+ */
+public class MemStore implements HeapSize {
+  private static final Log LOG = LogFactory.getLog(MemStore.class);
+
+  // MemStore.  Use a KeyValueSkipListSet rather than SkipListSet because of the
+  // better semantics.  The Map will overwrite if passed a key it already had
+  // whereas the Set will not add new KV if key is same though value might be
+  // different.  Value is not important -- just make sure always same
+  // reference passed.
+  volatile KeyValueSkipListSet kvset;
+
+  // Snapshot of memstore.  Made for flusher.
+  volatile KeyValueSkipListSet snapshot;
+
+  final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+
+  final KeyValue.KVComparator comparator;
+
+  // Used comparing versions -- same r/c and ts but different type.
+  final KeyValue.KVComparator comparatorIgnoreType;
+
+  // Used comparing versions -- same r/c and type but different timestamp.
+  final KeyValue.KVComparator comparatorIgnoreTimestamp;
+
+  // Used to track own heapSize
+  final AtomicLong size;
+
+  TimeRangeTracker timeRangeTracker;
+  TimeRangeTracker snapshotTimeRangeTracker;
+
+  /**
+   * Default constructor. Used for tests.
+   */
+  public MemStore() {
+    this(KeyValue.COMPARATOR);
+  }
+
+  /**
+   * Constructor.
+   * @param c Comparator
+   */
+  public MemStore(final KeyValue.KVComparator c) {
+    this.comparator = c;
+    this.comparatorIgnoreTimestamp =
+      this.comparator.getComparatorIgnoringTimestamps();
+    this.comparatorIgnoreType = this.comparator.getComparatorIgnoringType();
+    this.kvset = new KeyValueSkipListSet(c);
+    this.snapshot = new KeyValueSkipListSet(c);
+    timeRangeTracker = new TimeRangeTracker();
+    snapshotTimeRangeTracker = new TimeRangeTracker();
+    this.size = new AtomicLong(DEEP_OVERHEAD);
+  }
+
+  void dump() {
+    for (KeyValue kv: this.kvset) {
+      LOG.info(kv);
+    }
+    for (KeyValue kv: this.snapshot) {
+      LOG.info(kv);
+    }
+  }
+
+  /**
+   * Creates a snapshot of the current memstore.
+   * Snapshot must be cleared by call to {@link #clearSnapshot(SortedSet)}.
+   * To get the snapshot made by this method, use {@link #getSnapshot()}.
+   */
+  void snapshot() {
+    this.lock.writeLock().lock();
+    try {
+      // If snapshot currently has entries, then flusher failed or didn't call
+      // cleanup.  Log a warning.
+      if (!this.snapshot.isEmpty()) {
+        LOG.warn("Snapshot called again without clearing previous. " +
+          "Doing nothing. Another ongoing flush or did we fail last attempt?");
+      } else {
+        if (!this.kvset.isEmpty()) {
+          this.snapshot = this.kvset;
+          this.kvset = new KeyValueSkipListSet(this.comparator);
+          this.snapshotTimeRangeTracker = this.timeRangeTracker;
+          this.timeRangeTracker = new TimeRangeTracker();
+          // Reset heap to not include any keys
+          this.size.set(DEEP_OVERHEAD);
+        }
+      }
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+  }
+
+  /**
+   * Return the current snapshot.
+   * Called by flusher to get current snapshot made by a previous
+   * call to {@link #snapshot()}
+   * @return Return snapshot.
+   * @see {@link #snapshot()}
+   * @see {@link #clearSnapshot(SortedSet)}
+   */
+  KeyValueSkipListSet getSnapshot() {
+    return this.snapshot;
+  }
+
+  /**
+   * The passed snapshot was successfully persisted; it can be let go.
+   * @param ss The snapshot to clean out.
+   * @throws UnexpectedException
+   * @see {@link #snapshot()}
+   */
+  void clearSnapshot(final SortedSet<KeyValue> ss)
+  throws UnexpectedException {
+    this.lock.writeLock().lock();
+    try {
+      if (this.snapshot != ss) {
+        throw new UnexpectedException("Current snapshot is " +
+          this.snapshot + ", was passed " + ss);
+      }
+      // OK. Passed in snapshot is same as current snapshot.  If not-empty,
+      // create a new snapshot and let the old one go.
+      if (!ss.isEmpty()) {
+        this.snapshot = new KeyValueSkipListSet(this.comparator);
+        this.snapshotTimeRangeTracker = new TimeRangeTracker();
+      }
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+  }
+
+  /**
+   * Write an update
+   * @param kv
+   * @return approximate size of the passed key and value.
+   */
+  long add(final KeyValue kv) {
+    long s = -1;
+    this.lock.readLock().lock();
+    try {
+      s = heapSizeChange(kv, this.kvset.add(kv));
+      timeRangeTracker.includeTimestamp(kv);
+      this.size.addAndGet(s);
+    } finally {
+      this.lock.readLock().unlock();
+    }
+    return s;
+  }
+
+  /**
+   * Write a delete
+   * @param delete
+   * @return approximate size of the passed key and value.
+   */
+  long delete(final KeyValue delete) {
+    long s = 0;
+    this.lock.readLock().lock();
+    try {
+      s += heapSizeChange(delete, this.kvset.add(delete));
+      timeRangeTracker.includeTimestamp(delete);
+    } finally {
+      this.lock.readLock().unlock();
+    }
+    this.size.addAndGet(s);
+    return s;
+  }
+
+  /**
+   * @param kv Find the row that comes after this one.  If null, we return the
+   * first.
+   * @return Next row or null if none found.
+   */
+  KeyValue getNextRow(final KeyValue kv) {
+    this.lock.readLock().lock();
+    try {
+      return getLowest(getNextRow(kv, this.kvset), getNextRow(kv, this.snapshot));
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /*
+   * @param a
+   * @param b
+   * @return Return lowest of a or b or null if both a and b are null
+   */
+  private KeyValue getLowest(final KeyValue a, final KeyValue b) {
+    if (a == null) {
+      return b;
+    }
+    if (b == null) {
+      return a;
+    }
+    return comparator.compareRows(a, b) <= 0? a: b;
+  }
+
+  /*
+   * @param key Find row that follows this one.  If null, return first.
+   * @param map Set to look in for a row beyond <code>row</code>.
+   * @return Next row or null if none found.  If one found, will be a new
+   * KeyValue -- can be destroyed by subsequent calls to this method.
+   */
+  private KeyValue getNextRow(final KeyValue key,
+      final NavigableSet<KeyValue> set) {
+    KeyValue result = null;
+    SortedSet<KeyValue> tail = key == null? set: set.tailSet(key);
+    // Iterate until we fall into the next row; i.e. move off current row
+    for (KeyValue kv: tail) {
+      if (comparator.compareRows(kv, key) <= 0)
+        continue;
+      // Note: Not suppressing deletes or expired cells.  Needs to be handled
+      // by higher up functions.
+      result = kv;
+      break;
+    }
+    return result;
+  }
+
+  /**
+   * @param state column/delete tracking state
+   */
+  void getRowKeyAtOrBefore(final GetClosestRowBeforeTracker state) {
+    this.lock.readLock().lock();
+    try {
+      getRowKeyAtOrBefore(kvset, state);
+      getRowKeyAtOrBefore(snapshot, state);
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /*
+   * @param set
+   * @param state Accumulates deletes and candidates.
+   */
+  private void getRowKeyAtOrBefore(final NavigableSet<KeyValue> set,
+      final GetClosestRowBeforeTracker state) {
+    if (set.isEmpty()) {
+      return;
+    }
+    if (!walkForwardInSingleRow(set, state.getTargetKey(), state)) {
+      // Found nothing in row.  Try backing up.
+      getRowKeyBefore(set, state);
+    }
+  }
+
+  /*
+   * Walk forward in a row from <code>firstOnRow</code>.  Presumption is that
+   * we have been passed the first possible key on a row.  As we walk forward
+   * we accumulate deletes until we hit a candidate on the row at which point
+   * we return.
+   * @param set
+   * @param firstOnRow First possible key on this row.
+   * @param state
+   * @return True if we found a candidate walking this row.
+   */
+  private boolean walkForwardInSingleRow(final SortedSet<KeyValue> set,
+      final KeyValue firstOnRow, final GetClosestRowBeforeTracker state) {
+    boolean foundCandidate = false;
+    SortedSet<KeyValue> tail = set.tailSet(firstOnRow);
+    if (tail.isEmpty()) return foundCandidate;
+    for (Iterator<KeyValue> i = tail.iterator(); i.hasNext();) {
+      KeyValue kv = i.next();
+      // Did we go beyond the target row? If so break.
+      if (state.isTooFar(kv, firstOnRow)) break;
+      if (state.isExpired(kv)) {
+        i.remove();
+        continue;
+      }
+      // If we added something, this row is a contender. break.
+      if (state.handle(kv)) {
+        foundCandidate = true;
+        break;
+      }
+    }
+    return foundCandidate;
+  }
+
+  /*
+   * Walk backwards through the passed set a row at a time until we run out of
+   * set or until we get a candidate.
+   * @param set
+   * @param state
+   */
+  private void getRowKeyBefore(NavigableSet<KeyValue> set,
+      final GetClosestRowBeforeTracker state) {
+    KeyValue firstOnRow = state.getTargetKey();
+    for (Member p = memberOfPreviousRow(set, state, firstOnRow);
+        p != null; p = memberOfPreviousRow(p.set, state, firstOnRow)) {
+      // Make sure we don't fall out of our table.
+      if (!state.isTargetTable(p.kv)) break;
+      // Stop looking if we've exited the better candidate range.
+      if (!state.isBetterCandidate(p.kv)) break;
+      // Make into firstOnRow
+      firstOnRow = new KeyValue(p.kv.getRow(), HConstants.LATEST_TIMESTAMP);
+      // If we find something, break;
+      if (walkForwardInSingleRow(p.set, firstOnRow, state)) break;
+    }
+  }
+
+  /**
+   * Given the specs of a column, update it, first by inserting a new record,
+   * then removing the old one.  Since there is only one KeyValue involved, the
+   * memstoreTS will be set to 0, so the update is immediately visible to all
+   * readers.  The underlying store ensures that the insert and delete are each
+   * atomic: a scanner/reader sees either the new value or the old value, and
+   * all readers eventually see only the new value once the old one is removed.
+   *
+   * @param row
+   * @param family
+   * @param qualifier
+   * @param newValue
+   * @param now
+   * @return  Timestamp
+   */
+  public long updateColumnValue(byte[] row,
+                                byte[] family,
+                                byte[] qualifier,
+                                long newValue,
+                                long now) {
+   this.lock.readLock().lock();
+    try {
+      KeyValue firstKv = KeyValue.createFirstOnRow(
+          row, family, qualifier);
+      // Is there a KeyValue in 'snapshot' with the same TS? If so, upgrade the timestamp a bit.
+      SortedSet<KeyValue> snSs = snapshot.tailSet(firstKv);
+      if (!snSs.isEmpty()) {
+        KeyValue snKv = snSs.first();
+        // is there a matching KV in the snapshot?
+        if (snKv.matchingRow(firstKv) && snKv.matchingQualifier(firstKv)) {
+          if (snKv.getTimestamp() == now) {
+            // the snapshot already holds a KV at this timestamp; bump ours
+            // so the new value sorts as the newer version
+            now += 1;
+          }
+        }
+      }
+
+      // The new timestamp MUST be at least 'now', and it should also be
+      // max(now, mostRecentTsInMemstore) so the new KV sorts as the latest
+      // version.  We can't add the new KV without knowing what's there
+      // already, and we also want to take this chance to clean up older
+      // KVs, hence the extra pass below.
+
+      SortedSet<KeyValue> ss = kvset.tailSet(firstKv);
+      Iterator<KeyValue> it = ss.iterator();
+      while ( it.hasNext() ) {
+        KeyValue kv = it.next();
+
+        // if this isn't the row we are interested in, then bail:
+        if (!firstKv.matchingColumn(family,qualifier) || !firstKv.matchingRow(kv) ) {
+          break; // rows don't match, bail.
+        }
+
+        // if the qualifier matches and it's a Put, fold its timestamp into
+        // 'now' so the new KV sorts as the latest version; the stale Put
+        // itself is removed later by upsert()
+        if (firstKv.matchingQualifier(kv)) {
+          if (kv.getType() == KeyValue.Type.Put.getCode()) {
+            now = Math.max(now, kv.getTimestamp());
+          }
+        }
+      }
+
+      // create or update (upsert) a new KeyValue with
+      // 'now' and a 0 memstoreTS == immediately visible
+      return upsert(Arrays.asList(new KeyValue [] {
+          new KeyValue(row, family, qualifier, now,
+              Bytes.toBytes(newValue))
+      }));
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Update or insert the specified KeyValues.
+   * <p>
+   * For each KeyValue, insert into MemStore.  This will atomically upsert the
+   * value for that row/family/qualifier.  If a KeyValue did already exist,
+   * it will then be removed.
+   * <p>
+   * Currently the memstoreTS is kept at 0 so as each insert happens, it will
+   * be immediately visible.  May want to change this so it is atomic across
+   * all KeyValues.
+   * <p>
+   * This is called under row lock, so Get operations will still see updates
+   * atomically.  Scans will only see each KeyValue update as atomic.
+   *
+   * @param kvs
+   * @return change in memstore size
+   */
+  public long upsert(List<KeyValue> kvs) {
+   this.lock.readLock().lock();
+    try {
+      long size = 0;
+      for (KeyValue kv : kvs) {
+        kv.setMemstoreTS(0);
+        size += upsert(kv);
+      }
+      return size;
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Inserts the specified KeyValue into MemStore and deletes any existing
+   * versions of the same row/family/qualifier as the specified KeyValue.
+   * <p>
+   * First, the specified KeyValue is inserted into the Memstore.
+   * <p>
+   * If there are any existing KeyValues in this MemStore with the same row,
+   * family, and qualifier, they are removed.
+   * @param kv
+   * @return change in size of MemStore
+   */
+  private long upsert(KeyValue kv) {
+    // Add the KeyValue to the MemStore
+    long addedSize = add(kv);
+
+    // Get the KeyValues for the row/family/qualifier regardless of timestamp.
+    // For this case we want to clean up any other puts
+    KeyValue firstKv = KeyValue.createFirstOnRow(
+        kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(),
+        kv.getBuffer(), kv.getFamilyOffset(), kv.getFamilyLength(),
+        kv.getBuffer(), kv.getQualifierOffset(), kv.getQualifierLength());
+    SortedSet<KeyValue> ss = kvset.tailSet(firstKv);
+    Iterator<KeyValue> it = ss.iterator();
+    while ( it.hasNext() ) {
+      KeyValue cur = it.next();
+
+      if (kv == cur) {
+        // ignore the one just put in
+        continue;
+      }
+      // if this isn't the row we are interested in, then bail
+      if (!kv.matchingRow(cur)) {
+        break;
+      }
+
+      // if the qualifier matches and it's a put, remove it
+      if (kv.matchingQualifier(cur)) {
+
+        // to be extra safe we only remove Puts that have a memstoreTS==0
+        if (kv.getType() == KeyValue.Type.Put.getCode() &&
+            kv.getMemstoreTS() == 0) {
+          // give back the heap size of the removed entry (approximated
+          // using the new KV, which has the same row/family/qualifier)
+          addedSize -= heapSizeChange(kv, true);
+          it.remove();
+        }
+      } else {
+        // past the column, done
+        break;
+      }
+    }
+    return addedSize;
+  }
+
+  /*
+   * Immutable data structure to hold member found in set and the set it was
+   * found in.  Include set because it is carrying context.
+   */
+  private static class Member {
+    final KeyValue kv;
+    final NavigableSet<KeyValue> set;
+    Member(final NavigableSet<KeyValue> s, final KeyValue kv) {
+      this.kv = kv;
+      this.set = s;
+    }
+  }
+
+  /*
+   * @param set Set to walk back in.  Pass a first in row or we'll return
+   * same row (loop).
+   * @param state Utility and context.
+   * @param firstOnRow First item on the row after the one we want to find a
+   * member in.
+   * @return Null or member of row previous to <code>firstOnRow</code>
+   */
+  private Member memberOfPreviousRow(NavigableSet<KeyValue> set,
+      final GetClosestRowBeforeTracker state, final KeyValue firstOnRow) {
+    NavigableSet<KeyValue> head = set.headSet(firstOnRow, false);
+    if (head.isEmpty()) return null;
+    for (Iterator<KeyValue> i = head.descendingIterator(); i.hasNext();) {
+      KeyValue found = i.next();
+      if (state.isExpired(found)) {
+        i.remove();
+        continue;
+      }
+      return new Member(head, found);
+    }
+    return null;
+  }
+
+  /**
+   * @return scanner on memstore and snapshot in this order.
+   */
+  List<KeyValueScanner> getScanners() {
+    this.lock.readLock().lock();
+    try {
+      return Collections.<KeyValueScanner>singletonList(
+          new MemStoreScanner());
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Check if this memstore may contain the required keys
+   * @param scan
+   * @return False if the key definitely does not exist in this Memstore
+   */
+  public boolean shouldSeek(Scan scan) {
+    return timeRangeTracker.includesTimeRange(scan.getTimeRange()) ||
+        snapshotTimeRangeTracker.includesTimeRange(scan.getTimeRange());
+  }
+
+  public TimeRangeTracker getSnapshotTimeRangeTracker() {
+    return this.snapshotTimeRangeTracker;
+  }
+
+  /*
+   * MemStoreScanner implements the KeyValueScanner.
+   * It lets the caller scan the contents of a memstore -- both current
+   * map and snapshot.
+   * This behaves as if it were a real scanner but does not maintain position.
+   */
+  protected class MemStoreScanner implements KeyValueScanner {
+    // Next row information for either kvset or snapshot
+    private KeyValue kvsetNextRow = null;
+    private KeyValue snapshotNextRow = null;
+
+    // iterator based scanning.
+    Iterator<KeyValue> kvsetIt;
+    Iterator<KeyValue> snapshotIt;
+
+    /*
+    Some notes...
+
+    A MemStoreScanner is fixed at creation time: it holds iterators into the
+    kvset and snapshot that exist at that moment.  During snapshot creation
+    the kvset is swapped into the snapshot and replaced by an empty set, so
+    there is no point in reseeking both; we can save ourselves the trouble.
+    During the snapshot-to-hfile transition the memstore scanner is re-created
+    by StoreScanner#updateReaders().  StoreScanner could potentially do
+    something smarter by adjusting the existing memstore scanner instead.
+
+    A bigger problem: once a scanner has progressed during a snapshot
+    scenario, we currently iterate past the kvset and then 'finish' up.  If a
+    scan lasts a while, new entries may become available in the kvset, but we
+    will never see them.  This needs to be handled at the StoreScanner level,
+    in coordination with MemStoreScanner.
+
+    */
+
+    MemStoreScanner() {
+      super();
+
+      //DebugPrint.println(" MS new@" + hashCode());
+    }
+
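+    /*
+     * Returns the next KeyValue from the iterator that is visible at the
+     * caller's read point (memstoreTS <= thread read point), or null when
+     * the iterator is exhausted.
+     */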
+    protected KeyValue getNext(Iterator<KeyValue> it) {
+      KeyValue ret = null;
+      long readPoint = ReadWriteConsistencyControl.getThreadReadPoint();
+      //DebugPrint.println( " MS@" + hashCode() + ": threadpoint = " + readPoint);
+
+      while (ret == null && it.hasNext()) {
+        KeyValue v = it.next();
+        if (v.getMemstoreTS() <= readPoint) {
+          // keep it.
+          ret = v;
+        }
+      }
+      return ret;
+    }
+
+    public synchronized boolean seek(KeyValue key) {
+      if (key == null) {
+        close();
+        return false;
+      }
+
+      // kvset and snapshot will never be null.
+      // if tailSet can't find anything, SS is empty (not null).
+      SortedSet<KeyValue> kvTail = kvset.tailSet(key);
+      SortedSet<KeyValue> snapshotTail = snapshot.tailSet(key);
+
+      kvsetIt = kvTail.iterator();
+      snapshotIt = snapshotTail.iterator();
+
+      kvsetNextRow = getNext(kvsetIt);
+      snapshotNextRow = getNext(snapshotIt);
+
+
+      //long readPoint = ReadWriteConsistencyControl.getThreadReadPoint();
+      //DebugPrint.println( " MS@" + hashCode() + " kvset seek: " + kvsetNextRow + " with size = " +
+      //    kvset.size() + " threadread = " + readPoint);
+      //DebugPrint.println( " MS@" + hashCode() + " snapshot seek: " + snapshotNextRow + " with size = " +
+      //    snapshot.size() + " threadread = " + readPoint);
+
+
+      KeyValue lowest = getLowest();
+
+      // has data := (lowest != null)
+      return lowest != null;
+    }
+
+    @Override
+    public boolean reseek(KeyValue key) {
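+      // Advance each iterator until its next visible KeyValue sorts at or
+      // after the requested key; whichever remains non-null becomes the peek.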
+      while (kvsetNextRow != null &&
+          comparator.compare(kvsetNextRow, key) < 0) {
+        kvsetNextRow = getNext(kvsetIt);
+      }
+
+      while (snapshotNextRow != null &&
+          comparator.compare(snapshotNextRow, key) < 0) {
+        snapshotNextRow = getNext(snapshotIt);
+      }
+      return (kvsetNextRow != null || snapshotNextRow != null);
+    }
+
+    public synchronized KeyValue peek() {
+      //DebugPrint.println(" MS@" + hashCode() + " peek = " + getLowest());
+      return getLowest();
+    }
+
+
+    public synchronized KeyValue next() {
+      KeyValue theNext = getLowest();
+
+      if (theNext == null) {
+          return null;
+      }
+
+      // Advance one of the iterators
+      if (theNext == kvsetNextRow) {
+        kvsetNextRow = getNext(kvsetIt);
+      } else {
+        snapshotNextRow = getNext(snapshotIt);
+      }
+
+      //long readpoint = ReadWriteConsistencyControl.getThreadReadPoint();
+      //DebugPrint.println(" MS@" + hashCode() + " next: " + theNext + " next_next: " +
+      //    getLowest() + " threadpoint=" + readpoint);
+      return theNext;
+    }
+
+    protected KeyValue getLowest() {
+      return getLower(kvsetNextRow,
+          snapshotNextRow);
+    }
+
+    /*
+     * Returns the lower of the two key values, or null if they are both null.
+     * This uses comparator.compare() to compare the KeyValue using the memstore
+     * comparator.
+     */
+    protected KeyValue getLower(KeyValue first, KeyValue second) {
+      if (first == null && second == null) {
+        return null;
+      }
+      if (first != null && second != null) {
+        int compare = comparator.compare(first, second);
+        return (compare <= 0 ? first : second);
+      }
+      return (first != null ? first : second);
+    }
+
+    public synchronized void close() {
+      this.kvsetNextRow = null;
+      this.snapshotNextRow = null;
+
+      this.kvsetIt = null;
+      this.snapshotIt = null;
+    }
+
+    /**
+     * MemStoreScanner returns max value as sequence id because it will
+     * always have the latest data among all files.
+     */
+    @Override
+    public long getSequenceID() {
+      return Long.MAX_VALUE;
+    }
+  }
+
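+  // Shallow size of MemStore: the object header plus its nine reference
+  // fields (kvset, snapshot, lock, the three comparators, size, and the
+  // two time-range trackers).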
+  public final static long FIXED_OVERHEAD = ClassSize.align(
+      ClassSize.OBJECT + (9 * ClassSize.REFERENCE));
+
+  public final static long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD +
+      ClassSize.REENTRANT_LOCK + ClassSize.ATOMIC_LONG +
+      ClassSize.COPYONWRITE_ARRAYSET + ClassSize.COPYONWRITE_ARRAYLIST +
+      (2 * ClassSize.CONCURRENT_SKIPLISTMAP));
+
+  /*
+   * Calculate how the MemStore size has changed.  Includes overhead of the
+   * backing Map.
+   * @param kv
+   * @param notpresent True if the kv was NOT present in the set.
+   * @return Size
+   */
+  long heapSizeChange(final KeyValue kv, final boolean notpresent) {
+    return notpresent ?
+        ClassSize.align(ClassSize.CONCURRENT_SKIPLISTMAP_ENTRY + kv.heapSize()):
+        0;
+  }
+
+  /**
+   * Get the entire heap usage for this MemStore not including keys in the
+   * snapshot.
+   */
+  @Override
+  public long heapSize() {
+    return size.get();
+  }
+
+  /**
+   * Get the heap usage of KVs in this MemStore.
+   */
+  public long keySize() {
+    return heapSize() - DEEP_OVERHEAD;
+  }
+
+  /**
+   * Code to help figure if our approximation of object heap sizes is close
+   * enough.  See hbase-900.  Fills memstores then waits so user can heap
+   * dump and bring up resultant hprof in something like jprofiler which
+   * allows you to get 'deep size' on objects.
+   * @param args main args
+   */
+  public static void main(String [] args) {
+    RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
+    LOG.info("vmName=" + runtime.getVmName() + ", vmVendor=" +
+      runtime.getVmVendor() + ", vmVersion=" + runtime.getVmVersion());
+    LOG.info("vmInputArguments=" + runtime.getInputArguments());
+    MemStore memstore1 = new MemStore();
+    // TODO: x32 vs x64
+    long size = 0;
+    final int count = 10000;
+    byte [] fam = Bytes.toBytes("col");
+    byte [] qf = Bytes.toBytes("umn");
+    byte [] empty = new byte[0];
+    for (int i = 0; i < count; i++) {
+      // Give each its own ts
+      size += memstore1.add(new KeyValue(Bytes.toBytes(i), fam, qf, i, empty));
+    }
+    LOG.info("memstore1 estimated size=" + size);
+    for (int i = 0; i < count; i++) {
+      size += memstore1.add(new KeyValue(Bytes.toBytes(i), fam, qf, i, empty));
+    }
+    LOG.info("memstore1 estimated size (2nd loading of same data)=" + size);
+    // Make a variably sized memstore.
+    MemStore memstore2 = new MemStore();
+    for (int i = 0; i < count; i++) {
+      size += memstore2.add(new KeyValue(Bytes.toBytes(i), fam, qf, i,
+        new byte[i]));
+    }
+    LOG.info("memstore2 estimated size=" + size);
+    final int seconds = 30;
+    LOG.info("Waiting " + seconds + " seconds while heap dump is taken");
+    for (int i = 0; i < seconds; i++) {
+      try {
+        Thread.sleep(1000);
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        break;
+      }
+    }
+    LOG.info("Exiting.");
+  }
+}
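MemStore's flush protocol above comes down to swapping the active kvset into the snapshot under the write lock while ordinary writers only hold the read lock, then letting the snapshot go once the flusher reports success. The stand-alone sketch below shows only that swap, using plain strings in ConcurrentSkipListSets; the class and method names are invented for illustration, and the time-range trackers and heap accounting are omitted.

    import java.util.concurrent.ConcurrentSkipListSet;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    /** Minimal active/snapshot swap in the spirit of snapshot()/clearSnapshot(); illustrative only. */
    class TwoSetBuffer {
      private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
      private volatile ConcurrentSkipListSet<String> active = new ConcurrentSkipListSet<String>();
      private volatile ConcurrentSkipListSet<String> snapshot = new ConcurrentSkipListSet<String>();

      /** Writers take only the read lock, so they proceed concurrently with each other. */
      void add(String edit) {
        lock.readLock().lock();
        try {
          active.add(edit);
        } finally {
          lock.readLock().unlock();
        }
      }

      /** The flusher takes the write lock just long enough to swap, then flushes outside it. */
      ConcurrentSkipListSet<String> snapshotForFlush() {
        lock.writeLock().lock();
        try {
          if (snapshot.isEmpty() && !active.isEmpty()) {
            snapshot = active;                             // old active set becomes the snapshot
            active = new ConcurrentSkipListSet<String>();  // new edits land in a fresh set
          }
          return snapshot;
        } finally {
          lock.writeLock().unlock();
        }
      }

      /** Once the snapshot has been safely persisted, let it go. */
      void clearSnapshot(ConcurrentSkipListSet<String> flushed) {
        lock.writeLock().lock();
        try {
          if (snapshot == flushed) {
            snapshot = new ConcurrentSkipListSet<String>();
          }
        } finally {
          lock.writeLock().unlock();
        }
      }
    }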
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
new file mode 100644
index 0000000..46239e9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java
@@ -0,0 +1,393 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.DroppedSnapshotException;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.StringUtils;
+
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.util.ArrayList;
+import java.util.ConcurrentModificationException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.DelayQueue;
+import java.util.concurrent.Delayed;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.locks.ReentrantLock;
+
+/**
+ * Thread that flushes cache on request
+ *
+ * NOTE: This class extends Thread rather than Chore because the sleep time
+ * can be interrupted when there is something to do, rather than the Chore
+ * sleep time which is invariant.
+ *
+ * @see FlushRequester
+ */
+class MemStoreFlusher extends Thread implements FlushRequester {
+  static final Log LOG = LogFactory.getLog(MemStoreFlusher.class);
+  // These two data members go together.  Any entry in the one must have
+  // a corresponding entry in the other.
+  private final BlockingQueue<FlushQueueEntry> flushQueue =
+    new DelayQueue<FlushQueueEntry>();
+  private final Map<HRegion, FlushQueueEntry> regionsInQueue =
+    new HashMap<HRegion, FlushQueueEntry>();
+
+  private final long threadWakeFrequency;
+  private final HRegionServer server;
+  private final ReentrantLock lock = new ReentrantLock();
+
+  protected final long globalMemStoreLimit;
+  protected final long globalMemStoreLimitLowMark;
+
+  private static final float DEFAULT_UPPER = 0.4f;
+  private static final float DEFAULT_LOWER = 0.35f;
+  private static final String UPPER_KEY =
+    "hbase.regionserver.global.memstore.upperLimit";
+  private static final String LOWER_KEY =
+    "hbase.regionserver.global.memstore.lowerLimit";
+  private long blockingStoreFilesNumber;
+  private long blockingWaitTime;
+
+  /**
+   * @param conf
+   * @param server
+   */
+  public MemStoreFlusher(final Configuration conf,
+      final HRegionServer server) {
+    super();
+    this.server = server;
+    this.threadWakeFrequency =
+      conf.getLong(HConstants.THREAD_WAKE_FREQUENCY, 10 * 1000);
+    long max = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax();
+    this.globalMemStoreLimit = globalMemStoreLimit(max, DEFAULT_UPPER,
+      UPPER_KEY, conf);
+    long lower = globalMemStoreLimit(max, DEFAULT_LOWER, LOWER_KEY, conf);
+    if (lower > this.globalMemStoreLimit) {
+      lower = this.globalMemStoreLimit;
+      LOG.info("Setting globalMemStoreLimitLowMark == globalMemStoreLimit " +
+        "because supplied " + LOWER_KEY + " was > " + UPPER_KEY);
+    }
+    this.globalMemStoreLimitLowMark = lower;
+    this.blockingStoreFilesNumber =
+      conf.getInt("hbase.hstore.blockingStoreFiles", 7);
+    if (this.blockingStoreFilesNumber == -1) {
+      this.blockingStoreFilesNumber = 1 +
+        conf.getInt("hbase.hstore.compactionThreshold", 3);
+    }
+    this.blockingWaitTime = conf.getInt("hbase.hstore.blockingWaitTime",
+      90000);
+    LOG.info("globalMemStoreLimit=" +
+      StringUtils.humanReadableInt(this.globalMemStoreLimit) +
+      ", globalMemStoreLimitLowMark=" +
+      StringUtils.humanReadableInt(this.globalMemStoreLimitLowMark) +
+      ", maxHeap=" + StringUtils.humanReadableInt(max));
+  }
+
+  /**
+   * Calculate size using passed <code>key</code> for configured
+   * percentage of <code>max</code>.
+   * @param max
+   * @param defaultLimit
+   * @param key
+   * @param c
+   * @return Limit.
+   */
+  static long globalMemStoreLimit(final long max,
+     final float defaultLimit, final String key, final Configuration c) {
+    float limit = c.getFloat(key, defaultLimit);
+    return getMemStoreLimit(max, limit, defaultLimit);
+  }
+
+  static long getMemStoreLimit(final long max, final float limit,
+      final float defaultLimit) {
+    float effectiveLimit = limit;
+    if (limit >= 0.9f || limit < 0.1f) {
+      LOG.warn("Setting global memstore limit to default of " + defaultLimit +
+        " because supplied value outside allowed range of 0.1 -> 0.9");
+      effectiveLimit = defaultLimit;
+    }
+    return (long)(max * effectiveLimit);
+  }
+
+  @Override
+  public void run() {
+    while (!this.server.isStopped()) {
+      FlushQueueEntry fqe = null;
+      try {
+        fqe = flushQueue.poll(threadWakeFrequency, TimeUnit.MILLISECONDS);
+        if (fqe == null) {
+          continue;
+        }
+        if (!flushRegion(fqe)) {
+          break;
+        }
+      } catch (InterruptedException ex) {
+        continue;
+      } catch (ConcurrentModificationException ex) {
+        continue;
+      } catch (Exception ex) {
+        LOG.error("Cache flush failed" +
+          (fqe != null ? (" for region " + Bytes.toString(fqe.region.getRegionName())) : ""),
+          ex);
+        if (!server.checkFileSystem()) {
+          break;
+        }
+      }
+    }
+    this.regionsInQueue.clear();
+    this.flushQueue.clear();
+    LOG.info(getName() + " exiting");
+  }
+
+  public void requestFlush(HRegion r) {
+    synchronized (regionsInQueue) {
+      if (!regionsInQueue.containsKey(r)) {
+        // This entry has no delay so it will be added at the top of the flush
+        // queue.  It'll come out near immediately.
+        FlushQueueEntry fqe = new FlushQueueEntry(r);
+        this.regionsInQueue.put(r, fqe);
+        this.flushQueue.add(fqe);
+      }
+    }
+  }
+
+  /**
+   * Only interrupt once it's done with a run through the work loop.
+   */
+  void interruptIfNecessary() {
+    lock.lock();
+    try {
+      this.interrupt();
+    } finally {
+      lock.unlock();
+    }
+  }
+
+  /*
+   * A flushRegion that checks store file count.  If too many, puts the flush
+   * on delay queue to retry later.
+   * @param fqe
+   * @return true if the region was successfully flushed, false otherwise. If
+   * false, there will be accompanying log messages explaining why the region
+   * was not flushed.
+   */
+  private boolean flushRegion(final FlushQueueEntry fqe) {
+    HRegion region = fqe.region;
+    if (!fqe.region.getRegionInfo().isMetaRegion() &&
+        isTooManyStoreFiles(region)) {
+      if (fqe.isMaximumWait(this.blockingWaitTime)) {
+        LOG.info("Waited " + (System.currentTimeMillis() - fqe.createTime) +
+          "ms on a compaction to clean up 'too many store files'; waited " +
+          "long enough... proceeding with flush of " +
+          region.getRegionNameAsString());
+      } else {
+        // If this is first time we've been put off, then emit a log message.
+        if (fqe.getRequeueCount() <= 0) {
+          // Note: We don't impose blockingStoreFiles constraint on meta regions
+          LOG.warn("Region " + region.getRegionNameAsString() + " has too many " +
+            "store files; delaying flush up to " + this.blockingWaitTime + "ms");
+        }
+        this.server.compactSplitThread.requestCompaction(region, getName());
+        // Put back on the queue.  Have it come back out of the queue
+        // after a delay of this.blockingWaitTime / 100 ms.
+        this.flushQueue.add(fqe.requeue(this.blockingWaitTime / 100));
+        // Tell a lie, it's not flushed but it's ok
+        return true;
+      }
+    }
+    return flushRegion(region, false);
+  }
+
+  /*
+   * Flush a region.
+   * @param region Region to flush.
+   * @param emergencyFlush Set if we are being force flushed. If true the region
+   * needs to be removed from the flush queue. If false, we were called from
+   * the main flusher run loop and got the entry to flush by calling poll on
+   * the flush queue (which removed it).
+   *
+   * @return true if the region was successfully flushed, false otherwise. If
+   * false, there will be accompanying log messages explaining why the region
+   * was not flushed.
+   */
+  private boolean flushRegion(final HRegion region, final boolean emergencyFlush) {
+    synchronized (this.regionsInQueue) {
+      FlushQueueEntry fqe = this.regionsInQueue.remove(region);
+      if (fqe != null && emergencyFlush) {
+        // Need to remove the region from the delay queue.  When NOT an
+        // emergencyFlush, the item was already removed via flushQueue.poll.
+        flushQueue.remove(fqe);
+      }
+      lock.lock();
+    }
+    try {
+      if (region.flushcache()) {
+        server.compactSplitThread.requestCompaction(region, getName());
+      }
+      server.getMetrics().addFlush(region.getRecentFlushInfo());
+    } catch (DroppedSnapshotException ex) {
+      // Cache flush can fail in a few places. If it fails in a critical
+      // section, we get a DroppedSnapshotException and a replay of hlog
+      // is required. Currently the only way to do this is a restart of
+      // the server. Abort because hdfs is probably bad (HBASE-644 is a case
+      // where hdfs was bad but passed the hdfs check).
+      server.abort("Replay of HLog required. Forcing server shutdown", ex);
+      return false;
+    } catch (IOException ex) {
+      LOG.error("Cache flush failed" +
+        (region != null ? (" for region " + Bytes.toString(region.getRegionName())) : ""),
+        RemoteExceptionHandler.checkIOException(ex));
+      if (!server.checkFileSystem()) {
+        return false;
+      }
+    } finally {
+      lock.unlock();
+    }
+    return true;
+  }
+
+  private boolean isTooManyStoreFiles(HRegion region) {
+    for (Store hstore: region.stores.values()) {
+      if (hstore.getStorefilesCount() > this.blockingStoreFilesNumber) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  /**
+   * Check if the regionserver's memstore memory usage is greater than the
+   * limit. If so, flush regions with the biggest memstores until we're down
+   * to the lower limit. This method blocks callers until we're down to a safe
+   * amount of memstore consumption.
+   */
+  public synchronized void reclaimMemStoreMemory() {
+    if (server.getGlobalMemStoreSize() >= globalMemStoreLimit) {
+      flushSomeRegions();
+    }
+  }
+
+  /*
+   * Emergency!  Need to flush memory.
+   */
+  private synchronized void flushSomeRegions() {
+    // keep flushing until we hit the low water mark
+    long globalMemStoreSize = -1;
+    ArrayList<HRegion> regionsToCompact = new ArrayList<HRegion>();
+    for (SortedMap<Long, HRegion> m =
+        this.server.getCopyOfOnlineRegionsSortedBySize();
+      (globalMemStoreSize = server.getGlobalMemStoreSize()) >=
+        this.globalMemStoreLimitLowMark;) {
+      // flush the region with the biggest memstore
+      if (m.size() <= 0) {
+        LOG.info("No online regions to flush though we've been asked to flush " +
+          "some; globalMemStoreSize=" +
+          StringUtils.humanReadableInt(globalMemStoreSize) +
+          ", globalMemStoreLimitLowMark=" +
+          StringUtils.humanReadableInt(this.globalMemStoreLimitLowMark));
+        break;
+      }
+      HRegion biggestMemStoreRegion = m.remove(m.firstKey());
+      LOG.info("Forced flushing of " +  biggestMemStoreRegion.toString() +
+        " because global memstore limit of " +
+        StringUtils.humanReadableInt(this.globalMemStoreLimit) +
+        " exceeded; currently " +
+        StringUtils.humanReadableInt(globalMemStoreSize) + " and flushing till " +
+        StringUtils.humanReadableInt(this.globalMemStoreLimitLowMark));
+      if (!flushRegion(biggestMemStoreRegion, true)) {
+        LOG.warn("Flush failed");
+        break;
+      }
+      regionsToCompact.add(biggestMemStoreRegion);
+    }
+    for (HRegion region : regionsToCompact) {
+      server.compactSplitThread.requestCompaction(region, getName());
+    }
+  }
+
+  /**
+   * Data structure used in the flush queue.  Holds region and requeue count.
+   * Keeps tabs on how old this object is.  Implements {@link Delayed}.  On
+   * construction, the delay is zero, so when added to a delay queue, we'll come
+   * out near immediately.  Call {@link #requeue(long)} passing a delay in
+   * milliseconds before re-adding to the delay queue if you want it to stay
+   * there a while.
+   */
+  static class FlushQueueEntry implements Delayed {
+    private final HRegion region;
+    private final long createTime;
+    private long whenToExpire;
+    private int requeueCount = 0;
+
+    FlushQueueEntry(final HRegion r) {
+      this.region = r;
+      this.createTime = System.currentTimeMillis();
+      this.whenToExpire = this.createTime;
+    }
+
+    /**
+     * @param maximumWait
+     * @return True if we have been delayed > <code>maximumWait</code> milliseconds.
+     */
+    public boolean isMaximumWait(final long maximumWait) {
+      return (System.currentTimeMillis() - this.createTime) > maximumWait;
+    }
+
+    /**
+     * @return Count of times {@link #requeue(long)} was called; i.e. this is
+     * the number of times we've been requeued.
+     */
+    public int getRequeueCount() {
+      return this.requeueCount;
+    }
+ 
+    /**
+     * @param when When to expire, when to come up out of the queue.
+     * Specify in milliseconds.  This method adds System.currentTimeMillis()
+     * to whatever you pass.
+     * @return This.
+     */
+    public FlushQueueEntry requeue(final long when) {
+      this.whenToExpire = System.currentTimeMillis() + when;
+      this.requeueCount++;
+      return this;
+    }
+
+    @Override
+    public long getDelay(TimeUnit unit) {
+      return unit.convert(this.whenToExpire - System.currentTimeMillis(),
+          TimeUnit.MILLISECONDS);
+    }
+
+    @Override
+    public int compareTo(Delayed other) {
+      return Long.valueOf(getDelay(TimeUnit.MILLISECONDS) -
+        other.getDelay(TimeUnit.MILLISECONDS)).intValue();
+    }
+  }
+}
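
A minimal, self-contained sketch of the delay-and-requeue pattern the flush queue above relies on: a Delayed entry is eligible immediately when first queued and can be pushed back with requeue(delay) when a flush has to wait (for example, too many store files). The DelayedWork class and its names are illustrative, not part of this patch.

    import java.util.concurrent.DelayQueue;
    import java.util.concurrent.Delayed;
    import java.util.concurrent.TimeUnit;

    // Sketch of the delay-and-requeue pattern used by the flush queue.
    class DelayedWork implements Delayed {
      private final String name;
      private long whenToExpire;   // absolute time in ms at which we become eligible

      DelayedWork(String name) {
        this.name = name;
        this.whenToExpire = System.currentTimeMillis();   // eligible immediately
      }

      DelayedWork requeue(long delayMs) {                 // push eligibility into the future
        this.whenToExpire = System.currentTimeMillis() + delayMs;
        return this;
      }

      @Override
      public long getDelay(TimeUnit unit) {
        return unit.convert(whenToExpire - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
      }

      @Override
      public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
      }

      @Override
      public String toString() { return name; }

      public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedWork> queue = new DelayQueue<DelayedWork>();
        queue.add(new DelayedWork("region-a"));   // comes out of take() right away
        DelayedWork w = queue.take();
        queue.add(w.requeue(100));                // invisible to take() for ~100ms
        System.out.println(queue.take() + " became eligible again");
      }
    }
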
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java
new file mode 100644
index 0000000..4881fc0
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown if request for nonexistent column family.
+ */
+public class NoSuchColumnFamilyException extends DoNotRetryIOException {
+  private static final long serialVersionUID = -6569952730832331274L;
+
+  /** default constructor */
+  public NoSuchColumnFamilyException() {
+    super();
+  }
+
+  /**
+   * @param message exception message
+   */
+  public NoSuchColumnFamilyException(String message) {
+    super(message);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/OnlineRegions.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/OnlineRegions.java
new file mode 100644
index 0000000..3c90ed1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/OnlineRegions.java
@@ -0,0 +1,50 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+/**
+ * Interface to Map of online regions.  In the Map, the key is the region's
+ * encoded name and the value is an {@link HRegion} instance.
+ */
+interface OnlineRegions {
+  /**
+   * Add to online regions.
+   * @param r
+   */
+  public void addToOnlineRegions(final HRegion r);
+
+  /**
+   * This method removes the HRegion corresponding to the given encoded region
+   * name from the Map of online regions.
+   *
+   * @param encodedRegionName
+   * @return True if we removed a region from online list.
+   */
+  public boolean removeFromOnlineRegions(String encodedRegionName);
+
+  /**
+   * Return {@link HRegion} instance.
+   * Only works if caller is in same context, in same JVM. HRegion is not
+   * serializable.
+   * @param encodedRegionName
+   * @return HRegion for the passed encoded <code>encodedRegionName</code> or
+   * null if named region is not member of the online regions.
+   */
+  public HRegion getFromOnlineRegions(String encodedRegionName);
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/PriorityCompactionQueue.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/PriorityCompactionQueue.java
new file mode 100644
index 0000000..5cab5bd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/PriorityCompactionQueue.java
@@ -0,0 +1,379 @@
+/**
+* Copyright 2010 The Apache Software Foundation
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.Collection;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.PriorityBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * This class delegates to the BlockingQueue but wraps all HRegions in
+ * compaction requests that hold the priority and the date requested.
+ *
+ * Implementation Note: With an elevation time of -1 there is the potential for
+ * starvation of the lower priority compaction requests as long as there is a
+ * constant stream of high priority requests.
+ */
+public class PriorityCompactionQueue implements BlockingQueue<HRegion> {
+  static final Log LOG = LogFactory.getLog(PriorityCompactionQueue.class);
+
+  /**
+   * This class represents a compaction request and holds the region, priority,
+   * and time submitted.
+   */
+  private class CompactionRequest implements Comparable<CompactionRequest> {
+    private final HRegion r;
+    private final int p;
+    private final Date date;
+
+    public CompactionRequest(HRegion r, int p) {
+      this(r, p, null);
+    }
+
+    public CompactionRequest(HRegion r, int p, Date d) {
+      if (r == null) {
+        throw new NullPointerException("HRegion cannot be null");
+      }
+
+      if (d == null) {
+        d = new Date();
+      }
+
+      this.r = r;
+      this.p = p;
+      this.date = d;
+    }
+
+    /**
+     * This function will define where in the priority queue the request will
+     * end up.  Those with the highest priorities will be first.  It compares
+     * priority first, then date, so that requests of equal priority keep
+     * FIFO ordering.
+     *
+     * <p>Note: The date is only accurate to the millisecond which means it is
+     * possible that two requests were inserted into the queue within a
+     * millisecond.  When that is the case this function will break the tie
+     * arbitrarily.
+     */
+    @Override
+    public int compareTo(CompactionRequest request) {
+      //NOTE: The head of the priority queue is the least element
+      if (this.equals(request)) {
+        return 0; //they are the same request
+      }
+      int compareVal;
+
+      compareVal = p - request.p; //compare priority
+      if (compareVal != 0) {
+        return compareVal;
+      }
+
+      compareVal = date.compareTo(request.date);
+      if (compareVal != 0) {
+        return compareVal;
+      }
+
+      //break the tie arbitrarily
+      return -1;
+    }
+
+    /** Gets the HRegion for the request */
+    HRegion getHRegion() {
+      return r;
+    }
+
+    /** Gets the priority for the request */
+    int getPriority() {
+      return p;
+    }
+
+    public String toString() {
+      return "regionName=" + r.getRegionNameAsString() +
+        ", priority=" + p + ", date=" + date;
+    }
+  }
+
+  /** The actual blocking queue we delegate to */
+  protected final BlockingQueue<CompactionRequest> queue =
+    new PriorityBlockingQueue<CompactionRequest>();
+
+  /** Hash map of the HRegions contained within the Compaction Queue */
+  private final HashMap<HRegion, CompactionRequest> regionsInQueue =
+    new HashMap<HRegion, CompactionRequest>();
+
+  /** Creates a new PriorityCompactionQueue with no priority elevation time */
+  public PriorityCompactionQueue() {
+    LOG.debug("Create PriorityCompactionQueue");
+  }
+
+  /** If the region is not already in the queue, or the new request has a
+   * lower priority value (i.e. is more urgent) than the queued one, this adds
+   * it and returns a new compaction request object.  Otherwise it returns
+   * null and leaves the existing request in place.
+   * @param r region to queue for compaction
+   * @param p priority of the request; lower values are served first
+   * @return a compaction request, or null if an equal or more urgent request
+   * is already queued
+   */
+  protected CompactionRequest addToRegionsInQueue(HRegion r, int p) {
+    CompactionRequest queuedRequest = null;
+    CompactionRequest newRequest = new CompactionRequest(r, p);
+    synchronized (regionsInQueue) {
+      queuedRequest = regionsInQueue.get(r);
+      if (queuedRequest == null ||
+          newRequest.getPriority() < queuedRequest.getPriority()) {
+        LOG.trace("Inserting region in queue. " + newRequest);
+        regionsInQueue.put(r, newRequest);
+      } else {
+        LOG.trace("Region already in queue, skipping. Queued: " + queuedRequest +
+          ", requested: " + newRequest);
+        newRequest = null; // It is already present so don't add it
+      }
+    }
+
+    if (newRequest != null && queuedRequest != null) {
+      // Remove the lower priority request
+      queue.remove(queuedRequest);
+    }
+
+    return newRequest;
+  }
+
+  /** Removes the request for the given region from the regions-in-queue map.
+   * @param remove the request to remove
+   * @return the request that was tracked for that region, or null if none was
+   * found
+   */
+  protected CompactionRequest removeFromRegionsInQueue(CompactionRequest remove) {
+    if (remove == null) return null;
+
+    synchronized (regionsInQueue) {
+      CompactionRequest cr = null;
+      cr = regionsInQueue.remove(remove.getHRegion());
+      if (cr != null && !cr.equals(remove))
+      {
+        //Because we don't synchronize across both this.regionsInQueue and this.queue
+        //a rare race condition exists where a higher priority compaction request replaces
+        //the lower priority request in this.regionsInQueue but the lower priority request
+        //is taken off this.queue before the higher can be added to this.queue.
+        //So if we didn't remove what we were expecting we put it back on.
+        regionsInQueue.put(cr.getHRegion(), cr);
+      }
+      if (cr == null) {
+        LOG.warn("Asked to remove a region that was not in regionsInQueue: " + remove.getHRegion());
+      }
+      return cr;
+    }
+  }
+
+  public boolean add(HRegion e, int p) {
+    CompactionRequest request = this.addToRegionsInQueue(e, p);
+    if (request != null) {
+      boolean result = queue.add(request);
+      return result;
+    } else {
+      return false;
+    }
+  }
+
+  @Override
+  public boolean add(HRegion e) {
+    return add(e, e.getCompactPriority());
+  }
+
+  public boolean offer(HRegion e, int p) {
+    CompactionRequest request = this.addToRegionsInQueue(e, p);
+    return (request != null)? queue.offer(request): false;
+  }
+
+  @Override
+  public boolean offer(HRegion e) {
+    return offer(e, e.getCompactPriority());
+  }
+
+  public void put(HRegion e, int p) throws InterruptedException {
+    CompactionRequest request = this.addToRegionsInQueue(e, p);
+    if (request != null) {
+      queue.put(request);
+    }
+  }
+
+  @Override
+  public void put(HRegion e) throws InterruptedException {
+    put(e, e.getCompactPriority());
+  }
+
+  public boolean offer(HRegion e, int p, long timeout, TimeUnit unit)
+  throws InterruptedException {
+    CompactionRequest request = this.addToRegionsInQueue(e, p);
+    return (request != null)? queue.offer(request, timeout, unit): false;
+  }
+
+  @Override
+  public boolean offer(HRegion e, long timeout, TimeUnit unit)
+  throws InterruptedException {
+    return offer(e, e.getCompactPriority(), timeout, unit);
+  }
+
+  @Override
+  public HRegion take() throws InterruptedException {
+    CompactionRequest cr = queue.take();
+    if (cr != null) {
+      removeFromRegionsInQueue(cr);
+      return cr.getHRegion();
+    }
+    return null;
+  }
+
+  @Override
+  public HRegion poll(long timeout, TimeUnit unit) throws InterruptedException {
+    CompactionRequest cr = queue.poll(timeout, unit);
+    if (cr != null) {
+      removeFromRegionsInQueue(cr);
+      return cr.getHRegion();
+    }
+    return null;
+  }
+
+  @Override
+  public boolean remove(Object r) {
+    if (r instanceof CompactionRequest) {
+      CompactionRequest cr = removeFromRegionsInQueue((CompactionRequest) r);
+      if (cr != null) {
+        return queue.remove(cr);
+      }
+    }
+
+    return false;
+  }
+
+  @Override
+  public HRegion remove() {
+    CompactionRequest cr = queue.remove();
+    if (cr != null) {
+      removeFromRegionsInQueue(cr);
+      return cr.getHRegion();
+    }
+    return null;
+  }
+
+  @Override
+  public HRegion poll() {
+    CompactionRequest cr = queue.poll();
+    if (cr != null) {
+      removeFromRegionsInQueue(cr);
+      return cr.getHRegion();
+    }
+    return null;
+  }
+
+  @Override
+  public int remainingCapacity() {
+    return queue.remainingCapacity();
+  }
+
+  @Override
+  public boolean contains(Object r) {
+    if (r instanceof HRegion) {
+      synchronized (regionsInQueue) {
+        return regionsInQueue.containsKey((HRegion) r);
+      }
+    } else if (r instanceof CompactionRequest) {
+      return queue.contains(r);
+    }
+    return false;
+  }
+
+  @Override
+  public HRegion element() {
+    CompactionRequest cr = queue.element();
+    return (cr != null)? cr.getHRegion(): null;
+  }
+
+  @Override
+  public HRegion peek() {
+    CompactionRequest cr = queue.peek();
+    return (cr != null)? cr.getHRegion(): null;
+  }
+
+  @Override
+  public int size() {
+    return queue.size();
+  }
+
+  @Override
+  public boolean isEmpty() {
+    return queue.isEmpty();
+  }
+
+  @Override
+  public void clear() {
+    regionsInQueue.clear();
+    queue.clear();
+  }
+
+  // Unimplemented methods, collection methods
+
+  @Override
+  public Iterator<HRegion> iterator() {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public Object[] toArray() {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public <T> T[] toArray(T[] a) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public boolean containsAll(Collection<?> c) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public boolean addAll(Collection<? extends HRegion> c) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public boolean removeAll(Collection<?> c) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public boolean retainAll(Collection<?> c) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public int drainTo(Collection<? super HRegion> c) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+
+  @Override
+  public int drainTo(Collection<? super HRegion> c, int maxElements) {
+    throw new UnsupportedOperationException("Not supported.");
+  }
+}
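
The queue above pairs a PriorityBlockingQueue (ordered by priority, then submission time) with a HashMap so a region is never queued twice unless the new request carries a better priority. A small self-contained sketch of that combination, using a plain String key in place of HRegion; the class and names are illustrative, not part of this patch.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.PriorityBlockingQueue;

    // Sketch: priority ordering with FIFO tie-breaking, plus a map for de-duplication.
    class DedupPriorityQueue {
      static final class Request implements Comparable<Request> {
        final String key; final int priority; final long submitted;
        Request(String key, int priority) {
          this.key = key; this.priority = priority; this.submitted = System.currentTimeMillis();
        }
        @Override
        public int compareTo(Request o) {
          if (priority != o.priority) return Integer.compare(priority, o.priority); // lower value first
          return Long.compare(submitted, o.submitted);                              // then FIFO
        }
      }

      private final PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<Request>();
      private final Map<String, Request> inQueue = new HashMap<String, Request>();

      public synchronized boolean offer(String key, int priority) {
        Request queued = inQueue.get(key);
        if (queued != null && queued.priority <= priority) {
          return false;                       // already queued at an equal or better priority
        }
        Request request = new Request(key, priority);
        inQueue.put(key, request);
        if (queued != null) {
          queue.remove(queued);               // replace the lower-priority entry
        }
        return queue.offer(request);
      }

      public synchronized String poll() {
        Request r = queue.poll();
        if (r == null) return null;
        inQueue.remove(r.key);
        return r.key;
      }
    }
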
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ReadWriteConsistencyControl.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ReadWriteConsistencyControl.java
new file mode 100644
index 0000000..167e423
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ReadWriteConsistencyControl.java
@@ -0,0 +1,161 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.LinkedList;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Manages the read/write consistency within memstore. This provides
+ * an interface for readers to determine what entries to ignore, and
+ * a mechanism for writers to obtain new write numbers, then "commit"
+ * the new writes for readers to read (thus forming atomic transactions).
+ */
+public class ReadWriteConsistencyControl {
+  private volatile long memstoreRead = 0;
+  private volatile long memstoreWrite = 0;
+
+  private final Object readWaiters = new Object();
+
+  // This is the pending queue of writes.
+  private final LinkedList<WriteEntry> writeQueue =
+      new LinkedList<WriteEntry>();
+
+  private static final ThreadLocal<Long> perThreadReadPoint =
+      new ThreadLocal<Long>();
+
+  /**
+   * Get this thread's read point. Used primarily by the memstore scanner to
+   * know which values to skip (ie: have not been completed/committed to 
+   * memstore).
+   */
+  public static long getThreadReadPoint() {
+    return perThreadReadPoint.get();
+  }
+
+  /** 
+   * Set the thread read point to the given value. The thread read point
+   * is used by the Memstore scanner so it knows which values to skip.
+   * Give it a value of 0 if you want everything.
+   */
+  public static void setThreadReadPoint(long readPoint) {
+    perThreadReadPoint.set(readPoint);
+  }
+
+  /**
+   * Set the thread RWCC read point to whatever the current read point is in
+   * this particular instance of RWCC.  Returns the new thread read point value.
+   */
+  public static long resetThreadReadPoint(ReadWriteConsistencyControl rwcc) {
+    perThreadReadPoint.set(rwcc.memstoreReadPoint());
+    return getThreadReadPoint();
+  }
+  
+  /**
+   * Set the thread RWCC read point to 0 (include everything).
+   */
+  public static void resetThreadReadPoint() {
+    perThreadReadPoint.set(0L);
+  }
+
+  public WriteEntry beginMemstoreInsert() {
+    synchronized (writeQueue) {
+      long nextWriteNumber = ++memstoreWrite;
+      WriteEntry e = new WriteEntry(nextWriteNumber);
+      writeQueue.add(e);
+      return e;
+    }
+  }
+
+  public void completeMemstoreInsert(WriteEntry e) {
+    synchronized (writeQueue) {
+      e.markCompleted();
+
+      long nextReadValue = -1;
+      boolean ranOnce=false;
+      while (!writeQueue.isEmpty()) {
+        ranOnce=true;
+        WriteEntry queueFirst = writeQueue.getFirst();
+
+        if (nextReadValue > 0) {
+          if (nextReadValue+1 != queueFirst.getWriteNumber()) {
+            throw new RuntimeException("invariant in completeMemstoreInsert violated, prev: "
+                + nextReadValue + " next: " + queueFirst.getWriteNumber());
+          }
+        }
+
+        if (queueFirst.isCompleted()) {
+          nextReadValue = queueFirst.getWriteNumber();
+          writeQueue.removeFirst();
+        } else {
+          break;
+        }
+      }
+
+      if (!ranOnce) {
+        throw new RuntimeException("never was a first");
+      }
+
+      if (nextReadValue > 0) {
+        synchronized (readWaiters) {
+          memstoreRead = nextReadValue;
+          readWaiters.notifyAll();
+        }
+
+      }
+    }
+
+    boolean interrupted = false;
+    synchronized (readWaiters) {
+      while (memstoreRead < e.getWriteNumber()) {
+        try {
+          readWaiters.wait(0);
+        } catch (InterruptedException ie) {
+          // We were interrupted... finish the loop -- i.e. cleanup --and then
+          // on our way out, reset the interrupt flag.
+          interrupted = true;
+        }
+      }
+    }
+    if (interrupted) Thread.currentThread().interrupt();
+  }
+
+  public long memstoreReadPoint() {
+    return memstoreRead;
+  }
+
+
+  public static class WriteEntry {
+    private long writeNumber;
+    private boolean completed = false;
+    WriteEntry(long writeNumber) {
+      this.writeNumber = writeNumber;
+    }
+    void markCompleted() {
+      this.completed = true;
+    }
+    boolean isCompleted() {
+      return this.completed;
+    }
+    long getWriteNumber() {
+      return this.writeNumber;
+    }
+  }
+}
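
The consistency control above hands every memstore write a monotonically increasing write number and only advances the global read point once all earlier writes have completed. Below is a hedged sketch of how a writer and a reader interact with it; only the public methods declared above are used, while the surrounding class and comments are illustrative additions.

    package org.apache.hadoop.hbase.regionserver;

    // Illustrative usage of ReadWriteConsistencyControl; not part of this patch.
    public class RwccUsageSketch {
      public static void main(String[] args) {
        ReadWriteConsistencyControl rwcc = new ReadWriteConsistencyControl();

        // Writer: stamp the edit, apply it to the memstore, then make it visible.
        ReadWriteConsistencyControl.WriteEntry w = rwcc.beginMemstoreInsert();
        // ... tag the KeyValue with the entry's write number and add it to the memstore ...
        rwcc.completeMemstoreInsert(w);   // blocks until the read point reaches this write

        // Reader: pin this thread's view of the memstore before scanning.
        ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
        long readPoint = ReadWriteConsistencyControl.getThreadReadPoint();
        System.out.println("read point is now " + readPoint);
        // ... the scanner skips any KeyValue whose write number is greater than readPoint ...
      }
    }
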
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerRunningException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerRunningException.java
new file mode 100644
index 0000000..ed36ed7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerRunningException.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+/**
+ * Thrown if the region server log directory exists (which indicates another
+ * region server is running at the same address)
+ */
+public class RegionServerRunningException extends IOException {
+  private static final long serialVersionUID = 1L << 31 - 1L;
+
+  /** Default Constructor */
+  public RegionServerRunningException() {
+    super();
+  }
+
+  /**
+   * Constructs the exception and supplies a string as the message
+   * @param s - message
+   */
+  public RegionServerRunningException(String s) {
+    super(s);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
new file mode 100644
index 0000000..1309f93
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java
@@ -0,0 +1,69 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Services provided by {@link HRegionServer}
+ */
+public interface RegionServerServices extends OnlineRegions {
+  /**
+   * @return True if this regionserver is stopping.
+   */
+  public boolean isStopping();
+
+  /** @return the HLog */
+  public HLog getWAL();
+
+  /**
+   * @return Implementation of {@link CompactionRequestor} or null.
+   */
+  public CompactionRequestor getCompactionRequester();
+  
+  /**
+   * @return Implementation of {@link FlushRequester} or null.
+   */
+  public FlushRequester getFlushRequester();
+
+  /**
+   * Return data structure that has Server address and startcode.
+   * @return The HServerInfo for this RegionServer.
+   */
+  public HServerInfo getServerInfo();
+
+  /**
+   * Tasks to perform after region open to complete deploy of region on
+   * regionserver
+   * @param r Region to open.
+   * @param ct Instance of {@link CatalogTracker}
+   * @param daughter True if this is daughter of a split
+   * @throws KeeperException
+   * @throws IOException
+   */
+  public void postOpenDeployTasks(final HRegion r, final CatalogTracker ct,
+      final boolean daughter)
+  throws KeeperException, IOException;
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerStoppedException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerStoppedException.java
new file mode 100644
index 0000000..7330ed3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerStoppedException.java
@@ -0,0 +1,32 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+
+/**
+ * Thrown by the region server when it is in a shutting-down state.
+ *
+ * Should NEVER be thrown to HBase clients; they will abort the call chain
+ * and not retry even though regions will transition to new servers.
+ */
+@SuppressWarnings("serial")
+public class RegionServerStoppedException extends DoNotRetryIOException {
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanDeleteTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanDeleteTracker.java
new file mode 100644
index 0000000..6c4580e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanDeleteTracker.java
@@ -0,0 +1,157 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * This class is responsible for the tracking and enforcement of Deletes
+ * during the course of a Scan operation.
+ *
+ * It only has to enforce Delete and DeleteColumn, since the
+ * DeleteFamily is handled at a higher level.
+ *
+ * <p>
+ * This class is utilized through three methods:
+ * <ul><li>{@link #add} when encountering a Delete or DeleteColumn
+ * <li>{@link #isDeleted} when checking if a Put KeyValue has been deleted
+ * <li>{@link #update} when reaching the end of a StoreFile or row for scans
+ * </ul>
+ * <p>
+ * This class is NOT thread-safe as queries are never multi-threaded
+ */
+public class ScanDeleteTracker implements DeleteTracker {
+
+  private long familyStamp = -1L;
+  private byte [] deleteBuffer = null;
+  private int deleteOffset = 0;
+  private int deleteLength = 0;
+  private byte deleteType = 0;
+  private long deleteTimestamp = 0L;
+
+  /**
+   * Constructor for ScanDeleteTracker
+   */
+  public ScanDeleteTracker() {
+    super();
+  }
+
+  /**
+   * Add the specified KeyValue to the list of deletes to check against for
+   * this row operation.
+   * <p>
+   * This is called when a Delete is encountered in a StoreFile.
+   * @param buffer KeyValue buffer
+   * @param qualifierOffset column qualifier offset
+   * @param qualifierLength column qualifier length
+   * @param timestamp timestamp
+   * @param type delete type as byte
+   */
+  @Override
+  public void add(byte[] buffer, int qualifierOffset, int qualifierLength,
+      long timestamp, byte type) {
+    if (timestamp > familyStamp) {
+      if (type == KeyValue.Type.DeleteFamily.getCode()) {
+        familyStamp = timestamp;
+        return;
+      }
+
+      if (deleteBuffer != null && type < deleteType) {
+        // same column, so ignore less specific delete
+        if (Bytes.compareTo(deleteBuffer, deleteOffset, deleteLength,
+            buffer, qualifierOffset, qualifierLength) == 0){
+          return;
+        }
+      }
+      // new column, or more general delete type
+      deleteBuffer = buffer;
+      deleteOffset = qualifierOffset;
+      deleteLength = qualifierLength;
+      deleteType = type;
+      deleteTimestamp = timestamp;
+    }
+    // else: timestamp <= familyStamp, so this delete is already covered by the
+    // DeleteFamily we have recorded and need not be tracked.
+  }
+
+  /**
+   * Check if the specified KeyValue buffer has been deleted by a previously
+   * seen delete.
+   *
+   * @param buffer KeyValue buffer
+   * @param qualifierOffset column qualifier offset
+   * @param qualifierLength column qualifier length
+   * @param timestamp timestamp
+   * @return true if the specified KeyValue is deleted, false if not
+   */
+  @Override
+  public boolean isDeleted(byte [] buffer, int qualifierOffset,
+      int qualifierLength, long timestamp) {
+    if (timestamp <= familyStamp) {
+      return true;
+    }
+
+    if (deleteBuffer != null) {
+      int ret = Bytes.compareTo(deleteBuffer, deleteOffset, deleteLength,
+          buffer, qualifierOffset, qualifierLength);
+
+      if (ret == 0) {
+        if (deleteType == KeyValue.Type.DeleteColumn.getCode()) {
+          return true;
+        }
+        // Delete (aka DeleteVersion)
+        // If the timestamp is the same, keep this one
+        if (timestamp == deleteTimestamp) {
+          return true;
+        }
+        // use assert or not?
+        assert timestamp < deleteTimestamp;
+
+        // different timestamp, let's clear the buffer.
+        deleteBuffer = null;
+      } else if(ret < 0){
+        // Next column case.
+        deleteBuffer = null;
+      } else {
+        //Should never happen, throw Exception
+      }
+    }
+
+    return false;
+  }
+
+  @Override
+  public boolean isEmpty() {
+    return deleteBuffer == null && familyStamp == 0;
+  }
+
+  @Override
+  // called between every row.
+  public void reset() {
+    familyStamp = 0L;
+    deleteBuffer = null;
+  }
+
+  @Override
+  // should not be called at all even (!)
+  public void update() {
+    this.reset();
+  }
+}
\ No newline at end of file
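
A brief, self-contained illustration of the tracker above (the qualifiers are made up, not taken from this patch): a DeleteColumn marker at timestamp 20 masks older puts on the same qualifier, while a later qualifier is unaffected. The sketch feeds qualifiers in sorted order, as the scanner would.

    package org.apache.hadoop.hbase.regionserver;

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative use of ScanDeleteTracker; not part of this patch.
    public class ScanDeleteTrackerSketch {
      public static void main(String[] args) {
        ScanDeleteTracker deletes = new ScanDeleteTracker();
        byte[] qualA = Bytes.toBytes("a");
        byte[] qualB = Bytes.toBytes("b");

        // A DeleteColumn marker on qualifier "a" at timestamp 20.
        deletes.add(qualA, 0, qualA.length, 20L, KeyValue.Type.DeleteColumn.getCode());

        // Puts on "a" at timestamps <= 20 are masked; qualifier "b" is not.
        System.out.println(deletes.isDeleted(qualA, 0, qualA.length, 15L));  // true
        System.out.println(deletes.isDeleted(qualB, 0, qualB.length, 15L));  // false
      }
    }
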
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
new file mode 100644
index 0000000..48dd8e9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
@@ -0,0 +1,374 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.Filter.ReturnCode;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.util.NavigableSet;
+
+/**
+ * A query matcher that is specifically designed for the scan case.
+ */
+public class ScanQueryMatcher {
+  // Optimization so we can skip lots of compares when we decide to skip
+  // to the next row.
+  private boolean stickyNextRow;
+  private byte[] stopRow;
+
+  protected TimeRange tr;
+
+  protected Filter filter;
+
+  /** Keeps track of deletes */
+  protected DeleteTracker deletes;
+  protected boolean retainDeletesInOutput;
+
+  /** Keeps track of columns and versions */
+  protected ColumnTracker columns;
+
+  /** Key to seek to in memstore and StoreFiles */
+  protected KeyValue startKey;
+
+  /** Oldest allowed version stamp for TTL enforcement */
+  protected long oldestStamp;
+
+  /** Row comparator for the region this query is for */
+  KeyValue.KeyComparator rowComparator;
+
+  /** Row the query is on */
+  protected byte [] row;
+
+  /**
+   * Constructs a ScanQueryMatcher for a Scan.
+   * @param scan
+   * @param family
+   * @param columns
+   * @param ttl
+   * @param rowComparator
+   */
+  public ScanQueryMatcher(Scan scan, byte [] family,
+      NavigableSet<byte[]> columns, long ttl,
+      KeyValue.KeyComparator rowComparator, int maxVersions,
+      boolean retainDeletesInOutput) {
+    this.tr = scan.getTimeRange();
+    this.oldestStamp = System.currentTimeMillis() - ttl;
+    this.rowComparator = rowComparator;
+    this.deletes =  new ScanDeleteTracker();
+    this.stopRow = scan.getStopRow();
+    this.startKey = KeyValue.createFirstOnRow(scan.getStartRow());
+    this.filter = scan.getFilter();
+    this.retainDeletesInOutput = retainDeletesInOutput;
+
+    // Single branch to deal with two types of reads (columns vs all in family)
+    if (columns == null || columns.size() == 0) {
+      // use a specialized scan for wildcard column tracker.
+      this.columns = new ScanWildcardColumnTracker(maxVersions);
+    } else {
+      // We can share the ExplicitColumnTracker, diff is we reset
+      // between rows, not between storefiles.
+      this.columns = new ExplicitColumnTracker(columns,maxVersions);
+    }
+  }
+  public ScanQueryMatcher(Scan scan, byte [] family,
+      NavigableSet<byte[]> columns, long ttl,
+      KeyValue.KeyComparator rowComparator, int maxVersions) {
+      /* By default we will not include deletes */
+      /* deletes are included explicitly (for minor compaction) */
+      this(scan, family, columns, ttl, rowComparator, maxVersions, false);
+  }
+
+  /**
+   * Determines if the caller should do one of several things:
+   * - seek/skip to the next row (MatchCode.SEEK_NEXT_ROW)
+   * - seek/skip to the next column (MatchCode.SEEK_NEXT_COL)
+   * - include the current KeyValue (MatchCode.INCLUDE)
+   * - ignore the current KeyValue (MatchCode.SKIP)
+   * - go to the next row (MatchCode.DONE)
+   *
+   * @param kv KeyValue to check
+   * @return The match code instance.
+   */
+  public MatchCode match(KeyValue kv) {
+    if (filter != null && filter.filterAllRemaining()) {
+      return MatchCode.DONE_SCAN;
+    }
+
+    byte [] bytes = kv.getBuffer();
+    int offset = kv.getOffset();
+    int initialOffset = offset;
+
+    int keyLength = Bytes.toInt(bytes, offset, Bytes.SIZEOF_INT);
+    offset += KeyValue.ROW_OFFSET;
+
+    short rowLength = Bytes.toShort(bytes, offset, Bytes.SIZEOF_SHORT);
+    offset += Bytes.SIZEOF_SHORT;
+
+    int ret = this.rowComparator.compareRows(row, 0, row.length,
+        bytes, offset, rowLength);
+    if (ret <= -1) {
+      return MatchCode.DONE;
+    } else if (ret >= 1) {
+      // could optimize this, if necessary?
+      // Could also be called SEEK_TO_CURRENT_ROW, but this
+      // should be rare/never happens.
+      return MatchCode.SEEK_NEXT_ROW;
+    }
+
+    // optimize case.
+    if (this.stickyNextRow)
+        return MatchCode.SEEK_NEXT_ROW;
+
+    if (this.columns.done()) {
+      stickyNextRow = true;
+      return MatchCode.SEEK_NEXT_ROW;
+    }
+
+    //Passing rowLength
+    offset += rowLength;
+
+    //Skipping family
+    byte familyLength = bytes [offset];
+    offset += familyLength + 1;
+
+    int qualLength = keyLength + KeyValue.ROW_OFFSET -
+      (offset - initialOffset) - KeyValue.TIMESTAMP_TYPE_SIZE;
+
+    long timestamp = kv.getTimestamp();
+    if (isExpired(timestamp)) {
+      // done, the rest of this column will also be expired as well.
+      return getNextRowOrNextColumn(bytes, offset, qualLength);
+    }
+
+    byte type = kv.getType();
+    if (isDelete(type)) {
+      if (tr.withinOrAfterTimeRange(timestamp)) {
+        this.deletes.add(bytes, offset, qualLength, timestamp, type);
+        // Can't early out now, because DelFam come before any other keys
+      }
+      if (retainDeletesInOutput) {
+        return MatchCode.INCLUDE;
+      }
+      else {
+        return MatchCode.SKIP;
+      }
+    }
+
+    if (!this.deletes.isEmpty() &&
+        deletes.isDeleted(bytes, offset, qualLength, timestamp)) {
+
+      // May be able to optimize the SKIP here, if we matched
+      // due to a DelFam, we can skip to next row
+      // due to a DelCol, we can skip to next col
+      // But it requires more info out of isDelete().
+      // needful -> million column challenge.
+      return MatchCode.SKIP;
+    }
+
+    int timestampComparison = tr.compare(timestamp);
+    if (timestampComparison >= 1) {
+      return MatchCode.SKIP;
+    } else if (timestampComparison <= -1) {
+      return getNextRowOrNextColumn(bytes, offset, qualLength);
+    }
+
+    /**
+     * Filters should be checked before checking column trackers. If we do
+     * otherwise, as was previously being done, the ColumnTracker may increment
+     * its counter even for a KV that is later discarded by the Filter. This
+     * would lead to incorrect results in certain cases.
+     */
+    if (filter != null) {
+      ReturnCode filterResponse = filter.filterKeyValue(kv);
+      if (filterResponse == ReturnCode.SKIP) {
+        return MatchCode.SKIP;
+      } else if (filterResponse == ReturnCode.NEXT_COL) {
+        return getNextRowOrNextColumn(bytes, offset, qualLength);
+      } else if (filterResponse == ReturnCode.NEXT_ROW) {
+        stickyNextRow = true;
+        return MatchCode.SEEK_NEXT_ROW;
+      } else if (filterResponse == ReturnCode.SEEK_NEXT_USING_HINT) {
+        return MatchCode.SEEK_NEXT_USING_HINT;
+      }
+    }
+
+    MatchCode colChecker = columns.checkColumn(bytes, offset, qualLength, timestamp);
+    /*
+     * According to current implementation, colChecker can only be
+     * SEEK_NEXT_COL, SEEK_NEXT_ROW, SKIP or INCLUDE. Therefore, always return
+     * the MatchCode. If it is SEEK_NEXT_ROW, also set stickyNextRow.
+     */
+    if (colChecker == MatchCode.SEEK_NEXT_ROW) {
+      stickyNextRow = true;
+    }
+    return colChecker;
+
+  }
+
+  public MatchCode getNextRowOrNextColumn(byte[] bytes, int offset,
+      int qualLength) {
+    if (columns instanceof ExplicitColumnTracker) {
+      //We only come here when we know that columns is an instance of
+      //ExplicitColumnTracker so we should never have a cast exception
+      ((ExplicitColumnTracker)columns).doneWithColumn(bytes, offset,
+          qualLength);
+      if (columns.getColumnHint() == null) {
+        return MatchCode.SEEK_NEXT_ROW;
+      } else {
+        return MatchCode.SEEK_NEXT_COL;
+      }
+    } else {
+      return MatchCode.SEEK_NEXT_COL;
+    }
+  }
+
+  public boolean moreRowsMayExistAfter(KeyValue kv) {
+    if (!Bytes.equals(stopRow , HConstants.EMPTY_END_ROW) &&
+        rowComparator.compareRows(kv.getBuffer(),kv.getRowOffset(),
+            kv.getRowLength(), stopRow, 0, stopRow.length) >= 0) {
+      // KV >= STOPROW
+      // then NO there is nothing left.
+      return false;
+    } else {
+      return true;
+    }
+  }
+
+  /**
+   * Set current row
+   * @param row
+   */
+  public void setRow(byte [] row) {
+    this.row = row;
+    reset();
+  }
+
+  public void reset() {
+    this.deletes.reset();
+    this.columns.reset();
+
+    stickyNextRow = false;
+  }
+
+  // should be in KeyValue.
+  protected boolean isDelete(byte type) {
+    return (type != KeyValue.Type.Put.getCode());
+  }
+
+  protected boolean isExpired(long timestamp) {
+    return (timestamp < oldestStamp);
+  }
+
+  /**
+   *
+   * @return the start key
+   */
+  public KeyValue getStartKey() {
+    return this.startKey;
+  }
+
+  public KeyValue getNextKeyHint(KeyValue kv) {
+    if (filter == null) {
+      return null;
+    } else {
+      return filter.getNextKeyHint(kv);
+    }
+  }
+
+  public KeyValue getKeyForNextColumn(KeyValue kv) {
+    ColumnCount nextColumn = columns.getColumnHint();
+    if (nextColumn == null) {
+      return KeyValue.createLastOnRow(
+          kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(),
+          kv.getBuffer(), kv.getFamilyOffset(), kv.getFamilyLength(),
+          kv.getBuffer(), kv.getQualifierOffset(), kv.getQualifierLength());
+    } else {
+      return KeyValue.createFirstOnRow(
+          kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(),
+          kv.getBuffer(), kv.getFamilyOffset(), kv.getFamilyLength(),
+          nextColumn.getBuffer(), nextColumn.getOffset(), nextColumn.getLength());
+    }
+  }
+
+  public KeyValue getKeyForNextRow(KeyValue kv) {
+    return KeyValue.createLastOnRow(
+        kv.getBuffer(), kv.getRowOffset(), kv.getRowLength(),
+        null, 0, 0,
+        null, 0, 0);
+  }
+
+  /**
+   * {@link #match} return codes.  These instruct the scanner moving through
+   * memstores and StoreFiles what to do with the current KeyValue.
+   * <p>
+   * Additionally, this contains "early-out" language to tell the scanner to
+   * move on to the next File (memstore or Storefile), or to return immediately.
+   */
+  public static enum MatchCode {
+    /**
+     * Include KeyValue in the returned result
+     */
+    INCLUDE,
+
+    /**
+     * Do not include KeyValue in the returned result
+     */
+    SKIP,
+
+    /**
+     * Do not include, jump to next StoreFile or memstore (in time order)
+     */
+    NEXT,
+
+    /**
+     * Do not include, return current result
+     */
+    DONE,
+
+    /**
+     * These codes are used by the ScanQueryMatcher
+     */
+
+    /**
+     * Done with the row, seek there.
+     */
+    SEEK_NEXT_ROW,
+    /**
+     * Done with column, seek to next.
+     */
+    SEEK_NEXT_COL,
+
+    /**
+     * Done with scan, thanks to the row filter.
+     */
+    DONE_SCAN,
+
+    /*
+     * Seek to next key which is given as hint.
+     */
+    SEEK_NEXT_USING_HINT,
+  }
+}
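
The matcher above is driven by the store scanner in a loop that reacts to each MatchCode. The following sketch shows the shape of that loop against a placeholder KeyValueSource abstraction; the real StoreScanner/KeyValueHeap API is not reproduced here, and only the public ScanQueryMatcher methods declared above are assumed.

    package org.apache.hadoop.hbase.regionserver;

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.hbase.KeyValue;

    // Illustrative driver loop for ScanQueryMatcher; not part of this patch.
    public class MatchLoopSketch {
      /** Placeholder for whatever supplies KeyValues in sorted order. */
      interface KeyValueSource {
        KeyValue peek();
        void next();
        void seek(KeyValue key);
      }

      static List<KeyValue> drive(ScanQueryMatcher matcher, KeyValueSource source) {
        List<KeyValue> results = new ArrayList<KeyValue>();
        for (KeyValue kv = source.peek(); kv != null; kv = source.peek()) {
          switch (matcher.match(kv)) {
            case INCLUDE:                 // keep this version and advance
              results.add(kv);
              source.next();
              break;
            case SKIP:                    // drop this KeyValue, stay in place
              source.next();
              break;
            case SEEK_NEXT_COL:           // jump past remaining versions of this column
              source.seek(matcher.getKeyForNextColumn(kv));
              break;
            case SEEK_NEXT_ROW:           // jump past the rest of this row
              source.seek(matcher.getKeyForNextRow(kv));
              break;
            case SEEK_NEXT_USING_HINT:    // let the filter pick the next key
              source.seek(matcher.getNextKeyHint(kv));
              break;
            default:                      // DONE or DONE_SCAN: return what we have
              return results;
          }
        }
        return results;
      }
    }
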
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
new file mode 100644
index 0000000..6a027d6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
@@ -0,0 +1,173 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Keeps track of the columns for a scan if they are not explicitly specified
+ */
+public class ScanWildcardColumnTracker implements ColumnTracker {
+  private static final Log LOG =
+    LogFactory.getLog(ScanWildcardColumnTracker.class);
+  private byte [] columnBuffer = null;
+  private int columnOffset = 0;
+  private int columnLength = 0;
+  private int currentCount = 0;
+  private int maxVersions;
+  /* Keeps track of the latest timestamp included for current column.
+   * Used to eliminate duplicates. */
+  private long latestTSOfCurrentColumn;
+
+  /**
+   * Constructor; the tracker returns at most <code>maxVersion</code> versions
+   * of every column.
+   * @param maxVersion maximum number of versions to return per column
+   */
+  public ScanWildcardColumnTracker(int maxVersion) {
+    this.maxVersions = maxVersion;
+  }
+
+  /**
+   * Can only return INCLUDE, SKIP, or SEEK_NEXT_COL, since returning "NEXT"
+   * or "DONE" would imply we have finished with this row, when
+   * this class can't figure that out.
+   *
+   * @param bytes
+   * @param offset
+   * @param length
+   * @param timestamp
+   * @return The match code instance.
+   */
+  @Override
+  public MatchCode checkColumn(byte[] bytes, int offset, int length,
+      long timestamp) {
+    if (columnBuffer == null) {
+      // first iteration.
+      columnBuffer = bytes;
+      columnOffset = offset;
+      columnLength = length;
+      currentCount = 0;
+
+      if (++currentCount > maxVersions) {
+        return ScanQueryMatcher.MatchCode.SEEK_NEXT_COL;
+      }
+      setTS(timestamp);
+      return ScanQueryMatcher.MatchCode.INCLUDE;
+    }
+    int cmp = Bytes.compareTo(bytes, offset, length,
+        columnBuffer, columnOffset, columnLength);
+    if (cmp == 0) {
+      //If column matches, check if it is a duplicate timestamp
+      if (sameAsPreviousTS(timestamp)) {
+        return ScanQueryMatcher.MatchCode.SKIP;
+      }
+      if (++currentCount > maxVersions) {
+        return ScanQueryMatcher.MatchCode.SEEK_NEXT_COL; // skip to next col
+      }
+      setTS(timestamp);
+      return ScanQueryMatcher.MatchCode.INCLUDE;
+    }
+
+    resetTS();
+
+    // new col > old col
+    if (cmp > 0) {
+      // switched columns; reset state for the new column.
+      columnBuffer = bytes;
+      columnOffset = offset;
+      columnLength = length;
+      currentCount = 0;
+      if (++currentCount > maxVersions)
+        return ScanQueryMatcher.MatchCode.SEEK_NEXT_COL;
+      setTS(timestamp);
+      return ScanQueryMatcher.MatchCode.INCLUDE;
+    }
+
+    // new col < oldcol
+    // if (cmp < 0) {
+    // WARNING: This means that very likely an edit for some other family
+    // was incorrectly stored into the store for this one. Continue, but
+    // complain.
+    LOG.error("ScanWildcardColumnTracker.checkColumn ran " +
+      "into a column actually smaller than the previous column: " +
+      Bytes.toStringBinary(bytes, offset, length));
+    // switched columns
+    columnBuffer = bytes;
+    columnOffset = offset;
+    columnLength = length;
+    currentCount = 0;
+    if (++currentCount > maxVersions) {
+      return ScanQueryMatcher.MatchCode.SEEK_NEXT_COL;
+    }
+    setTS(timestamp);
+    return ScanQueryMatcher.MatchCode.INCLUDE;
+  }
+
+  @Override
+  public void update() {
+    // no-op, shouldn't even be called
+    throw new UnsupportedOperationException(
+        "ScanWildcardColumnTracker.update should never be called!");
+  }
+
+  @Override
+  public void reset() {
+    columnBuffer = null;
+    resetTS();
+  }
+
+  private void resetTS() {
+    latestTSOfCurrentColumn = HConstants.LATEST_TIMESTAMP;
+  }
+
+  private void setTS(long timestamp) {
+    latestTSOfCurrentColumn = timestamp;
+  }
+
+  private boolean sameAsPreviousTS(long timestamp) {
+    return timestamp == latestTSOfCurrentColumn;
+  }
+
+  /**
+   * Used by matcher and scan/get to get a hint of the next column
+   * to seek to after checkColumn() returns SKIP.  Returns the next interesting
+   * column we want, or NULL if there is none (wildcard scanner).
+   *
+   * @return The column hint; always null for the wildcard tracker.
+   */
+  public ColumnCount getColumnHint() {
+    return null;
+  }
+
+
+  /**
+   * We can never know a-priori if we are done, so always return false.
+   * @return false
+   */
+  @Override
+  public boolean done() {
+    return false;
+  }
+}
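
A short illustrative run of the tracker above with maxVersions = 2 (the qualifier is made up): a duplicate timestamp is reported as SKIP, and the version over the limit triggers SEEK_NEXT_COL. The surrounding class is illustrative, not part of this patch.

    package org.apache.hadoop.hbase.regionserver;

    import org.apache.hadoop.hbase.util.Bytes;

    // Illustrative use of ScanWildcardColumnTracker; not part of this patch.
    public class WildcardTrackerSketch {
      public static void main(String[] args) {
        ScanWildcardColumnTracker tracker = new ScanWildcardColumnTracker(2);
        byte[] q = Bytes.toBytes("q");

        System.out.println(tracker.checkColumn(q, 0, q.length, 30L)); // INCLUDE (1st version)
        System.out.println(tracker.checkColumn(q, 0, q.length, 30L)); // SKIP (duplicate timestamp)
        System.out.println(tracker.checkColumn(q, 0, q.length, 20L)); // INCLUDE (2nd version)
        System.out.println(tracker.checkColumn(q, 0, q.length, 10L)); // SEEK_NEXT_COL (over maxVersions)
      }
    }
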
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ShutdownHook.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ShutdownHook.java
new file mode 100644
index 0000000..b25e575
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/ShutdownHook.java
@@ -0,0 +1,234 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.util.Threads;
+
+/**
+ * Manage regionserver shutdown hooks.
+ * @see #install(Configuration, FileSystem, Stoppable, Thread)
+ */
+class ShutdownHook {
+  private static final Log LOG = LogFactory.getLog(ShutdownHook.class);
+  private static final String CLIENT_FINALIZER_DATA_METHOD = "clientFinalizer";
+
+  /**
+   * Key for boolean configuration whose default is true.
+   */
+  public static final String RUN_SHUTDOWN_HOOK = "hbase.shutdown.hook";
+
+  /**
+   * Key for a long configuration on how much time to wait on the fs shutdown
+   * hook. Default is 30 seconds.
+   */
+  public static final String FS_SHUTDOWN_HOOK_WAIT = "hbase.fs.shutdown.hook.wait";
+
+  /**
+   * Install a shutdown hook that calls stop on the passed Stoppable
+   * and then thread joins against the passed <code>threadToJoin</code>.
+   * When this thread completes, it then runs the hdfs shutdown hook thread
+   * (installing our hook removes the hdfs shutdown hook and keeps a handle on
+   * it so it can be run after <code>threadToJoin</code> has stopped).
+   *
+   * <p>To suppress all shutdown hook handling -- both the running of the
+   * regionserver hook and of the hdfs hook code -- set
+   * {@link ShutdownHook#RUN_SHUTDOWN_HOOK} in {@link Configuration} to
+   * <code>false</code>.
+   * This configuration value is checked when the hook code runs.
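+   * For example (illustrative only; not required by this class), a test can
+   * disable all hook handling before starting a server:
+   * <pre>
+   *   Configuration conf = HBaseConfiguration.create();
+   *   conf.setBoolean(ShutdownHook.RUN_SHUTDOWN_HOOK, false);
+   * </pre>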
+   * @param conf Configuration to check for {@link #RUN_SHUTDOWN_HOOK} and
+   * {@link #FS_SHUTDOWN_HOOK_WAIT}.
+   * @param fs Instance of Filesystem used by the RegionServer
+   * @param stop The installed shutdown hook will call stop against this
+   * passed <code>Stoppable</code> instance.
+   * @param threadToJoin Thread to join after calling stop on
+   * <code>stop</code>.
+   */
+  static void install(final Configuration conf, final FileSystem fs,
+      final Stoppable stop, final Thread threadToJoin) {
+    Thread fsShutdownHook = suppressHdfsShutdownHook(fs);
+    Thread t = new ShutdownHookThread(conf, stop, threadToJoin, fsShutdownHook);
+    Runtime.getRuntime().addShutdownHook(t);
+    LOG.info("Installed shutdown hook thread: " + t.getName());
+  }
+
+  /*
+   * Thread registered as the JVM shutdown hook.
+   */
+  private static class ShutdownHookThread extends Thread {
+    private final Stoppable stop;
+    private final Thread threadToJoin;
+    private final Thread fsShutdownHook;
+    private final Configuration conf;
+
+    ShutdownHookThread(final Configuration conf, final Stoppable stop,
+        final Thread threadToJoin, final Thread fsShutdownHook) {
+      super("Shutdownhook:" + threadToJoin.getName());
+      this.stop = stop;
+      this.threadToJoin = threadToJoin;
+      this.conf = conf;
+      this.fsShutdownHook = fsShutdownHook;
+    }
+
+    @Override
+    public void run() {
+      boolean b = this.conf.getBoolean(RUN_SHUTDOWN_HOOK, true);
+      LOG.info("Shutdown hook starting; " + RUN_SHUTDOWN_HOOK + "=" + b +
+        "; fsShutdownHook=" + this.fsShutdownHook);
+      if (b) {
+        this.stop.stop("Shutdown hook");
+        Threads.shutdown(this.threadToJoin);
+        if (this.fsShutdownHook != null) {
+          LOG.info("Starting fs shutdown hook thread.");
+          this.fsShutdownHook.start();
+          Threads.shutdown(this.fsShutdownHook,
+            this.conf.getLong(FS_SHUTDOWN_HOOK_WAIT, 30000));
+        }
+      }
+      LOG.info("Shutdown hook finished.");
+    }
+  }
+
+  /*
+   * So, HDFS keeps a static map of all FS instances. In order to make sure
+   * things are cleaned up on our way out, it also creates a shutdown hook
+   * so that all filesystems can be closed when the process is terminated; it
+   * calls FileSystem.closeAll. This inconveniently runs concurrently with our
+   * own shutdown handler, and therefore causes all the filesystems to be closed
+   * before the server can do all its necessary cleanup.
+   *
+   * <p>The dirty reflection in this method sneaks into the FileSystem class
+   * and grabs the shutdown hook, removes it from the list of active shutdown
+   * hooks, and returns the hook for the caller to run at its convenience.
+   *
+   * <p>This seems quite fragile and susceptible to breaking if Hadoop changes
+   * anything about the way this cleanup is managed. Keep an eye on things.
+   * @return The fs shutdown hook
+   * @throws RuntimeException if we fail to find or grab the shutdown hook.
+   */
+  private static Thread suppressHdfsShutdownHook(final FileSystem fs) {
+    try {
+      // This introspection has been updated to work for hadoop 0.20, 0.21 and for
+      // cloudera 0.20.  0.21 and cloudera 0.20 both have hadoop-4829.  With the
+      // latter in place, things are a little messy in that there are now two
+      // instances of the data member clientFinalizer; an uninstalled one in
+      // FileSystem and one in the inner class named Cache that actually gets
+      // registered as a shutdown hook.  If the latter is present, then we are
+      // on 0.21 or cloudera patched 0.20.
+      Thread hdfsClientFinalizer = null;
+      // Look into the FileSystem#Cache class for clientFinalizer
+      Class<?> [] classes = FileSystem.class.getDeclaredClasses();
+      Class<?> cache = null;
+      for (Class<?> c: classes) {
+        if (c.getSimpleName().equals("Cache")) {
+          cache = c;
+          break;
+        }
+      }
+      Field field = null;
+      try {
+        field = cache.getDeclaredField(CLIENT_FINALIZER_DATA_METHOD);
+      } catch (NoSuchFieldException e) {
+        // We can get here if the Cache class does not have a clientFinalizer
+        // instance: i.e. we're running on straight 0.20 w/o hadoop-4829.
+      }
+      if (field != null) {
+        field.setAccessible(true);
+        Field cacheField = FileSystem.class.getDeclaredField("CACHE");
+        cacheField.setAccessible(true);
+        Object cacheInstance = cacheField.get(fs);
+        hdfsClientFinalizer = (Thread)field.get(cacheInstance);
+      } else {
+        // Then we didn't find clientFinalizer in Cache.  Presume clean 0.20 hadoop.
+        field = FileSystem.class.getDeclaredField(CLIENT_FINALIZER_DATA_METHOD);
+        field.setAccessible(true);
+        hdfsClientFinalizer = (Thread)field.get(null);
+      }
+      if (hdfsClientFinalizer == null) {
+        throw new RuntimeException("Client finalizer is null, can't suppress!");
+      }
+      if (!Runtime.getRuntime().removeShutdownHook(hdfsClientFinalizer)) {
+        throw new RuntimeException("Failed suppression of fs shutdown hook: " +
+          hdfsClientFinalizer);
+      }
+      return hdfsClientFinalizer;
+    } catch (NoSuchFieldException nsfe) {
+      LOG.fatal("Couldn't find field 'clientFinalizer' in FileSystem!", nsfe);
+      throw new RuntimeException("Failed to suppress HDFS shutdown hook");
+    } catch (IllegalAccessException iae) {
+      LOG.fatal("Couldn't access field 'clientFinalizer' in FileSystem!", iae);
+      throw new RuntimeException("Failed to suppress HDFS shutdown hook");
+    }
+  }
+
+  // Thread that does nothing.  Used by the main method below for testing.
+  static class DoNothingThread extends Thread {
+    DoNothingThread() {
+      super("donothing");
+    }
+    @Override
+    public void run() {
+      super.run();
+    }
+  }
+
+  // Stoppable with nothing to stop.  Used by the main method below for testing.
+  static class DoNothingStoppable implements Stoppable {
+    @Override
+    public boolean isStopped() {
+      // Nothing to track; this test Stoppable never stops.
+      return false;
+    }
+
+    @Override
+    public void stop(String why) {
+      // Nothing to stop.
+    }
+  }
+
+  /**
+   * Main to test basic functionality.  Run with clean hadoop 0.20, hadoop
+   * 0.21, and cloudera-patched hadoop to make sure our shutdown hook handling
+   * works for all combinations.
+   * Pass '-Dhbase.shutdown.hook=false' to test turning off the running of
+   * shutdown hooks.
+   * @param args
+   * @throws IOException
+   */
+  public static void main(final String [] args) throws IOException {
+    Configuration conf = HBaseConfiguration.create();
+    String prop = System.getProperty(RUN_SHUTDOWN_HOOK);
+    if (prop != null) {
+      conf.setBoolean(RUN_SHUTDOWN_HOOK, Boolean.parseBoolean(prop));
+    }
+    // Instantiate a FileSystem. This will register the fs shutdown hook.
+    FileSystem fs = FileSystem.get(conf);
+    Thread donothing = new DoNothingThread();
+    donothing.start();
+    ShutdownHook.install(conf, fs, new DoNothingStoppable(), donothing);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
new file mode 100644
index 0000000..2ba66a1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java
@@ -0,0 +1,614 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.ThreadFactory;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.MetaEditor;
+import org.apache.hadoop.hbase.io.Reference.Range;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.PairOfSameType;
+import org.apache.hadoop.util.Progressable;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Executes region split as a "transaction".  Call {@link #prepare()} to set up
+ * the transaction, {@link #execute(Server, RegionServerServices)} to run the
+ * transaction and {@link #rollback(OnlineRegions)} to clean up if execute fails.
+ *
+ * <p>Here is an example of how you would use this class:
+ * <pre>
+ *  SplitTransaction st = new SplitTransaction(parent, midKey);
+ *  if (!st.prepare()) return;
+ *  try {
+ *    st.execute(myServer, myRegionServerServices);
+ *  } catch (IOException ioe) {
+ *    try {
+ *      st.rollback(myOnlineRegions);
+ *      return;
+ *    } catch (RuntimeException e) {
+ *      myAbortable.abort("Failed split, abort");
+ *    }
+ *  }
+ * </pre>
+ * <p>This class is not thread safe.  Caller needs to ensure split is run by
+ * one thread only.
+ */
+class SplitTransaction {
+  private static final Log LOG = LogFactory.getLog(SplitTransaction.class);
+  private static final String SPLITDIR = "splits";
+
+  /*
+   * Region to split
+   */
+  private final HRegion parent;
+  private HRegionInfo hri_a;
+  private HRegionInfo hri_b;
+  private Path splitdir;
+  private long fileSplitTimeout = 30000;
+
+  /*
+   * Row to split around
+   */
+  private final byte [] splitrow;
+
+  /**
+   * Types to add to the transaction journal
+   */
+  enum JournalEntry {
+    /**
+     * We created the temporary split data directory.
+     */
+    CREATE_SPLIT_DIR,
+    /**
+     * Closed the parent region.
+     */
+    CLOSED_PARENT_REGION,
+    /**
+     * The parent has been taken out of the server's online regions list.
+     */
+    OFFLINED_PARENT,
+    /**
+     * Started in on creation of the first daughter region.
+     */
+    STARTED_REGION_A_CREATION,
+    /**
+     * Started in on the creation of the second daughter region.
+     */
+    STARTED_REGION_B_CREATION
+  }
+
+  /*
+   * Journal of how far the split transaction has progressed.
+   */
+  private final List<JournalEntry> journal = new ArrayList<JournalEntry>();
+
+  /**
+   * Constructor
+   * @param r Region to split
+   * @param splitrow Row to split around
+   */
+  SplitTransaction(final HRegion r, final byte [] splitrow) {
+    this.parent = r;
+    this.splitrow = splitrow;
+    this.splitdir = getSplitDir(this.parent);
+  }
+
+  /**
+   * Does checks on split inputs.
+   * @return <code>true</code> if the region is splittable else
+   * <code>false</code> if it is not (e.g. it's already closed, etc.).
+   */
+  public boolean prepare() {
+    if (this.parent.isClosed() || this.parent.isClosing()) return false;
+    HRegionInfo hri = this.parent.getRegionInfo();
+    parent.prepareToSplit();
+    // Check splitrow.
+    byte [] startKey = hri.getStartKey();
+    byte [] endKey = hri.getEndKey();
+    if (Bytes.equals(startKey, splitrow) ||
+        !this.parent.getRegionInfo().containsRow(splitrow)) {
+      LOG.info("Split row is not inside region key range or is equal to " +
+          "startkey: " + Bytes.toString(this.splitrow));
+      return false;
+    }
+    long rid = getDaughterRegionIdTimestamp(hri);
+    this.hri_a = new HRegionInfo(hri.getTableDesc(), startKey, this.splitrow,
+      false, rid);
+    this.hri_b = new HRegionInfo(hri.getTableDesc(), this.splitrow, endKey,
+      false, rid);
+    return true;
+  }
+
+  /**
+   * Calculate daughter regionid to use.
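+   * For example (hypothetical numbers): if the parent's regionId is 1000 but
+   * the local clock reads 990 (clock skew), the daughters get regionId 1001.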
+   * @param hri Parent {@link HRegionInfo}
+   * @return Daughter region id (timestamp) to use.
+   */
+  private static long getDaughterRegionIdTimestamp(final HRegionInfo hri) {
+    long rid = EnvironmentEdgeManager.currentTimeMillis();
+    // Regionid is a timestamp.  It can't be less than that of the parent or it
+    // will insert at the wrong location in .META. (See HBASE-710).
+    if (rid < hri.getRegionId()) {
+      LOG.warn("Clock skew; parent regions id is " + hri.getRegionId() +
+        " but current time here is " + rid);
+      rid = hri.getRegionId() + 1;
+    }
+    return rid;
+  }
+
+  /**
+   * Run the transaction.
+   * @param server Hosting server instance.
+   * @param services Used to online/offline regions.
+   * @throws IOException If thrown, transaction failed. Call {@link #rollback(OnlineRegions)}
+   * @return Regions created
+   * @see #rollback(OnlineRegions)
+   */
+  PairOfSameType<HRegion> execute(final Server server,
+      final RegionServerServices services)
+  throws IOException {
+    LOG.info("Starting split of region " + this.parent);
+    if ((server != null && server.isStopped()) ||
+        (services != null && services.isStopping())) {
+      throw new IOException("Server is stopped or stopping");
+    }
+    assert !this.parent.lock.writeLock().isHeldByCurrentThread() : "Unsafe to hold write lock while performing RPCs";
+
+    // If true, no cluster to write meta edits into.
+    boolean testing = server == null? true:
+      server.getConfiguration().getBoolean("hbase.testing.nocluster", false);
+    this.fileSplitTimeout = testing ? this.fileSplitTimeout :
+        server.getConfiguration().getLong(
+            "hbase.regionserver.fileSplitTimeout", this.fileSplitTimeout);
+
+    createSplitDir(this.parent.getFilesystem(), this.splitdir);
+    this.journal.add(JournalEntry.CREATE_SPLIT_DIR);
+
+    List<StoreFile> hstoreFilesToSplit = this.parent.close(false);
+    if (hstoreFilesToSplit == null) {
+      // The region was closed by a concurrent thread.  We can't continue
+      // with the split; we must just abandon it.  If we reopen or split,
+      // this could cause problems because the region has probably already
+      // been moved to a different server, or is in the process of moving
+      // to a different server.
+      throw new IOException("Failed to close region: already closed by " +
+        "another thread");
+    }
+    this.journal.add(JournalEntry.CLOSED_PARENT_REGION);
+
+    if (!testing) {
+      services.removeFromOnlineRegions(this.parent.getRegionInfo().getEncodedName());
+    }
+    this.journal.add(JournalEntry.OFFLINED_PARENT);
+    
+    // TODO: If the below were multithreaded would we complete steps in less
+    // elapsed time?  St.Ack 20100920
+
+    splitStoreFiles(this.splitdir, hstoreFilesToSplit);
+    // splitStoreFiles creates daughter region dirs under the parent splits dir
+    // Nothing to unroll here if failure -- clean up of CREATE_SPLIT_DIR will
+    // clean this up.
+
+    // Log to the journal that we are creating region A, the first daughter
+    // region.  We could fail halfway through.  If we do, we could have left
+    // stuff in fs that needs cleanup -- a storefile or two.  That's why we
+    // add the entry to the journal BEFORE rather than AFTER the change.
+    this.journal.add(JournalEntry.STARTED_REGION_A_CREATION);
+    HRegion a = createDaughterRegion(this.hri_a, this.parent.flushRequester);
+
+    // Ditto
+    this.journal.add(JournalEntry.STARTED_REGION_B_CREATION);
+    HRegion b = createDaughterRegion(this.hri_b, this.parent.flushRequester);
+
+    // Edit parent in meta
+    if (!testing) {
+      MetaEditor.offlineParentInMeta(server.getCatalogTracker(),
+        this.parent.getRegionInfo(), a.getRegionInfo(), b.getRegionInfo());
+    }
+
+    // This is the point of no return.  We are committed to the split now.  We
+    // still have the daughter regions to open, but meta has been changed.
+    // If we fail from here on out, we cannot roll back, so we'll just abort.
+    // Since meta has been changed, a fixup will need to run while the master
+    // processes the crashed server (TODO: Verify this is in place).
+
+    // TODO: Could we be smarter about the sequence in which we do these steps?
+
+    if (!testing) {
+      // Open daughters in parallel.
+      DaughterOpener aOpener = new DaughterOpener(server, services, a);
+      DaughterOpener bOpener = new DaughterOpener(server, services, b);
+      aOpener.start();
+      bOpener.start();
+      try {
+        aOpener.join();
+        bOpener.join();
+      } catch (InterruptedException e) {
+        server.abort("Exception running daughter opens", e);
+      }
+    }
+
+    // On the way out, the splitdir and its dross are left in place; since the
+    // split was successful, it will be cleaned up when the parent region is
+    // deleted.
+    return new PairOfSameType<HRegion>(a, b);
+  }
+
+  class DaughterOpener extends Thread {
+    private final RegionServerServices services;
+    private final Server server;
+    private final HRegion r;
+
+    DaughterOpener(final Server s, final RegionServerServices services,
+        final HRegion r) {
+      super(s.getServerName() + "-daughterOpener=" + r.getRegionInfo().getEncodedName());
+      setDaemon(true);
+      this.services = services;
+      this.server = s;
+      this.r = r;
+    }
+
+    @Override
+    public void run() {
+      try {
+        openDaughterRegion(this.server, this.services, r);
+      } catch (Throwable t) {
+        this.server.abort("Failed open of daughter " +
+          this.r.getRegionInfo().getRegionNameAsString(), t);
+      }
+    }
+  }
+
+  /**
+   * Open the daughter region, add it to the online regions list and update meta.
+   * @param server Hosting server instance.
+   * @param services Used to online the daughter region.
+   * @param daughter Daughter region to open.
+   * @throws IOException
+   * @throws KeeperException
+   */
+  void openDaughterRegion(final Server server,
+      final RegionServerServices services, final HRegion daughter)
+  throws IOException, KeeperException {
+    if (server.isStopped() || services.isStopping()) {
+      MetaEditor.addDaughter(server.getCatalogTracker(),
+        daughter.getRegionInfo(), null);
+      LOG.info("Not opening daughter " +
+        daughter.getRegionInfo().getRegionNameAsString() +
+        " because stopping=" + services.isStopping() + ", stopped=" +
+        server.isStopped());
+      return;
+    }
+    HRegionInfo hri = daughter.getRegionInfo();
+    LoggingProgressable reporter =
+      new LoggingProgressable(hri, server.getConfiguration());
+    HRegion r = daughter.openHRegion(reporter);
+    services.postOpenDeployTasks(r, server.getCatalogTracker(), true);
+  }
+
+  static class LoggingProgressable implements Progressable {
+    private final HRegionInfo hri;
+    private long lastLog = -1;
+    private final long interval;
+
+    LoggingProgressable(final HRegionInfo hri, final Configuration c) {
+      this.hri = hri;
+      this.interval = c.getLong("hbase.regionserver.split.daughter.open.log.interval",
+        10000);
+    }
+
+    @Override
+    public void progress() {
+      long now = System.currentTimeMillis();
+      if (now - lastLog > this.interval) {
+        LOG.info("Opening " + this.hri.getRegionNameAsString());
+        this.lastLog = now;
+      }
+    }
+  }
+
+  private static Path getSplitDir(final HRegion r) {
+    return new Path(r.getRegionDir(), SPLITDIR);
+  }
+
+  /**
+   * @param fs Filesystem to use
+   * @param splitdir Directory to store temporary split data in
+   * @throws IOException If <code>splitdir</code> already exists or we fail
+   * to create it.
+   * @see #cleanupSplitDir(FileSystem, Path)
+   */
+  private static void createSplitDir(final FileSystem fs, final Path splitdir)
+  throws IOException {
+    if (fs.exists(splitdir)) throw new IOException("Splitdir already exists? " + splitdir);
+    if (!fs.mkdirs(splitdir)) throw new IOException("Failed create of " + splitdir);
+  }
+
+  private static void cleanupSplitDir(final FileSystem fs, final Path splitdir)
+  throws IOException {
+    // Splitdir may have been cleaned up by reopen of the parent dir.
+    deleteDir(fs, splitdir, false);
+  }
+
+  /**
+   * @param fs Filesystem to use
+   * @param dir Directory to delete
+   * @param mustPreExist If true, we'll throw exception if <code>dir</code>
+   * does not preexist, else we'll just pass.
+   * @throws IOException Thrown if we fail to delete passed <code>dir</code>
+   */
+  private static void deleteDir(final FileSystem fs, final Path dir,
+      final boolean mustPreExist)
+  throws IOException {
+    if (!fs.exists(dir)) {
+      if (mustPreExist) throw new IOException(dir.toString() + " does not exist!");
+    } else if (!fs.delete(dir, true)) {
+      throw new IOException("Failed delete of " + dir);
+    }
+  }
+
+  private void splitStoreFiles(final Path splitdir,
+    final List<StoreFile> hstoreFilesToSplit)
+  throws IOException {
+    if (hstoreFilesToSplit == null) {
+      // Could be null because close didn't succeed -- for now consider it fatal
+      throw new IOException("Close returned empty list of StoreFiles");
+    }
+    // The following code sets up a thread pool executor with as many slots as
+    // there are files to split.  It then fires up everything, waits for
+    // completion and finally checks for any exceptions.
+    int nbFiles = hstoreFilesToSplit.size();
+    ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
+    builder.setNameFormat("StoreFileSplitter-%1$d");
+    ThreadFactory factory = builder.build();
+    ThreadPoolExecutor threadPool =
+      (ThreadPoolExecutor) Executors.newFixedThreadPool(nbFiles, factory);
+    List<Future<Void>> futures = new ArrayList<Future<Void>>(nbFiles);
+
+     // Split each store file.
+    for (StoreFile sf: hstoreFilesToSplit) {
+      StoreFileSplitter sfs = new StoreFileSplitter(sf, splitdir);
+      futures.add(threadPool.submit(sfs));
+    }
+    // Shutdown the pool
+    threadPool.shutdown();
+
+    // Wait for all the tasks to finish
+    try {
+      boolean stillRunning = !threadPool.awaitTermination(
+          this.fileSplitTimeout, TimeUnit.MILLISECONDS);
+      if (stillRunning) {
+        threadPool.shutdownNow();
+        throw new IOException("Took too long to split the" +
+            " files and create the references, aborting split");
+      }
+    } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      throw new IOException("Interrupted while waiting for file splitters", e);
+    }
+
+    // Look for any exception
+    for (Future<Void> future : futures) {
+      try {
+        future.get();
+      } catch (InterruptedException e) {
+        Thread.currentThread().interrupt();
+        throw new IOException(
+            "Interrupted while trying to get the results of file splitters", e);
+      } catch (ExecutionException e) {
+        throw new IOException(e);
+      }
+    }
+  }
+
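+  /*
+   * Split the passed store file: write a "bottom" reference for daughter A and
+   * a "top" reference for daughter B into their store directories under the
+   * splits dir.
+   */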
+  private void splitStoreFile(final StoreFile sf, final Path splitdir)
+  throws IOException {
+    FileSystem fs = this.parent.getFilesystem();
+    byte [] family = sf.getFamily();
+    String encoded = this.hri_a.getEncodedName();
+    Path storedir = Store.getStoreHomedir(splitdir, encoded, family);
+    StoreFile.split(fs, storedir, sf, this.splitrow, Range.bottom);
+    encoded = this.hri_b.getEncodedName();
+    storedir = Store.getStoreHomedir(splitdir, encoded, family);
+    StoreFile.split(fs, storedir, sf, this.splitrow, Range.top);
+  }
+
+  /**
+   * Utility class used to do the file splitting / reference writing
+   * in parallel instead of sequentially.
+   */
+  class StoreFileSplitter implements Callable<Void> {
+
+    private final StoreFile sf;
+    private final Path splitdir;
+
+    /**
+     * Constructor that takes what it needs to split
+     * @param sf which file
+     * @param splitdir where the splitting is done
+     */
+    public StoreFileSplitter(final StoreFile sf, final Path splitdir) {
+      this.sf = sf;
+      this.splitdir = splitdir;
+    }
+
+    public Void call() throws IOException {
+      splitStoreFile(sf, splitdir);
+      return null;
+    }
+  }
+
+  /**
+   * @param hri Spec. for daughter region to open.
+   * @param flusher Flusher this region should use.
+   * @return Created daughter HRegion.
+   * @throws IOException
+   * @see #cleanupDaughterRegion(FileSystem, Path, String)
+   */
+  HRegion createDaughterRegion(final HRegionInfo hri,
+      final FlushRequester flusher)
+  throws IOException {
+    // Package private so unit tests have access.
+    FileSystem fs = this.parent.getFilesystem();
+    Path regionDir = getSplitDirForDaughter(this.parent.getFilesystem(),
+      this.splitdir, hri);
+    HRegion r = HRegion.newHRegion(this.parent.getTableDir(),
+      this.parent.getLog(), fs, this.parent.getConf(),
+      hri, flusher);
+    HRegion.moveInitialFilesIntoPlace(fs, regionDir, r.getRegionDir());
+    return r;
+  }
+
+  private static void cleanupDaughterRegion(final FileSystem fs,
+    final Path tabledir, final String encodedName)
+  throws IOException {
+    Path regiondir = HRegion.getRegionDir(tabledir, encodedName);
+    // Dir may not preexist.
+    deleteDir(fs, regiondir, false);
+  }
+
+  /*
+   * Get the daughter's directory in the splits dir.  The splits dir is under
+   * the parent region's directory.
+   * @param fs
+   * @param splitdir
+   * @param hri
+   * @return Path to daughter split dir.
+   * @throws IOException
+   */
+  private static Path getSplitDirForDaughter(final FileSystem fs,
+      final Path splitdir, final HRegionInfo hri)
+  throws IOException {
+    return new Path(splitdir, hri.getEncodedName());
+  }
+
+  /**
+   * Roll back a failed transaction by undoing journaled actions in reverse order.
+   * @param or Object that can online/offline parent region.  Can be passed null
+   * by unit tests.
+   * @throws IOException If thrown, rollback failed.  Take drastic action.
+   */
+  public void rollback(final OnlineRegions or) throws IOException {
+    FileSystem fs = this.parent.getFilesystem();
+    ListIterator<JournalEntry> iterator =
+      this.journal.listIterator(this.journal.size());
+    while (iterator.hasPrevious()) {
+      JournalEntry je = iterator.previous();
+      switch(je) {
+      case CREATE_SPLIT_DIR:
+        cleanupSplitDir(fs, this.splitdir);
+        break;
+
+      case CLOSED_PARENT_REGION:
+        // So, this returns a seqid but if we just closed and then reopened, we
+        // should be ok. On close, we flushed using sequenceid obtained from
+        // hosting regionserver so no need to propagate the sequenceid returned
+        // out of initialize below up into regionserver as we normally do.
+        // TODO: Verify.
+        this.parent.initialize();
+        break;
+
+      case STARTED_REGION_A_CREATION:
+        cleanupDaughterRegion(fs, this.parent.getTableDir(),
+          this.hri_a.getEncodedName());
+        break;
+
+      case STARTED_REGION_B_CREATION:
+        cleanupDaughterRegion(fs, this.parent.getTableDir(),
+          this.hri_b.getEncodedName());
+        break;
+
+      case OFFLINED_PARENT:
+        if (or != null) or.addToOnlineRegions(this.parent);
+        break;
+
+      default:
+        throw new RuntimeException("Unhandled journal entry: " + je);
+      }
+    }
+  }
+
+  HRegionInfo getFirstDaughter() {
+    return hri_a;
+  }
+
+  HRegionInfo getSecondDaughter() {
+    return hri_b;
+  }
+
+  // For unit testing.
+  Path getSplitDir() {
+    return this.splitdir;
+  }
+
+  /**
+   * Clean up any split detritus that may have been left around from previous
+   * split attempts.
+   * Call this method on initial region deploy.  Cleans up any mess
+   * left by previous deploys of passed <code>r</code> region.
+   * @param r
+   * @throws IOException 
+   */
+  static void cleanupAnySplitDetritus(final HRegion r) throws IOException {
+    Path splitdir = getSplitDir(r);
+    FileSystem fs = r.getFilesystem();
+    if (!fs.exists(splitdir)) return;
+    // Look at the splitdir.  It could have the encoded names of the daughter
+    // regions we tried to make.  See if the daughter regions actually got made
+    // out under the tabledir.  If here under splitdir still, then the split did
+    // not complete.  Try and do cleanup.  This code WILL NOT catch the case
+    // where we successfully created daughter a but regionserver crashed during
+    // the creation of region b.  In this case, there'll be an orphan daughter
+    // dir in the filesystem.  TODO: Fix.
+    FileStatus [] daughters = fs.listStatus(splitdir, new FSUtils.DirFilter(fs));
+    for (int i = 0; i < daughters.length; i++) {
+      cleanupDaughterRegion(fs, r.getTableDir(),
+        daughters[i].getPath().getName());
+    }
+    cleanupSplitDir(r.getFilesystem(), splitdir);
+    LOG.info("Cleaned up old failed split transaction detritus: " + splitdir);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
new file mode 100644
index 0000000..7376d6a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
@@ -0,0 +1,1543 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.SortedSet;
+import java.util.concurrent.CopyOnWriteArraySet;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.util.StringUtils;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Iterables;
+
+/**
+ * A Store holds a column family in a Region.  It's a memstore and a set of zero
+ * or more StoreFiles, which stretch backwards over time.
+ *
+ * <p>There's no reason to consider append-logging at this level; all logging
+ * and locking is handled at the HRegion level.  Store just provides
+ * services to manage sets of StoreFiles.  One of the most important of those
+ * services is compaction services where files are aggregated once they pass
+ * a configurable threshold.
+ *
+ * <p>The only thing having to do with logs that Store needs to deal with is
+ * the reconstructionLog.  This is a segment of an HRegion's log that might
+ * NOT be present upon startup.  If the param is NULL, there's nothing to do.
+ * If the param is non-NULL, we need to process the log to reconstruct
+ * a TreeMap that might not have been written to disk before the process
+ * died.
+ *
+ * <p>It's assumed that after this constructor returns, the reconstructionLog
+ * file will be deleted (by whoever has instantiated the Store).
+ *
+ * <p>Locking and transactions are handled at a higher level.  This API should
+ * not be called directly but by an HRegion manager.
+ */
+public class Store implements HeapSize {
+  static final Log LOG = LogFactory.getLog(Store.class);
+  protected final MemStore memstore;
+  // This stores directory in the filesystem.
+  private final Path homedir;
+  private final HRegion region;
+  private final HColumnDescriptor family;
+  final FileSystem fs;
+  final Configuration conf;
+  // ttl in milliseconds.
+  protected long ttl;
+  private long majorCompactionTime;
+  private final int maxFilesToCompact;
+  private final long minCompactSize;
+  // compactRatio: double on purpose!  Float.MAX < Long.MAX < Double.MAX
+  // With float, java will downcast your long to float for comparisons (bad)
+  private double compactRatio;
+  private long lastCompactSize = 0;
+  /* how many bytes to write between status checks */
+  static int closeCheckInterval = 0;
+  private final long desiredMaxFileSize;
+  private final int blockingStoreFileCount;
+  private volatile long storeSize = 0L;
+  private final Object flushLock = new Object();
+  final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
+  private final String storeNameStr;
+  private final boolean inMemory;
+
+  /*
+   * List of store files inside this store. This is an immutable list that
+   * is atomically replaced when its contents change.
+   */
+  private ImmutableList<StoreFile> storefiles = null;
+
+
+  // All access must be synchronized.
+  private final CopyOnWriteArraySet<ChangedReadersObserver> changedReaderObservers =
+    new CopyOnWriteArraySet<ChangedReadersObserver>();
+
+  private final Object compactLock = new Object();
+  private final int compactionThreshold;
+  private final int blocksize;
+  private final boolean blockcache;
+  /** Compression algorithm for flush files and minor compaction */
+  private final Compression.Algorithm compression;
+  /** Compression algorithm for major compaction */
+  private final Compression.Algorithm compactionCompression;
+
+  // Comparing KeyValues
+  final KeyValue.KVComparator comparator;
+
+  /**
+   * Constructor
+   * @param basedir qualified path under which the region directory lives;
+   * generally the table subdirectory
+   * @param region
+   * @param family HColumnDescriptor for this column
+   * @param fs file system object
+   * @param conf configuration object
+   * @throws IOException
+   */
+  protected Store(Path basedir, HRegion region, HColumnDescriptor family,
+    FileSystem fs, Configuration conf)
+  throws IOException {
+    HRegionInfo info = region.regionInfo;
+    this.fs = fs;
+    this.homedir = getStoreHomedir(basedir, info.getEncodedName(), family.getName());
+    if (!this.fs.exists(this.homedir)) {
+      if (!this.fs.mkdirs(this.homedir))
+        throw new IOException("Failed create of: " + this.homedir.toString());
+    }
+    this.region = region;
+    this.family = family;
+    this.conf = conf;
+    this.blockcache = family.isBlockCacheEnabled();
+    this.blocksize = family.getBlocksize();
+    this.compression = family.getCompression();
+    // avoid overriding compression setting for major compactions if the user
+    // has not specified it separately
+    this.compactionCompression =
+      (family.getCompactionCompression() != Compression.Algorithm.NONE) ?
+        family.getCompactionCompression() : this.compression;
+    this.comparator = info.getComparator();
+    // getTimeToLive returns ttl in seconds.  Convert to milliseconds.
+    this.ttl = family.getTimeToLive();
+    if (ttl == HConstants.FOREVER) {
+      // default is unlimited ttl.
+      ttl = Long.MAX_VALUE;
+    } else if (ttl == -1) {
+      ttl = Long.MAX_VALUE;
+    } else {
+      // second -> ms adjust for user data
+      this.ttl *= 1000;
+    }
+    this.memstore = new MemStore(this.comparator);
+    this.storeNameStr = Bytes.toString(this.family.getName());
+
+    // By default, we compact if a Store has more than
+    // "hbase.hstore.compactionThreshold" (default 3) store files
+    this.compactionThreshold = Math.max(2,
+      conf.getInt("hbase.hstore.compactionThreshold", 3));
+
+    // Check if this is in-memory store
+    this.inMemory = family.isInMemory();
+
+    // By default we split the region if a file > HConstants.DEFAULT_MAX_FILE_SIZE.
+    long maxFileSize = info.getTableDesc().getMaxFileSize();
+    if (maxFileSize == HConstants.DEFAULT_MAX_FILE_SIZE) {
+      maxFileSize = conf.getLong("hbase.hregion.max.filesize",
+        HConstants.DEFAULT_MAX_FILE_SIZE);
+    }
+    this.desiredMaxFileSize = maxFileSize;
+    this.blockingStoreFileCount =
+      conf.getInt("hbase.hstore.blockingStoreFiles", 7);
+
+    this.majorCompactionTime = getNextMajorCompactTime();
+
+    this.maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
+    this.minCompactSize = conf.getLong("hbase.hstore.compaction.min.size",
+        this.region.memstoreFlushSize);
+    this.compactRatio = conf.getFloat("hbase.hstore.compaction.ratio", 1.2F);
+
+    if (Store.closeCheckInterval == 0) {
+      Store.closeCheckInterval = conf.getInt(
+          "hbase.hstore.close.check.interval", 10*1000*1000 /* 10 MB */);
+    }
+    this.storefiles = sortAndClone(loadStoreFiles());
+  }
+
+  public HColumnDescriptor getFamily() {
+    return this.family;
+  }
+
+  /**
+   * @return The maximum sequence id in all store files.
+   */
+  long getMaxSequenceId() {
+    return StoreFile.getMaxSequenceIdInList(this.getStorefiles());
+  }
+
+  /**
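+   * Build the path to a Store's home directory.  Illustrative example only
+   * (paths and names below are hypothetical):
+   * <pre>
+   *   // yields tabledir/encodedName/family, e.g. /hbase/t1/abc123/cf
+   *   Path p = Store.getStoreHomedir(new Path("/hbase/t1"), "abc123", Bytes.toBytes("cf"));
+   * </pre>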
+   * @param tabledir
+   * @param encodedName Encoded region name.
+   * @param family
+   * @return Path to family/Store home directory.
+   */
+  public static Path getStoreHomedir(final Path tabledir,
+      final String encodedName, final byte [] family) {
+    return new Path(tabledir, new Path(encodedName,
+      new Path(Bytes.toString(family))));
+  }
+
+  /**
+   * Return the directory in which this store stores its
+   * StoreFiles
+   */
+  public Path getHomedir() {
+    return homedir;
+  }
+
+  /*
+   * Creates an unsorted list of StoreFile loaded from the given directory.
+   * @throws IOException
+   */
+  private List<StoreFile> loadStoreFiles()
+  throws IOException {
+    ArrayList<StoreFile> results = new ArrayList<StoreFile>();
+    FileStatus files[] = this.fs.listStatus(this.homedir);
+    for (int i = 0; files != null && i < files.length; i++) {
+      // Skip directories.
+      if (files[i].isDir()) {
+        continue;
+      }
+      Path p = files[i].getPath();
+      // Check for empty file.  Should never be the case but can happen
+      // after data loss in hdfs for whatever reason (upgrade, etc.): HBASE-646
+      if (this.fs.getFileStatus(p).getLen() <= 0) {
+        LOG.warn("Skipping " + p + " because its empty. HBASE-646 DATA LOSS?");
+        continue;
+      }
+      StoreFile curfile = null;
+      try {
+        curfile = new StoreFile(fs, p, blockcache, this.conf,
+            this.family.getBloomFilterType(), this.inMemory);
+        curfile.createReader();
+      } catch (IOException ioe) {
+        LOG.warn("Failed open of " + p + "; presumption is that file was " +
+          "corrupted at flush and lost edits picked up by commit log replay. " +
+          "Verify!", ioe);
+        continue;
+      }
+      long length = curfile.getReader().length();
+      this.storeSize += length;
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("loaded " + curfile.toStringDetailed());
+      }
+      results.add(curfile);
+    }
+    return results;
+  }
+
+  /**
+   * Adds a value to the memstore
+   *
+   * @param kv
+   * @return memstore size delta
+   */
+  protected long add(final KeyValue kv) {
+    lock.readLock().lock();
+    try {
+      return this.memstore.add(kv);
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Adds a delete KeyValue to the memstore
+   *
+   * @param kv delete to add
+   * @return memstore size delta
+   */
+  protected long delete(final KeyValue kv) {
+    lock.readLock().lock();
+    try {
+      return this.memstore.delete(kv);
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * @return All store files.
+   */
+  List<StoreFile> getStorefiles() {
+    return this.storefiles;
+  }
+
+  public void bulkLoadHFile(String srcPathStr) throws IOException {
+    Path srcPath = new Path(srcPathStr);
+
+    HFile.Reader reader  = null;
+    try {
+      LOG.info("Validating hfile at " + srcPath + " for inclusion in "
+          + "store " + this + " region " + this.region);
+      reader = new HFile.Reader(srcPath.getFileSystem(conf),
+          srcPath, null, false);
+      reader.loadFileInfo();
+
+      byte[] firstKey = reader.getFirstRowKey();
+      byte[] lk = reader.getLastKey();
+      byte[] lastKey =
+          (lk == null) ? null :
+              KeyValue.createKeyValueFromKey(lk).getRow();
+
+      LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
+          " last=" + Bytes.toStringBinary(lastKey));
+      LOG.debug("Region bounds: first=" +
+          Bytes.toStringBinary(region.getStartKey()) +
+          " last=" + Bytes.toStringBinary(region.getEndKey()));
+
+      HRegionInfo hri = region.getRegionInfo();
+      if (!hri.containsRange(firstKey, lastKey)) {
+        throw new WrongRegionException(
+            "Bulk load file " + srcPathStr + " does not fit inside region "
+            + this.region);
+      }
+    } finally {
+      if (reader != null) reader.close();
+    }
+
+    // Move the file if it's on another filesystem
+    FileSystem srcFs = srcPath.getFileSystem(conf);
+    if (!srcFs.equals(fs)) {
+      LOG.info("File " + srcPath + " on different filesystem than " +
+          "destination store - moving to this filesystem.");
+      Path tmpPath = getTmpPath();
+      FileUtil.copy(srcFs, srcPath, fs, tmpPath, false, conf);
+      LOG.info("Copied to temporary path on dst filesystem: " + tmpPath);
+      srcPath = tmpPath;
+    }
+
+    Path dstPath = StoreFile.getRandomFilename(fs, homedir);
+    LOG.info("Renaming bulk load file " + srcPath + " to " + dstPath);
+    StoreFile.rename(fs, srcPath, dstPath);
+
+    StoreFile sf = new StoreFile(fs, dstPath, blockcache,
+        this.conf, this.family.getBloomFilterType(), this.inMemory);
+    sf.createReader();
+
+    LOG.info("Moved hfile " + srcPath + " into store directory " +
+        homedir + " - updating store file list.");
+
+    // Append the new storefile into the list
+    this.lock.writeLock().lock();
+    try {
+      ArrayList<StoreFile> newFiles = new ArrayList<StoreFile>(storefiles);
+      newFiles.add(sf);
+      this.storefiles = sortAndClone(newFiles);
+      notifyChangedReadersObservers();
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+    LOG.info("Successfully loaded store file " + srcPath
+        + " into store " + this + " (new location: " + dstPath + ")");
+  }
+
+  /**
+   * Get a temporary path in this region. These temporary files
+   * will get cleaned up when the region is re-opened if they are
+   * still around.
+   */
+  private Path getTmpPath() throws IOException {
+    return StoreFile.getRandomFilename(
+        fs, region.getTmpDir());
+  }
+
+  /**
+   * Close all the readers.
+   *
+   * We don't need to worry about subsequent requests because the HRegion holds
+   * a write lock that will prevent any more reads or writes.
+   *
+   * @throws IOException
+   */
+  ImmutableList<StoreFile> close() throws IOException {
+    this.lock.writeLock().lock();
+    try {
+      ImmutableList<StoreFile> result = storefiles;
+
+      // Clear so metrics doesn't find them.
+      storefiles = ImmutableList.of();
+
+      for (StoreFile f: result) {
+        f.closeReader();
+      }
+      LOG.debug("closed " + this.storeNameStr);
+      return result;
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+  }
+
+  /**
+   * Snapshot this store's memstore.  Call before running
+   * {@link #flushCache(long, SortedSet, TimeRangeTracker)} so it has some work to do.
+   */
+  void snapshot() {
+    this.memstore.snapshot();
+  }
+
+  /**
+   * Write out current snapshot.  Presumes {@link #snapshot()} has been called
+   * previously.
+   * @param logCacheFlushId flush sequence number
+   * @param snapshot snapshot of memstore KeyValues to flush
+   * @param snapshotTimeRangeTracker
+   * @return The StoreFile created from the snapshot, or null if the snapshot was empty
+   * @throws IOException
+   */
+  private StoreFile flushCache(final long logCacheFlushId,
+      SortedSet<KeyValue> snapshot,
+      TimeRangeTracker snapshotTimeRangeTracker) throws IOException {
+    // If an exception happens flushing, we let it out without clearing
+    // the memstore snapshot.  The old snapshot will be returned when we say
+    // 'snapshot', the next time flush comes around.
+    return internalFlushCache(snapshot, logCacheFlushId, snapshotTimeRangeTracker);
+  }
+
+  /*
+   * @param set snapshot of memstore KeyValues to flush
+   * @param logCacheFlushId
+   * @return StoreFile created, or null if <code>set</code> is empty.
+   * @throws IOException
+   */
+  private StoreFile internalFlushCache(final SortedSet<KeyValue> set,
+      final long logCacheFlushId,
+      TimeRangeTracker snapshotTimeRangeTracker)
+      throws IOException {
+    StoreFile.Writer writer = null;
+    long flushed = 0;
+    // Don't flush if there are no entries.
+    if (set.size() == 0) {
+      return null;
+    }
+    long oldestTimestamp = System.currentTimeMillis() - ttl;
+    // TODO:  We can fail in the below block before we complete adding this
+    // flush to list of store files.  Add cleanup of anything put on filesystem
+    // if we fail.
+    synchronized (flushLock) {
+      // A. Write the map out to the disk
+      writer = createWriterInTmp(set.size());
+      writer.setTimeRangeTracker(snapshotTimeRangeTracker);
+      int entries = 0;
+      try {
+        for (KeyValue kv: set) {
+          if (!isExpired(kv, oldestTimestamp)) {
+            writer.append(kv);
+            entries++;
+            flushed += this.memstore.heapSizeChange(kv, true);
+          }
+        }
+      } finally {
+        // Write out the log sequence number that corresponds to this output
+        // hfile.  The hfile is current up to and including logCacheFlushId.
+        writer.appendMetadata(logCacheFlushId, false);
+        writer.close();
+      }
+    }
+
+    // Write-out finished successfully, move into the right spot
+    Path dstPath = StoreFile.getUniqueFile(fs, homedir);
+    LOG.info("Renaming flushed file at " + writer.getPath() + " to " + dstPath);
+    if (!fs.rename(writer.getPath(), dstPath)) {
+      LOG.warn("Unable to rename " + writer.getPath() + " to " + dstPath);
+    }
+
+    StoreFile sf = new StoreFile(this.fs, dstPath, blockcache,
+      this.conf, this.family.getBloomFilterType(), this.inMemory);
+    StoreFile.Reader r = sf.createReader();
+    this.storeSize += r.length();
+    if(LOG.isInfoEnabled()) {
+      LOG.info("Added " + sf + ", entries=" + r.getEntries() +
+        ", sequenceid=" + logCacheFlushId +
+        ", memsize=" + StringUtils.humanReadableInt(flushed) +
+        ", filesize=" + StringUtils.humanReadableInt(r.length()));
+    }
+    return sf;
+  }
+
+  /*
+   * @param maxKeyCount
+   * @return Writer for a new StoreFile in the tmp dir.
+   */
+  private StoreFile.Writer createWriterInTmp(int maxKeyCount)
+  throws IOException {
+    return createWriterInTmp(maxKeyCount, this.compression);
+  }
+
+  /*
+   * @param maxKeyCount
+   * @param compression Compression algorithm to use
+   * @return Writer for a new StoreFile in the tmp dir.
+   */
+  private StoreFile.Writer createWriterInTmp(int maxKeyCount,
+    Compression.Algorithm compression)
+  throws IOException {
+    return StoreFile.createWriter(this.fs, region.getTmpDir(), this.blocksize,
+        compression, this.comparator, this.conf,
+        this.family.getBloomFilterType(), maxKeyCount);
+  }
+
+  /*
+   * Change storefiles, adding into place the StoreFile produced by this new flush.
+   * @param sf StoreFile produced by the flush
+   * @param set The snapshot that was used to make the passed file <code>sf</code>.
+   * @throws IOException
+   * @return Whether compaction is required.
+   */
+  private boolean updateStorefiles(final StoreFile sf,
+                                   final SortedSet<KeyValue> set)
+  throws IOException {
+    this.lock.writeLock().lock();
+    try {
+      ArrayList<StoreFile> newList = new ArrayList<StoreFile>(storefiles);
+      newList.add(sf);
+      storefiles = sortAndClone(newList);
+      this.memstore.clearSnapshot(set);
+
+      // Tell listeners of the change in readers.
+      notifyChangedReadersObservers();
+
+      return this.storefiles.size() >= this.compactionThreshold;
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+  }
+
+  /*
+   * Notify all observers that set of Readers has changed.
+   * @throws IOException
+   */
+  private void notifyChangedReadersObservers() throws IOException {
+    for (ChangedReadersObserver o: this.changedReaderObservers) {
+      o.updateReaders();
+    }
+  }
+
+  /*
+   * @param o Observer who wants to know about changes in set of Readers
+   */
+  void addChangedReaderObserver(ChangedReadersObserver o) {
+    this.changedReaderObservers.add(o);
+  }
+
+  /*
+   * @param o Observer no longer interested in changes in set of Readers.
+   */
+  void deleteChangedReaderObserver(ChangedReadersObserver o) {
+    // We don't check if observer present; it may not be (legitimately)
+    this.changedReaderObservers.remove(o);
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Compaction
+  //////////////////////////////////////////////////////////////////////////////
+
+  /**
+   * Compact the StoreFiles.  This method may take some time, so the calling
+   * thread must be able to block for long periods.
+   *
+   * <p>During this time, the Store can work as usual, getting values from
+   * StoreFiles and writing new StoreFiles from the memstore.
+   *
+   * Existing StoreFiles are not destroyed until the new compacted StoreFile is
+   * completely written-out to disk.
+   *
+   * <p>The compactLock prevents multiple simultaneous compactions.
+   * The structureLock prevents us from interfering with other write operations.
+   *
+   * <p>We don't want to hold the structureLock for the whole time, as a compact()
+   * can be lengthy and we want to allow cache-flushes during this period.
+   *
+   * @param forceMajor True to force a major compaction regardless of thresholds
+   * @return row to split around if a split is needed, null otherwise
+   * @throws IOException
+   */
+  StoreSize compact(final boolean forceMajor) throws IOException {
+    boolean forceSplit = this.region.shouldSplit(false);
+    boolean majorcompaction = forceMajor;
+    synchronized (compactLock) {
+      this.lastCompactSize = 0;
+
+      // filesToCompact are sorted oldest to newest.
+      List<StoreFile> filesToCompact = this.storefiles;
+      if (filesToCompact.isEmpty()) {
+        LOG.debug(this.storeNameStr + ": no store files to compact");
+        return null;
+      }
+
+      // Check to see if we need to do a major compaction on this region.
+      // If so, set majorcompaction to true to skip the incremental
+      // compacting below.  Only check if majorcompaction is not already true.
+      if (!majorcompaction) {
+        majorcompaction = isMajorCompaction(filesToCompact);
+      }
+
+      boolean references = hasReferences(filesToCompact);
+      if (!majorcompaction && !references &&
+          (forceSplit || (filesToCompact.size() < compactionThreshold))) {
+        return checkSplit(forceSplit);
+      }
+
+      /* get store file sizes for incremental compacting selection.
+       * normal skew:
+       *
+       *         older ----> newer
+       *     _
+       *    | |   _
+       *    | |  | |   _
+       *  --|-|- |-|- |-|---_-------_-------  minCompactSize
+       *    | |  | |  | |  | |  _  | |
+       *    | |  | |  | |  | | | | | |
+       *    | |  | |  | |  | | | | | |
+       */
+      int countOfFiles = filesToCompact.size();
+      long [] fileSizes = new long[countOfFiles];
+      long [] sumSize = new long[countOfFiles];
+      for (int i = countOfFiles-1; i >= 0; --i) {
+        StoreFile file = filesToCompact.get(i);
+        Path path = file.getPath();
+        if (path == null) {
+          LOG.error("Path is null for " + file);
+          return null;
+        }
+        StoreFile.Reader r = file.getReader();
+        if (r == null) {
+          LOG.error("StoreFile " + file + " has a null Reader");
+          return null;
+        }
+        fileSizes[i] = file.getReader().length();
+        // calculate the sum of fileSizes[i, i+maxFilesToCompact-1) for the
+        // selection algorithm below
+        int tooFar = i + this.maxFilesToCompact - 1;
+        sumSize[i] = fileSizes[i]
+                   + ((i+1    < countOfFiles) ? sumSize[i+1]      : 0)
+                   - ((tooFar < countOfFiles) ? fileSizes[tooFar] : 0);
+      }
+
+      long totalSize = 0;
+      if (!majorcompaction && !references) {
+        // we're doing a minor compaction, let's see what files are applicable
+        int start = 0;
+        double r = this.compactRatio;
+
+        /* Start at the oldest file and stop when you find the first file that
+         * meets compaction criteria:
+         *   (1) a recently-flushed, small file (i.e. <= minCompactSize)
+         *      OR
+         *   (2) within the compactRatio of sum(newer_files)
+         * Given normal skew, any newer files will also meet these criteria
+         *
+         * Additional Note:
+         * If fileSizes.size() >> maxFilesToCompact, we will recurse on
+         * compact().  Consider the oldest files first to avoid a
+         * situation where we always compact [end-threshold,end).  Then, the
+         * last file becomes an aggregate of the previous compactions.
+         */
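+        /* Illustrative walk-through of the loop below (hypothetical sizes,
+         * default settings assumed: compactionThreshold=3, compactRatio=1.2,
+         * minCompactSize=64MB).  Files oldest to newest:
+         *   [500MB, 80MB, 60MB, 20MB, 10MB]
+         *   500MB > max(64MB, 1.2 * (80+60+20+10)MB = 204MB)  -> skip it, start=1
+         *   80MB  > max(64MB, 1.2 * (60+20+10)MB = 108MB)?  no -> stop
+         * so the minor compaction selects [80MB, 60MB, 20MB, 10MB].
+         */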
+        while(countOfFiles - start >= this.compactionThreshold &&
+              fileSizes[start] >
+                Math.max(minCompactSize, (long)(sumSize[start+1] * r))) {
+          ++start;
+        }
+        int end = Math.min(countOfFiles, start + this.maxFilesToCompact);
+        totalSize = fileSizes[start]
+                  + ((start+1 < countOfFiles) ? sumSize[start+1] : 0);
+
+        // if we don't have enough files to compact, just wait
+        if (end - start < this.compactionThreshold) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Skipped compaction of " + this.storeNameStr
+              + " because only " + (end - start) + " file(s) of size "
+              + StringUtils.humanReadableInt(totalSize)
+              + " meet compaction criteria.");
+          }
+          return checkSplit(forceSplit);
+        }
+
+        if (0 == start && end == countOfFiles) {
+          // we decided all the files were candidates! major compact
+          majorcompaction = true;
+        } else {
+          filesToCompact = new ArrayList<StoreFile>(filesToCompact.subList(start,
+            end));
+        }
+      } else {
+        // all files included in this compaction
+        for (long i : fileSizes) {
+          totalSize += i;
+        }
+      }
+      this.lastCompactSize = totalSize;
+
+      // Max-sequenceID is the last key in the files we're compacting
+      long maxId = StoreFile.getMaxSequenceIdInList(filesToCompact);
+
+      // Ready to go.  Have list of files to compact.
+      LOG.info("Started compaction of " + filesToCompact.size() + " file(s) in cf=" +
+          this.storeNameStr +
+        (references? ", hasReferences=true,": " ") + " into " +
+          region.getTmpDir() + ", seqid=" + maxId +
+          ", totalSize=" + StringUtils.humanReadableInt(totalSize));
+      StoreFile.Writer writer = compact(filesToCompact, majorcompaction, maxId);
+      // Move the compaction into place.
+      StoreFile sf = completeCompaction(filesToCompact, writer);
+      if (LOG.isInfoEnabled()) {
+        LOG.info("Completed" + (majorcompaction? " major ": " ") +
+          "compaction of " + filesToCompact.size() +
+          " file(s), new file=" + (sf == null? "none": sf.toString()) +
+          ", size=" + (sf == null? "none": StringUtils.humanReadableInt(sf.getReader().length())) +
+          "; total size for store is " + StringUtils.humanReadableInt(storeSize));
+      }
+    }
+    return checkSplit(forceSplit);
+  }
+
+  /*
+   * Compact the most recent N files. Essentially a hook for testing.
+   */
+  protected void compactRecent(int N) throws IOException {
+    synchronized(compactLock) {
+      List<StoreFile> filesToCompact = this.storefiles;
+      int count = filesToCompact.size();
+      if (N > count) {
+        throw new RuntimeException("Not enough files");
+      }
+
+      filesToCompact = new ArrayList<StoreFile>(filesToCompact.subList(count-N, count));
+      long maxId = StoreFile.getMaxSequenceIdInList(filesToCompact);
+      boolean majorcompaction = (N == count);
+
+      // Ready to go.  Have list of files to compact.
+      StoreFile.Writer writer = compact(filesToCompact, majorcompaction, maxId);
+      // Move the compaction into place.
+      StoreFile sf = completeCompaction(filesToCompact, writer);
+    }
+  }
+
+  /*
+   * @param files
+   * @return True if any of the files in <code>files</code> are References.
+   */
+  private boolean hasReferences(Collection<StoreFile> files) {
+    if (files != null && files.size() > 0) {
+      for (StoreFile hsf: files) {
+        if (hsf.isReference()) {
+          return true;
+        }
+      }
+    }
+    return false;
+  }
+
+  /*
+   * Gets lowest timestamp from files in a dir
+   *
+   * @param fs
+   * @param dir
+   * @throws IOException
+   */
+  private static long getLowestTimestamp(FileSystem fs, Path dir) throws IOException {
+    FileStatus[] stats = fs.listStatus(dir);
+    if (stats == null || stats.length == 0) {
+      return 0L;
+    }
+    long lowTimestamp = Long.MAX_VALUE;
+    for (int i = 0; i < stats.length; i++) {
+      long timestamp = stats[i].getModificationTime();
+      if (timestamp < lowTimestamp){
+        lowTimestamp = timestamp;
+      }
+    }
+    return lowTimestamp;
+  }
+
+  /*
+   * @return True if we should run a major compaction.
+   */
+  boolean isMajorCompaction() throws IOException {
+    return isMajorCompaction(storefiles);
+  }
+
+  /*
+   * @param filesToCompact Files to compact. Can be null.
+   * @return True if we should run a major compaction.
+   */
+  private boolean isMajorCompaction(final List<StoreFile> filesToCompact) throws IOException {
+    boolean result = false;
+    if (filesToCompact == null || filesToCompact.isEmpty() ||
+        majorCompactionTime == 0) {
+      return result;
+    }
+    // TODO: Use better method for determining stamp of last major (HBASE-2990)
+    long lowTimestamp = getLowestTimestamp(fs,
+      filesToCompact.get(0).getPath().getParent());
+    long now = System.currentTimeMillis();
+    if (lowTimestamp > 0L && lowTimestamp < (now - this.majorCompactionTime)) {
+      // Major compaction time has elapsed.
+      if (filesToCompact.size() == 1) {
+        // Single file
+        StoreFile sf = filesToCompact.get(0);
+        long oldest = now - sf.getReader().timeRangeTracker.minimumTimestamp;
+        if (sf.isMajorCompaction() &&
+            (this.ttl == HConstants.FOREVER || oldest < this.ttl)) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Skipping major compaction of " + this.storeNameStr +
+                " because one (major) compacted file only and oldestTime " +
+                oldest + "ms is < ttl=" + this.ttl);
+          }
+        }
+      } else {
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Major compaction triggered on store " + this.storeNameStr +
+            "; time since last major compaction " + (now - lowTimestamp) + "ms");
+        }
+        result = true;
+        this.majorCompactionTime = getNextMajorCompactTime();
+      }
+    }
+    return result;
+  }
+
+  long getNextMajorCompactTime() {
+    // default = 24hrs
+    long ret = conf.getLong(HConstants.MAJOR_COMPACTION_PERIOD, 1000*60*60*24);
+    if (family.getValue(HConstants.MAJOR_COMPACTION_PERIOD) != null) {
+      String strCompactionTime =
+        family.getValue(HConstants.MAJOR_COMPACTION_PERIOD);
+      ret = Long.parseLong(strCompactionTime);
+    }
+
+    if (ret > 0) {
+      // default = 20% = +/- 4.8 hrs
+      double jitterPct =  conf.getFloat("hbase.hregion.majorcompaction.jitter",
+          0.20F);
+      if (jitterPct > 0) {
+        long jitter = Math.round(ret * jitterPct);
+        ret += jitter - Math.round(2L * jitter * Math.random());
+      }
+    }
+    return ret;
+  }
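+  /*
+   * Illustrative numbers for the jitter computed in getNextMajorCompactTime()
+   * above (values assumed, not taken from any particular config): with the
+   * default period of 24h (86,400,000 ms) and jitter of 0.20, jitter is
+   * 17,280,000 ms, so the returned interval falls roughly uniformly between
+   * 19.2h and 28.8h.  This spreads major compactions out instead of having
+   * every store trigger one at exactly the same age.
+   */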
+
+  /**
+   * Do a minor/major compaction.  Uses the scan infrastructure to make it easy.
+   *
+   * @param filesToCompact which files to compact
+   * @param majorCompaction true to major compact (prune all deletes, max versions, etc)
+   * @param maxId Readers maximum sequence id.
+   * @return Product of compaction or null if all cells expired or deleted and
+   * nothing made it through the compaction.
+   * @throws IOException
+   */
+  private StoreFile.Writer compact(final List<StoreFile> filesToCompact,
+                               final boolean majorCompaction, final long maxId)
+      throws IOException {
+    // calculate maximum key count after compaction (for blooms)
+    int maxKeyCount = 0;
+    for (StoreFile file : filesToCompact) {
+      StoreFile.Reader r = file.getReader();
+      if (r != null) {
+        // NOTE: getFilterEntries could cause under-sized blooms if the user
+        //       switches bloom type (e.g. from ROW to ROWCOL)
+        long keyCount = (r.getBloomFilterType() == family.getBloomFilterType())
+          ? r.getFilterEntries() : r.getEntries();
+        maxKeyCount += keyCount;
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Compacting " + file +
+            ", keycount=" + keyCount +
+            ", bloomtype=" + r.getBloomFilterType().toString() +
+            ", size=" + StringUtils.humanReadableInt(r.length()) );
+        }
+      }
+    }
+
+    // For each file, obtain a scanner:
+    List<StoreFileScanner> scanners = StoreFileScanner
+      .getScannersForStoreFiles(filesToCompact, false, false);
+
+    // Make the instantiation lazy in case compaction produces no product; i.e.
+    // where all source cells are expired or deleted.
+    StoreFile.Writer writer = null;
+    try {
+      InternalScanner scanner = null;
+      try {
+        Scan scan = new Scan();
+        scan.setMaxVersions(family.getMaxVersions());
+        /* include deletes, unless we are doing a major compaction */
+        scanner = new StoreScanner(this, scan, scanners, !majorCompaction);
+        int bytesWritten = 0;
+        // since scanner.next() can return 'false' but still be delivering data,
+        // we have to use a do/while loop.
+        ArrayList<KeyValue> kvs = new ArrayList<KeyValue>();
+        while (scanner.next(kvs)) {
+          if (writer == null && !kvs.isEmpty()) {
+            writer = createWriterInTmp(maxKeyCount,
+              this.compactionCompression);
+          }
+          if (writer != null) {
+            // output to writer:
+            for (KeyValue kv : kvs) {
+              writer.append(kv);
+
+              // check periodically to see if a system stop is requested
+              if (Store.closeCheckInterval > 0) {
+                bytesWritten += kv.getLength();
+                if (bytesWritten > Store.closeCheckInterval) {
+                  bytesWritten = 0;
+                  if (!this.region.areWritesEnabled()) {
+                    writer.close();
+                    fs.delete(writer.getPath(), false);
+                    throw new InterruptedIOException(
+                        "Aborting compaction of store " + this +
+                        " in region " + this.region +
+                        " because user requested stop.");
+                  }
+                }
+              }
+            }
+          }
+          kvs.clear();
+        }
+      } finally {
+        if (scanner != null) {
+          scanner.close();
+        }
+      }
+    } finally {
+      if (writer != null) {
+        writer.appendMetadata(maxId, majorCompaction);
+        writer.close();
+      }
+    }
+    return writer;
+  }
+
+  /*
+   * It's assumed that the compactLock  will be acquired prior to calling this
+   * method!  Otherwise, it is not thread-safe!
+   *
+   * <p>It works by processing a compaction that's been written to disk.
+   *
+   * <p>It is usually invoked at the end of a compaction, but might also be
+   * invoked at HStore startup, if the prior execution died midway through.
+   *
+   * <p>Moving the compacted TreeMap into place means:
+   * <pre>
+   * 1) Moving the new compacted StoreFile into place
+   * 2) Unload all replaced StoreFiles, close them and collect the list to delete.
+   * 3) Loading the new TreeMap.
+   * 4) Compute new store size
+   * </pre>
+   *
+   * @param compactedFiles list of files that were compacted
+   * @param compactedFile StoreFile that is the result of the compaction
+   * @return StoreFile created. May be null.
+   * @throws IOException
+   */
+  private StoreFile completeCompaction(final List<StoreFile> compactedFiles,
+                                       final StoreFile.Writer compactedFile)
+      throws IOException {
+    // 1. Moving the new files into place -- if there is a new file (may not
+    // be if all cells were expired or deleted).
+    StoreFile result = null;
+    if (compactedFile != null) {
+      Path p = null;
+      try {
+        p = StoreFile.rename(this.fs, compactedFile.getPath(),
+          StoreFile.getRandomFilename(fs, this.homedir));
+      } catch (IOException e) {
+        LOG.error("Failed move of compacted file " + compactedFile.getPath(), e);
+        return null;
+      }
+      result = new StoreFile(this.fs, p, blockcache, this.conf,
+          this.family.getBloomFilterType(), this.inMemory);
+      result.createReader();
+    }
+    this.lock.writeLock().lock();
+    try {
+      try {
+        // 2. Unloading
+        // 3. Loading the new TreeMap.
+        // Change this.storefiles so it reflects new state but do not
+        // delete old store files until we have sent out notification of
+        // change in case old files are still being accessed by outstanding
+        // scanners.
+        ArrayList<StoreFile> newStoreFiles = new ArrayList<StoreFile>();
+        for (StoreFile sf : storefiles) {
+          if (!compactedFiles.contains(sf)) {
+            newStoreFiles.add(sf);
+          }
+        }
+
+        // If a StoreFile result, move it into place.  May be null.
+        if (result != null) {
+          newStoreFiles.add(result);
+        }
+
+        this.storefiles = sortAndClone(newStoreFiles);
+
+        // Tell observers that list of StoreFiles has changed.
+        notifyChangedReadersObservers();
+        // Finally, delete old store files.
+        for (StoreFile hsf: compactedFiles) {
+          hsf.deleteReader();
+        }
+      } catch (IOException e) {
+        e = RemoteExceptionHandler.checkIOException(e);
+        LOG.error("Failed replacing compacted files in " + this.storeNameStr +
+          ". Compacted file is " + (result == null? "none": result.toString()) +
+          ".  Files replaced " + compactedFiles.toString() +
+          " some of which may have been already removed", e);
+      }
+      // 4. Compute new store size
+      this.storeSize = 0L;
+      for (StoreFile hsf : this.storefiles) {
+        StoreFile.Reader r = hsf.getReader();
+        if (r == null) {
+          LOG.warn("StoreFile " + hsf + " has a null Reader");
+          continue;
+        }
+        this.storeSize += r.length();
+      }
+    } finally {
+      this.lock.writeLock().unlock();
+    }
+    return result;
+  }
+
+  public ImmutableList<StoreFile> sortAndClone(List<StoreFile> storeFiles) {
+    Collections.sort(storeFiles, StoreFile.Comparators.FLUSH_TIME);
+    ImmutableList<StoreFile> newList = ImmutableList.copyOf(storeFiles);
+    return newList;
+  }
+
+  // ////////////////////////////////////////////////////////////////////////////
+  // Accessors.
+  // (This is the only section that is directly useful!)
+  //////////////////////////////////////////////////////////////////////////////
+  /**
+   * @return the number of files in this store
+   */
+  public int getNumberOfstorefiles() {
+    return this.storefiles.size();
+  }
+
+  /*
+   * @param wantedVersions How many versions were asked for.
+   * @return wantedVersions or this family's {@link HConstants#VERSIONS}.
+   */
+  int versionsToReturn(final int wantedVersions) {
+    if (wantedVersions <= 0) {
+      throw new IllegalArgumentException("Number of versions must be > 0");
+    }
+    // Make sure we do not return more than maximum versions for this store.
+    int maxVersions = this.family.getMaxVersions();
+    return wantedVersions > maxVersions ? maxVersions: wantedVersions;
+  }
+
+  static boolean isExpired(final KeyValue key, final long oldestTimestamp) {
+    return key.getTimestamp() < oldestTimestamp;
+  }
+
+  /**
+   * Find the key that matches <i>row</i> exactly, or the one that immediately
+   * precedes it. WARNING: Only use this method on a table where writes occur
+   * with strictly increasing timestamps. This method assumes that pattern of
+   * writes in order to make it reasonably performant.  Our search also depends
+   * on the invariant that deletes are for cells in the container that follows,
+   * whether a memstore snapshot or a storefile, not for the current container:
+   * i.e. we'll see deletes before we come across the cells they are to delete.
+   * The presumption is that the memstore#kvset is processed before
+   * memstore#snapshot and so on.
+   * @param kv First possible item on targeted row; i.e. empty columns, latest
+   * timestamp and maximum type.
+   * @return Found keyvalue or null if none found.
+   * @throws IOException
+   */
+  KeyValue getRowKeyAtOrBefore(final KeyValue kv) throws IOException {
+    GetClosestRowBeforeTracker state = new GetClosestRowBeforeTracker(
+      this.comparator, kv, this.ttl, this.region.getRegionInfo().isMetaRegion());
+    this.lock.readLock().lock();
+    try {
+      // First go to the memstore.  Pick up deletes and candidates.
+      this.memstore.getRowKeyAtOrBefore(state);
+      // Check if match, if we got a candidate on the asked for 'kv' row.
+      // Process each store file. Run through from newest to oldest.
+      for (StoreFile sf : Iterables.reverse(storefiles)) {
+        // Update the candidate keys from the current map file
+        rowAtOrBeforeFromStoreFile(sf, state);
+      }
+      return state.getCandidate();
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /*
+   * Check an individual MapFile for the row at or before a given row.
+   * @param f
+   * @param state
+   * @throws IOException
+   */
+  private void rowAtOrBeforeFromStoreFile(final StoreFile f,
+                                          final GetClosestRowBeforeTracker state)
+      throws IOException {
+    StoreFile.Reader r = f.getReader();
+    if (r == null) {
+      LOG.warn("StoreFile " + f + " has a null Reader");
+      return;
+    }
+    // TODO: Cache these keys rather than make each time?
+    byte [] fk = r.getFirstKey();
+    KeyValue firstKV = KeyValue.createKeyValueFromKey(fk, 0, fk.length);
+    byte [] lk = r.getLastKey();
+    KeyValue lastKV = KeyValue.createKeyValueFromKey(lk, 0, lk.length);
+    KeyValue firstOnRow = state.getTargetKey();
+    if (this.comparator.compareRows(lastKV, firstOnRow) < 0) {
+      // If last key in file is not of the target table, no candidates in this
+      // file.  Return.
+      if (!state.isTargetTable(lastKV)) return;
+      // If the row we're looking for is past the end of file, set search key to
+      // last key. TODO: Cache last and first key rather than make each time.
+      firstOnRow = new KeyValue(lastKV.getRow(), HConstants.LATEST_TIMESTAMP);
+    }
+    // Get a scanner that caches blocks and that uses pread.
+    HFileScanner scanner = r.getScanner(true, true);
+    // Seek scanner.  If can't seek it, return.
+    if (!seekToScanner(scanner, firstOnRow, firstKV)) return;
+    // If we found a candidate on firstOnRow, just return.  Unlikely in practice,
+    // since there'll rarely be an instance of the actual first row in the table.
+    if (walkForwardInSingleRow(scanner, firstOnRow, state)) return;
+    // If here, need to start backing up.
+    while (scanner.seekBefore(firstOnRow.getBuffer(), firstOnRow.getKeyOffset(),
+       firstOnRow.getKeyLength())) {
+      KeyValue kv = scanner.getKeyValue();
+      if (!state.isTargetTable(kv)) break;
+      if (!state.isBetterCandidate(kv)) break;
+      // Make new first on row.
+      firstOnRow = new KeyValue(kv.getRow(), HConstants.LATEST_TIMESTAMP);
+      // Seek scanner.  If can't seek it, break.
+      if (!seekToScanner(scanner, firstOnRow, firstKV)) break;
+      // If we find something, break;
+      if (walkForwardInSingleRow(scanner, firstOnRow, state)) break;
+    }
+  }
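+  /*
+   * Hypothetical trace of the search above (row names invented): asked for the
+   * row at or before "row50" in a file holding only "row40" and "row60", the
+   * initial seek plus forward walk lands on "row60", which isTooFar() rejects;
+   * seekBefore() then backs up to a "row40" cell, the forward walk over that
+   * row finds a candidate, and a "row40" cell is what getRowKeyAtOrBefore()
+   * ultimately returns.
+   */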
+
+  /*
+   * Seek the file scanner to firstOnRow or first entry in file.
+   * @param scanner
+   * @param firstOnRow
+   * @param firstKV
+   * @return True if we successfully seeked scanner.
+   * @throws IOException
+   */
+  private boolean seekToScanner(final HFileScanner scanner,
+                                final KeyValue firstOnRow,
+                                final KeyValue firstKV)
+      throws IOException {
+    KeyValue kv = firstOnRow;
+    // If firstOnRow is on the same row as firstKV (and so sorts before it), use firstKV
+    if (this.comparator.compareRows(firstKV, firstOnRow) == 0) kv = firstKV;
+    int result = scanner.seekTo(kv.getBuffer(), kv.getKeyOffset(),
+      kv.getKeyLength());
+    return result >= 0;
+  }
+
+  /*
+   * When we come in here, we are probably at the kv just before we break into
+   * the row that firstOnRow is on.  Usually need to increment one time to get
+   * on to the row we are interested in.
+   * @param scanner
+   * @param firstOnRow
+   * @param state
+   * @return True if we found a candidate.
+   * @throws IOException
+   */
+  private boolean walkForwardInSingleRow(final HFileScanner scanner,
+                                         final KeyValue firstOnRow,
+                                         final GetClosestRowBeforeTracker state)
+      throws IOException {
+    boolean foundCandidate = false;
+    do {
+      KeyValue kv = scanner.getKeyValue();
+      // If we are not in the row, skip.
+      if (this.comparator.compareRows(kv, firstOnRow) < 0) continue;
+      // Did we go beyond the target row? If so break.
+      if (state.isTooFar(kv, firstOnRow)) break;
+      if (state.isExpired(kv)) {
+        continue;
+      }
+      // If we added something, this row is a contender. break.
+      if (state.handle(kv)) {
+        foundCandidate = true;
+        break;
+      }
+    } while(scanner.next());
+    return foundCandidate;
+  }
+
+  /**
+   * Determines if HStore can be split
+   * @param force Whether to force a split or not.
+   * @return a StoreSize if store can be split, null otherwise.
+   */
+  StoreSize checkSplit(final boolean force) {
+    this.lock.readLock().lock();
+    try {
+      // Iterate through all store files
+      if (this.storefiles.isEmpty()) {
+        return null;
+      }
+      if (!force && (storeSize < this.desiredMaxFileSize)) {
+        return null;
+      }
+
+      if (this.region.getRegionInfo().isMetaRegion()) {
+        if (force) {
+          LOG.warn("Cannot split meta regions in HBase 0.20");
+        }
+        return null;
+      }
+
+      // Not splitable if we find a reference store file present in the store.
+      boolean splitable = true;
+      long maxSize = 0L;
+      StoreFile largestSf = null;
+      for (StoreFile sf : storefiles) {
+        if (splitable) {
+          splitable = !sf.isReference();
+          if (!splitable) {
+            // RETURN IN MIDDLE OF FUNCTION!!! If not splitable, just return.
+            if (LOG.isDebugEnabled()) {
+              LOG.debug(sf +  " is not splittable");
+            }
+            return null;
+          }
+        }
+        StoreFile.Reader r = sf.getReader();
+        if (r == null) {
+          LOG.warn("Storefile " + sf + " Reader is null");
+          continue;
+        }
+        long size = r.length();
+        if (size > maxSize) {
+          // This is the largest one so far
+          maxSize = size;
+          largestSf = sf;
+        }
+      }
+      StoreFile.Reader r = largestSf.getReader();
+      if (r == null) {
+        LOG.warn("Storefile " + largestSf + " Reader is null");
+        return null;
+      }
+      // Get first, last, and mid keys.  Midkey is the key that starts block
+      // in middle of hfile.  Has column and timestamp.  Need to return just
+      // the row we want to split on as midkey.
+      byte [] midkey = r.midkey();
+      if (midkey != null) {
+        KeyValue mk = KeyValue.createKeyValueFromKey(midkey, 0, midkey.length);
+        byte [] fk = r.getFirstKey();
+        KeyValue firstKey = KeyValue.createKeyValueFromKey(fk, 0, fk.length);
+        byte [] lk = r.getLastKey();
+        KeyValue lastKey = KeyValue.createKeyValueFromKey(lk, 0, lk.length);
+        // if the midkey is the same as the first and last keys, then we cannot
+        // (ever) split this region.
+        if (this.comparator.compareRows(mk, firstKey) == 0 &&
+            this.comparator.compareRows(mk, lastKey) == 0) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("cannot split because midkey is the same as first or " +
+              "last row");
+          }
+          return null;
+        }
+        return new StoreSize(maxSize, mk.getRow());
+      }
+    } catch(IOException e) {
+      LOG.warn("Failed getting store size for " + this.storeNameStr, e);
+    } finally {
+      this.lock.readLock().unlock();
+    }
+    return null;
+  }
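+  /*
+   * Note on the midkey-based split point above: the hfile midkey includes
+   * column and timestamp, so only its row is returned as the split row.  For
+   * example (key invented for illustration), a midkey of
+   * "row123/cf:q/1234567890" yields "row123" as the split row.
+   */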
+
+  /** @return aggregate size of the store files used in this store's last compaction */
+  public long getLastCompactSize() {
+    return this.lastCompactSize;
+  }
+
+  /** @return aggregate size of HStore */
+  public long getSize() {
+    return storeSize;
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // File administration
+  //////////////////////////////////////////////////////////////////////////////
+
+  /**
+   * Return a scanner for both the memstore and the HStore files
+   * @throws IOException
+   */
+  public KeyValueScanner getScanner(Scan scan,
+      final NavigableSet<byte []> targetCols) throws IOException {
+    lock.readLock().lock();
+    try {
+      return new StoreScanner(this, scan, targetCols);
+    } finally {
+      lock.readLock().unlock();
+    }
+  }
+
+  @Override
+  public String toString() {
+    return this.storeNameStr;
+  }
+
+  /**
+   * @return Count of store files
+   */
+  int getStorefilesCount() {
+    return this.storefiles.size();
+  }
+
+  /**
+   * @return The size of the store files, in bytes.
+   */
+  long getStorefilesSize() {
+    long size = 0;
+    for (StoreFile s: storefiles) {
+      StoreFile.Reader r = s.getReader();
+      if (r == null) {
+        LOG.warn("StoreFile " + s + " has a null Reader");
+        continue;
+      }
+      size += r.length();
+    }
+    return size;
+  }
+
+  /**
+   * @return The size of the store file indexes, in bytes.
+   */
+  long getStorefilesIndexSize() {
+    long size = 0;
+    for (StoreFile s: storefiles) {
+      StoreFile.Reader r = s.getReader();
+      if (r == null) {
+        LOG.warn("StoreFile " + s + " has a null Reader");
+        continue;
+      }
+      size += r.indexSize();
+    }
+    return size;
+  }
+
+  /**
+   * @return The priority that this store should have in the compaction queue
+   */
+  int getCompactPriority() {
+    return this.blockingStoreFileCount - this.storefiles.size();
+  }
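+  /*
+   * For example (numbers invented): with a blocking limit of 7 store files and
+   * 9 files currently in the store, getCompactPriority() returns -2; a smaller
+   * or negative result means the store is closer to, or past, its blocking
+   * file count and so should be compacted sooner.
+   */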
+
+  /**
+   * Datastructure that holds size and row to split a file around.
+   * TODO: Take a KeyValue rather than row.
+   */
+  static class StoreSize {
+    private final long size;
+    private final byte [] row;
+
+    StoreSize(long size, byte [] row) {
+      this.size = size;
+      this.row = row;
+    }
+    /* @return the size */
+    long getSize() {
+      return size;
+    }
+
+    byte [] getSplitRow() {
+      return this.row;
+    }
+  }
+
+  HRegion getHRegion() {
+    return this.region;
+  }
+
+  HRegionInfo getHRegionInfo() {
+    return this.region.regionInfo;
+  }
+
+  /**
+   * Increments the value for the given row/family/qualifier.
+   *
+   * This function will always be seen as atomic by other readers
+   * because it only puts a single KV to memstore. Thus no
+   * read/write control necessary.
+   *
+   * @param row
+   * @param f
+   * @param qualifier
+   * @param newValue the new value to set into memstore
+   * @return memstore size delta
+   * @throws IOException
+   */
+  public long updateColumnValue(byte [] row, byte [] f,
+                                byte [] qualifier, long newValue)
+      throws IOException {
+
+    this.lock.readLock().lock();
+    try {
+      long now = EnvironmentEdgeManager.currentTimeMillis();
+
+      return this.memstore.updateColumnValue(row,
+          f,
+          qualifier,
+          newValue,
+          now);
+
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  /**
+   * Adds or replaces the specified KeyValues.
+   * <p>
+   * For each KeyValue specified, if a cell with the same row, family, and
+   * qualifier exists in MemStore, it will be replaced.  Otherwise, it will just
+   * be inserted to MemStore.
+   * <p>
+   * This operation is atomic on each KeyValue (row/family/qualifier) but not
+   * necessarily atomic across all of them.
+   * @param kvs
+   * @return memstore size delta
+   * @throws IOException
+   */
+  public long upsert(List<KeyValue> kvs)
+      throws IOException {
+    this.lock.readLock().lock();
+    try {
+      // TODO: Make this operation atomic w/ RWCC
+      return this.memstore.upsert(kvs);
+    } finally {
+      this.lock.readLock().unlock();
+    }
+  }
+
+  public StoreFlusher getStoreFlusher(long cacheFlushId) {
+    return new StoreFlusherImpl(cacheFlushId);
+  }
+
+  private class StoreFlusherImpl implements StoreFlusher {
+
+    private long cacheFlushId;
+    private SortedSet<KeyValue> snapshot;
+    private StoreFile storeFile;
+    private TimeRangeTracker snapshotTimeRangeTracker;
+
+    private StoreFlusherImpl(long cacheFlushId) {
+      this.cacheFlushId = cacheFlushId;
+    }
+
+    @Override
+    public void prepare() {
+      memstore.snapshot();
+      this.snapshot = memstore.getSnapshot();
+      this.snapshotTimeRangeTracker = memstore.getSnapshotTimeRangeTracker();
+    }
+
+    @Override
+    public void flushCache() throws IOException {
+      storeFile = Store.this.flushCache(cacheFlushId, snapshot, snapshotTimeRangeTracker);
+    }
+
+    @Override
+    public boolean commit() throws IOException {
+      if (storeFile == null) {
+        return false;
+      }
+      // Add new file to store files.  Clear snapshot too while we have
+      // the Store write lock.
+      return Store.this.updateStorefiles(storeFile, snapshot);
+    }
+  }
+
+  /**
+   * See if there are too many store files in this store
+   * @return true if number of store files is greater than
+   *  the number defined in compactionThreshold
+   */
+  public boolean hasTooManyStoreFiles() {
+    return this.storefiles.size() > this.compactionThreshold;
+  }
+
+  public static final long FIXED_OVERHEAD = ClassSize.align(
+      ClassSize.OBJECT + (15 * ClassSize.REFERENCE) +
+      (6 * Bytes.SIZEOF_LONG) + (1 * Bytes.SIZEOF_DOUBLE) +
+      (4 * Bytes.SIZEOF_INT) + (Bytes.SIZEOF_BOOLEAN * 2));
+
+  public static final long DEEP_OVERHEAD = ClassSize.align(FIXED_OVERHEAD +
+      ClassSize.OBJECT + ClassSize.REENTRANT_LOCK +
+      ClassSize.CONCURRENT_SKIPLISTMAP +
+      ClassSize.CONCURRENT_SKIPLISTMAP_ENTRY + ClassSize.OBJECT);
+
+  @Override
+  public long heapSize() {
+    return DEEP_OVERHEAD + this.memstore.heapSize();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
new file mode 100644
index 0000000..33e4470
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
@@ -0,0 +1,1127 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.HalfStoreFileReader;
+import org.apache.hadoop.hbase.io.Reference;
+import org.apache.hadoop.hbase.io.hfile.BlockCache;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.io.hfile.LruBlockCache;
+import org.apache.hadoop.hbase.util.BloomFilter;
+import org.apache.hadoop.hbase.util.ByteBloomFilter;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Hash;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.hadoop.util.StringUtils;
+
+import com.google.common.base.Function;
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Ordering;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryUsage;
+import java.nio.ByteBuffer;
+import java.text.NumberFormat;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.SortedSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * A Store data file.  Stores usually have one or more of these files.  They
+ * are produced by flushing the memstore to disk.  To
+ * create, call {@link #createWriter(FileSystem, Path, int)} and append data.  Be
+ * sure to add any metadata before calling close on the Writer
+ * (Use the appendMetadata convenience methods). On close, a StoreFile is
+ * sitting in the Filesystem.  To refer to it, create a StoreFile instance
+ * passing filesystem and path.  To read, call {@link #createReader()}.
+ * <p>StoreFiles may also reference store files in another Store.
+ *
+ * The reason for this weird pattern where you use a different instance for the
+ * writer and a reader is that we write once but read a lot more.
+ */
+public class StoreFile {
+  static final Log LOG = LogFactory.getLog(StoreFile.class.getName());
+
+  // Config keys.
+  static final String IO_STOREFILE_BLOOM_ERROR_RATE = "io.storefile.bloom.error.rate";
+  static final String IO_STOREFILE_BLOOM_MAX_FOLD = "io.storefile.bloom.max.fold";
+  static final String IO_STOREFILE_BLOOM_MAX_KEYS = "io.storefile.bloom.max.keys";
+  static final String IO_STOREFILE_BLOOM_ENABLED = "io.storefile.bloom.enabled";
+  static final String HFILE_BLOCK_CACHE_SIZE_KEY = "hfile.block.cache.size";
+
+  public static enum BloomType {
+    /**
+     * Bloomfilters disabled
+     */
+    NONE,
+    /**
+     * Bloom enabled with Table row as Key
+     */
+    ROW,
+    /**
+     * Bloom enabled with Table row & column (family+qualifier) as Key
+     */
+    ROWCOL
+  }
+  // Keys for fileinfo values in HFile
+  /** Max Sequence ID in FileInfo */
+  public static final byte [] MAX_SEQ_ID_KEY = Bytes.toBytes("MAX_SEQ_ID_KEY");
+  /** Major compaction flag in FileInfo */
+  public static final byte [] MAJOR_COMPACTION_KEY = Bytes.toBytes("MAJOR_COMPACTION_KEY");
+  /** Bloom filter Type in FileInfo */
+  static final byte[] BLOOM_FILTER_TYPE_KEY = Bytes.toBytes("BLOOM_FILTER_TYPE");
+  /** Key for Timerange information in metadata*/
+  static final byte[] TIMERANGE_KEY = Bytes.toBytes("TIMERANGE");
+
+  /** Meta data block name for bloom filter meta-info (ie: bloom params/specs) */
+  static final String BLOOM_FILTER_META_KEY = "BLOOM_FILTER_META";
+  /** Meta data block name for bloom filter data (ie: bloom bits) */
+  static final String BLOOM_FILTER_DATA_KEY = "BLOOM_FILTER_DATA";
+
+  // Make default block size for StoreFiles 8k while testing.  TODO: FIX!
+  // Need to make it 8k for testing.
+  public static final int DEFAULT_BLOCKSIZE_SMALL = 8 * 1024;
+
+
+  private static BlockCache hfileBlockCache = null;
+
+  private final FileSystem fs;
+  // This file's path.
+  private final Path path;
+  // If this storefile references another, this is the reference instance.
+  private Reference reference;
+  // If this StoreFile references another, this is the other files path.
+  private Path referencePath;
+  // Should the block cache be used or not.
+  private boolean blockcache;
+  // Is this from an in-memory store
+  private boolean inMemory;
+
+  // Keys for metadata stored in backing HFile.
+  // Set when we obtain a Reader.
+  private long sequenceid = -1;
+
+  // If true, this file was product of a major compaction.  Its then set
+  // whenever you get a Reader.
+  private AtomicBoolean majorCompaction = null;
+
+  /** Meta key set when store file is a result of a bulk load */
+  public static final byte[] BULKLOAD_TASK_KEY =
+    Bytes.toBytes("BULKLOAD_SOURCE_TASK");
+  public static final byte[] BULKLOAD_TIME_KEY =
+    Bytes.toBytes("BULKLOAD_TIMESTAMP");
+
+  /**
+   * Map of the metadata entries in the corresponding HFile
+   */
+  private Map<byte[], byte[]> metadataMap;
+
+  /*
+   * Regex that will work for straight filenames and for reference names.
+   * If reference, then the regex has more than just one group.  Group 1 is
+   * this files id.  Group 2 the referenced region name, etc.
+   */
+  private static final Pattern REF_NAME_PARSER =
+    Pattern.compile("^(\\d+)(?:\\.(.+))?$");
+
+  // StoreFile.Reader
+  private volatile Reader reader;
+
+  // Used making file ids.
+  private final static Random rand = new Random();
+  private final Configuration conf;
+  private final BloomType bloomType;
+
+
+  /**
+   * Constructor, loads a reader and its indices, etc. May allocate a
+   * substantial amount of RAM depending on the underlying files (10-20MB?).
+   *
+   * @param fs  The current file system to use.
+   * @param p  The path of the file.
+   * @param blockcache  <code>true</code> if the block cache is enabled.
+   * @param conf  The current configuration.
+   * @param bt The bloom type to use for this store file
+   * @throws IOException When opening the reader fails.
+   */
+  StoreFile(final FileSystem fs,
+            final Path p,
+            final boolean blockcache,
+            final Configuration conf,
+            final BloomType bt,
+            final boolean inMemory)
+      throws IOException {
+    this.conf = conf;
+    this.fs = fs;
+    this.path = p;
+    this.blockcache = blockcache;
+    this.inMemory = inMemory;
+    if (isReference(p)) {
+      this.reference = Reference.read(fs, p);
+      this.referencePath = getReferredToFile(this.path);
+    }
+    // ignore if the column family config says "no bloom filter"
+    // even if there is one in the hfile.
+    if (conf.getBoolean(IO_STOREFILE_BLOOM_ENABLED, true)) {
+      this.bloomType = bt;
+    } else {
+      this.bloomType = BloomType.NONE;
+      LOG.info("Ignoring bloom filter check for file (disabled in config)");
+    }
+  }
+
+  /**
+   * @return Path or null if this StoreFile was made with a Stream.
+   */
+  Path getPath() {
+    return this.path;
+  }
+
+  /**
+   * @return The Store/ColumnFamily this file belongs to.
+   */
+  byte [] getFamily() {
+    return Bytes.toBytes(this.path.getParent().getName());
+  }
+
+  /**
+   * @return True if this is a StoreFile Reference; call after {@link #open()},
+   * otherwise you may get the wrong answer.
+   */
+  boolean isReference() {
+    return this.reference != null;
+  }
+
+  /**
+   * @param p Path to check.
+   * @return True if the path has format of a HStoreFile reference.
+   */
+  public static boolean isReference(final Path p) {
+    return !p.getName().startsWith("_") &&
+      isReference(p, REF_NAME_PARSER.matcher(p.getName()));
+  }
+
+  /**
+   * @param p Path to check.
+   * @param m Matcher to use.
+   * @return True if the path has format of a HStoreFile reference.
+   */
+  public static boolean isReference(final Path p, final Matcher m) {
+    if (m == null || !m.matches()) {
+      LOG.warn("Failed match of store file name " + p.toString());
+      throw new RuntimeException("Failed match of store file name " +
+          p.toString());
+    }
+    return m.groupCount() > 1 && m.group(2) != null;
+  }
+
+  /*
+   * Return path to the file referred to by a Reference.  Presumes a directory
+   * hierarchy of <code>${hbase.rootdir}/tablename/regionname/familyname</code>.
+   * @param p Path to a Reference file.
+   * @return Calculated path to parent region file.
+   * @throws IOException
+   */
+  static Path getReferredToFile(final Path p) {
+    Matcher m = REF_NAME_PARSER.matcher(p.getName());
+    if (m == null || !m.matches()) {
+      LOG.warn("Failed match of store file name " + p.toString());
+      throw new RuntimeException("Failed match of store file name " +
+          p.toString());
+    }
+    // Other region name is suffix on the passed Reference file name
+    String otherRegion = m.group(2);
+    // Tabledir is up two directories from where Reference was written.
+    Path tableDir = p.getParent().getParent().getParent();
+    String nameStrippedOfSuffix = m.group(1);
+    // Build up new path with the referenced region in place of our current
+    // region in the reference path.  Also strip regionname suffix from name.
+    return new Path(new Path(new Path(tableDir, otherRegion),
+      p.getParent().getName()), nameStrippedOfSuffix);
+  }
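+  /*
+   * Worked example (region and file names invented): a reference file at
+   * <tabledir>/regionB/cf/1234.regionA has group(1)="1234" and
+   * group(2)="regionA", so the method resolves it to
+   * <tabledir>/regionA/cf/1234, i.e. the referred-to file in the other region.
+   */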
+
+  /**
+   * @return True if this file was made by a major compaction.
+   */
+  boolean isMajorCompaction() {
+    if (this.majorCompaction == null) {
+      throw new NullPointerException("This has not been set yet");
+    }
+    return this.majorCompaction.get();
+  }
+
+  /**
+   * @return This file's maximum edit sequence id.
+   */
+  public long getMaxSequenceId() {
+    return this.sequenceid;
+  }
+
+  /**
+   * Return the highest sequence ID found across all storefiles in
+   * the given list. Store files that were created by a mapreduce
+   * bulk load are ignored, as they do not correspond to any edit
+   * log items.
+   * @return 0 if no non-bulk-load files are provided or this is a Store that
+   * does not yet have any store files.
+   */
+  public static long getMaxSequenceIdInList(List<StoreFile> sfs) {
+    long max = 0;
+    for (StoreFile sf : sfs) {
+      if (!sf.isBulkLoadResult()) {
+        max = Math.max(max, sf.getMaxSequenceId());
+      }
+    }
+    return max;
+  }
+
+  /**
+   * @return true if this storefile was created by HFileOutputFormat
+   * for a bulk load.
+   */
+  boolean isBulkLoadResult() {
+    return metadataMap.containsKey(BULKLOAD_TIME_KEY);
+  }
+
+  /**
+   * Return the timestamp at which this bulk load file was generated.
+   */
+  public long getBulkLoadTimestamp() {
+    return Bytes.toLong(metadataMap.get(BULKLOAD_TIME_KEY));
+  }
+
+  /**
+   * Returns the block cache or <code>null</code> in case none should be used.
+   *
+   * @param conf  The current configuration.
+   * @return The block cache or <code>null</code>.
+   */
+  public static synchronized BlockCache getBlockCache(Configuration conf) {
+    if (hfileBlockCache != null) return hfileBlockCache;
+
+    float cachePercentage = conf.getFloat(HFILE_BLOCK_CACHE_SIZE_KEY, 0.2f);
+    // There should be a better way to optimize this. But oh well.
+    if (cachePercentage == 0L) return null;
+    if (cachePercentage > 1.0) {
+      throw new IllegalArgumentException(HFILE_BLOCK_CACHE_SIZE_KEY +
+        " must be between 0.0 and 1.0, not > 1.0");
+    }
+
+    // Calculate the amount of heap to give the block cache.
+    MemoryUsage mu = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
+    long cacheSize = (long)(mu.getMax() * cachePercentage);
+    LOG.info("Allocating LruBlockCache with maximum size " +
+      StringUtils.humanReadableInt(cacheSize));
+    hfileBlockCache = new LruBlockCache(cacheSize, DEFAULT_BLOCKSIZE_SMALL);
+    return hfileBlockCache;
+  }
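+  /*
+   * For instance (heap size assumed for illustration): with a 1GB max heap and
+   * the default hfile.block.cache.size of 0.2, the LruBlockCache above is
+   * created with a maximum size of roughly 200MB.
+   */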
+
+  /**
+   * @return the blockcache
+   */
+  public BlockCache getBlockCache() {
+    return blockcache ? getBlockCache(conf) : null;
+  }
+
+  /**
+   * Opens reader on this store file.  Called by Constructor.
+   * @return Reader for the store file.
+   * @throws IOException
+   * @see #closeReader()
+   */
+  private Reader open() throws IOException {
+    if (this.reader != null) {
+      throw new IllegalAccessError("Already open");
+    }
+    if (isReference()) {
+      this.reader = new HalfStoreFileReader(this.fs, this.referencePath,
+          getBlockCache(), this.reference);
+    } else {
+      this.reader = new Reader(this.fs, this.path, getBlockCache(),
+          this.inMemory);
+    }
+    // Load up indices and fileinfo.
+    metadataMap = Collections.unmodifiableMap(this.reader.loadFileInfo());
+    // Read in our metadata.
+    byte [] b = metadataMap.get(MAX_SEQ_ID_KEY);
+    if (b != null) {
+      // By convention, if halfhfile, top half has a sequence number > bottom
+      // half. That's why we add one below. It's done in case the two halves
+      // are ever merged back together --rare.  Without it, on open of store,
+      // since store files are distinguished by sequence id, the one half would
+      // subsume the other.
+      this.sequenceid = Bytes.toLong(b);
+      if (isReference()) {
+        if (Reference.isTopFileRegion(this.reference.getFileRegion())) {
+          this.sequenceid += 1;
+        }
+      }
+    }
+    this.reader.setSequenceID(this.sequenceid);
+
+    b = metadataMap.get(MAJOR_COMPACTION_KEY);
+    if (b != null) {
+      boolean mc = Bytes.toBoolean(b);
+      if (this.majorCompaction == null) {
+        this.majorCompaction = new AtomicBoolean(mc);
+      } else {
+        this.majorCompaction.set(mc);
+      }
+    } else {
+      // Presume it is not major compacted if it doesn't explicitly say so.
+      // HFileOutputFormat explicitly sets the major compacted key.
+      this.majorCompaction = new AtomicBoolean(false);
+    }
+
+    if (this.bloomType != BloomType.NONE) {
+      this.reader.loadBloomfilter();
+    }
+
+    try {
+      byte [] timerangeBytes = metadataMap.get(TIMERANGE_KEY);
+      if (timerangeBytes != null) {
+        this.reader.timeRangeTracker = new TimeRangeTracker();
+        Writables.copyWritable(timerangeBytes, this.reader.timeRangeTracker);
+      }
+    } catch (IllegalArgumentException e) {
+      LOG.error("Error reading timestamp range data from meta -- " +
+          "proceeding without", e);
+      this.reader.timeRangeTracker = null;
+    }
+    return this.reader;
+  }
+
+  /**
+   * @return Reader for StoreFile. creates if necessary
+   * @throws IOException
+   */
+  public Reader createReader() throws IOException {
+    if (this.reader == null) {
+      this.reader = open();
+    }
+    return this.reader;
+  }
+
+  /**
+   * @return Current reader.  Must call createReader first else returns null.
+   * @throws IOException
+   * @see #createReader()
+   */
+  public Reader getReader() {
+    return this.reader;
+  }
+
+  /**
+   * @throws IOException
+   */
+  public synchronized void closeReader() throws IOException {
+    if (this.reader != null) {
+      this.reader.close();
+      this.reader = null;
+    }
+  }
+
+  /**
+   * Delete this file
+   * @throws IOException
+   */
+  public void deleteReader() throws IOException {
+    closeReader();
+    this.fs.delete(getPath(), true);
+  }
+
+  @Override
+  public String toString() {
+    return this.path.toString() +
+      (isReference()? "-" + this.referencePath + "-" + reference.toString(): "");
+  }
+
+  /**
+   * @return a lengthy description of this StoreFile, suitable for debug output
+   */
+  public String toStringDetailed() {
+    StringBuilder sb = new StringBuilder();
+    sb.append(this.path.toString());
+    sb.append(", isReference=").append(isReference());
+    sb.append(", isBulkLoadResult=").append(isBulkLoadResult());
+    if (isBulkLoadResult()) {
+      sb.append(", bulkLoadTS=").append(getBulkLoadTimestamp());
+    } else {
+      sb.append(", seqid=").append(getMaxSequenceId());
+    }
+    sb.append(", majorCompaction=").append(isMajorCompaction());
+
+    return sb.toString();
+  }
+
+  /**
+   * Utility to help with rename.
+   * @param fs
+   * @param src
+   * @param tgt
+   * @return True if succeeded.
+   * @throws IOException
+   */
+  public static Path rename(final FileSystem fs,
+                            final Path src,
+                            final Path tgt)
+      throws IOException {
+
+    if (!fs.exists(src)) {
+      throw new FileNotFoundException(src.toString());
+    }
+    if (!fs.rename(src, tgt)) {
+      throw new IOException("Failed rename of " + src + " to " + tgt);
+    }
+    return tgt;
+  }
+
+  /**
+   * Get a store file writer. Client is responsible for closing file when done.
+   *
+   * @param fs
+   * @param dir Path to family directory.  Makes the directory if it doesn't exist.
+   * Creates a file with a unique name in this directory.
+   * @param blocksize size per filesystem block
+   * @return StoreFile.Writer
+   * @throws IOException
+   */
+  public static Writer createWriter(final FileSystem fs,
+                                              final Path dir,
+                                              final int blocksize)
+      throws IOException {
+
+    return createWriter(fs, dir, blocksize, null, null, null, BloomType.NONE, 0);
+  }
+
+  /**
+   * Create a store file writer. Client is responsible for closing file when done.
+   * If adding metadata, add it BEFORE closing, using appendMetadata().
+   * @param fs
+   * @param dir Path to family directory.  Makes the directory if it doesn't exist.
+   * Creates a file with a unique name in this directory.
+   * @param blocksize
+   * @param algorithm Pass null to get default.
+   * @param conf HBase system configuration. used with bloom filters
+   * @param bloomType column family setting for bloom filters
+   * @param c Pass null to get default.
+   * @param maxKeySize peak theoretical entry size (maintains error rate)
+   * @return HFile.Writer
+   * @throws IOException
+   */
+  public static StoreFile.Writer createWriter(final FileSystem fs,
+                                              final Path dir,
+                                              final int blocksize,
+                                              final Compression.Algorithm algorithm,
+                                              final KeyValue.KVComparator c,
+                                              final Configuration conf,
+                                              BloomType bloomType,
+                                              int maxKeySize)
+      throws IOException {
+
+    if (!fs.exists(dir)) {
+      fs.mkdirs(dir);
+    }
+    Path path = getUniqueFile(fs, dir);
+    if(conf == null || !conf.getBoolean(IO_STOREFILE_BLOOM_ENABLED, true)) {
+      bloomType = BloomType.NONE;
+    }
+
+    return new Writer(fs, path, blocksize,
+        algorithm == null? HFile.DEFAULT_COMPRESSION_ALGORITHM: algorithm,
+        conf, c == null? KeyValue.COMPARATOR: c, bloomType, maxKeySize);
+  }
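+  /*
+   * Minimal sketch of the write-then-read flow described in the class comment
+   * (paths, block size and cells are assumptions for illustration):
+   *
+   *   StoreFile.Writer w = StoreFile.createWriter(fs, familyDir, 64 * 1024);
+   *   for (KeyValue kv : sortedCells) w.append(kv); // cells must arrive sorted
+   *   w.appendMetadata(maxSeqId, false);            // metadata before close()
+   *   w.close();
+   *   StoreFile sf = new StoreFile(fs, w.getPath(), true, conf,
+   *       StoreFile.BloomType.NONE, false);
+   *   StoreFile.Reader r = sf.createReader();
+   */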
+
+  /**
+   * @param fs
+   * @param dir Directory to create file in.
+   * @return random filename inside passed <code>dir</code>
+   */
+  public static Path getUniqueFile(final FileSystem fs, final Path dir)
+      throws IOException {
+    if (!fs.getFileStatus(dir).isDir()) {
+      throw new IOException("Expecting " + dir.toString() +
+        " to be a directory");
+    }
+    return getRandomFilename(fs, dir);
+  }
+
+  /**
+   *
+   * @param fs
+   * @param dir
+   * @return Path to a file that doesn't exist at time of this invocation.
+   * @throws IOException
+   */
+  static Path getRandomFilename(final FileSystem fs, final Path dir)
+      throws IOException {
+    return getRandomFilename(fs, dir, null);
+  }
+
+  /**
+   *
+   * @param fs
+   * @param dir
+   * @param suffix
+   * @return Path to a file that doesn't exist at time of this invocation.
+   * @throws IOException
+   */
+  static Path getRandomFilename(final FileSystem fs,
+                                final Path dir,
+                                final String suffix)
+      throws IOException {
+    long id = -1;
+    Path p = null;
+    do {
+      id = Math.abs(rand.nextLong());
+      p = new Path(dir, Long.toString(id) +
+        ((suffix == null || suffix.length() <= 0)? "": suffix));
+    } while(fs.exists(p));
+    return p;
+  }
+
+  /**
+   * Write out a split reference.
+   *
+   * Package local so it doesn't leak out of regionserver.
+   *
+   * @param fs
+   * @param splitDir Presumes path format is actually
+   * <code>SOME_DIRECTORY/REGIONNAME/FAMILY</code>.
+   * @param f File to split.
+   * @param splitRow
+   * @param range
+   * @return Path to created reference.
+   * @throws IOException
+   */
+  static Path split(final FileSystem fs,
+                    final Path splitDir,
+                    final StoreFile f,
+                    final byte [] splitRow,
+                    final Reference.Range range)
+      throws IOException {
+    // A reference to the bottom half of the hsf store file.
+    Reference r = new Reference(splitRow, range);
+    // Add the referred-to regions name as a dot separated suffix.
+    // See REF_NAME_PARSER regex above.  The referred-to regions name is
+    // up in the path of the passed in <code>f</code> -- parentdir is family,
+    // then the directory above is the region name.
+    String parentRegionName = f.getPath().getParent().getParent().getName();
+    // Write reference with same file id only with the other region name as
+    // suffix and into the new region location (under same family).
+    Path p = new Path(splitDir, f.getPath().getName() + "." + parentRegionName);
+    return r.write(fs, p);
+  }
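+  /*
+   * Example of the reference naming above (names invented): splitting file
+   * <tabledir>/parentRegion/cf/9876 into daughter directory
+   * <tabledir>/daughterRegion/cf writes the reference
+   * <tabledir>/daughterRegion/cf/9876.parentRegion, which getReferredToFile()
+   * later resolves back to the parent file.
+   */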
+
+
+  /**
+   * A StoreFile writer.  Use this to read/write HBase Store Files. It is package
+   * local because it is an implementation detail of the HBase regionserver.
+   */
+  public static class Writer {
+    private final BloomFilter bloomFilter;
+    private final BloomType bloomType;
+    private KVComparator kvComparator;
+    private KeyValue lastKv = null;
+    private byte[] lastByteArray = null;
+    TimeRangeTracker timeRangeTracker = new TimeRangeTracker();
+    /* isTimeRangeTrackerSet keeps track of whether the timeRange has already
+     * been set.  When flushing a memstore, we set the TimeRange and use this
+     * variable to indicate that it doesn't need to be recalculated while
+     * appending KeyValues.  It is not set in the case of compactions, when it
+     * is recalculated using only the appended KeyValues. */
+    boolean isTimeRangeTrackerSet = false;
+
+    protected HFile.Writer writer;
+    /**
+     * Creates an HFile.Writer that also write helpful meta data.
+     * @param fs file system to write to
+     * @param path file name to create
+     * @param blocksize HDFS block size
+     * @param compress HDFS block compression
+     * @param conf user configuration
+     * @param comparator key comparator
+     * @param bloomType bloom filter setting
+     * @param maxKeys maximum amount of keys to add (for blooms)
+     * @throws IOException problem writing to FS
+     */
+    public Writer(FileSystem fs, Path path, int blocksize,
+        Compression.Algorithm compress, final Configuration conf,
+        final KVComparator comparator, BloomType bloomType, int maxKeys)
+        throws IOException {
+      writer = new HFile.Writer(fs, path, blocksize, compress, comparator.getRawComparator());
+
+      this.kvComparator = comparator;
+
+      BloomFilter bloom = null;
+      BloomType bt = BloomType.NONE;
+
+      if (bloomType != BloomType.NONE && conf != null) {
+        float err = conf.getFloat(IO_STOREFILE_BLOOM_ERROR_RATE, (float)0.01);
+        // Since in row+col blooms we have 2 calls to shouldSeek() instead of 1
+        // and the false positives are adding up, we should keep the error rate
+        // twice as low in order to maintain the number of false positives as
+        // desired by the user
+        if (bloomType == BloomType.ROWCOL) {
+          err /= 2;
+        }
+        int maxFold = conf.getInt(IO_STOREFILE_BLOOM_MAX_FOLD, 7);
+        int tooBig = conf.getInt(IO_STOREFILE_BLOOM_MAX_KEYS, 128*1000*1000);
+        
+        if (maxKeys < tooBig) { 
+          try {
+            bloom = new ByteBloomFilter(maxKeys, err,
+                Hash.getHashType(conf), maxFold);
+            bloom.allocBloom();
+            bt = bloomType;
+          } catch (IllegalArgumentException iae) {
+            LOG.warn(String.format(
+              "Parse error while creating bloom for %s (%d, %f)", 
+              path, maxKeys, err), iae);
+            bloom = null;
+            bt = BloomType.NONE;
+          }
+        } else {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("Skipping bloom filter because max keysize too large: " 
+                + maxKeys);
+          }
+        }
+      }
+
+      this.bloomFilter = bloom;
+      this.bloomType = bt;
+    }
+
+    /**
+     * Writes meta data.
+     * Call before {@link #close()} since its written as meta data to this file.
+     * @param maxSequenceId Maximum sequence id.
+     * @param majorCompaction True if this file is product of a major compaction
+     * @throws IOException problem writing to FS
+     */
+    public void appendMetadata(final long maxSequenceId, final boolean majorCompaction)
+    throws IOException {
+      writer.appendFileInfo(MAX_SEQ_ID_KEY, Bytes.toBytes(maxSequenceId));
+      writer.appendFileInfo(MAJOR_COMPACTION_KEY,
+          Bytes.toBytes(majorCompaction));
+      appendTimeRangeMetadata();
+    }
+
+    /**
+     * Add TimestampRange to Metadata
+     */
+    public void appendTimeRangeMetadata() throws IOException {
+      appendFileInfo(TIMERANGE_KEY, WritableUtils.toByteArray(timeRangeTracker));
+    }
+
+    /**
+     * Set the TimeRangeTracker to use for this writer.
+     * @param trt the TimeRangeTracker to use
+     */
+    public void setTimeRangeTracker(final TimeRangeTracker trt) {
+      this.timeRangeTracker = trt;
+      isTimeRangeTrackerSet = true;
+    }
+
+    /**
+     * If the timeRangeTracker is not set,
+     * update the TimeRangeTracker to include the timestamp of this key.
+     * @param kv the KeyValue whose timestamp to include
+     */
+    public void includeInTimeRangeTracker(final KeyValue kv) {
+      if (!isTimeRangeTrackerSet) {
+        timeRangeTracker.includeTimestamp(kv);
+      }
+    }
+
+    /**
+     * If the timeRangeTracker is not set,
+     * update the TimeRangeTracker to include the timestamp of this key.
+     * @param key the raw key whose timestamp to include
+     */
+    public void includeInTimeRangeTracker(final byte [] key) {
+      if (!isTimeRangeTrackerSet) {
+        timeRangeTracker.includeTimestamp(key);
+      }
+    }
+
+    public void append(final KeyValue kv) throws IOException {
+      if (this.bloomFilter != null) {
+        // only add to the bloom filter on a new, unique key
+        boolean newKey = true;
+        if (this.lastKv != null) {
+          switch(bloomType) {
+          case ROW:
+            newKey = ! kvComparator.matchingRows(kv, lastKv);
+            break;
+          case ROWCOL:
+            newKey = ! kvComparator.matchingRowColumn(kv, lastKv);
+            break;
+          case NONE:
+            newKey = false;
+          }
+        }
+        if (newKey) {
+          /*
+           * http://2.bp.blogspot.com/_Cib_A77V54U/StZMrzaKufI/AAAAAAAAADo/ZhK7bGoJdMQ/s400/KeyValue.png
+           * Key = RowLen + Row + FamilyLen + Column [Family + Qualifier] + TimeStamp
+           *
+           * 2 Types of Filtering:
+           *  1. Row = Row
+           *  2. RowCol = Row + Qualifier
+           */
+          switch (bloomType) {
+          case ROW:
+            this.bloomFilter.add(kv.getBuffer(), kv.getRowOffset(),
+                kv.getRowLength());
+            break;
+          case ROWCOL:
+            // merge(row, qualifier)
+            int ro = kv.getRowOffset();
+            int rl = kv.getRowLength();
+            int qo = kv.getQualifierOffset();
+            int ql = kv.getQualifierLength();
+            byte [] result = new byte[rl + ql];
+            System.arraycopy(kv.getBuffer(), ro, result, 0,  rl);
+            System.arraycopy(kv.getBuffer(), qo, result, rl, ql);
+            this.bloomFilter.add(result);
+            break;
+          default:
+          }
+          this.lastKv = kv;
+        }
+      }
+      writer.append(kv);
+      includeInTimeRangeTracker(kv);
+    }
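For ROWCOL blooms, append() above keys the bloom on the concatenation of the row and qualifier bytes. A minimal standalone illustration of that merge using plain byte arrays (no HBase types) is:

    // Mirrors the System.arraycopy merge in append(); "row1" + "q1" -> "row1q1".
    public class RowColBloomKeySketch {
      static byte[] rowColKey(byte[] row, byte[] qualifier) {
        byte[] key = new byte[row.length + qualifier.length];
        System.arraycopy(row, 0, key, 0, row.length);
        System.arraycopy(qualifier, 0, key, row.length, qualifier.length);
        return key;
      }

      public static void main(String[] args) {
        System.out.println(new String(rowColKey("row1".getBytes(), "q1".getBytes())));
      }
    }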
+
+    public Path getPath() {
+      return this.writer.getPath();
+    }
+    
+    boolean hasBloom() { 
+      return this.bloomFilter != null;
+    }
+
+    public void append(final byte [] key, final byte [] value) throws IOException {
+      if (this.bloomFilter != null) {
+        // only add to the bloom filter on a new row
+        if (this.lastByteArray == null || !Arrays.equals(key, lastByteArray)) {
+          this.bloomFilter.add(key);
+          this.lastByteArray = key;
+        }
+      }
+      writer.append(key, value);
+      includeInTimeRangeTracker(key);
+    }
+
+    public void close() throws IOException {
+      // make sure we wrote something to the bloom before adding it
+      if (this.bloomFilter != null && this.bloomFilter.getKeyCount() > 0) {
+        bloomFilter.compactBloom();
+        if (this.bloomFilter.getMaxKeys() > 0) {
+          int b = this.bloomFilter.getByteSize();
+          int k = this.bloomFilter.getKeyCount();
+          int m = this.bloomFilter.getMaxKeys();
+          StoreFile.LOG.info("Bloom added to HFile (" + 
+              getPath() + "): " + StringUtils.humanReadableInt(b) + ", " +
+              k + "/" + m + " (" + NumberFormat.getPercentInstance().format(
+                ((double)k) / ((double)m)) + ")");
+        }
+        writer.appendMetaBlock(BLOOM_FILTER_META_KEY, bloomFilter.getMetaWriter());
+        writer.appendMetaBlock(BLOOM_FILTER_DATA_KEY, bloomFilter.getDataWriter());
+        writer.appendFileInfo(BLOOM_FILTER_TYPE_KEY, Bytes.toBytes(bloomType.toString()));
+      }
+      writer.close();
+    }
+
+    public void appendFileInfo(byte[] key, byte[] value) throws IOException {
+      writer.appendFileInfo(key, value);
+    }
+  }
+
+  /**
+   * Reader for a StoreFile.
+   */
+  public static class Reader {
+    static final Log LOG = LogFactory.getLog(Reader.class.getName());
+
+    protected BloomFilter bloomFilter = null;
+    protected BloomType bloomFilterType;
+    private final HFile.Reader reader;
+    protected TimeRangeTracker timeRangeTracker = null;
+    protected long sequenceID = -1;
+
+    public Reader(FileSystem fs, Path path, BlockCache blockCache, boolean inMemory)
+        throws IOException {
+      reader = new HFile.Reader(fs, path, blockCache, inMemory);
+      bloomFilterType = BloomType.NONE;
+    }
+
+    public RawComparator<byte []> getComparator() {
+      return reader.getComparator();
+    }
+
+    /**
+     * Get a scanner to scan over this StoreFile.
+     *
+     * @param cacheBlocks should this scanner cache blocks?
+     * @param pread use pread (for highly concurrent small readers)
+     * @return a scanner
+     */
+    public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread) {
+      return new StoreFileScanner(this, getScanner(cacheBlocks, pread));
+    }
+
+    /**
+     * Warning: Do not write further code that depends on this call. Instead
+     * use getStoreFileScanner(), which wraps the StoreFileScanner class/interface
+     * and is the preferred way to scan a store with higher-level concepts.
+     *
+     * @param cacheBlocks should we cache the blocks?
+     * @param pread use pread (for concurrent small readers)
+     * @return the underlying HFileScanner
+     */
+    @Deprecated
+    public HFileScanner getScanner(boolean cacheBlocks, boolean pread) {
+      return reader.getScanner(cacheBlocks, pread);
+    }
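Per the deprecation note above, callers should go through getStoreFileScanner() rather than holding the raw HFileScanner. A hedged, same-package sketch of the preferred path, assuming an already-constructed StoreFile (createReader() is used the same way in StoreFileScanner.getScannersForStoreFiles):

    // Hedged sketch, not part of the patch: open a scanner the preferred way.
    static StoreFileScanner openPreferredScanner(StoreFile sf, boolean cacheBlocks,
        boolean pread) throws IOException {
      StoreFile.Reader r = sf.createReader();            // load the reader for this file
      return r.getStoreFileScanner(cacheBlocks, pread);  // preferred over getScanner()
    }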
+
+    public void close() throws IOException {
+      reader.close();
+    }
+
+    public boolean shouldSeek(Scan scan, final SortedSet<byte[]> columns) {
+        return (passesTimerangeFilter(scan) && passesBloomFilter(scan,columns));
+    }
+
+    /**
+     * Check if this StoreFile may contain keys within the scan's TimeRange.
+     * @param scan the scan whose TimeRange is checked
+     * @return False if this file definitely contains no keys within the TimeRange
+     */
+    private boolean passesTimerangeFilter(Scan scan) {
+      if (timeRangeTracker == null) {
+        return true;
+      } else {
+        return timeRangeTracker.includesTimeRange(scan.getTimeRange());
+      }
+    }
+
+    private boolean passesBloomFilter(Scan scan, final SortedSet<byte[]> columns) {
+      if (this.bloomFilter == null || !scan.isGetScan()) {
+        return true;
+      }
+      byte[] row = scan.getStartRow();
+      byte[] key;
+      switch (this.bloomFilterType) {
+        case ROW:
+          key = row;
+          break;
+        case ROWCOL:
+          if (columns != null && columns.size() == 1) {
+            byte[] col = columns.first();
+            key = Bytes.add(row, col);
+            break;
+          }
+          //$FALL-THROUGH$
+        default:
+          return true;
+      }
+
+      try {
+        ByteBuffer bloom = reader.getMetaBlock(BLOOM_FILTER_DATA_KEY, true);
+        if (bloom != null) {
+          if (this.bloomFilterType == BloomType.ROWCOL) {
+            // Since a Row Delete is essentially a DeleteFamily applied to all
+            // columns, a file might be skipped if using row+col Bloom filter.
+            // In order to ensure this file is included an additional check is
+            // required looking only for a row bloom.
+            return this.bloomFilter.contains(key, bloom) ||
+                this.bloomFilter.contains(row, bloom);
+          }
+          else {
+            return this.bloomFilter.contains(key, bloom);
+          }
+        }
+      } catch (IOException e) {
+        LOG.error("Error reading bloom filter data -- proceeding without",
+            e);
+        setBloomFilterFaulty();
+      } catch (IllegalArgumentException e) {
+        LOG.error("Bad bloom filter data -- proceeding without", e);
+        setBloomFilterFaulty();
+      }
+
+      return true;
+    }
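The ROWCOL branch above probes the bloom twice, once with row+qualifier and once with the bare row, so that row-wide deletes (recorded under the row alone) cannot cause the file to be skipped. A standalone sketch of that double check, with a java.util.Set standing in for the bloom filter (illustrative only):

    import java.util.HashSet;
    import java.util.Set;

    public class RowColBloomCheckSketch {
      // Returns true if the file might contain the cell or a row-wide delete for it.
      static boolean mightContain(Set<String> bloom, String row, String qualifier,
          boolean rowColBloom) {
        return rowColBloom
            ? bloom.contains(row + qualifier) || bloom.contains(row)
            : bloom.contains(row);
      }

      public static void main(String[] args) {
        Set<String> bloom = new HashSet<String>();
        bloom.add("row1");  // e.g. only a row-wide delete was recorded for row1
        System.out.println(mightContain(bloom, "row1", "q1", true)); // true: must read file
      }
    }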
+
+    public Map<byte[], byte[]> loadFileInfo() throws IOException {
+      Map<byte [], byte []> fi = reader.loadFileInfo();
+
+      byte[] b = fi.get(BLOOM_FILTER_TYPE_KEY);
+      if (b != null) {
+        bloomFilterType = BloomType.valueOf(Bytes.toString(b));
+      }
+
+      return fi;
+    }
+
+    public void loadBloomfilter() {
+      if (this.bloomFilter != null) {
+        return; // already loaded
+      }
+
+      try {
+        ByteBuffer b = reader.getMetaBlock(BLOOM_FILTER_META_KEY, false);
+        if (b != null) {
+          if (bloomFilterType == BloomType.NONE) {
+            throw new IOException("valid bloom filter type not found in FileInfo");
+          }
+
+          this.bloomFilter = new ByteBloomFilter(b);
+          LOG.info("Loaded " + (bloomFilterType == BloomType.ROW ? "row" : "col")
+                 + " bloom filter metadata for " + reader.getName());
+        }
+      } catch (IOException e) {
+        LOG.error("Error reading bloom filter meta -- proceeding without", e);
+        this.bloomFilter = null;
+      } catch (IllegalArgumentException e) {
+        LOG.error("Bad bloom filter meta -- proceeding without", e);
+        this.bloomFilter = null;
+      }
+    }
+
+    public int getFilterEntries() {
+      return (this.bloomFilter != null) ? this.bloomFilter.getKeyCount()
+          : reader.getFilterEntries();
+    }
+
+    public ByteBuffer getMetaBlock(String bloomFilterDataKey, boolean cacheBlock) throws IOException {
+      return reader.getMetaBlock(bloomFilterDataKey, cacheBlock);
+    }
+
+    public void setBloomFilterFaulty() {
+      bloomFilter = null;
+    }
+
+    public byte[] getLastKey() {
+      return reader.getLastKey();
+    }
+
+    public byte[] midkey() throws IOException {
+      return reader.midkey();
+    }
+
+    public long length() {
+      return reader.length();
+    }
+
+    public int getEntries() {
+      return reader.getEntries();
+    }
+
+    public byte[] getFirstKey() {
+      return reader.getFirstKey();
+    }
+
+    public long indexSize() {
+      return reader.indexSize();
+    }
+
+    public BloomType getBloomFilterType() {
+      return this.bloomFilterType;
+    }
+
+    public long getSequenceID() {
+      return sequenceID;
+    }
+
+    public void setSequenceID(long sequenceID) {
+      this.sequenceID = sequenceID;
+    }
+  }
+
+  /**
+   * Useful comparators for comparing StoreFiles.
+   */
+  abstract static class Comparators {
+    /**
+     * Comparator that compares based on the flush time of
+     * the StoreFiles. All bulk loads are placed before all non-
+     * bulk loads, and then all files are sorted by sequence ID.
+     * If there are ties, the path name is used as a tie-breaker.
+     */
+    static final Comparator<StoreFile> FLUSH_TIME =
+      Ordering.compound(ImmutableList.of(
+          Ordering.natural().onResultOf(new GetBulkTime()),
+          Ordering.natural().onResultOf(new GetSeqId()),
+          Ordering.natural().onResultOf(new GetPathName())
+      ));
+
+    private static class GetBulkTime implements Function<StoreFile, Long> {
+      @Override
+      public Long apply(StoreFile sf) {
+        if (!sf.isBulkLoadResult()) return Long.MAX_VALUE;
+        return sf.getBulkLoadTimestamp();
+      }
+    }
+    private static class GetSeqId implements Function<StoreFile, Long> {
+      @Override
+      public Long apply(StoreFile sf) {
+        if (sf.isBulkLoadResult()) return -1L;
+        return sf.getMaxSequenceId();
+      }
+    }
+    private static class GetPathName implements Function<StoreFile, String> {
+      @Override
+      public String apply(StoreFile sf) {
+        return sf.getPath().getName();
+      }
+    }
+
+  }
+}
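Comparators.FLUSH_TIME above composes three orderings: bulk-loaded files (keyed on their bulk-load timestamp, with seq id treated as -1) sort before flushed files, which are then ordered by sequence id and finally by path name. A standalone sketch of the same Guava compound-ordering pattern over a plain value class; FileInfo is illustrative, not an HBase type.

    import com.google.common.base.Function;
    import com.google.common.collect.ImmutableList;
    import com.google.common.collect.Ordering;
    import java.util.Comparator;

    public class FlushTimeOrderingSketch {
      static class FileInfo {
        final long bulkTime;  // Long.MAX_VALUE for non-bulk-loaded files
        final long seqId;     // -1 for bulk-loaded files
        final String name;
        FileInfo(long bulkTime, long seqId, String name) {
          this.bulkTime = bulkTime; this.seqId = seqId; this.name = name;
        }
      }

      // Same compound pattern as Comparators.FLUSH_TIME, with anonymous Functions.
      static final Comparator<FileInfo> FLUSH_TIME = Ordering.compound(ImmutableList.of(
          Ordering.natural().onResultOf(new Function<FileInfo, Long>() {
            public Long apply(FileInfo f) { return f.bulkTime; }
          }),
          Ordering.natural().onResultOf(new Function<FileInfo, Long>() {
            public Long apply(FileInfo f) { return f.seqId; }
          }),
          Ordering.natural().onResultOf(new Function<FileInfo, String>() {
            public String apply(FileInfo f) { return f.name; }
          })));
    }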
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
new file mode 100644
index 0000000..96fa423
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
@@ -0,0 +1,171 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.List;
+import java.util.SortedSet;
+
+/**
+ * KeyValueScanner adaptor over the Reader.  It also provides hooks into
+ * the bloom filter checks.
+ */
+class StoreFileScanner implements KeyValueScanner {
+  static final Log LOG = LogFactory.getLog(Store.class);
+
+  // the reader it comes from:
+  private final StoreFile.Reader reader;
+  private final HFileScanner hfs;
+  private KeyValue cur = null;
+
+  /**
+   * Implements a {@link KeyValueScanner} on top of the specified {@link HFileScanner}.
+   * @param reader the StoreFile.Reader this scanner reads from
+   * @param hfs HFile scanner
+   */
+  public StoreFileScanner(StoreFile.Reader reader, HFileScanner hfs) {
+    this.reader = reader;
+    this.hfs = hfs;
+  }
+
+  /**
+   * Return an array of scanners corresponding to the given
+   * set of store files.
+   */
+  public static List<StoreFileScanner> getScannersForStoreFiles(
+      Collection<StoreFile> filesToCompact,
+      boolean cacheBlocks,
+      boolean usePread) throws IOException {
+    List<StoreFileScanner> scanners =
+      new ArrayList<StoreFileScanner>(filesToCompact.size());
+    for (StoreFile file : filesToCompact) {
+      StoreFile.Reader r = file.createReader();
+      scanners.add(r.getStoreFileScanner(cacheBlocks, usePread));
+    }
+    return scanners;
+  }
+
+  public String toString() {
+    return "StoreFileScanner[" + hfs.toString() + ", cur=" + cur + "]";
+  }
+
+  public KeyValue peek() {
+    return cur;
+  }
+
+  public KeyValue next() throws IOException {
+    KeyValue retKey = cur;
+    cur = hfs.getKeyValue();
+    try {
+      // only seek if we aren't at the end. cur == null implies 'end'.
+      if (cur != null)
+        hfs.next();
+    } catch(IOException e) {
+      throw new IOException("Could not iterate " + this, e);
+    }
+    return retKey;
+  }
+
+  public boolean seek(KeyValue key) throws IOException {
+    try {
+      if(!seekAtOrAfter(hfs, key)) {
+        close();
+        return false;
+      }
+      cur = hfs.getKeyValue();
+      hfs.next();
+      return true;
+    } catch(IOException ioe) {
+      throw new IOException("Could not seek " + this, ioe);
+    }
+  }
+
+  public boolean reseek(KeyValue key) throws IOException {
+    try {
+      if (!reseekAtOrAfter(hfs, key)) {
+        close();
+        return false;
+      }
+      cur = hfs.getKeyValue();
+      hfs.next();
+      return true;
+    } catch (IOException ioe) {
+      throw new IOException("Could not seek " + this, ioe);
+    }
+  }
+
+  public void close() {
+    // Nothing to close on HFileScanner?
+    cur = null;
+  }
+
+  /**
+   * Seek the given scanner at or after the specified KeyValue.
+   * @param s scanner to seek
+   * @param k key to seek to
+   * @return true if the scanner is positioned at or after k, false if the
+   *         scanner ran off the end of the file
+   * @throws IOException
+   */
+  public static boolean seekAtOrAfter(HFileScanner s, KeyValue k)
+  throws IOException {
+    int result = s.seekTo(k.getBuffer(), k.getKeyOffset(), k.getKeyLength());
+    if(result < 0) {
+      // Passed KV is smaller than first KV in file, work from start of file
+      return s.seekTo();
+    } else if(result > 0) {
+      // Passed KV is larger than current KV in file, if there is a next
+      // it is the "after", if not then this scanner is done.
+      return s.next();
+    }
+    // Seeked to the exact key
+    return true;
+  }
+
+  static boolean reseekAtOrAfter(HFileScanner s, KeyValue k)
+  throws IOException {
+    // This function is similar to seekAtOrAfter.
+    int result = s.reseekTo(k.getBuffer(), k.getKeyOffset(), k.getKeyLength());
+    if (result <= 0) {
+      return true;
+    } else {
+      // passed KV is larger than current KV in file, if there is a next
+      // it is after, if not then this scanner is done.
+      return s.next();
+    }
+  }
+
+  // StoreFile filter hook.
+  public boolean shouldSeek(Scan scan, final SortedSet<byte[]> columns) {
+    return reader.shouldSeek(scan, columns);
+  }
+
+  @Override
+  public long getSequenceID() {
+    return reader.getSequenceID();
+  }
+}
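seekAtOrAfter() above interprets seekTo()'s three-way result: negative means the key is before the first entry (start from the top of the file), zero means an exact hit, and positive means position at the next entry if one exists. A standalone analogue over a sorted long[] using Arrays.binarySearch, illustrative only:

    import java.util.Arrays;

    public class SeekAtOrAfterSketch {
      // Returns the index of the first element >= key, or -1 if none exists.
      static int seekAtOrAfter(long[] sorted, long key) {
        int i = Arrays.binarySearch(sorted, key);
        if (i >= 0) return i;                    // exact match: seeked to the exact key
        int insertion = -i - 1;                  // first element greater than key
        return insertion < sorted.length ? insertion : -1;  // -1: scanner is done
      }

      public static void main(String[] args) {
        long[] keys = {10, 20, 30};
        System.out.println(seekAtOrAfter(keys, 5));   // 0  (work from start of file)
        System.out.println(seekAtOrAfter(keys, 20));  // 1  (exact)
        System.out.println(seekAtOrAfter(keys, 25));  // 2  (the "after")
        System.out.println(seekAtOrAfter(keys, 35));  // -1 (done)
      }
    }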
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
new file mode 100644
index 0000000..8706e65
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
@@ -0,0 +1,62 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+/**
+ * A package-protected interface for store flushing.
+ * A store flusher carries the state required to prepare/flush/commit the
+ * store's cache.
+ */
+interface StoreFlusher {
+
+  /**
+   * Prepare for a store flush (create snapshot)
+   *
+   * Requires pausing writes.
+   *
+   * A very short operation.
+   */
+  void prepare();
+
+  /**
+   * Flush the cache (create the new store file)
+   *
+   * A lengthy operation which doesn't require locking out any function
+   * of the store.
+   *
+   * @throws IOException in case the flush fails
+   */
+  void flushCache() throws IOException;
+
+  /**
+   * Commit the flush - add the store file to the store and clear the
+   * memstore snapshot.
+   *
+   * Requires pausing scans.
+   *
+   * A very short operation.
+   *
+   * @return true if the commit succeeded
+   * @throws IOException in case the commit fails
+   */
+  boolean commit() throws IOException;
+}
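The javadoc above describes a three-phase lifecycle: a very short prepare() while writes are paused, a lengthy flushCache() with no locks held, then a short commit(). A hedged sketch of how a caller might drive it; the flushStore name and the updatesLock object are illustrative, and only StoreFlusher itself comes from this file (commit()'s requirement to pause scans is omitted here).

    import java.io.IOException;
    import java.util.concurrent.locks.ReadWriteLock;

    class StoreFlushDriverSketch {
      static boolean flushStore(StoreFlusher flusher, ReadWriteLock updatesLock)
          throws IOException {
        updatesLock.writeLock().lock();    // prepare() requires pausing writes
        try {
          flusher.prepare();               // very short: snapshot the memstore
        } finally {
          updatesLock.writeLock().unlock();
        }
        flusher.flushCache();              // lengthy: write the new store file, no lock held
        return flusher.commit();           // short: swap in the file, clear the snapshot
      }
    }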
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
new file mode 100644
index 0000000..0d6d2be
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java
@@ -0,0 +1,394 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.NavigableSet;
+
+/**
+ * Scanner that scans both the memstore and the HStore. Coalesces the KeyValue
+ * stream into a List<KeyValue> for a single row.
+ */
+class StoreScanner implements KeyValueScanner, InternalScanner, ChangedReadersObserver {
+  static final Log LOG = LogFactory.getLog(StoreScanner.class);
+  private Store store;
+  private ScanQueryMatcher matcher;
+  private KeyValueHeap heap;
+  private boolean cacheBlocks;
+
+  // Used to indicate that the scanner has closed (see HBASE-1107)
+  // Doesn't need to be volatile because it's always accessed via synchronized methods
+  private boolean closing = false;
+  private final boolean isGet;
+
+  // if heap == null and lastTop != null, you need to reseek given the key below
+  private KeyValue lastTop = null;
+
+  /**
+   * Opens a scanner across memstore, snapshot, and all StoreFiles.
+   *
+   * @param store who we scan
+   * @param scan the spec
+   * @param columns which columns we are scanning
+   * @throws IOException
+   */
+  StoreScanner(Store store, Scan scan, final NavigableSet<byte[]> columns)
+                              throws IOException {
+    this.store = store;
+    this.cacheBlocks = scan.getCacheBlocks();
+    matcher = new ScanQueryMatcher(scan, store.getFamily().getName(),
+        columns, store.ttl, store.comparator.getRawComparator(),
+        store.versionsToReturn(scan.getMaxVersions()), 
+        false);
+
+    this.isGet = scan.isGetScan();
+    // pass columns to try to filter out unnecessary StoreFiles
+    List<KeyValueScanner> scanners = getScanners(scan, columns);
+
+    // Seek all scanners to the start of the Row (or if the exact matching row
+    // key does not exist, then to the start of the next matching Row).
+    for(KeyValueScanner scanner : scanners) {
+      scanner.seek(matcher.getStartKey());
+    }
+
+    // Combine all seeked scanners with a heap
+    heap = new KeyValueHeap(scanners, store.comparator);
+
+    this.store.addChangedReaderObserver(this);
+  }
+
+  /**
+   * Used for major compactions.<p>
+   *
+   * Opens a scanner across specified StoreFiles.
+   * @param store who we scan
+   * @param scan the spec
+   * @param scanners ancillary scanners
+   */
+  StoreScanner(Store store, Scan scan, List<? extends KeyValueScanner> scanners,
+      boolean retainDeletesInOutput)
+  throws IOException {
+    this.store = store;
+    this.cacheBlocks = false;
+    this.isGet = false;
+    matcher = new ScanQueryMatcher(scan, store.getFamily().getName(),
+        null, store.ttl, store.comparator.getRawComparator(),
+        store.versionsToReturn(scan.getMaxVersions()), retainDeletesInOutput);
+
+    // Seek all scanners to the initial key
+    for(KeyValueScanner scanner : scanners) {
+      scanner.seek(matcher.getStartKey());
+    }
+
+    // Combine all seeked scanners with a heap
+    heap = new KeyValueHeap(scanners, store.comparator);
+  }
+
+  // Constructor for testing.
+  StoreScanner(final Scan scan, final byte [] colFamily, final long ttl,
+      final KeyValue.KVComparator comparator,
+      final NavigableSet<byte[]> columns,
+      final List<KeyValueScanner> scanners)
+        throws IOException {
+    this.store = null;
+    this.isGet = false;
+    this.cacheBlocks = scan.getCacheBlocks();
+    this.matcher = new ScanQueryMatcher(scan, colFamily, columns, ttl,
+        comparator.getRawComparator(), scan.getMaxVersions(), false);
+
+    // Seek all scanners to the initial key
+    for(KeyValueScanner scanner : scanners) {
+      scanner.seek(matcher.getStartKey());
+    }
+    heap = new KeyValueHeap(scanners, comparator);
+  }
+
+  /*
+   * @return List of scanners ordered properly.
+   */
+  private List<KeyValueScanner> getScanners() throws IOException {
+    // First the store file scanners
+
+    // TODO this used to get the store files in descending order,
+    // but now we get them in ascending order, which I think is
+    // actually more correct, since the memstore scanners get put at the end.
+    List<StoreFileScanner> sfScanners = StoreFileScanner
+      .getScannersForStoreFiles(store.getStorefiles(), cacheBlocks, isGet);
+    List<KeyValueScanner> scanners =
+      new ArrayList<KeyValueScanner>(sfScanners.size()+1);
+    scanners.addAll(sfScanners);
+    // Then the memstore scanners
+    scanners.addAll(this.store.memstore.getScanners());
+    return scanners;
+  }
+
+  /*
+   * @return List of scanners to seek, possibly filtered by StoreFile.
+   */
+  private List<KeyValueScanner> getScanners(Scan scan,
+      final NavigableSet<byte[]> columns) throws IOException {
+    boolean memOnly;
+    boolean filesOnly;
+    if (scan instanceof InternalScan) {
+      InternalScan iscan = (InternalScan)scan;
+      memOnly = iscan.isCheckOnlyMemStore();
+      filesOnly = iscan.isCheckOnlyStoreFiles();
+    } else {
+      memOnly = false;
+      filesOnly = false;
+    }
+    List<KeyValueScanner> scanners = new LinkedList<KeyValueScanner>();
+    // First the store file scanners
+    if (!memOnly) {
+      List<StoreFileScanner> sfScanners = StoreFileScanner
+      .getScannersForStoreFiles(store.getStorefiles(), cacheBlocks, isGet);
+
+      // include only those scan files which pass all filters
+      for (StoreFileScanner sfs : sfScanners) {
+        if (sfs.shouldSeek(scan, columns)) {
+          scanners.add(sfs);
+        }
+      }
+    }
+
+    // Then the memstore scanners
+    if (!filesOnly && this.store.memstore.shouldSeek(scan)) {
+        scanners.addAll(this.store.memstore.getScanners());
+    }
+    return scanners;
+  }
+
+  public synchronized KeyValue peek() {
+    if (this.heap == null) {
+      return this.lastTop;
+    }
+    return this.heap.peek();
+  }
+
+  public KeyValue next() {
+    // throw runtime exception perhaps?
+    throw new RuntimeException("Never call StoreScanner.next()");
+  }
+
+  public synchronized void close() {
+    if (this.closing) return;
+    this.closing = true;
+    // under test, we don't have a this.store
+    if (this.store != null)
+      this.store.deleteChangedReaderObserver(this);
+    if (this.heap != null)
+      this.heap.close();
+    this.heap = null; // CLOSED!
+    this.lastTop = null; // If both are null, we are closed.
+  }
+
+  public synchronized boolean seek(KeyValue key) throws IOException {
+    if (this.heap == null) {
+
+      List<KeyValueScanner> scanners = getScanners();
+
+      heap = new KeyValueHeap(scanners, store.comparator);
+    }
+
+    return this.heap.seek(key);
+  }
+
+  /**
+   * Get the next row of values from this Store.
+   * @param outResult list that result KeyValues are added to
+   * @param limit maximum number of KeyValues to add, or -1 for no limit
+   * @return true if there are more rows, false if scanner is done
+   */
+  public synchronized boolean next(List<KeyValue> outResult, int limit) throws IOException {
+    //DebugPrint.println("SS.next");
+
+    checkReseek();
+
+    // if the heap was left null, then the scanners had previously run out anyways, close and
+    // return.
+    if (this.heap == null) {
+      close();
+      return false;
+    }
+
+    KeyValue peeked = this.heap.peek();
+    if (peeked == null) {
+      close();
+      return false;
+    }
+
+    // only call setRow if the row changes; avoids confusing the query matcher
+    // if scanning intra-row
+    if ((matcher.row == null) || !peeked.matchingRow(matcher.row)) {
+      matcher.setRow(peeked.getRow());
+    }
+
+    KeyValue kv;
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    LOOP: while((kv = this.heap.peek()) != null) {
+      // kv is no longer immutable due to KeyOnlyFilter! use copy for safety
+      KeyValue copyKv = new KeyValue(kv.getBuffer(), kv.getOffset(), kv.getLength());
+      ScanQueryMatcher.MatchCode qcode = matcher.match(copyKv);
+      //DebugPrint.println("SS peek kv = " + kv + " with qcode = " + qcode);
+      switch(qcode) {
+        case INCLUDE:
+          results.add(copyKv);
+          this.heap.next();
+          if (limit > 0 && (results.size() == limit)) {
+            break LOOP;
+          }
+          continue;
+
+        case DONE:
+          // copy jazz
+          outResult.addAll(results);
+          return true;
+
+        case DONE_SCAN:
+          close();
+
+          // copy jazz
+          outResult.addAll(results);
+
+          return false;
+
+        case SEEK_NEXT_ROW:
+          // This is just a relatively simple end-of-scan check that short-circuits
+          // us if there is an endKey in the scan.
+          if (!matcher.moreRowsMayExistAfter(kv)) {
+            outResult.addAll(results);
+            return false;
+          }
+
+          reseek(matcher.getKeyForNextRow(kv));
+          break;
+
+        case SEEK_NEXT_COL:
+          reseek(matcher.getKeyForNextColumn(kv));
+          break;
+
+        case SKIP:
+          this.heap.next();
+          break;
+
+        case SEEK_NEXT_USING_HINT:
+          KeyValue nextKV = matcher.getNextKeyHint(kv);
+          if (nextKV != null) {
+            reseek(nextKV);
+          } else {
+            heap.next();
+          }
+          break;
+
+        default:
+          throw new RuntimeException("UNEXPECTED");
+      }
+    }
+
+    if (!results.isEmpty()) {
+      // copy jazz
+      outResult.addAll(results);
+      return true;
+    }
+
+    // No more keys
+    close();
+    return false;
+  }
+
+  public synchronized boolean next(List<KeyValue> outResult) throws IOException {
+    return next(outResult, -1);
+  }
+
+  // Implementation of ChangedReadersObserver
+  public synchronized void updateReaders() throws IOException {
+    if (this.closing) return;
+
+    // All public synchronized API calls will call 'checkReseek', which will cause
+    // the scanner stack to reseek if this.heap == null && this.lastTop != null.
+    // But if two calls to updateReaders() happen without a 'next' or 'peek', then we
+    // will end up calling this.peek(), which would cause a reseek in the middle of an
+    // updateReaders -- which is NOT what we want, and could cause an NPE. So we early out here.
+    if (this.heap == null) return;
+
+    // this could be null.
+    this.lastTop = this.peek();
+
+    //DebugPrint.println("SS updateReaders, topKey = " + lastTop);
+
+    // close scanners to old obsolete Store files
+    this.heap.close(); // bubble thru and close all scanners.
+    this.heap = null; // the re-seeks could be slow (access HDFS); free up memory ASAP
+
+    // Let the next() call handle re-creating and seeking
+  }
+
+  private void checkReseek() throws IOException {
+    if (this.heap == null && this.lastTop != null) {
+      resetScannerStack(this.lastTop);
+      this.lastTop = null; // gone!
+    }
+    // else we don't need to reseek
+  }
+
+  private void resetScannerStack(KeyValue lastTopKey) throws IOException {
+    if (heap != null) {
+      throw new RuntimeException("StoreScanner.reseek run on an existing heap!");
+    }
+
+    /* When we have the scan object, should we not pass it to getScanners()
+     * to get a limited set of scanners? We did so in the constructor and we
+     * could have done it now by storing the scan object from the constructor */
+    List<KeyValueScanner> scanners = getScanners();
+
+    for(KeyValueScanner scanner : scanners) {
+      scanner.seek(lastTopKey);
+    }
+
+    // Combine all seeked scanners with a heap
+    heap = new KeyValueHeap(scanners, store.comparator);
+
+    // Reset the state of the Query Matcher and set to top row
+    matcher.reset();
+    KeyValue kv = heap.peek();
+    matcher.setRow((kv == null ? lastTopKey : kv).getRow());
+  }
+
+  @Override
+  public synchronized boolean reseek(KeyValue kv) throws IOException {
+    //Heap cannot be null, because this is only called from next() which
+    //guarantees that heap will never be null before this call.
+    return this.heap.reseek(kv);
+  }
+
+  @Override
+  public long getSequenceID() {
+    return 0;
+  }
+}
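updateReaders() and checkReseek() above implement a lazy-reseek pattern: when the store files change, the heap is closed immediately and only the last top key is remembered; the next read rebuilds the scanner stack and seeks back to that key. A standalone sketch of the same pattern follows, with a NavigableSet standing in for both the heap and the store files (illustrative, not HBase code).

    import java.util.NavigableSet;
    import java.util.TreeSet;

    public class LazyReseekSketch {
      private final NavigableSet<Long> source = new TreeSet<Long>(); // stands in for the store files
      private NavigableSet<Long> heap;                               // stands in for KeyValueHeap
      private Long lastTop;                                          // remembered while heap == null

      LazyReseekSketch(long... keys) {
        for (long k : keys) source.add(k);
        heap = new TreeSet<Long>(source);
      }

      // Analogue of updateReaders(): tear the heap down now, remember where we were.
      synchronized void updateReaders() {
        if (heap == null) return;                    // early out, as in StoreScanner
        lastTop = heap.isEmpty() ? null : heap.first();
        heap = null;                                 // free the old scanners right away
      }

      // Analogue of checkReseek(): rebuild lazily and seek back to the remembered key.
      private void checkReseek() {
        if (heap == null && lastTop != null) {
          heap = new TreeSet<Long>(source.tailSet(lastTop, true));
          lastTop = null;
        }
      }

      synchronized Long next() {
        checkReseek();
        return (heap == null || heap.isEmpty()) ? null : heap.pollFirst();
      }
    }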
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
new file mode 100644
index 0000000..d3f1c65
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/TimeRangeTracker.java
@@ -0,0 +1,147 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.Type;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Stores the minimum and maximum timestamp values.
+ * Can be used to find whether any given time range overlaps with its time range.
+ * MemStores use this class to track their minimum and maximum timestamps.
+ * When writing StoreFiles, this information is stored in meta blocks and used
+ * at read time to match against the required TimeRange.
+ */
+public class TimeRangeTracker implements Writable {
+
+  long minimumTimestamp = -1;
+  long maximumTimestamp = -1;
+
+  /**
+   * Default constructor.
+   * Leaves the time range unset.
+   */
+  public TimeRangeTracker() {
+
+  }
+
+  /**
+   * Copy Constructor
+   * @param trt source TimeRangeTracker
+   */
+  public TimeRangeTracker(final TimeRangeTracker trt) {
+    this.minimumTimestamp = trt.getMinimumTimestamp();
+    this.maximumTimestamp = trt.getMaximumTimestamp();
+  }
+
+  public TimeRangeTracker(long minimumTimestamp, long maximumTimestamp) {
+    this.minimumTimestamp = minimumTimestamp;
+    this.maximumTimestamp = maximumTimestamp;
+  }
+
+  /**
+   * Update the current TimestampRange to include the timestamp from the KeyValue.
+   * If the Key is of type DeleteColumn or DeleteFamily, it includes the
+   * entire time range from 0 to the timestamp of the key.
+   * @param kv the KeyValue to include
+   */
+  public void includeTimestamp(final KeyValue kv) {
+    includeTimestamp(kv.getTimestamp());
+    if (kv.isDeleteColumnOrFamily()) {
+      includeTimestamp(0);
+    }
+  }
+
+  /**
+   * Update the current TimestampRange to include the timestamp from Key.
+   * If the Key is of type DeleteColumn or DeleteFamily, it includes the
+   * entire time range from 0 to the timestamp of the key.
+   * @param key the raw key whose timestamp and type are read
+   */
+  public void includeTimestamp(final byte[] key) {
+    includeTimestamp(Bytes.toLong(key,key.length-KeyValue.TIMESTAMP_TYPE_SIZE));
+    int type = key[key.length - 1];
+    if (type == Type.DeleteColumn.getCode() ||
+        type == Type.DeleteFamily.getCode()) {
+      includeTimestamp(0);
+    }
+  }
+
+  /**
+   * If required, update the current TimestampRange to include the given timestamp.
+   * @param timestamp the timestamp value to include
+   */
+  private void includeTimestamp(final long timestamp) {
+    if (maximumTimestamp == -1) {
+      minimumTimestamp = timestamp;
+      maximumTimestamp = timestamp;
+    }
+    else if (minimumTimestamp > timestamp) {
+      minimumTimestamp = timestamp;
+    }
+    else if (maximumTimestamp < timestamp) {
+      maximumTimestamp = timestamp;
+    }
+    return;
+  }
+
+  /**
+   * Check if the range has any overlap with TimeRange
+   * @param tr TimeRange
+   * @return True if there is overlap, false otherwise
+   */
+  public boolean includesTimeRange(final TimeRange tr) {
+    return (this.minimumTimestamp < tr.getMax() &&
+        this.maximumTimestamp >= tr.getMin());
+  }
+
+  /**
+   * @return the minimumTimestamp
+   */
+  public long getMinimumTimestamp() {
+    return minimumTimestamp;
+  }
+
+  /**
+   * @return the maximumTimestamp
+   */
+  public long getMaximumTimestamp() {
+    return maximumTimestamp;
+  }
+
+  public void write(final DataOutput out) throws IOException {
+    out.writeLong(minimumTimestamp);
+    out.writeLong(maximumTimestamp);
+  }
+
+  public void readFields(final DataInput in) throws IOException {
+    this.minimumTimestamp = in.readLong();
+    this.maximumTimestamp = in.readLong();
+  }
+
+}
+
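includesTimeRange() above answers whether this file could hold any cell the scan cares about: the tracked [min, max] overlaps the scan's TimeRange when min < tr.getMax() and max >= tr.getMin(). A standalone worked example of the same predicate on raw longs, with illustrative values:

    public class TimeRangeOverlapSketch {
      // Same predicate as TimeRangeTracker.includesTimeRange, on raw longs.
      static boolean overlaps(long trackedMin, long trackedMax, long scanMin, long scanMax) {
        return trackedMin < scanMax && trackedMax >= scanMin;
      }

      public static void main(String[] args) {
        System.out.println(overlaps(100, 200, 150, 300)); // true: ranges intersect
        System.out.println(overlaps(100, 200, 250, 300)); // false: every cell is too old
      }
    }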
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java
new file mode 100644
index 0000000..52b9a6c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java
@@ -0,0 +1,42 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+/**
+ * Thrown when a request contains a key which is not part of this region
+ */
+public class WrongRegionException extends IOException {
+  private static final long serialVersionUID = 993179627856392526L;
+
+  /** constructor */
+  public WrongRegionException() {
+    super();
+  }
+
+  /**
+   * Constructor
+   * @param s message
+   */
+  public WrongRegionException(String s) {
+    super(s);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java
new file mode 100644
index 0000000..e8e95ed
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.handler;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+
+/**
+ * Handles closing of the meta region on a region server.
+ */
+public class CloseMetaHandler extends CloseRegionHandler {
+  // Called when the master tells us to shut down a region via a close RPC
+  public CloseMetaHandler(final Server server,
+      final RegionServerServices rsServices, final HRegionInfo regionInfo) {
+    this(server, rsServices, regionInfo, false, true);
+  }
+
+  // Called when the regionserver determines it's to go down; not master-orchestrated
+  public CloseMetaHandler(final Server server,
+      final RegionServerServices rsServices,
+      final HRegionInfo regionInfo,
+      final boolean abort, final boolean zk) {
+    super(server, rsServices, regionInfo, abort, zk, EventType.M_RS_CLOSE_META);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
new file mode 100644
index 0000000..30d913a
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
@@ -0,0 +1,183 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.handler;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Handles closing of a region on a region server.
+ */
+public class CloseRegionHandler extends EventHandler {
+  // NOTE on priorities shutting down.  There are none for close. There are some
+  // for open.  I think that is right.  On shutdown, we want the meta to close
+  // before root and both to close after the user regions have closed.  What
+  // about the case where master tells us to shutdown a catalog region and we
+  // have a running queue of user regions to close?
+  private static final Log LOG = LogFactory.getLog(CloseRegionHandler.class);
+
+  private final int FAILED = -1;
+
+  private final RegionServerServices rsServices;
+
+  private final HRegionInfo regionInfo;
+
+  // If true, the hosting server is aborting.  Region close process is different
+  // when we are aborting.
+  private final boolean abort;
+
+  // Update zk on closing transitions. Usually true.  It's false if the cluster
+  // is going down.  In this case, it's the rs that initiates the region
+  // close -- not the master process -- so the state up in zk will unlikely be
+  // CLOSING.
+  private final boolean zk;
+
+  // This is executed after receiving a CLOSE RPC from the master.
+  public CloseRegionHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo) {
+    this(server, rsServices, regionInfo, false, true);
+  }
+
+  /**
+   * This constructor is used internally by the RegionServer to close out regions.
+   * @param server
+   * @param rsServices
+   * @param regionInfo
+   * @param abort If the regionserver is aborting.
+   * @param zk If the close should be noted in zookeeper.
+   */
+  public CloseRegionHandler(final Server server,
+      final RegionServerServices rsServices,
+      final HRegionInfo regionInfo, final boolean abort, final boolean zk) {
+    this(server, rsServices,  regionInfo, abort, zk, EventType.M_RS_CLOSE_REGION);
+  }
+
+  protected CloseRegionHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo,
+      boolean abort, final boolean zk, EventType eventType) {
+    super(server, eventType);
+    this.server = server;
+    this.rsServices = rsServices;
+    this.regionInfo = regionInfo;
+    this.abort = abort;
+    this.zk = zk;
+  }
+
+  public HRegionInfo getRegionInfo() {
+    return regionInfo;
+  }
+
+  @Override
+  public void process() {
+    String name = regionInfo.getRegionNameAsString();
+    LOG.debug("Processing close of " + name);
+    String encodedRegionName = regionInfo.getEncodedName();
+    // Check that this region is being served here
+    HRegion region = this.rsServices.getFromOnlineRegions(encodedRegionName);
+    if (region == null) {
+      LOG.warn("Received CLOSE for region " + name + " but currently not serving");
+      return;
+    }
+
+    int expectedVersion = FAILED;
+    if (this.zk) {
+      expectedVersion = setClosingState();
+      if (expectedVersion == FAILED) return;
+    }
+
+    // Close the region
+    try {
+      // TODO: If we need to keep updating the CLOSING stamp to guard against
+      // a timeout if this is long-running, do we need to spin up a thread?
+      if (region.close(abort) == null) {
+        // This region got closed.  Most likely due to a split. So instead
+        // of doing the setClosedState() below, let's just ignore and continue.
+        // The split message will clean up the master state.
+        LOG.warn("Can't close region: was already closed during close(): " +
+          regionInfo.getRegionNameAsString());
+        return;
+      }
+    } catch (IOException e) {
+      LOG.error("Unrecoverable exception while closing region " +
+        regionInfo.getRegionNameAsString() + ", still finishing close", e);
+    }
+
+    this.rsServices.removeFromOnlineRegions(regionInfo.getEncodedName());
+
+    if (this.zk) setClosedState(expectedVersion, region);
+
+    // Done!  Region is closed on this RS
+    LOG.debug("Closed region " + region.getRegionNameAsString());
+  }
+
+  /**
+   * Transition ZK node to CLOSED
+   * @param expectedVersion version of the CLOSING znode we expect to still hold
+   * @param region the region being closed
+   */
+  private void setClosedState(final int expectedVersion, final HRegion region) {
+    try {
+      if (ZKAssign.transitionNodeClosed(server.getZooKeeper(), regionInfo,
+          server.getServerName(), expectedVersion) == FAILED) {
+        LOG.warn("Completed the CLOSE of a region but when transitioning from " +
+            " CLOSING to CLOSED got a version mismatch, someone else clashed " +
+            "so now unassigning");
+        region.close();
+        return;
+      }
+    } catch (NullPointerException e) {
+      // I've seen NPE when table was deleted while close was running in unit tests.
+      LOG.warn("NPE during close -- catching and continuing...", e);
+    } catch (KeeperException e) {
+      LOG.error("Failed transitioning node from CLOSING to CLOSED", e);
+      return;
+    } catch (IOException e) {
+      LOG.error("Failed to close region after failing to transition", e);
+      return;
+    }
+  }
+
+  /**
+   * Create ZK node in CLOSING state.
+   * @return The expectedVersion.  If -1, we failed setting CLOSING.
+   */
+  private int setClosingState() {
+    int expectedVersion = FAILED;
+    try {
+      if ((expectedVersion = ZKAssign.createNodeClosing(
+          server.getZooKeeper(), regionInfo, server.getServerName())) == FAILED) {
+        LOG.warn("Error creating node in CLOSING state, aborting close of " +
+          regionInfo.getRegionNameAsString());
+      }
+    } catch (KeeperException e) {
+      LOG.warn("Error creating node in CLOSING state, aborting close of " +
+        regionInfo.getRegionNameAsString());
+    }
+    return expectedVersion;
+  }
+}
\ No newline at end of file
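setClosingState() above records the version of the CLOSING znode it creates, and setClosedState() only succeeds if that version is still current, so a hijacked node is detected. A standalone sketch of that optimistic version check, with an AtomicInteger standing in for the znode (illustrative, not the ZKAssign API):

    import java.util.concurrent.atomic.AtomicInteger;

    public class ZnodeVersionHandshakeSketch {
      static final int FAILED = -1;
      private final AtomicInteger znodeVersion = new AtomicInteger(0);

      // Analogue of ZKAssign.createNodeClosing: returns the version we expect later.
      int setClosingState() {
        return znodeVersion.incrementAndGet();
      }

      // Analogue of ZKAssign.transitionNodeClosed: only succeeds if nobody else
      // touched the node since CLOSING was set.
      int transitionToClosed(int expectedVersion) {
        return znodeVersion.compareAndSet(expectedVersion, expectedVersion + 1)
            ? expectedVersion + 1 : FAILED;
      }
    }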
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRootHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRootHandler.java
new file mode 100644
index 0000000..fa38ad6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRootHandler.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.handler;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+
+/**
+ * Handles closing of the root region on a region server.
+ */
+public class CloseRootHandler extends CloseRegionHandler {
+  // This is executed after receiving a CLOSE RPC from the master for root.
+  public CloseRootHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo) {
+    this(server, rsServices, regionInfo, false, true);
+  }
+
+  // This is called directly by the regionserver when it's determined it's
+  // shutting down.
+  public CloseRootHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo,
+      final boolean abort, final boolean zk) {
+    super(server, rsServices, regionInfo, abort, zk, EventType.M_RS_CLOSE_ROOT);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java
new file mode 100644
index 0000000..111c7e6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.handler;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+
+/**
+ * Handles opening of a meta region on a region server.
+ * <p>
+ * This is executed after receiving an OPEN RPC from the master for meta.
+ */
+public class OpenMetaHandler extends OpenRegionHandler {
+  public OpenMetaHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo) {
+    super(server, rsServices, regionInfo, EventType.M_RS_OPEN_META);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
new file mode 100644
index 0000000..84b030f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
@@ -0,0 +1,340 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.handler;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.hadoop.util.Progressable;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Handles opening of a region on a region server.
+ * <p>
+ * This is executed after receiving an OPEN RPC from the master or client.
+ */
+public class OpenRegionHandler extends EventHandler {
+  private static final Log LOG = LogFactory.getLog(OpenRegionHandler.class);
+
+  private final RegionServerServices rsServices;
+
+  private final HRegionInfo regionInfo;
+
+  // We get the version of our znode at the start of the open process and monitor it
+  // across the whole open. We'll fail the open if someone hijacks our znode; we can
+  // tell this has happened if the version is not as expected.
+  private volatile int version = -1;
+
+  public OpenRegionHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo) {
+    this(server, rsServices, regionInfo, EventType.M_RS_OPEN_REGION);
+  }
+
+  protected OpenRegionHandler(final Server server,
+      final RegionServerServices rsServices, final HRegionInfo regionInfo,
+      EventType eventType) {
+    super(server, eventType);
+    this.rsServices = rsServices;
+    this.regionInfo = regionInfo;
+  }
+
+  public HRegionInfo getRegionInfo() {
+    return regionInfo;
+  }
+
+  @Override
+  public void process() throws IOException {
+    final String name = regionInfo.getRegionNameAsString();
+    LOG.debug("Processing open of " + name);
+    if (this.server.isStopped() || this.rsServices.isStopping()) {
+      LOG.info("Server stopping or stopped, skipping open of " + name);
+      return;
+    }
+    final String encodedName = regionInfo.getEncodedName();
+
+    // Check that this region is not already online
+    HRegion region = this.rsServices.getFromOnlineRegions(encodedName);
+    if (region != null) {
+      LOG.warn("Attempted open of " + name +
+        " but already online on this server");
+      return;
+    }
+
+    // If this fails, just return; someone stole the region from under us.
+    // Calling transitionZookeeperOfflineToOpening initializes this.version.
+    if (!transitionZookeeperOfflineToOpening(encodedName)) return;
+
+    // Open region.  After a successful open, any failure in subsequent
+    // processing needs to do a close as part of cleanup.
+    region = openRegion();
+    if (region == null) return;
+    boolean failed = true;
+    if (tickleOpening("post_region_open")) {
+      if (updateMeta(region)) failed = false;
+    }
+
+    if (failed || this.server.isStopped() || this.rsServices.isStopping()) {
+      cleanupFailedOpen(region);
+      return;
+    }
+
+    if (!transitionToOpened(region)) {
+      cleanupFailedOpen(region);
+      return;
+    }
+
+    // Done!  Successful region open
+    LOG.debug("Opened " + name);
+  }
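+
+  /*
+   * A minimal usage sketch (illustrative only; the real wiring lives in the
+   * region server's handling of the master's OPEN RPC): build the handler for
+   * the region we were asked to open and run it, letting process() drive the
+   * znode through OFFLINE -> OPENING -> OPENED.
+   *
+   *   OpenRegionHandler handler =
+   *     new OpenRegionHandler(server, rsServices, regionInfo);
+   *   try {
+   *     handler.process();
+   *   } catch (IOException e) {
+   *     LOG.error("Failed open of " + regionInfo.getRegionNameAsString(), e);
+   *   }
+   */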
+
+  /**
+   * Update ZK, ROOT or META.  This can take a while if, for example, .META.
+   * is not available -- say the server hosting .META. crashed and we are
+   * waiting on it to come back -- so run it in a thread and keep updating the
+   * znode state meantime so the master doesn't time out our
+   * region-in-transition.  The caller must clean up the region if this fails.
+   */
+  private boolean updateMeta(final HRegion r) {
+    if (this.server.isStopped() || this.rsServices.isStopping()) {
+      return false;
+    }
+    // Object we wait/notify on.  It carries a boolean flag: if set, we're
+    // done; else, wait.
+    final AtomicBoolean signaller = new AtomicBoolean(false);
+    PostOpenDeployTasksThread t = new PostOpenDeployTasksThread(r,
+      this.server, this.rsServices, signaller);
+    t.start();
+    int assignmentTimeout = this.server.getConfiguration().
+      getInt("hbase.master.assignment.timeoutmonitor.period", 10000);
+    // Total timeout for meta edit.  If we fail adding the edit then close out
+    // the region and let it be assigned elsewhere.
+    long timeout = assignmentTimeout * 10;
+    long now = System.currentTimeMillis();
+    long endTime = now + timeout;
+    // Let the period at which we update the OPENING state be 1/3rd of the
+    // regions-in-transition timeout period.
+    long period = Math.max(1, assignmentTimeout / 3);
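+    // Worked example with the 10000 ms default above: the total timeout is
+    // 100 seconds and the OPENING znode is re-tickled roughly every 3333 ms.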
+    long lastUpdate = now;
+    while (!signaller.get() && t.isAlive() && !this.server.isStopped() &&
+        !this.rsServices.isStopping() && (endTime > now)) {
+      long elapsed = now - lastUpdate;
+      if (elapsed > period) {
+        // Only tickle OPENING if postOpenDeployTasks is taking some time.
+        lastUpdate = now;
+        tickleOpening("post_open_deploy");
+      }
+      synchronized (signaller) {
+        try {
+          signaller.wait(period);
+        } catch (InterruptedException e) {
+          // Go to the loop check.
+        }
+      }
+      now = System.currentTimeMillis();
+    }
+    // Is the thread still alive?  We may have left the above loop because the
+    // server is stopping or we timed out the edit.  If so, interrupt it.
+    if (t.isAlive()) {
+      if (!signaller.get()) {
+        // Thread still running; interrupt
+        LOG.debug("Interrupting thread " + t);
+        t.interrupt();
+      }
+      try {
+        t.join();
+      } catch (InterruptedException ie) {
+        LOG.warn("Interrupted joining " +
+          r.getRegionInfo().getRegionNameAsString(), ie);
+        Thread.currentThread().interrupt();
+      }
+    }
+    // Was there an exception opening the region?  An InterruptedException is
+    // caught by run() and recorded as the exception, so it is covered too.
+    // Note: use isInterrupted(), not the static Thread.interrupted(), which
+    // would check (and clear) the current thread's flag instead of t's.
+    return !t.isInterrupted() && t.getException() == null;
+  }
+
+  /**
+   * Thread to run region post open tasks.  Call {@link #getException()} after
+   * the thread finishes to check for exceptions running
+   * {@link RegionServerServices#postOpenDeployTasks(HRegion, org.apache.hadoop.hbase.catalog.CatalogTracker, boolean)}.
+   */
+  static class PostOpenDeployTasksThread extends Thread {
+    private Exception exception = null;
+    private final Server server;
+    private final RegionServerServices services;
+    private final HRegion region;
+    private final AtomicBoolean signaller;
+
+    PostOpenDeployTasksThread(final HRegion region, final Server server,
+        final RegionServerServices services, final AtomicBoolean signaller) {
+      super("PostOpenDeployTasks:" + region.getRegionInfo().getEncodedName());
+      this.setDaemon(true);
+      this.server = server;
+      this.services = services;
+      this.region = region;
+      this.signaller = signaller;
+    }
+
+    public void run() {
+      try {
+        this.services.postOpenDeployTasks(this.region,
+          this.server.getCatalogTracker(), false);
+      } catch (Exception e) {
+        LOG.warn("Exception running postOpenDeployTasks; region=" +
+          this.region.getRegionInfo().getEncodedName(), e);
+        this.exception = e;
+      }
+      // We're done.  Set flag then wake up anyone waiting on thread to complete.
+      this.signaller.set(true);
+      synchronized (this.signaller) {
+        this.signaller.notify();
+      }
+    }
+
+    /**
+     * @return Null or the run exception; call this method after thread is done.
+     */
+    Exception getException() {
+      return this.exception;
+    }
+  }
+
+  /**
+   * Transition the znode for <code>r</code> to the OPENED state.
+   * @param r Region we're working on.
+   * @return True if the transition to OPENED succeeded.
+   * @throws IOException
+   */
+  private boolean transitionToOpened(final HRegion r) throws IOException {
+    boolean result = false;
+    HRegionInfo hri = r.getRegionInfo();
+    final String name = hri.getRegionNameAsString();
+    // Finally, Transition ZK node to OPENED
+    try {
+      if (ZKAssign.transitionNodeOpened(this.server.getZooKeeper(), hri,
+          this.server.getServerName(), this.version) == -1) {
+        LOG.warn("Completed the OPEN of region " + name +
+          " but when transitioning from " +
+          " OPENING to OPENED got a version mismatch, someone else clashed " +
+          "so now unassigning -- closing region");
+      } else {
+        result = true;
+      }
+    } catch (KeeperException e) {
+      LOG.error("Failed transitioning node " + name +
+        " from OPENING to OPENED -- closing region", e);
+    }
+    return result;
+  }
+
+  /**
+   * @return Instance of HRegion if successful open else null.
+   */
+  private HRegion openRegion() {
+    HRegion region = null;
+    try {
+      // Instantiate the region.  This also periodically tickles our zk OPENING
+      // state so master doesn't timeout this region in transition.
+      region = HRegion.openHRegion(this.regionInfo, this.rsServices.getWAL(),
+        this.server.getConfiguration(), this.rsServices.getFlushRequester(),
+        new Progressable() {
+          public void progress() {
+            // We may lose the znode ownership during the open.  Currently it's
+            // too hard to interrupt an ongoing region open.  Just let it
+            // complete and check we still have the znode after region open.
+            tickleOpening("open_region_progress");
+          }
+        });
+    } catch (IOException e) {
+      // We failed open.  Let our znode expire in regions-in-transition and
+      // Master will assign elsewhere.  Presumes nothing to close.
+      LOG.error("Failed open of region=" +
+        this.regionInfo.getRegionNameAsString(), e);
+    }
+    return region;
+  }
+
+  private void cleanupFailedOpen(final HRegion region) throws IOException {
+    if (region != null) region.close();
+    this.rsServices.removeFromOnlineRegions(regionInfo.getEncodedName());
+  }
+
+  /**
+   * Transition ZK node from OFFLINE to OPENING.
+   * @param encodedName Name of the znode file (Region encodedName is the znode
+   * name).
+   * @return True if successful transition.
+   */
+  boolean transitionZookeeperOfflineToOpening(final String encodedName) {
+    // TODO: should also handle transition from CLOSED?
+    try {
+      // Initialize the znode version.
+      this.version =
+        ZKAssign.transitionNodeOpening(server.getZooKeeper(),
+          regionInfo, server.getServerName());
+    } catch (KeeperException e) {
+      LOG.error("Error transition from OFFLINE to OPENING for region=" +
+        encodedName, e);
+    }
+    boolean b = isGoodVersion();
+    if (!b) {
+      LOG.warn("Failed transition from OFFLINE to OPENING for region=" +
+        encodedName);
+    }
+    return b;
+  }
+
+  /**
+   * Update our OPENING state in zookeeper.
+   * Do this so the master doesn't time out this region-in-transition.
+   * @param context Some context to add to logs on failure.
+   * @return True if the OPENING state was successfully refreshed.
+   */
+  boolean tickleOpening(final String context) {
+    // If previous checks failed... do not try again.
+    if (!isGoodVersion()) return false;
+    String encodedName = this.regionInfo.getEncodedName();
+    try {
+      this.version =
+        ZKAssign.retransitionNodeOpening(server.getZooKeeper(),
+          this.regionInfo, this.server.getServerName(), this.version);
+    } catch (KeeperException e) {
+      server.abort("Exception refreshing OPENING; region=" + encodedName +
+        ", context=" + context, e);
+    }
+    boolean b = isGoodVersion();
+    if (!b) {
+      LOG.warn("Failed refreshing OPENING; region=" + encodedName +
+        ", context=" + context);
+    }
+    return b;
+  }
+
+  private boolean isGoodVersion() {
+    return this.version != -1;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRootHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRootHandler.java
new file mode 100644
index 0000000..3e5e1a6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRootHandler.java
@@ -0,0 +1,36 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.handler;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+
+/**
+ * Handles opening of the root region on a region server.
+ * <p>
+ * This is executed after receiving an OPEN RPC from the master for root.
+ */
+public class OpenRootHandler extends OpenRegionHandler {
+  public OpenRootHandler(final Server server,
+      final RegionServerServices rsServices, HRegionInfo regionInfo) {
+    super(server, rsServices, regionInfo, EventType.M_RS_OPEN_ROOT);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java
new file mode 100644
index 0000000..8b0b43f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerMetrics.java
@@ -0,0 +1,369 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.metrics;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.metrics.HBaseInfo;
+import org.apache.hadoop.hbase.metrics.MetricsRate;
+import org.apache.hadoop.hbase.metrics.PersistentMetricsTimeVaryingRate;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.Strings;
+import org.apache.hadoop.metrics.ContextFactory;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsIntValue;
+import org.apache.hadoop.metrics.util.MetricsLongValue;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;
+
+import java.io.IOException;
+import java.lang.management.ManagementFactory;
+import java.lang.management.MemoryUsage;
+import java.util.List;
+
+/**
+ * This class is for maintaining the various regionserver statistics
+ * and publishing them through the metrics interfaces.
+ * <p>
+ * This class has a number of metrics variables that are publicly accessible;
+ * these variables (objects) have methods to update their values.
+ */
+public class RegionServerMetrics implements Updater {
+  @SuppressWarnings({"FieldCanBeLocal"})
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  private final MetricsRecord metricsRecord;
+  private long lastUpdate = System.currentTimeMillis();
+  private long lastExtUpdate = System.currentTimeMillis();
+  private long extendedPeriod = 0;
+  private static final int MB = 1024*1024;
+  private MetricsRegistry registry = new MetricsRegistry();
+  private final RegionServerStatistics statistics;
+
+  public final MetricsTimeVaryingRate atomicIncrementTime =
+      new MetricsTimeVaryingRate("atomicIncrementTime", registry);
+
+  /**
+   * Count of regions carried by this regionserver
+   */
+  public final MetricsIntValue regions =
+    new MetricsIntValue("regions", registry);
+
+  /**
+   * Block cache size.
+   */
+  public final MetricsLongValue blockCacheSize = new MetricsLongValue("blockCacheSize", registry);
+
+  /**
+   * Block cache free size.
+   */
+  public final MetricsLongValue blockCacheFree = new MetricsLongValue("blockCacheFree", registry);
+
+  /**
+   * Block cache item count.
+   */
+  public final MetricsLongValue blockCacheCount = new MetricsLongValue("blockCacheCount", registry);
+
+  /**
+   * Block cache hit count.
+   */
+  public final MetricsLongValue blockCacheHitCount = new MetricsLongValue("blockCacheHitCount", registry);
+
+  /**
+   * Block cache miss count.
+   */
+  public final MetricsLongValue blockCacheMissCount = new MetricsLongValue("blockCacheMissCount", registry);
+
+  /**
+   * Block cache evict count.
+   */
+  public final MetricsLongValue blockCacheEvictedCount = new MetricsLongValue("blockCacheEvictedCount", registry);
+
+  /**
+   * Block hit ratio.
+   */
+  public final MetricsIntValue blockCacheHitRatio = new MetricsIntValue("blockCacheHitRatio", registry);
+
+  /**
+   * Block hit caching ratio.  This only includes the requests to the block
+   * cache where caching was turned on.  See HBASE-2253.
+   */
+  public final MetricsIntValue blockCacheHitCachingRatio = new MetricsIntValue("blockCacheHitCachingRatio", registry);
+
+  /*
+   * Count of requests to the regionservers since last call to metrics update
+   */
+  private final MetricsRate requests = new MetricsRate("requests", registry);
+
+  /**
+   * Count of stores open on the regionserver.
+   */
+  public final MetricsIntValue stores = new MetricsIntValue("stores", registry);
+
+  /**
+   * Count of storefiles open on the regionserver.
+   */
+  public final MetricsIntValue storefiles = new MetricsIntValue("storefiles", registry);
+
+  /**
+   * Sum of all the storefile index sizes in this regionserver in MB
+   */
+  public final MetricsIntValue storefileIndexSizeMB =
+    new MetricsIntValue("storefileIndexSizeMB", registry);
+
+  /**
+   * Sum of all the memstore sizes in this regionserver in MB
+   */
+  public final MetricsIntValue memstoreSizeMB =
+    new MetricsIntValue("memstoreSizeMB", registry);
+
+  /**
+   * Size of the compaction queue.
+   */
+  public final MetricsIntValue compactionQueueSize =
+    new MetricsIntValue("compactionQueueSize", registry);
+
+  /**
+   * filesystem read latency
+   */
+  public final MetricsTimeVaryingRate fsReadLatency =
+    new MetricsTimeVaryingRate("fsReadLatency", registry);
+
+  /**
+   * filesystem write latency
+   */
+  public final MetricsTimeVaryingRate fsWriteLatency =
+    new MetricsTimeVaryingRate("fsWriteLatency", registry);
+
+  /**
+   * filesystem sync latency
+   */
+  public final MetricsTimeVaryingRate fsSyncLatency =
+    new MetricsTimeVaryingRate("fsSyncLatency", registry);
+
+  /**
+   * time each scheduled compaction takes
+   */
+  protected final PersistentMetricsTimeVaryingRate compactionTime =
+    new PersistentMetricsTimeVaryingRate("compactionTime", registry);
+
+  protected final PersistentMetricsTimeVaryingRate compactionSize =
+    new PersistentMetricsTimeVaryingRate("compactionSize", registry);
+
+  /**
+   * time each scheduled flush takes
+   */
+  protected final PersistentMetricsTimeVaryingRate flushTime =
+    new PersistentMetricsTimeVaryingRate("flushTime", registry);
+
+  protected final PersistentMetricsTimeVaryingRate flushSize =
+    new PersistentMetricsTimeVaryingRate("flushSize", registry);
+
+  public RegionServerMetrics() {
+    MetricsContext context = MetricsUtil.getContext("hbase");
+    metricsRecord = MetricsUtil.createRecord(context, "regionserver");
+    String name = Thread.currentThread().getName();
+    metricsRecord.setTag("RegionServer", name);
+    context.registerUpdater(this);
+    // Add jvmmetrics.
+    JvmMetrics.init("RegionServer", name);
+    // Add Hbase Info metrics
+    HBaseInfo.init();
+
+    // export for JMX
+    statistics = new RegionServerStatistics(this.registry, name);
+
+    // get custom attributes
+    try {
+      Object m = ContextFactory.getFactory().getAttribute("hbase.extendedperiod");
+      if (m instanceof String) {
+        this.extendedPeriod = Long.parseLong((String) m)*1000;
+      }
+    } catch (IOException ioe) {
+      LOG.info("Couldn't load ContextFactory for Metrics config info");
+    }
+
+    LOG.info("Initialized");
+  }
+
+  public void shutdown() {
+    if (statistics != null)
+      statistics.shutdown();
+  }
+
+  /**
+   * Since this object is a registered updater, this method will be called
+   * periodically, e.g. every 5 seconds.
+   * @param caller the metrics context that is responsible for calling us
+   */
+  public void doUpdates(MetricsContext caller) {
+    synchronized (this) {
+      this.lastUpdate = System.currentTimeMillis();
+
+      // has the extended period for long-living stats elapsed?
+      if (this.extendedPeriod > 0 &&
+          this.lastUpdate - this.lastExtUpdate >= this.extendedPeriod) {
+        this.lastExtUpdate = this.lastUpdate;
+        this.compactionTime.resetMinMaxAvg();
+        this.compactionSize.resetMinMaxAvg();
+        this.flushTime.resetMinMaxAvg();
+        this.flushSize.resetMinMaxAvg();
+        this.resetAllMinMax();
+      }
+
+      this.stores.pushMetric(this.metricsRecord);
+      this.storefiles.pushMetric(this.metricsRecord);
+      this.storefileIndexSizeMB.pushMetric(this.metricsRecord);
+      this.memstoreSizeMB.pushMetric(this.metricsRecord);
+      this.regions.pushMetric(this.metricsRecord);
+      this.requests.pushMetric(this.metricsRecord);
+      this.compactionQueueSize.pushMetric(this.metricsRecord);
+      this.blockCacheSize.pushMetric(this.metricsRecord);
+      this.blockCacheFree.pushMetric(this.metricsRecord);
+      this.blockCacheCount.pushMetric(this.metricsRecord);
+      this.blockCacheHitCount.pushMetric(this.metricsRecord);
+      this.blockCacheMissCount.pushMetric(this.metricsRecord);
+      this.blockCacheEvictedCount.pushMetric(this.metricsRecord);
+      this.blockCacheHitRatio.pushMetric(this.metricsRecord);
+      this.blockCacheHitCachingRatio.pushMetric(this.metricsRecord);
+
+      // Mix in HFile and HLog metrics
+      // Be careful. Here is code for MTVR from up in hadoop:
+      // public synchronized void inc(final int numOps, final long time) {
+      //   currentData.numOperations += numOps;
+      //   currentData.time += time;
+      //   long timePerOps = time/numOps;
+      //    minMax.update(timePerOps);
+      // }
+      // Means you can't pass a numOps of zero or you'll get an ArithmeticException (/ by zero).
+      int ops = (int)HFile.getReadOps();
+      if (ops != 0) this.fsReadLatency.inc(ops, HFile.getReadTime());
+      ops = (int)HFile.getWriteOps();
+      if (ops != 0) this.fsWriteLatency.inc(ops, HFile.getWriteTime());
+      // mix in HLog metrics
+      ops = (int)HLog.getWriteOps();
+      if (ops != 0) this.fsWriteLatency.inc(ops, HLog.getWriteTime());
+      ops = (int)HLog.getSyncOps();
+      if (ops != 0) this.fsSyncLatency.inc(ops, HLog.getSyncTime());
+
+      // push the result
+      this.fsReadLatency.pushMetric(this.metricsRecord);
+      this.fsWriteLatency.pushMetric(this.metricsRecord);
+      this.fsSyncLatency.pushMetric(this.metricsRecord);
+      this.compactionTime.pushMetric(this.metricsRecord);
+      this.compactionSize.pushMetric(this.metricsRecord);
+      this.flushTime.pushMetric(this.metricsRecord);
+      this.flushSize.pushMetric(this.metricsRecord);
+    }
+    this.metricsRecord.update();
+  }
+
+  public void resetAllMinMax() {
+    this.atomicIncrementTime.resetMinMax();
+    this.fsReadLatency.resetMinMax();
+    this.fsWriteLatency.resetMinMax();
+    this.fsSyncLatency.resetMinMax();
+  }
+
+  /**
+   * @return Count of requests.
+   */
+  public float getRequests() {
+    return this.requests.getPreviousIntervalValue();
+  }
+
+  /**
+   * @param compact history as a &lt;time, size&gt; pair
+   */
+  public synchronized void addCompaction(final Pair<Long,Long> compact) {
+    this.compactionTime.inc(compact.getFirst());
+    this.compactionSize.inc(compact.getSecond());
+  }
+
+  /**
+   * @param flushes history as &lt;time, size&gt; pairs
+   */
+  public synchronized void addFlush(final List<Pair<Long,Long>> flushes) {
+    for (Pair<Long,Long> f : flushes) {
+      this.flushTime.inc(f.getFirst());
+      this.flushSize.inc(f.getSecond());
+    }
+  }
+
+  /**
+   * @param inc How much to add to requests.
+   */
+  public void incrementRequests(final int inc) {
+    this.requests.inc(inc);
+  }
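+
+  /*
+   * Usage sketch (names here are illustrative; the real updates happen in the
+   * region server's metrics loop): callers set or bump the public metric
+   * fields and this updater publishes them on the next doUpdates() call.
+   *
+   *   RegionServerMetrics metrics = new RegionServerMetrics();
+   *   metrics.regions.set(onlineRegionCount);
+   *   metrics.stores.set(storeCount);
+   *   metrics.incrementRequests(1);
+   *   // values are pushed when the metrics context invokes doUpdates()
+   */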
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    int seconds = (int)((System.currentTimeMillis() - this.lastUpdate)/1000);
+    if (seconds == 0) {
+      seconds = 1;
+    }
+    sb = Strings.appendKeyValue(sb, "request",
+      Float.valueOf(this.requests.getPreviousIntervalValue()));
+    sb = Strings.appendKeyValue(sb, "regions",
+      Integer.valueOf(this.regions.get()));
+    sb = Strings.appendKeyValue(sb, "stores",
+      Integer.valueOf(this.stores.get()));
+    sb = Strings.appendKeyValue(sb, "storefiles",
+      Integer.valueOf(this.storefiles.get()));
+    sb = Strings.appendKeyValue(sb, "storefileIndexSize",
+      Integer.valueOf(this.storefileIndexSizeMB.get()));
+    sb = Strings.appendKeyValue(sb, "memstoreSize",
+      Integer.valueOf(this.memstoreSizeMB.get()));
+    sb = Strings.appendKeyValue(sb, "compactionQueueSize",
+      Integer.valueOf(this.compactionQueueSize.get()));
+    // Duplicate from jvmmetrics because metrics are private there so
+    // inaccessible.
+    MemoryUsage memory =
+      ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
+    sb = Strings.appendKeyValue(sb, "usedHeap",
+      Long.valueOf(memory.getUsed()/MB));
+    sb = Strings.appendKeyValue(sb, "maxHeap",
+      Long.valueOf(memory.getMax()/MB));
+    sb = Strings.appendKeyValue(sb, this.blockCacheSize.getName(),
+        Long.valueOf(this.blockCacheSize.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheFree.getName(),
+        Long.valueOf(this.blockCacheFree.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheCount.getName(),
+        Long.valueOf(this.blockCacheCount.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheHitCount.getName(),
+        Long.valueOf(this.blockCacheHitCount.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheMissCount.getName(),
+        Long.valueOf(this.blockCacheMissCount.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheEvictedCount.getName(),
+        Long.valueOf(this.blockCacheEvictedCount.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheHitRatio.getName(),
+        Long.valueOf(this.blockCacheHitRatio.get()));
+    sb = Strings.appendKeyValue(sb, this.blockCacheHitCachingRatio.getName(),
+        Long.valueOf(this.blockCacheHitCachingRatio.get()));
+    return sb.toString();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerStatistics.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerStatistics.java
new file mode 100644
index 0000000..04fe7b1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerStatistics.java
@@ -0,0 +1,47 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.metrics;
+
+import org.apache.hadoop.hbase.metrics.MetricsMBeanBase;
+import org.apache.hadoop.metrics.util.MBeanUtil;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+import javax.management.ObjectName;
+
+/**
+ * Exports metrics recorded by {@link RegionServerMetrics} as an MBean
+ * for JMX monitoring.
+ */
+public class RegionServerStatistics extends MetricsMBeanBase {
+
+  private final ObjectName mbeanName;
+
+  public RegionServerStatistics(MetricsRegistry registry, String rsName) {
+    super(registry, "RegionServerStatistics");
+    mbeanName = MBeanUtil.registerMBean("RegionServer",
+        "RegionServerStatistics", this);
+  }
+
+  public void shutdown() {
+    if (mbeanName != null)
+      MBeanUtil.unregisterMBean(mbeanName);
+  }
+
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java
new file mode 100644
index 0000000..393b1d2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.IOException;
+
+/**
+ * Thrown when we fail to close the write-ahead-log file.
+ * Thrown by {@link HLog#rollWriter()} so callers can distinguish a failed
+ * close of the log from other IO errors.
+ */
+public class FailedLogCloseException extends IOException {
+  private static final long serialVersionUID = 1759152841462990925L;
+
+  /**
+   * Default constructor.
+   */
+  public FailedLogCloseException() {
+    super();
+  }
+
+  /**
+   * @param msg Message describing the failed close.
+   */
+  public FailedLogCloseException(String msg) {
+    super(msg);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java
new file mode 100644
index 0000000..f06d263
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLog.java
@@ -0,0 +1,1485 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.UnsupportedEncodingException;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.net.URLEncoder;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReentrantLock;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.fs.Syncable;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * HLog stores all the edits to the HStore.  Its the hbase write-ahead-log
+ * implementation.
+ *
+ * It performs logfile-rolling, so external callers are not aware that the
+ * underlying file is being rolled.
+ *
+ * <p>
+ * There is one HLog per RegionServer.  All edits for all Regions carried by
+ * a particular RegionServer are entered first in the HLog.
+ *
+ * <p>
+ * Each HRegion is identified by a unique HRegion-id. HRegions do
+ * not need to declare themselves before using the HLog; they simply include
+ * their HRegion-id in the <code>append</code> or
+ * <code>completeCacheFlush</code> calls.
+ *
+ * <p>
+ * An HLog consists of multiple on-disk files, which have a chronological order.
+ * As data is flushed to other (better) on-disk structures, the log becomes
+ * obsolete. We can destroy all the log messages for a given HRegion-id up to
+ * the most-recent CACHEFLUSH message from that HRegion.
+ *
+ * <p>
+ * It's only practical to delete entire files. Thus, we delete an entire on-disk
+ * file F when all of the messages in F have a log-sequence-id that's older
+ * (smaller) than the most-recent CACHEFLUSH message for every HRegion that has
+ * a message in F.
+ *
+ * <p>
+ * Synchronized methods can never execute in parallel. However, between the
+ * start of a cache flush and the completion point, appends are allowed but log
+ * rolling is not. To prevent log rolling taking place during this period, a
+ * separate reentrant lock is used.
+ *
+ * <p>To read an HLog, call {@link #getReader(org.apache.hadoop.fs.FileSystem,
+ * org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration)}.
+ *
+ */
+public class HLog implements Syncable {
+  static final Log LOG = LogFactory.getLog(HLog.class);
+  public static final byte [] METAFAMILY = Bytes.toBytes("METAFAMILY");
+  static final byte [] METAROW = Bytes.toBytes("METAROW");
+
+  /*
+   * Name of directory that holds recovered edits written by the wal log
+   * splitting code, one per region
+   */
+  private static final String RECOVERED_EDITS_DIR = "recovered.edits";
+  private static final Pattern EDITFILES_NAME_PATTERN =
+    Pattern.compile("-?[0-9]+");
+  
+  private final FileSystem fs;
+  private final Path dir;
+  private final Configuration conf;
+  // Listeners that are called on WAL events.
+  private List<WALObserver> listeners =
+    new CopyOnWriteArrayList<WALObserver>();
+  private final long optionalFlushInterval;
+  private final long blocksize;
+  private final int flushlogentries;
+  private final String prefix;
+  private final Path oldLogDir;
+  private boolean logRollRequested;
+
+
+  private static Class<? extends Writer> logWriterClass;
+  private static Class<? extends Reader> logReaderClass;
+
+  static void resetLogReaderClass() {
+    HLog.logReaderClass = null;
+  }
+
+  private OutputStream hdfs_out;     // OutputStream associated with the current SequenceFile.writer
+  private int initialReplication;    // initial replication factor of SequenceFile.writer
+  private Method getNumCurrentReplicas; // refers to DFSOutputStream.getNumCurrentReplicas
+  final static Object [] NO_ARGS = new Object []{};
+
+  // used to indirectly tell syncFs to force the sync
+  private boolean forceSync = false;
+
+  public interface Reader {
+    void init(FileSystem fs, Path path, Configuration c) throws IOException;
+    void close() throws IOException;
+    Entry next() throws IOException;
+    Entry next(Entry reuse) throws IOException;
+    void seek(long pos) throws IOException;
+    long getPosition() throws IOException;
+  }
+
+  public interface Writer {
+    void init(FileSystem fs, Path path, Configuration c) throws IOException;
+    void close() throws IOException;
+    void sync() throws IOException;
+    void append(Entry entry) throws IOException;
+    long getLength() throws IOException;
+  }
+
+  /*
+   * Current log file.
+   */
+  Writer writer;
+
+  /*
+   * Map of all log files but the current one.
+   */
+  final SortedMap<Long, Path> outputfiles =
+    Collections.synchronizedSortedMap(new TreeMap<Long, Path>());
+
+  /*
+   * Map of regions to most recent sequence/edit id in their memstore.
+   * Key is encoded region name.
+   */
+  private final ConcurrentSkipListMap<byte [], Long> lastSeqWritten =
+    new ConcurrentSkipListMap<byte [], Long>(Bytes.BYTES_COMPARATOR);
+
+  private volatile boolean closed = false;
+
+  private final AtomicLong logSeqNum = new AtomicLong(0);
+
+  // The timestamp (in ms) when the log file was created.
+  private volatile long filenum = -1;
+
+  // Number of transactions in the current HLog.
+  private final AtomicInteger numEntries = new AtomicInteger(0);
+
+  // If > than this size, roll the log. This is typically 0.95 times the size
+  // of the default Hdfs block size.
+  private final long logrollsize;
+
+  // This lock prevents starting a log roll during a cache flush.
+  // synchronized is insufficient because a cache flush spans two method calls.
+  private final Lock cacheFlushLock = new ReentrantLock();
+
+  // We synchronize on updateLock to prevent updates and to prevent a log roll
+  // during an update
+  // locked during appends
+  private final Object updateLock = new Object();
+
+  private final boolean enabled;
+
+  /*
+   * If there are more than this many logs, force a flush of the oldest region
+   * so its oldest edit goes to disk.  If we keep too many logs and then crash,
+   * replay will take forever.  Keep the number of logs tidy.
+   */
+  private final int maxLogs;
+
+  /**
+   * Thread that handles optional sync'ing
+   */
+  private final LogSyncer logSyncerThread;
+
+  /**
+   * Pattern used to validate a HLog file name
+   */
+  private static final Pattern pattern = Pattern.compile(".*\\.\\d*");
+
+  static byte [] COMPLETE_CACHE_FLUSH;
+  static {
+    try {
+      COMPLETE_CACHE_FLUSH =
+        "HBASE::CACHEFLUSH".getBytes(HConstants.UTF8_ENCODING);
+    } catch (UnsupportedEncodingException e) {
+      assert(false);
+    }
+  }
+
+  // For measuring latency of writes
+  private static volatile long writeOps;
+  private static volatile long writeTime;
+  // For measuring latency of syncs
+  private static volatile long syncOps;
+  private static volatile long syncTime;
+  
+  public static long getWriteOps() {
+    long ret = writeOps;
+    writeOps = 0;
+    return ret;
+  }
+
+  public static long getWriteTime() {
+    long ret = writeTime;
+    writeTime = 0;
+    return ret;
+  }
+
+  public static long getSyncOps() {
+    long ret = syncOps;
+    syncOps = 0;
+    return ret;
+  }
+
+  public static long getSyncTime() {
+    long ret = syncTime;
+    syncTime = 0;
+    return ret;
+  }
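+
+  /*
+   * These read-and-reset accessors are polled by
+   * RegionServerMetrics#doUpdates(), which mixes the counts into its
+   * fsWriteLatency and fsSyncLatency metrics (skipping an interval when the
+   * op count is zero to avoid a divide-by-zero in the underlying rate).
+   */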
+
+  /**
+   * Constructor.
+   *
+   * @param fs filesystem handle
+   * @param dir path to where hlogs are stored
+   * @param oldLogDir path to where hlogs are archived
+   * @param conf configuration to use
+   * @throws IOException
+   */
+  public HLog(final FileSystem fs, final Path dir, final Path oldLogDir,
+              final Configuration conf)
+  throws IOException {
+    this(fs, dir, oldLogDir, conf, null, true, null);
+  }
+
+  /**
+   * Create an edit log at the given <code>dir</code> location.
+   *
+   * You should never have to load an existing log. If there is a log at
+   * startup, it should have already been processed and deleted by the time the
+   * HLog object is started up.
+   *
+   * @param fs filesystem handle
+   * @param dir path to where hlogs are stored
+   * @param oldLogDir path to where hlogs are archived
+   * @param conf configuration to use
+   * @param listeners Listeners on WAL events. Listeners passed here will
+   * be registered before we do anything else; e.g. before the constructor
+   * calls {@link #rollWriter()}.
+   * @param prefix should always be hostname and port in distributed env and
+   *        it will be URL encoded before being used.
+   *        If prefix is null, "hlog" will be used
+   * @throws IOException
+   */
+  public HLog(final FileSystem fs, final Path dir, final Path oldLogDir,
+      final Configuration conf, final List<WALObserver> listeners,
+      final String prefix) throws IOException {
+    this(fs, dir, oldLogDir, conf, listeners, true, prefix);
+  }
+
+  /**
+   * Create an edit log at the given <code>dir</code> location.
+   *
+   * You should never have to load an existing log. If there is a log at
+   * startup, it should have already been processed and deleted by the time the
+   * HLog object is started up.
+   *
+   * @param fs filesystem handle
+   * @param dir path to where hlogs are stored
+   * @param oldLogDir path to where hlogs are archived
+   * @param conf configuration to use
+   * @param listeners Listeners on WAL events. Listeners passed here will
+   * be registered before we do anything else; e.g. before the constructor
+   * calls {@link #rollWriter()}.
+   * @param failIfLogDirExists If true, an IOException is thrown if <code>dir</code> already exists.
+   * @param prefix should always be hostname and port in distributed env and
+   *        it will be URL encoded before being used.
+   *        If prefix is null, "hlog" will be used
+   * @throws IOException
+   */
+  public HLog(final FileSystem fs, final Path dir, final Path oldLogDir,
+      final Configuration conf, final List<WALObserver> listeners,
+      final boolean failIfLogDirExists, final String prefix)
+  throws IOException {
+    super();
+    this.fs = fs;
+    this.dir = dir;
+    this.conf = conf;
+    if (listeners != null) {
+      for (WALObserver i: listeners) {
+        registerWALActionsListener(i);
+      }
+    }
+    this.flushlogentries =
+      conf.getInt("hbase.regionserver.flushlogentries", 1);
+    this.blocksize = conf.getLong("hbase.regionserver.hlog.blocksize",
+      this.fs.getDefaultBlockSize());
+    // Roll at 95% of block size.
+    float multi = conf.getFloat("hbase.regionserver.logroll.multiplier", 0.95f);
+    this.logrollsize = (long)(this.blocksize * multi);
+    this.optionalFlushInterval =
+      conf.getLong("hbase.regionserver.optionallogflushinterval", 1 * 1000);
+    if (failIfLogDirExists && fs.exists(dir)) {
+      throw new IOException("Target HLog directory already exists: " + dir);
+    }
+    if (!fs.mkdirs(dir)) {
+      throw new IOException("Unable to mkdir " + dir);
+    }
+    this.oldLogDir = oldLogDir;
+    if (!fs.exists(oldLogDir)) {
+      if (!fs.mkdirs(this.oldLogDir)) {
+        throw new IOException("Unable to mkdir " + this.oldLogDir);
+      }
+    }
+    this.maxLogs = conf.getInt("hbase.regionserver.maxlogs", 32);
+    this.enabled = conf.getBoolean("hbase.regionserver.hlog.enabled", true);
+    LOG.info("HLog configuration: blocksize=" +
+      StringUtils.byteDesc(this.blocksize) +
+      ", rollsize=" + StringUtils.byteDesc(this.logrollsize) +
+      ", enabled=" + this.enabled +
+      ", flushlogentries=" + this.flushlogentries +
+      ", optionallogflushinternal=" + this.optionalFlushInterval + "ms");
+    // If prefix is null||empty then just name it hlog
+    this.prefix = prefix == null || prefix.isEmpty() ?
+        "hlog" : URLEncoder.encode(prefix, "UTF8");
+    // rollWriter sets this.hdfs_out if it can.
+    rollWriter();
+
+    // handle the reflection necessary to call getNumCurrentReplicas()
+    this.getNumCurrentReplicas = null;
+    Exception exception = null;
+    if (this.hdfs_out != null) {
+      try {
+        this.getNumCurrentReplicas = this.hdfs_out.getClass().
+          getMethod("getNumCurrentReplicas", new Class<?> []{});
+        this.getNumCurrentReplicas.setAccessible(true);
+      } catch (NoSuchMethodException e) {
+        // Thrown if getNumCurrentReplicas() function isn't available
+        exception = e;
+      } catch (SecurityException e) {
+        // Thrown if we can't get access to getNumCurrentReplicas()
+        exception = e;
+        this.getNumCurrentReplicas = null; // could happen on setAccessible()
+      }
+    }
+    if (this.getNumCurrentReplicas != null) {
+      LOG.info("Using getNumCurrentReplicas--HDFS-826");
+    } else {
+      LOG.info("getNumCurrentReplicas--HDFS-826 not available; hdfs_out=" +
+        this.hdfs_out + ", exception=" +
+        (exception == null? "none": exception.getMessage()));
+    }
+
+    logSyncerThread = new LogSyncer(this.optionalFlushInterval);
+    Threads.setDaemonThreadRunning(logSyncerThread,
+        Thread.currentThread().getName() + ".logSyncer");
+  }
+
+  public void registerWALActionsListener (final WALObserver listener) {
+    this.listeners.add(listener);
+  }
+
+  public boolean unregisterWALActionsListener(final WALObserver listener) {
+    return this.listeners.remove(listener);
+  }
+
+  /**
+   * @return Current state of the monotonically increasing file id.
+   */
+  public long getFilenum() {
+    return this.filenum;
+  }
+
+  /**
+   * Called by HRegionServer when it opens a new region to ensure that log
+   * sequence numbers are always greater than the latest sequence number of the
+   * region being brought on-line.
+   *
+   * @param newvalue We'll set log edit/sequence number to this value if it
+   * is greater than the current value.
+   */
+  public void setSequenceNumber(final long newvalue) {
+    for (long id = this.logSeqNum.get(); id < newvalue &&
+        !this.logSeqNum.compareAndSet(id, newvalue); id = this.logSeqNum.get()) {
+      // This could spin on occasion but better the occasional spin than locking
+      // every increment of sequence number.
+      LOG.debug("Changed sequenceid from " + logSeqNum + " to " + newvalue);
+    }
+  }
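+
+  /*
+   * Example: if logSeqNum is currently 5 and the region being opened reports
+   * a latest sequence id of 42, the CAS loop advances logSeqNum to 42; if
+   * another thread bumps logSeqNum concurrently, the loop simply re-reads and
+   * retries until either the CAS succeeds or logSeqNum has already passed 42.
+   */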
+
+  /**
+   * @return log sequence number
+   */
+  public long getSequenceNumber() {
+    return logSeqNum.get();
+  }
+
+  // usage: see TestLogRolling.java
+  OutputStream getOutputStream() {
+    return this.hdfs_out;
+  }
+
+  /**
+   * Roll the log writer. That is, start writing log messages to a new file.
+   *
+   * Because a log cannot be rolled during a cache flush, and a cache flush
+   * spans two method calls, a special lock needs to be obtained so that a cache
+   * flush cannot start when the log is being rolled and the log cannot be
+   * rolled during a cache flush.
+   *
+   * <p>Note that this method cannot be synchronized because it is possible that
+   * startCacheFlush runs, obtaining the cacheFlushLock, then this method could
+   * start which would obtain the lock on this but block on obtaining the
+   * cacheFlushLock and then completeCacheFlush could be called which would wait
+   * for the lock on this and consequently never release the cacheFlushLock
+   *
+   * @return If lots of logs, flush the returned regions so next time through
+   * we can clean logs. Returns null if nothing to flush.  Names are actual
+   * region names as returned by {@link HRegionInfo#getEncodedName()}
+   * @throws org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException
+   * @throws IOException
+   */
+  public byte [][] rollWriter() throws FailedLogCloseException, IOException {
+    // Return if nothing to flush.
+    if (this.writer != null && this.numEntries.get() <= 0) {
+      return null;
+    }
+    byte [][] regionsToFlush = null;
+    this.cacheFlushLock.lock();
+    try {
+      if (closed) {
+        return regionsToFlush;
+      }
+      // Do all the preparation outside of the updateLock so we block
+      // incoming writes for as short a time as possible.
+      long currentFilenum = this.filenum;
+      this.filenum = System.currentTimeMillis();
+      Path newPath = computeFilename();
+      HLog.Writer nextWriter = this.createWriterInstance(fs, newPath,
+          HBaseConfiguration.create(conf));
+      int nextInitialReplication = fs.getFileStatus(newPath).getReplication();
+      // Can we get at the dfsclient outputstream?  If an instance of
+      // SFLW, it'll have done the necessary reflection to get at the
+      // protected field name.
+      OutputStream nextHdfsOut = null;
+      if (nextWriter instanceof SequenceFileLogWriter) {
+        nextHdfsOut =
+          ((SequenceFileLogWriter)nextWriter).getDFSCOutputStream();
+      }
+      // Tell our listeners that a new log was created
+      if (!this.listeners.isEmpty()) {
+        for (WALObserver i : this.listeners) {
+          i.logRolled(newPath);
+        }
+      }
+
+      synchronized (updateLock) {
+        // Clean up current writer.
+        Path oldFile = cleanupCurrentWriter(currentFilenum);
+        this.writer = nextWriter;
+        this.initialReplication = nextInitialReplication;
+        this.hdfs_out = nextHdfsOut;
+
+        LOG.info((oldFile != null?
+            "Roll " + FSUtils.getPath(oldFile) + ", entries=" +
+            this.numEntries.get() +
+            ", filesize=" +
+            this.fs.getFileStatus(oldFile).getLen() + ". ": "") +
+          "New hlog " + FSUtils.getPath(newPath));
+        this.numEntries.set(0);
+        this.logRollRequested = false;
+      }
+      // Can we delete any of the old log files?
+      if (this.outputfiles.size() > 0) {
+        if (this.lastSeqWritten.isEmpty()) {
+          LOG.debug("Last sequenceid written is empty. Deleting all old hlogs");
+          // If so, then no new writes have come in since all regions were
+          // flushed (and removed from the lastSeqWritten map). Means can
+          // remove all but currently open log file.
+          for (Map.Entry<Long, Path> e : this.outputfiles.entrySet()) {
+            archiveLogFile(e.getValue(), e.getKey());
+          }
+          this.outputfiles.clear();
+        } else {
+          regionsToFlush = cleanOldLogs();
+        }
+      }
+    } finally {
+      this.cacheFlushLock.unlock();
+    }
+    return regionsToFlush;
+  }
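+
+  /*
+   * Sketch of how a caller (e.g. the log-rolling chore) might consume the
+   * return value; the flush request itself happens outside this class:
+   *
+   *   byte [][] regionsToFlush = hlog.rollWriter();
+   *   if (regionsToFlush != null) {
+   *     for (byte [] encodedRegionName : regionsToFlush) {
+   *       // ask the owning region server to flush this region so the
+   *       // corresponding old hlogs can be archived on a later roll
+   *     }
+   *   }
+   */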
+
+  /**
+   * This method allows subclasses to inject different writers without having to
+   * extend other methods like rollWriter().
+   * 
+   * @param fs
+   * @param path
+   * @param conf
+   * @return Writer instance
+   * @throws IOException
+   */
+  protected Writer createWriterInstance(final FileSystem fs, final Path path,
+      final Configuration conf) throws IOException {
+    return createWriter(fs, path, conf);
+  }
+
+  /**
+   * Get a reader for the WAL.
+   * @param fs
+   * @param path
+   * @param conf
+   * @return A WAL reader.  Close when done with it.
+   * @throws IOException
+   */
+  public static Reader getReader(final FileSystem fs,
+    final Path path, Configuration conf)
+  throws IOException {
+    try {
+      if (logReaderClass == null) {
+        logReaderClass = conf.getClass("hbase.regionserver.hlog.reader.impl",
+            SequenceFileLogReader.class, Reader.class);
+      }
+      HLog.Reader reader = logReaderClass.newInstance();
+      reader.init(fs, path, conf);
+      return reader;
+    } catch (IOException e) {
+      throw e;
+    } catch (Exception e) {
+      throw new IOException("Cannot get log reader", e);
+    }
+  }
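+
+  /*
+   * Reader usage sketch (walPath is hypothetical):
+   *
+   *   HLog.Reader reader = HLog.getReader(fs, walPath, conf);
+   *   try {
+   *     HLog.Entry entry;
+   *     while ((entry = reader.next()) != null) {
+   *       // entry carries the log key (region, sequenceid) and the edit itself
+   *     }
+   *   } finally {
+   *     reader.close();
+   *   }
+   */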
+
+  /**
+   * Get a writer for the WAL.
+   * @param fs
+   * @param path
+   * @param conf
+   * @return A WAL writer.  Close when done with it.
+   * @throws IOException
+   */
+  public static Writer createWriter(final FileSystem fs,
+      final Path path, Configuration conf)
+  throws IOException {
+    try {
+      if (logWriterClass == null) {
+        logWriterClass = conf.getClass("hbase.regionserver.hlog.writer.impl",
+            SequenceFileLogWriter.class, Writer.class);
+      }
+      HLog.Writer writer = (HLog.Writer) logWriterClass.newInstance();
+      writer.init(fs, path, conf);
+      return writer;
+    } catch (Exception e) {
+      IOException ie = new IOException("cannot get log writer");
+      ie.initCause(e);
+      throw ie;
+    }
+  }
+
+  /*
+   * Clean up old commit logs.
+   * @return If lots of logs, flush the returned region so next time through
+   * we can clean logs. Returns null if nothing to flush.  Returns array of
+   * encoded region names to flush.
+   * @throws IOException
+   */
+  private byte [][] cleanOldLogs() throws IOException {
+    Long oldestOutstandingSeqNum = getOldestOutstandingSeqNum();
+    // Get the set of all log files whose last sequence number is smaller than
+    // the oldest edit's sequence number.
+    TreeSet<Long> sequenceNumbers =
+      new TreeSet<Long>(this.outputfiles.headMap(
+        (Long.valueOf(oldestOutstandingSeqNum.longValue()))).keySet());
+    // Now remove old log files (if any)
+    int logsToRemove = sequenceNumbers.size();
+    if (logsToRemove > 0) {
+      if (LOG.isDebugEnabled()) {
+        // Find associated region; helps debugging.
+        byte [] oldestRegion = getOldestRegion(oldestOutstandingSeqNum);
+        LOG.debug("Found " + logsToRemove + " hlogs to remove" +
+          " out of total " + this.outputfiles.size() + ";" +
+          " oldest outstanding sequenceid is " + oldestOutstandingSeqNum +
+          " from region " + Bytes.toString(oldestRegion));
+      }
+      for (Long seq : sequenceNumbers) {
+        archiveLogFile(this.outputfiles.remove(seq), seq);
+      }
+    }
+
+    // If too many log files, figure which regions we need to flush.
+    // Array is an array of encoded region names.
+    byte [][] regions = null;
+    int logCount = this.outputfiles.size();
+    if (logCount > this.maxLogs && this.outputfiles != null &&
+        this.outputfiles.size() > 0) {
+      // This is an array of encoded region names.
+      regions = findMemstoresWithEditsEqualOrOlderThan(this.outputfiles.firstKey(),
+        this.lastSeqWritten);
+      if (regions != null) {
+        StringBuilder sb = new StringBuilder();
+        for (int i = 0; i < regions.length; i++) {
+          if (i > 0) sb.append(", ");
+          sb.append(Bytes.toStringBinary(regions[i]));
+        }
+        LOG.info("Too many hlogs: logs=" + logCount + ", maxlogs=" +
+           this.maxLogs + "; forcing flush of " + regions.length + " region(s): " +
+           sb.toString());
+      }
+    }
+    return regions;
+  }
+
+  /**
+   * Return regions (memstores) that have edits that are equal or less than
+   * the passed <code>oldestWALseqid</code>.
+   * @param oldestWALseqid
+   * @param regionsToSeqids
+   * @return All regions whose seqid is equal to or older than
+   * <code>oldestWALseqid</code> (not necessarily in order).  Null if no
+   * regions found.
+   */
+  static byte [][] findMemstoresWithEditsEqualOrOlderThan(final long oldestWALseqid,
+      final Map<byte [], Long> regionsToSeqids) {
+    // This method is static so it can be unit tested more easily.
+    List<byte []> regions = null;
+    for (Map.Entry<byte [], Long> e: regionsToSeqids.entrySet()) {
+      if (e.getValue().longValue() <= oldestWALseqid) {
+        if (regions == null) regions = new ArrayList<byte []>();
+        regions.add(e.getKey());
+      }
+    }
+    return regions == null?
+      null: regions.toArray(new byte [][] {HConstants.EMPTY_BYTE_ARRAY});
+  }
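+
+  /*
+   * Example: with oldestWALseqid = 100 and regionsToSeqids = {A=90, B=100,
+   * C=150}, the regions returned are A and B; C still has edits newer than
+   * the oldest WAL and so does not need a flush yet.
+   */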
+
+  /*
+   * @return Logs older than this id are safe to remove.
+   */
+  private Long getOldestOutstandingSeqNum() {
+    return Collections.min(this.lastSeqWritten.values());
+  }
+
+  /**
+   * @param oldestOutstandingSeqNum
+   * @return (Encoded) name of oldest outstanding region.
+   */
+  private byte [] getOldestRegion(final Long oldestOutstandingSeqNum) {
+    byte [] oldestRegion = null;
+    for (Map.Entry<byte [], Long> e: this.lastSeqWritten.entrySet()) {
+      if (e.getValue().longValue() == oldestOutstandingSeqNum.longValue()) {
+        oldestRegion = e.getKey();
+        break;
+      }
+    }
+    return oldestRegion;
+  }
+
+  /*
+   * Cleans up the current writer, closing it and adding it to outputfiles.
+   * Presumes we're operating inside an updateLock scope.
+   * @return Path to current writer or null if none.
+   * @throws IOException
+   */
+  private Path cleanupCurrentWriter(final long currentfilenum)
+  throws IOException {
+    Path oldFile = null;
+    if (this.writer != null) {
+      // Close the current writer, get a new one.
+      try {
+        this.writer.close();
+      } catch (IOException e) {
+        // Failed close of log file.  Means we're losing edits.  For now,
+        // shut ourselves down to minimize loss.  Alternative is to try and
+        // keep going.  See HBASE-930.
+        FailedLogCloseException flce =
+          new FailedLogCloseException("#" + currentfilenum);
+        flce.initCause(e);
+        throw flce;
+      }
+      if (currentfilenum >= 0) {
+        oldFile = computeFilename(currentfilenum);
+        this.outputfiles.put(Long.valueOf(this.logSeqNum.get()), oldFile);
+      }
+    }
+    return oldFile;
+  }
+
+  private void archiveLogFile(final Path p, final Long seqno) throws IOException {
+    Path newPath = getHLogArchivePath(this.oldLogDir, p);
+    LOG.info("moving old hlog file " + FSUtils.getPath(p) +
+      " whose highest sequenceid is " + seqno + " to " +
+      FSUtils.getPath(newPath));
+    if (!this.fs.rename(p, newPath)) {
+      throw new IOException("Unable to rename " + p + " to " + newPath);
+    }
+  }
+
+  /**
+   * This is a convenience method that computes a new filename using the
+   * current HLog file-number.
+   * @return Path
+   */
+   */
+  protected Path computeFilename() {
+    return computeFilename(this.filenum);
+  }
+
+  /**
+   * This is a convenience method that computes a new filename with a given
+   * file-number.
+   * @param filenum to use
+   * @return Path
+   */
+  protected Path computeFilename(long filenum) {
+    if (filenum < 0) {
+      throw new RuntimeException("hlog file number can't be < 0");
+    }
+    return new Path(dir, prefix + "." + filenum);
+  }
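+
+  /*
+   * For illustration (values hypothetical): with a prefix of
+   * "myserver%3A60020" and filenum 1290123456789, computeFilename returns a
+   * path like <log dir>/myserver%3A60020.1290123456789.  The actual prefix
+   * depends on how this HLog instance was constructed.
+   */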
+
+  /**
+   * Shut down the log and delete the log directory
+   *
+   * @throws IOException
+   */
+  public void closeAndDelete() throws IOException {
+    close();
+    FileStatus[] files = fs.listStatus(this.dir);
+    for(FileStatus file : files) {
+      Path p = getHLogArchivePath(this.oldLogDir, file.getPath());
+      if (!fs.rename(file.getPath(),p)) {
+        throw new IOException("Unable to rename " + file.getPath() + " to " + p);
+      }
+    }
+    LOG.debug("Moved " + files.length + " log files to " +
+        FSUtils.getPath(this.oldLogDir));
+    if (!fs.delete(dir, true)) {
+      LOG.info("Unable to delete " + dir);
+    }
+  }
+
+  /**
+   * Shut down the log.
+   *
+   * @throws IOException
+   */
+  public void close() throws IOException {
+    try {
+      logSyncerThread.interrupt();
+      // Make sure we synced everything
+      logSyncerThread.join(this.optionalFlushInterval*2);
+    } catch (InterruptedException e) {
+      LOG.error("Exception while waiting for syncer thread to die", e);
+    }
+
+    cacheFlushLock.lock();
+    try {
+      // Tell our listeners that the log is closing
+      if (!this.listeners.isEmpty()) {
+        for (WALObserver i : this.listeners) {
+          i.logCloseRequested();
+        }
+      }
+      synchronized (updateLock) {
+        this.closed = true;
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("closing hlog writer in " + this.dir.toString());
+        }
+        this.writer.close();
+      }
+    } finally {
+      cacheFlushLock.unlock();
+    }
+  }
+
+  /** Append an entry to the log.
+   *
+   * @param regionInfo
+   * @param logEdit
+   * @param now Time of this edit write.
+   * @param isMetaRegion
+   * @throws IOException
+   */
+  public void append(HRegionInfo regionInfo, WALEdit logEdit,
+    final long now,
+    final boolean isMetaRegion)
+  throws IOException {
+    byte [] regionName = regionInfo.getEncodedNameAsBytes();
+    byte [] tableName = regionInfo.getTableDesc().getName();
+    this.append(regionInfo, makeKey(regionName, tableName, -1, now), logEdit);
+  }
+
+  /**
+   * @param regionName
+   * @param tableName
+   * @param seqnum
+   * @param now
+   * @return New log key.
+   */
+  protected HLogKey makeKey(byte[] regionName, byte[] tableName, long seqnum, long now) {
+    return new HLogKey(regionName, tableName, seqnum, now);
+  }
+
+  /** Append an entry to the log.
+   *
+   * @param regionInfo
+   * @param logEdit
+   * @param logKey
+   * @throws IOException
+   */
+  public void append(HRegionInfo regionInfo, HLogKey logKey, WALEdit logEdit)
+  throws IOException {
+    if (this.closed) {
+      throw new IOException("Cannot append; log is closed");
+    }
+    synchronized (updateLock) {
+      long seqNum = obtainSeqNum();
+      logKey.setLogSeqNum(seqNum);
+      // The 'lastSeqWritten' map holds the sequence number of the oldest
+      // write for each region (i.e. the first edit added to the particular
+      // memstore). When the cache is flushed, the entry for the
+      // region being flushed is removed if the sequence number of the flush
+      // is greater than or equal to the value in lastSeqWritten.
+      this.lastSeqWritten.putIfAbsent(regionInfo.getEncodedNameAsBytes(),
+        Long.valueOf(seqNum));
+      doWrite(regionInfo, logKey, logEdit);
+      this.numEntries.incrementAndGet();
+    }
+
+    // Sync if catalog region, and if not then check if that table supports
+    // deferred log flushing
+    if (regionInfo.isMetaRegion() ||
+        !regionInfo.getTableDesc().isDeferredLogFlush()) {
+      // sync txn to file system
+      this.sync();
+    }
+  }
+
+  /**
+   * Append a set of edits to the log. Log edits are keyed by (encoded)
+   * regionName, rowname, and log-sequence-id.
+   *
+   * Later, if we sort by these keys, we obtain all the relevant edits for a
+   * given key-range of the HRegion (TODO). Any edits that do not have a
+   * matching COMPLETE_CACHEFLUSH message can be discarded.
+   *
+   * <p>
+   * Logs cannot be restarted once closed, or once the HLog process dies. Each
+   * time the HLog starts, it must create a new log. This means that other
+   * systems should process the log appropriately upon each startup (and prior
+   * to initializing HLog).
+   *
+   * synchronized prevents appends during the completion of a cache flush or for
+   * the duration of a log roll.
+   *
+   * @param info
+   * @param tableName
+   * @param edits
+   * @param now
+   * @throws IOException
+   */
+  public void append(HRegionInfo info, byte [] tableName, WALEdit edits,
+    final long now)
+  throws IOException {
+    if (edits.isEmpty()) return;
+    if (this.closed) {
+      throw new IOException("Cannot append; log is closed");
+    }
+    synchronized (this.updateLock) {
+      long seqNum = obtainSeqNum();
+      // The 'lastSeqWritten' map holds the sequence number of the oldest
+      // write for each region (i.e. the first edit added to the particular
+      // memstore).  When the cache is flushed, the entry for the
+      // region being flushed is removed if the sequence number of the flush
+      // is greater than or equal to the value in lastSeqWritten.
+      // Use the encoded name.  It's shorter, guaranteed unique and a subset
+      // of the actual name.
+      byte [] hriKey = info.getEncodedNameAsBytes();
+      this.lastSeqWritten.putIfAbsent(hriKey, seqNum);
+      HLogKey logKey = makeKey(hriKey, tableName, seqNum, now);
+      doWrite(info, logKey, edits);
+      this.numEntries.incrementAndGet();
+    }
+    // Sync if catalog region, and if not then check if that table supports
+    // deferred log flushing
+    if (info.isMetaRegion() ||
+        !info.getTableDesc().isDeferredLogFlush()) {
+      // sync txn to file system
+      this.sync();
+    }
+  }
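+
+  /*
+   * Sketch of how the deferred-log-flush decision above is driven from the
+   * table side.  Illustrative only; it assumes the HTableDescriptor setter
+   * paired with the isDeferredLogFlush() check used here, and the table name
+   * is hypothetical:
+   *
+   *   HTableDescriptor htd = new HTableDescriptor("my_table");
+   *   htd.setDeferredLogFlush(true);  // edits are synced by LogSyncer,
+   *                                   // not on every append
+   *
+   * Tables that leave this off (and catalog regions) get a sync() on every
+   * append, as in the code above.
+   */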
+
+  /**
+   * This thread is responsible for calling syncFs and buffering up the
+   * writers while it happens.
+   */
+   class LogSyncer extends Thread {
+
+    private final long optionalFlushInterval;
+
+    private boolean syncerShuttingDown = false;
+
+    LogSyncer(long optionalFlushInterval) {
+      this.optionalFlushInterval = optionalFlushInterval;
+    }
+
+    @Override
+    public void run() {
+      try {
+        // awaiting with a timeout doesn't always
+        // throw exceptions on interrupt
+        while(!this.isInterrupted()) {
+
+          Thread.sleep(this.optionalFlushInterval);
+          sync();
+        }
+      } catch (IOException e) {
+        LOG.error("Error while syncing, requesting close of hlog ", e);
+        requestLogRoll();
+      } catch (InterruptedException e) {
+        LOG.debug(getName() + " interrupted while waiting for sync requests");
+      } finally {
+        syncerShuttingDown = true;
+        LOG.info(getName() + " exiting");
+      }
+    }
+  }
+
+  public void sync() throws IOException {
+    synchronized (this.updateLock) {
+      if (this.closed) {
+        return;
+      }
+    }
+    try {
+      long now = System.currentTimeMillis();
+      // Done in parallel for all writer threads, thanks to HDFS-895
+      this.writer.sync();
+      synchronized (this.updateLock) {
+        syncTime += System.currentTimeMillis() - now;
+        syncOps++;
+        if (!logRollRequested) {
+          checkLowReplication();
+          if (this.writer.getLength() > this.logrollsize) {
+            requestLogRoll();
+          }
+        }
+      }
+
+    } catch (IOException e) {
+      LOG.fatal("Could not append. Requesting close of hlog", e);
+      requestLogRoll();
+      throw e;
+    }
+  }
+
+  private void checkLowReplication() {
+    // if the number of replicas in HDFS has fallen below the initial
+    // value, then roll logs.
+    try {
+      int numCurrentReplicas = getLogReplication();
+      if (numCurrentReplicas != 0 &&
+          numCurrentReplicas < this.initialReplication) {
+        LOG.warn("HDFS pipeline error detected. " +
+            "Found " + numCurrentReplicas + " replicas but expecting " +
+            this.initialReplication + " replicas. " +
+            " Requesting close of hlog.");
+        requestLogRoll();
+        logRollRequested = true;
+      }
+    } catch (Exception e) {
+      LOG.warn("Unable to invoke DFSOutputStream.getNumCurrentReplicas" + e +
+          " still proceeding ahead...");
+    }
+  }
+
+  /**
+   * This method gets the datanode replication count for the current HLog.
+   *
+   * If the pipeline isn't started yet or is empty, you will get the default
+   * replication factor.  Therefore, if this function returns 0, it means you
+   * are not properly running with the HDFS-826 patch.
+   * @throws IllegalArgumentException
+   * @throws IllegalAccessException
+   * @throws InvocationTargetException
+   */
+  int getLogReplication()
+  throws IllegalArgumentException, IllegalAccessException, InvocationTargetException {
+    if (this.getNumCurrentReplicas != null && this.hdfs_out != null) {
+      Object repl = this.getNumCurrentReplicas.invoke(this.hdfs_out, NO_ARGS);
+      if (repl instanceof Integer) {
+        return ((Integer)repl).intValue();
+      }
+    }
+    return 0;
+  }
+
+  boolean canGetCurReplicas() {
+    return this.getNumCurrentReplicas != null;
+  }
+
+  public void hsync() throws IOException {
+    // Not yet implemented up in hdfs so just call hflush.
+    sync();
+  }
+
+  private void requestLogRoll() {
+    if (!this.listeners.isEmpty()) {
+      for (WALObserver i: this.listeners) {
+        i.logRollRequested();
+      }
+    }
+  }
+
+  protected void doWrite(HRegionInfo info, HLogKey logKey, WALEdit logEdit)
+  throws IOException {
+    if (!this.enabled) {
+      return;
+    }
+    if (!this.listeners.isEmpty()) {
+      for (WALObserver i: this.listeners) {
+        i.visitLogEntryBeforeWrite(info, logKey, logEdit);
+      }
+    }
+    try {
+      long now = System.currentTimeMillis();
+      this.writer.append(new HLog.Entry(logKey, logEdit));
+      long took = System.currentTimeMillis() - now;
+      writeTime += took;
+      writeOps++;
+      if (took > 1000) {
+        long len = 0;
+        for(KeyValue kv : logEdit.getKeyValues()) { 
+          len += kv.getLength(); 
+        }
+        LOG.warn(String.format(
+          "%s took %d ms appending an edit to hlog; editcount=%d, len~=%s",
+          Thread.currentThread().getName(), took, this.numEntries.get(), 
+          StringUtils.humanReadableInt(len)));
+      }
+    } catch (IOException e) {
+      LOG.fatal("Could not append. Requesting close of hlog", e);
+      requestLogRoll();
+      throw e;
+    }
+  }
+
+  /** @return How many items have been added to the log */
+  int getNumEntries() {
+    return numEntries.get();
+  }
+
+  /**
+   * Obtain a log sequence number.
+   */
+  private long obtainSeqNum() {
+    return this.logSeqNum.incrementAndGet();
+  }
+
+  /** @return the number of log files in use */
+  int getNumLogFiles() {
+    return outputfiles.size();
+  }
+
+  /**
+   * By acquiring a log sequence ID, we can allow log messages to continue while
+   * we flush the cache.
+   *
+   * Acquire a lock so that we do not roll the log between the start and
+   * completion of a cache-flush. Otherwise the log-seq-id for the flush will
+   * not appear in the correct logfile.
+   *
+   * @return sequence ID to pass
+   * {@link #completeCacheFlush(byte[], byte[], long, boolean)}
+   * @see #completeCacheFlush(byte[], byte[], long, boolean)
+   * @see #abortCacheFlush()
+   */
+  public long startCacheFlush() {
+    this.cacheFlushLock.lock();
+    return obtainSeqNum();
+  }
+
+  /**
+   * Complete the cache flush
+   *
+   * Protected by cacheFlushLock
+   *
+   * @param encodedRegionName
+   * @param tableName
+   * @param logSeqId
+   * @param isMetaRegion
+   * @throws IOException
+   */
+  public void completeCacheFlush(final byte [] encodedRegionName,
+      final byte [] tableName, final long logSeqId, final boolean isMetaRegion)
+  throws IOException {
+    try {
+      if (this.closed) {
+        return;
+      }
+      synchronized (updateLock) {
+        long now = System.currentTimeMillis();
+        WALEdit edit = completeCacheFlushLogEdit();
+        HLogKey key = makeKey(encodedRegionName, tableName, logSeqId, now);
+        this.writer.append(new Entry(key, edit));
+        writeTime += System.currentTimeMillis() - now;
+        writeOps++;
+        this.numEntries.incrementAndGet();
+        Long seq = this.lastSeqWritten.get(encodedRegionName);
+        if (seq != null && logSeqId >= seq.longValue()) {
+          this.lastSeqWritten.remove(encodedRegionName);
+        }
+      }
+      // sync txn to file system
+      this.sync();
+
+    } finally {
+      this.cacheFlushLock.unlock();
+    }
+  }
+
+  private WALEdit completeCacheFlushLogEdit() {
+    KeyValue kv = new KeyValue(METAROW, METAFAMILY, null,
+      System.currentTimeMillis(), COMPLETE_CACHE_FLUSH);
+    WALEdit e = new WALEdit();
+    e.add(kv);
+    return e;
+  }
+
+  /**
+   * Abort a cache flush.
+   * Call if the flush fails. Note that the only recovery for an aborted flush
+   * currently is a restart of the regionserver so the snapshot content dropped
+   * by the failure gets restored to the memstore.
+   */
+  public void abortCacheFlush() {
+    this.cacheFlushLock.unlock();
+  }
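+
+  /*
+   * Illustrative flush protocol (a hedged sketch; the hlog, encodedRegionName
+   * and tableName variables stand in for the caller's state):
+   *
+   *   long seqid = hlog.startCacheFlush();          // takes cacheFlushLock
+   *   boolean flushed = false;
+   *   try {
+   *     // ... write the memstore snapshot out to a store file ...
+   *     flushed = true;
+   *   } finally {
+   *     if (flushed) {
+   *       // Appends the COMPLETE_CACHE_FLUSH marker and releases the lock.
+   *       hlog.completeCacheFlush(encodedRegionName, tableName, seqid, false);
+   *     } else {
+   *       hlog.abortCacheFlush();                    // just releases the lock
+   *     }
+   *   }
+   */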
+
+  /**
+   * @param family
+   * @return true if the column is a meta column
+   */
+  public static boolean isMetaFamily(byte [] family) {
+    return Bytes.equals(METAFAMILY, family);
+  }
+
+  @SuppressWarnings("unchecked")
+  public static Class<? extends HLogKey> getKeyClass(Configuration conf) {
+     return (Class<? extends HLogKey>)
+       conf.getClass("hbase.regionserver.hlog.keyclass", HLogKey.class);
+  }
+
+  public static HLogKey newKey(Configuration conf) throws IOException {
+    Class<? extends HLogKey> keyClass = getKeyClass(conf);
+    try {
+      return keyClass.newInstance();
+    } catch (InstantiationException e) {
+      throw new IOException("cannot create hlog key");
+    } catch (IllegalAccessException e) {
+      throw new IOException("cannot create hlog key");
+    }
+  }
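+
+  /*
+   * For illustration: the key class is pluggable through configuration.  A
+   * hedged sketch of wiring in a subclass (the class name below is made up;
+   * it must keep a public no-argument constructor because newKey instantiates
+   * it reflectively):
+   *
+   *   Configuration conf = HBaseConfiguration.create();
+   *   conf.setClass("hbase.regionserver.hlog.keyclass",
+   *     MyCustomHLogKey.class, HLogKey.class);
+   *   HLogKey key = HLog.newKey(conf);   // instance of MyCustomHLogKey
+   */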
+
+  /**
+   * Utility class that lets us keep track of the edit together with its key.
+   * Only used when splitting logs.
+   */
+  public static class Entry implements Writable {
+    private WALEdit edit;
+    private HLogKey key;
+
+    public Entry() {
+      edit = new WALEdit();
+      key = new HLogKey();
+    }
+
+    /**
+     * Constructor taking both the key and the edit.
+     * @param key log's key
+     * @param edit log's edit
+     */
+    public Entry(HLogKey key, WALEdit edit) {
+      super();
+      this.key = key;
+      this.edit = edit;
+    }
+    /**
+     * Gets the edit
+     * @return edit
+     */
+    public WALEdit getEdit() {
+      return edit;
+    }
+    /**
+     * Gets the key
+     * @return key
+     */
+    public HLogKey getKey() {
+      return key;
+    }
+
+    @Override
+    public String toString() {
+      return this.key + "=" + this.edit;
+    }
+
+    @Override
+    public void write(DataOutput dataOutput) throws IOException {
+      this.key.write(dataOutput);
+      this.edit.write(dataOutput);
+    }
+
+    @Override
+    public void readFields(DataInput dataInput) throws IOException {
+      this.key.readFields(dataInput);
+      this.edit.readFields(dataInput);
+    }
+  }
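+
+  /*
+   * Entry is a plain Writable, so it can be round-tripped in tests or tooling.
+   * A hedged sketch (stream setup elided; dataOutput/dataInput are
+   * hypothetical):
+   *
+   *   HLog.Entry out = new HLog.Entry(key, edit);
+   *   out.write(dataOutput);       // serializes the key, then the edit
+   *   HLog.Entry in = new HLog.Entry();
+   *   in.readFields(dataInput);    // must read back in the same order
+   */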
+
+  /**
+   * Construct the HLog directory name
+   *
+   * @param info HServerInfo for server
+   * @return the HLog directory name
+   */
+  public static String getHLogDirectoryName(HServerInfo info) {
+    return getHLogDirectoryName(info.getServerName());
+  }
+
+  /**
+   * Construct the HLog directory name
+   *
+   * @param serverAddress
+   * @param startCode
+   * @return the HLog directory name
+   */
+  public static String getHLogDirectoryName(String serverAddress,
+      long startCode) {
+    if (serverAddress == null || serverAddress.length() == 0) {
+      return null;
+    }
+    return getHLogDirectoryName(
+        HServerInfo.getServerName(serverAddress, startCode));
+  }
+
+  /**
+   * Construct the HLog directory name
+   *
+   * @param serverName
+   * @return the HLog directory name
+   */
+  public static String getHLogDirectoryName(String serverName) {
+    StringBuilder dirName = new StringBuilder(HConstants.HREGION_LOGDIR_NAME);
+    dirName.append("/");
+    dirName.append(serverName);
+    return dirName.toString();
+  }
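+
+  /*
+   * For illustration (server name hypothetical): given
+   * "myhost.example.org,60020,1290123456789", the methods above return
+   * HREGION_LOGDIR_NAME + "/myhost.example.org,60020,1290123456789",
+   * i.e. a per-regionserver directory under the log root (typically ".logs").
+   */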
+
+  /**
+   * Get the directory we are making logs in.
+   * 
+   * @return dir
+   */
+  protected Path getDir() {
+    return dir;
+  }
+  
+  public static boolean validateHLogFilename(String filename) {
+    return pattern.matcher(filename).matches();
+  }
+
+  static Path getHLogArchivePath(Path oldLogDir, Path p) {
+    return new Path(oldLogDir, p.getName());
+  }
+
+  static String formatRecoveredEditsFileName(final long seqid) {
+    return String.format("%019d", seqid);
+  }
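+
+  /*
+   * For example, formatRecoveredEditsFileName(2332) returns
+   * "0000000000000002332"; zero-padding to 19 digits keeps recovered-edits
+   * file names sorting in sequence-id order.
+   */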
+
+  /**
+   * Returns sorted set of edit files made by wal-log splitter.
+   * @param fs
+   * @param regiondir
+   * @return Files in passed <code>regiondir</code> as a sorted set.
+   * @throws IOException
+   */
+  public static NavigableSet<Path> getSplitEditFilesSorted(final FileSystem fs,
+      final Path regiondir)
+  throws IOException {
+    Path editsdir = getRegionDirRecoveredEditsDir(regiondir);
+    FileStatus[] files = fs.listStatus(editsdir, new PathFilter() {
+      @Override
+      public boolean accept(Path p) {
+        boolean result = false;
+        try {
+          // Return files and only files that match the editfile names pattern.
+          // There can be files in this directory other than edit files.
+          // In particular, on error, we'll move aside the bad edit file giving
+          // it a timestamp suffix.  See moveAsideBadEditsFile.
+          Matcher m = EDITFILES_NAME_PATTERN.matcher(p.getName());
+          result = fs.isFile(p) && m.matches();
+        } catch (IOException e) {
+          LOG.warn("Failed isFile check on " + p);
+        }
+        return result;
+      }
+    });
+    NavigableSet<Path> filesSorted = new TreeSet<Path>();
+    if (files == null) return filesSorted;
+    for (FileStatus status: files) {
+      filesSorted.add(status.getPath());
+    }
+    return filesSorted;
+  }
+
+  /**
+   * Move aside a bad edits file.
+   * @param fs
+   * @param edits Edits file to move aside.
+   * @return The name of the moved aside file.
+   * @throws IOException
+   */
+  public static Path moveAsideBadEditsFile(final FileSystem fs,
+      final Path edits)
+  throws IOException {
+    Path moveAsideName = new Path(edits.getParent(), edits.getName() + "." +
+      System.currentTimeMillis());
+    if (!fs.rename(edits, moveAsideName)) {
+      LOG.warn("Rename failed from " + edits + " to " + moveAsideName);
+    }
+    return moveAsideName;
+  }
+
+  /**
+   * @param regiondir This regions directory in the filesystem.
+   * @return The directory that holds recovered edits files for the region
+   * <code>regiondir</code>
+   */
+  public static Path getRegionDirRecoveredEditsDir(final Path regiondir) {
+    return new Path(regiondir, RECOVERED_EDITS_DIR);
+  }
+
+  public static final long FIXED_OVERHEAD = ClassSize.align(
+    ClassSize.OBJECT + (5 * ClassSize.REFERENCE) +
+    ClassSize.ATOMIC_INTEGER + Bytes.SIZEOF_INT + (3 * Bytes.SIZEOF_LONG));
+
+  private static void usage() {
+    System.err.println("Usage: HLog <ARGS>");
+    System.err.println("Arguments:");
+    System.err.println(" --dump  Dump textual representation of passed one or more files");
+    System.err.println("         For example: HLog --dump hdfs://example.com:9000/hbase/.logs/MACHINE/LOGFILE");
+    System.err.println(" --split Split the passed directory of WAL logs");
+    System.err.println("         For example: HLog --split hdfs://example.com:9000/hbase/.logs/DIR");
+  }
+
+  private static void dump(final Configuration conf, final Path p)
+  throws IOException {
+    FileSystem fs = FileSystem.get(conf);
+    if (!fs.exists(p)) {
+      throw new FileNotFoundException(p.toString());
+    }
+    if (!fs.isFile(p)) {
+      throw new IOException(p + " is not a file");
+    }
+    Reader log = getReader(fs, p, conf);
+    try {
+      int count = 0;
+      HLog.Entry entry;
+      while ((entry = log.next()) != null) {
+        System.out.println("#" + count + ", pos=" + log.getPosition() + " " +
+          entry.toString());
+        count++;
+      }
+    } finally {
+      log.close();
+    }
+  }
+
+  private static void split(final Configuration conf, final Path p)
+  throws IOException {
+    FileSystem fs = FileSystem.get(conf);
+    if (!fs.exists(p)) {
+      throw new FileNotFoundException(p.toString());
+    }
+    final Path baseDir = new Path(conf.get(HConstants.HBASE_DIR));
+    final Path oldLogDir = new Path(baseDir, HConstants.HREGION_OLDLOGDIR_NAME);
+    if (!fs.getFileStatus(p).isDir()) {
+      throw new IOException(p + " is not a directory");
+    }
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(
+        conf, baseDir, p, oldLogDir, fs);
+    logSplitter.splitLog();
+  }
+
+  /**
+   * Pass one or more log file names and it will either dump out a text version
+   * on <code>stdout</code> or split the specified log files.
+   *
+   * @param args
+   * @throws IOException
+   */
+  public static void main(String[] args) throws IOException {
+    if (args.length < 2) {
+      usage();
+      System.exit(-1);
+    }
+    boolean dump = true;
+    if (args[0].compareTo("--dump") != 0) {
+      if (args[0].compareTo("--split") == 0) {
+        dump = false;
+      } else {
+        usage();
+        System.exit(-1);
+      }
+    }
+    Configuration conf = HBaseConfiguration.create();
+    for (int i = 1; i < args.length; i++) {
+      try {
+        conf.set("fs.default.name", args[i]);
+        conf.set("fs.defaultFS", args[i]);
+        Path logPath = new Path(args[i]);
+        if (dump) {
+          dump(conf, logPath);
+        } else {
+          split(conf, logPath);
+        }
+      } catch (Throwable t) {
+        t.printStackTrace(System.err);
+        System.exit(-1);
+      }
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
new file mode 100644
index 0000000..5cb31b4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogKey.java
@@ -0,0 +1,208 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.EOFException;
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * A Key for an entry in the change log.
+ *
+ * The log intermingles edits to many tables and rows, so each log entry
+ * identifies the appropriate table and row.  Within a table and row, they're
+ * also sorted.
+ *
+ * <p>Some Transactional edits (START, COMMIT, ABORT) will not have an
+ * associated row.
+ */
+public class HLogKey implements WritableComparable<HLogKey> {
+  //  The encoded region name.
+  private byte [] encodedRegionName;
+  private byte [] tablename;
+  private long logSeqNum;
+  // Time at which this edit was written.
+  private long writeTime;
+
+  private byte clusterId;
+
+  /** Writable constructor -- Do not use. */
+  public HLogKey() {
+    this(null, null, 0L, HConstants.LATEST_TIMESTAMP);
+  }
+
+  /**
+   * Create the log key!
+   * We maintain the tablename mainly for debugging purposes.
+   * A regionName is always a sub-table object.
+   *
+   * @param encodedRegionName Encoded name of the region as returned by
+   * <code>HRegionInfo#getEncodedNameAsBytes()</code>.
+   * @param tablename   - name of table
+   * @param logSeqNum   - log sequence number
+   * @param now Time at which this edit was written.
+   */
+  public HLogKey(final byte [] encodedRegionName, final byte [] tablename,
+      long logSeqNum, final long now) {
+    this.encodedRegionName = encodedRegionName;
+    this.tablename = tablename;
+    this.logSeqNum = logSeqNum;
+    this.writeTime = now;
+    this.clusterId = HConstants.DEFAULT_CLUSTER_ID;
+  }
+
+  /** @return encoded region name */
+  public byte [] getEncodedRegionName() {
+    return encodedRegionName;
+  }
+
+  /** @return table name */
+  public byte [] getTablename() {
+    return tablename;
+  }
+
+  /** @return log sequence number */
+  public long getLogSeqNum() {
+    return logSeqNum;
+  }
+
+  void setLogSeqNum(long logSeqNum) {
+    this.logSeqNum = logSeqNum;
+  }
+
+  /**
+   * @return the write time
+   */
+  public long getWriteTime() {
+    return this.writeTime;
+  }
+
+  /**
+   * Get the id of the original cluster
+   * @return Cluster id.
+   */
+  public byte getClusterId() {
+    return clusterId;
+  }
+
+  /**
+   * Set the cluster id of this key
+   * @param clusterId
+   */
+  public void setClusterId(byte clusterId) {
+    this.clusterId = clusterId;
+  }
+
+  @Override
+  public String toString() {
+    return Bytes.toString(tablename) + "/" + Bytes.toString(encodedRegionName) + "/" +
+      logSeqNum;
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if (this == obj) {
+      return true;
+    }
+    if (obj == null || getClass() != obj.getClass()) {
+      return false;
+    }
+    return compareTo((HLogKey)obj) == 0;
+  }
+
+  @Override
+  public int hashCode() {
+    int result = Bytes.hashCode(this.encodedRegionName);
+    result ^= this.logSeqNum;
+    result ^= this.writeTime;
+    result ^= this.clusterId;
+    return result;
+  }
+
+  public int compareTo(HLogKey o) {
+    int result = Bytes.compareTo(this.encodedRegionName, o.encodedRegionName);
+    if (result == 0) {
+      if (this.logSeqNum < o.logSeqNum) {
+        result = -1;
+      } else if (this.logSeqNum > o.logSeqNum) {
+        result = 1;
+      }
+      if (result == 0) {
+        if (this.writeTime < o.writeTime) {
+          result = -1;
+        } else if (this.writeTime > o.writeTime) {
+          result = 1;
+        }
+      }
+    }
+    return result;
+  }
+
+  /**
+   * Drop this instance's tablename byte array and instead
+   * hold a reference to the provided tablename. This is not
+   * meant to be a general purpose setter - it's only used
+   * to collapse references to conserve memory.
+   */
+  void internTableName(byte []tablename) {
+    // We should not use this as a setter - only to swap
+    // in a new reference to the same table name.
+    assert Bytes.equals(tablename, this.tablename);
+    this.tablename = tablename;
+  }
+
+  /**
+   * Drop this instance's region name byte array and instead
+   * hold a reference to the provided region name. This is not
+   * meant to be a general purpose setter - it's only used
+   * to collapse references to conserve memory.
+   */
+  void internEncodedRegionName(byte []encodedRegionName) {
+    // We should not use this as a setter - only to swap
+    // in a new reference to the same region name.
+    assert Bytes.equals(this.encodedRegionName, encodedRegionName);
+    this.encodedRegionName = encodedRegionName;
+  }
+
+  public void write(DataOutput out) throws IOException {
+    Bytes.writeByteArray(out, this.encodedRegionName);
+    Bytes.writeByteArray(out, this.tablename);
+    out.writeLong(this.logSeqNum);
+    out.writeLong(this.writeTime);
+    out.writeByte(this.clusterId);
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    this.encodedRegionName = Bytes.readByteArray(in);
+    this.tablename = Bytes.readByteArray(in);
+    this.logSeqNum = in.readLong();
+    this.writeTime = in.readLong();
+    try {
+      this.clusterId = in.readByte();
+    } catch(EOFException e) {
+      // Means it's an old key, just continue
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
new file mode 100644
index 0000000..af11fc7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
@@ -0,0 +1,872 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.apache.hadoop.hbase.util.FSUtils.recoverFileLease;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.text.ParseException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Entry;
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Reader;
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Writer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.io.MultipleIOException;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+
+/**
+ * This class is responsible for splitting up a bunch of regionserver commit log
+ * files that are no longer being written to, into new files, one per region,
+ * for the region to replay on startup.  The old log files are deleted when
+ * splitting is finished.
+ */
+public class HLogSplitter {
+
+  private static final String LOG_SPLITTER_IMPL = "hbase.hlog.splitter.impl";
+
+  /**
+   * Name of file that holds recovered edits written by the wal log splitting
+   * code, one per region
+   */
+  public static final String RECOVERED_EDITS = "recovered.edits";
+
+  
+  static final Log LOG = LogFactory.getLog(HLogSplitter.class);
+
+  private boolean hasSplit = false;
+  private long splitTime = 0;
+  private long splitSize = 0;
+
+
+  // Parameters for split process
+  protected final Path rootDir;
+  protected final Path srcDir;
+  protected final Path oldLogDir;
+  protected final FileSystem fs;
+  protected final Configuration conf;
+  
+  // Major subcomponents of the split process.
+  // These are separated into inner classes to make testing easier.
+  OutputSink outputSink;
+  EntryBuffers entryBuffers;
+
+  // If an exception is thrown by one of the other threads, it will be
+  // stored here.
+  protected AtomicReference<Throwable> thrown = new AtomicReference<Throwable>();
+
+  // Wait/notify for when data has been produced by the reader thread,
+  // consumed by the reader thread, or an exception occurred
+  Object dataAvailable = new Object();
+
+  
+  /**
+   * Create a new HLogSplitter using the given {@link Configuration} and the
+   * <code>hbase.hlog.splitter.impl</code> property to derive the instance
+   * class to use.
+   *
+   * @param conf
+   * @param rootDir hbase directory
+   * @param srcDir logs directory
+   * @param oldLogDir directory where processed logs are archived to
+   * @param fs FileSystem
+   */
+  public static HLogSplitter createLogSplitter(Configuration conf,
+      final Path rootDir, final Path srcDir,
+      Path oldLogDir, final FileSystem fs)  {
+
+    @SuppressWarnings("unchecked")
+    Class<? extends HLogSplitter> splitterClass = (Class<? extends HLogSplitter>) conf
+        .getClass(LOG_SPLITTER_IMPL, HLogSplitter.class);
+    try {
+       Constructor<? extends HLogSplitter> constructor =
+         splitterClass.getConstructor(
+          Configuration.class, // conf
+          Path.class, // rootDir
+          Path.class, // srcDir
+          Path.class, // oldLogDir
+          FileSystem.class); // fs
+      return constructor.newInstance(conf, rootDir, srcDir, oldLogDir, fs);
+    } catch (IllegalArgumentException e) {
+      throw new RuntimeException(e);
+    } catch (InstantiationException e) {
+      throw new RuntimeException(e);
+    } catch (IllegalAccessException e) {
+      throw new RuntimeException(e);
+    } catch (InvocationTargetException e) {
+      throw new RuntimeException(e);
+    } catch (SecurityException e) {
+      throw new RuntimeException(e);
+    } catch (NoSuchMethodException e) {
+      throw new RuntimeException(e);
+    }
+  }
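+
+  /*
+   * For illustration: the splitter implementation is pluggable.  A hedged
+   * sketch of swapping in a subclass (the class name is hypothetical; it must
+   * expose the five-argument constructor looked up above):
+   *
+   *   conf.setClass("hbase.hlog.splitter.impl",
+   *     MyInstrumentedHLogSplitter.class, HLogSplitter.class);
+   *   HLogSplitter splitter =
+   *     HLogSplitter.createLogSplitter(conf, rootDir, srcDir, oldLogDir, fs);
+   *   List<Path> splits = splitter.splitLog();
+   */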
+
+  public HLogSplitter(Configuration conf, Path rootDir, Path srcDir,
+      Path oldLogDir, FileSystem fs) {
+    this.conf = conf;
+    this.rootDir = rootDir;
+    this.srcDir = srcDir;
+    this.oldLogDir = oldLogDir;
+    this.fs = fs;
+    
+    entryBuffers = new EntryBuffers(
+        conf.getInt("hbase.regionserver.hlog.splitlog.buffersize",
+            128*1024*1024));
+    outputSink = new OutputSink();
+  }
+  
+  /**
+   * Split up a bunch of regionserver commit log files that are no longer being
+   * written to, into new files, one per region, for the region to replay on
+   * startup.  Delete the old log files when finished.
+   *
+   * @throws IOException if corrupted hlogs aren't tolerated
+   * @return the list of splits
+   */
+  public List<Path> splitLog()
+      throws IOException {
+    Preconditions.checkState(!hasSplit,
+        "An HLogSplitter instance may only be used once");
+    hasSplit = true;
+
+    long startTime = System.currentTimeMillis();
+    List<Path> splits = null;
+    if (!fs.exists(srcDir)) {
+      // Nothing to do
+      return splits;
+    }
+    FileStatus[] logfiles = fs.listStatus(srcDir);
+    if (logfiles == null || logfiles.length == 0) {
+      // Nothing to do
+      return splits;
+    }
+    LOG.info("Splitting " + logfiles.length + " hlog(s) in "
+        + srcDir.toString());
+    splits = splitLog(logfiles);
+    
+    splitTime = System.currentTimeMillis() - startTime;
+    LOG.info("hlog file splitting completed in " + splitTime +
+        " ms for " + srcDir.toString());
+    return splits;
+  }
+  
+  /**
+   * @return time that this split took
+   */
+  public long getTime() {
+    return this.splitTime;
+  }
+  
+  /**
+   * @return aggregate size of hlogs that were split
+   */
+  public long getSize() {
+    return this.splitSize;
+  }
+
+  /**
+   * @return a map from encoded region ID to the number of edits written out
+   * for that region.
+   */
+  Map<byte[], Long> getOutputCounts() {
+    Preconditions.checkState(hasSplit);
+    return outputSink.getOutputCounts();
+  }
+   
+  /**
+   * Splits the HLog edits in the given list of logfiles (that are a mix of edits
+   * on multiple regions) by region and then writes them out to per-region
+   * directories, in batches of <code>hbase.hlog.split.batch.size</code>.
+   *
+   * This process is split into multiple threads. In the main thread, we loop
+   * through the logs to be split. For each log, we:
+   * <ul>
+   *   <li> Recover it (take and drop HDFS lease) to ensure no other process can write</li>
+   *   <li> Read each edit (see {@link #parseHLog})</li>
+   *   <li> Mark as "processed" or "corrupt" depending on outcome</li>
+   * </ul>
+   * 
+   * Each edit is passed into the EntryBuffers instance, which takes care of
+   * memory accounting and splitting the edits by region.
+   * 
+   * The OutputSink object then manages N other WriterThreads which pull chunks
+   * of edits from EntryBuffers and write them to the output region directories.
+   * 
+   * After the process is complete, the log files are archived to a separate
+   * directory.
+   */
+  private List<Path> splitLog(final FileStatus[] logfiles) throws IOException {
+    List<Path> processedLogs = new ArrayList<Path>();
+    List<Path> corruptedLogs = new ArrayList<Path>();
+    List<Path> splits = null;
+
+    boolean skipErrors = conf.getBoolean("hbase.hlog.split.skip.errors", false);
+
+    splitSize = 0;
+
+    outputSink.startWriterThreads(entryBuffers);
+    
+    try {
+      int i = 0;
+      for (FileStatus log : logfiles) {
+       Path logPath = log.getPath();
+        long logLength = log.getLen();
+        splitSize += logLength;
+        LOG.debug("Splitting hlog " + (i++ + 1) + " of " + logfiles.length
+            + ": " + logPath + ", length=" + logLength);
+        try {
+          recoverFileLease(fs, logPath, conf);
+          parseHLog(log, entryBuffers, fs, conf);
+          processedLogs.add(logPath);
+        } catch (EOFException eof) {
+          // truncated files are expected if a RS crashes (see HBASE-2643)
+          LOG.info("EOF from hlog " + logPath + ".  continuing");
+          processedLogs.add(logPath);
+        } catch (IOException e) {
+          // If the IOE resulted from bad file format,
+          // then this problem is idempotent and retrying won't help
+          if (e.getCause() instanceof ParseException) {
+            LOG.warn("ParseException from hlog " + logPath + ".  continuing");
+            processedLogs.add(logPath);
+          } else {
+            if (skipErrors) {
+              LOG.info("Got while parsing hlog " + logPath +
+                ". Marking as corrupted", e);
+              corruptedLogs.add(logPath);
+            } else {
+              throw e;
+            }
+          }
+        }
+      }
+      if (fs.listStatus(srcDir).length > processedLogs.size()
+          + corruptedLogs.size()) {
+        throw new OrphanHLogAfterSplitException(
+            "Discovered orphan hlog after split. Maybe the "
+            + "HRegionServer was not dead when we started");
+      }
+      archiveLogs(srcDir, corruptedLogs, processedLogs, oldLogDir, fs, conf);      
+    } finally {
+      splits = outputSink.finishWritingAndClose();
+    }
+    return splits;
+  }
+
+  /**
+   * Moves processed logs to the oldLogDir after successful processing.  Moves
+   * corrupted logs (any log that couldn't be successfully parsed) to the
+   * corrupt dir (.corrupt) for later investigation.
+   *
+   * @param srcDir
+   * @param corruptedLogs
+   * @param processedLogs
+   * @param oldLogDir
+   * @param fs
+   * @param conf
+   * @throws IOException
+   */
+  private static void archiveLogs(
+      final Path srcDir,
+      final List<Path> corruptedLogs,
+      final List<Path> processedLogs, final Path oldLogDir,
+      final FileSystem fs, final Configuration conf) throws IOException {
+    final Path corruptDir = new Path(conf.get(HConstants.HBASE_DIR), conf.get(
+        "hbase.regionserver.hlog.splitlog.corrupt.dir", ".corrupt"));
+
+    if (!fs.mkdirs(corruptDir)) {
+      LOG.info("Unable to mkdir " + corruptDir);
+    }
+    fs.mkdirs(oldLogDir);
+
+    for (Path corrupted : corruptedLogs) {
+      Path p = new Path(corruptDir, corrupted.getName());
+      if (!fs.rename(corrupted, p)) { 
+        LOG.info("Unable to move corrupted log " + corrupted + " to " + p);
+      } else {
+        LOG.info("Moving corrupted log " + corrupted + " to " + p);
+      }
+    }
+
+    for (Path p : processedLogs) {
+      Path newPath = HLog.getHLogArchivePath(oldLogDir, p);
+      if (!fs.rename(p, newPath)) {
+        LOG.info("Unable to move  " + p + " to " + newPath);
+      } else {
+        LOG.info("Archived processed log " + p + " to " + newPath);
+      }
+    }
+    
+    if (!fs.delete(srcDir, true)) {
+      throw new IOException("Unable to delete src dir: " + srcDir);
+    }
+  }
+
+  /**
+   * Path to a file under RECOVERED_EDITS_DIR directory of the region found in
+   * <code>logEntry</code> named for the sequenceid in the passed
+   * <code>logEntry</code>: e.g. /hbase/some_table/2323432434/recovered.edits/2332.
+   * This method also ensures existence of RECOVERED_EDITS_DIR under the region
+   * creating it if necessary.
+   * @param fs
+   * @param logEntry
+   * @param rootDir HBase root dir.
+   * @return Path to file into which to dump split log edits.
+   * @throws IOException
+   */
+  static Path getRegionSplitEditsPath(final FileSystem fs,
+      final Entry logEntry, final Path rootDir) throws IOException {
+    Path tableDir = HTableDescriptor.getTableDir(rootDir, logEntry.getKey()
+        .getTablename());
+    Path regiondir = HRegion.getRegionDir(tableDir,
+        Bytes.toString(logEntry.getKey().getEncodedRegionName()));
+    if (!fs.exists(regiondir)) {
+      LOG.info("This region's directory doesn't exist: "
+          + regiondir.toString() + ". It is very likely that it was" +
+          " already split so it's safe to discard those edits.");
+      return null;
+    }
+    Path dir = HLog.getRegionDirRecoveredEditsDir(regiondir);
+    if (!fs.exists(dir)) {
+      if (!fs.mkdirs(dir)) LOG.warn("mkdir failed on " + dir);
+    }
+    return new Path(dir, formatRecoveredEditsFileName(logEntry.getKey()
+        .getLogSeqNum()));
+  }
+
+  static String formatRecoveredEditsFileName(final long seqid) {
+    return String.format("%019d", seqid);
+  }
+  
+  /*
+   * Parse a single hlog and push its edits into the passed entryBuffers.
+   *
+   * @param logfile to split
+   * @param entryBuffers buffer to which each parsed edit is appended
+   * @param fs the filesystem
+   * @param conf the configuration
+   * @throws IOException if hlog is corrupted, or can't be opened
+   */
+  private void parseHLog(final FileStatus logfile,
+      final EntryBuffers entryBuffers, final FileSystem fs,
+      final Configuration conf)
+  throws IOException {
+    // Check for possibly empty file. With appends, currently Hadoop reports a
+    // zero length even if the file has been sync'd. Revisit if HDFS-376 or
+    // HDFS-878 is committed.
+    long length = logfile.getLen();
+    if (length <= 0) {
+      LOG.warn("File " + logfile.getPath() + " might be still open, length is 0");
+    }
+    Path path = logfile.getPath();
+    Reader in;
+    int editsCount = 0;
+    try {
+      in = getReader(fs, path, conf);
+    } catch (EOFException e) {
+      if (length <= 0) {
+        // TODO should we ignore an empty, not-last log file if skip.errors is false?
+        // Either way, the caller should decide what to do, e.g. ignore it if this
+        // is the last log in the sequence.
+        // TODO is this scenario still possible if the log has been recovered (i.e. closed)?
+        LOG.warn("Could not open " + path + " for reading. File is empty: " + e);
+        return;
+      } else {
+        throw e;
+      }
+    }
+    try {
+      Entry entry;
+      while ((entry = in.next()) != null) {
+        entryBuffers.appendEntry(entry);
+        editsCount++;
+      }
+    } catch (InterruptedException ie) {
+      throw new RuntimeException(ie);
+    } finally {
+      LOG.debug("Pushed=" + editsCount + " entries from " + path);
+      try {
+        if (in != null) {
+          in.close();
+        }
+      } catch (IOException e) {
+        LOG.warn("Close log reader in finally threw exception -- continuing",
+                 e);
+      }
+    }
+  }
+
+  private void writerThreadError(Throwable t) {
+    thrown.compareAndSet(null, t);
+  }
+  
+  /**
+   * Check for errors in the writer threads. If any is found, rethrow it.
+   */
+  private void checkForErrors() throws IOException {
+    Throwable thrown = this.thrown.get();
+    if (thrown == null) return;
+    if (thrown instanceof IOException) {
+      throw (IOException)thrown;
+    } else {
+      throw new RuntimeException(thrown);
+    }
+  }
+  /**
+   * Create a new {@link Writer} for writing log splits.
+   */
+  protected Writer createWriter(FileSystem fs, Path logfile, Configuration conf)
+      throws IOException {
+    return HLog.createWriter(fs, logfile, conf);
+  }
+
+  /**
+   * Create a new {@link Reader} for reading logs to split.
+   */
+  protected Reader getReader(FileSystem fs, Path curLogFile, Configuration conf)
+      throws IOException {
+    return HLog.getReader(fs, curLogFile, conf);
+  }
+
+
+  /**
+   * Class which accumulates edits and separates them into a buffer per region
+   * while simultaneously accounting RAM usage. Blocks if the RAM usage crosses
+   * a predefined threshold.
+   * 
+   * Writer threads then pull region-specific buffers from this class.
+   */
+  class EntryBuffers {
+    Map<byte[], RegionEntryBuffer> buffers =
+      new TreeMap<byte[], RegionEntryBuffer>(Bytes.BYTES_COMPARATOR);
+    
+    /* Track which regions are currently in the middle of writing. We don't allow
+       an IO thread to pick up bytes from a region if we're already writing
+       data for that region in a different IO thread. */ 
+    Set<byte[]> currentlyWriting = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+
+    long totalBuffered = 0;
+    long maxHeapUsage;
+    
+    EntryBuffers(long maxHeapUsage) {
+      this.maxHeapUsage = maxHeapUsage;
+    }
+
+    /**
+     * Append a log entry into the corresponding region buffer.
+     * Blocks if the total heap usage has crossed the specified threshold.
+     * 
+     * @throws InterruptedException
+     * @throws IOException 
+     */
+    void appendEntry(Entry entry) throws InterruptedException, IOException {
+      HLogKey key = entry.getKey();
+      
+      RegionEntryBuffer buffer;
+      synchronized (this) {
+        buffer = buffers.get(key.getEncodedRegionName());
+        if (buffer == null) {
+          buffer = new RegionEntryBuffer(key.getTablename(), key.getEncodedRegionName());
+          buffers.put(key.getEncodedRegionName(), buffer);
+        }
+        long incrHeap = buffer.appendEntry(entry);
+        totalBuffered += incrHeap;
+      }
+
+      // If we crossed the chunk threshold, wait for more space to be available
+      synchronized (dataAvailable) {
+        while (totalBuffered > maxHeapUsage && thrown == null) {
+          LOG.debug("Used " + totalBuffered + " bytes of buffered edits, waiting for IO threads...");
+          dataAvailable.wait(3000);
+        }
+        dataAvailable.notifyAll();
+      }
+      checkForErrors();
+    }
+
+    synchronized RegionEntryBuffer getChunkToWrite() {
+      long biggestSize=0;
+      byte[] biggestBufferKey=null;
+
+      for (Map.Entry<byte[], RegionEntryBuffer> entry : buffers.entrySet()) {
+        long size = entry.getValue().heapSize();
+        if (size > biggestSize && !currentlyWriting.contains(entry.getKey())) {
+          biggestSize = size;
+          biggestBufferKey = entry.getKey();
+        }
+      }
+      if (biggestBufferKey == null) {
+        return null;
+      }
+
+      RegionEntryBuffer buffer = buffers.remove(biggestBufferKey);
+      currentlyWriting.add(biggestBufferKey);
+      return buffer;
+    }
+
+    void doneWriting(RegionEntryBuffer buffer) {
+      synchronized (this) {
+        boolean removed = currentlyWriting.remove(buffer.encodedRegionName);
+        assert removed;
+      }
+      long size = buffer.heapSize();
+
+      synchronized (dataAvailable) {
+        totalBuffered -= size;
+        // We may unblock writers
+        dataAvailable.notifyAll();
+      }
+    }
+    
+    synchronized boolean isRegionCurrentlyWriting(byte[] region) {
+      return currentlyWriting.contains(region);
+    }
+  }
+
+  /**
+   * A buffer of some number of edits for a given region.
+   * This accumulates edits and also provides a memory optimization in order to
+   * share a single byte array instance for the table and region name.
+   * Also tracks memory usage of the accumulated edits.
+   */
+  static class RegionEntryBuffer implements HeapSize {
+    long heapInBuffer = 0;
+    List<Entry> entryBuffer;
+    byte[] tableName;
+    byte[] encodedRegionName;
+
+    RegionEntryBuffer(byte[] table, byte[] region) {
+      this.tableName = table;
+      this.encodedRegionName = region;
+      this.entryBuffer = new LinkedList<Entry>();
+    }
+
+    long appendEntry(Entry entry) {
+      internify(entry);
+      entryBuffer.add(entry);
+      long incrHeap = entry.getEdit().heapSize() +
+        ClassSize.align(2 * ClassSize.REFERENCE) + // HLogKey pointers
+        0; // TODO linkedlist entry
+      heapInBuffer += incrHeap;
+      return incrHeap;
+    }
+
+    private void internify(Entry entry) {
+      HLogKey k = entry.getKey();
+      k.internTableName(this.tableName);
+      k.internEncodedRegionName(this.encodedRegionName);
+    }
+
+    public long heapSize() {
+      return heapInBuffer;
+    }
+  }
+
+
+  class WriterThread extends Thread {
+    private volatile boolean shouldStop = false;
+    
+    WriterThread(int i) {
+      super("WriterThread-" + i);
+    }
+    
+    public void run()  {
+      try {
+        doRun();
+      } catch (Throwable t) {
+        LOG.error("Error in log splitting write thread", t);
+        writerThreadError(t);
+      }
+    }
+    
+    private void doRun() throws IOException {
+      LOG.debug("Writer thread " + this + ": starting");
+      while (true) {
+        RegionEntryBuffer buffer = entryBuffers.getChunkToWrite();
+        if (buffer == null) {
+          // No data currently available, wait on some more to show up
+          synchronized (dataAvailable) {
+            if (shouldStop) return;
+            try {
+              dataAvailable.wait(1000);
+            } catch (InterruptedException ie) {
+              if (!shouldStop) {
+                throw new RuntimeException(ie);
+              }
+            }
+          }
+          continue;
+        }
+        
+        assert buffer != null;
+        try {
+          writeBuffer(buffer);
+        } finally {
+          entryBuffers.doneWriting(buffer);
+        }
+      }
+    }
+       
+    private void writeBuffer(RegionEntryBuffer buffer) throws IOException {
+      List<Entry> entries = buffer.entryBuffer;      
+      if (entries.isEmpty()) {
+        LOG.warn(this.getName() + " got an empty buffer, skipping");
+        return;
+      }
+
+      WriterAndPath wap = null;
+      
+      long startTime = System.nanoTime();
+      try {
+        int editsCount = 0;
+
+        for (Entry logEntry : entries) {
+          if (wap == null) {
+            wap = outputSink.getWriterAndPath(logEntry);
+            if (wap == null) {
+              // getWriterAndPath decided we don't need to write these edits
+              // Message was already logged
+              return;
+            }
+          }
+          wap.w.append(logEntry);
+          editsCount++;
+        }
+        // Pass along summary statistics
+        wap.incrementEdits(editsCount);
+        wap.incrementNanoTime(System.nanoTime() - startTime);
+      } catch (IOException e) {
+        e = RemoteExceptionHandler.checkIOException(e);
+        LOG.fatal(this.getName() + " Got while writing log entry to log", e);
+        throw e;
+      }
+    }
+    
+    void finish() {
+      shouldStop = true;
+    }
+  }
+
+  /**
+   * Class that manages the output streams from the log splitting process.
+   */
+  class OutputSink {
+    private final Map<byte[], WriterAndPath> logWriters = Collections.synchronizedMap(
+          new TreeMap<byte[], WriterAndPath>(Bytes.BYTES_COMPARATOR));
+    private final List<WriterThread> writerThreads = Lists.newArrayList();
+    
+    /* Set of regions which we've decided should not output edits */
+    private final Set<byte[]> blacklistedRegions = Collections.synchronizedSet(
+        new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR));
+    
+    private boolean hasClosed = false;    
+    
+    /**
+     * Start the threads that will pump data from the entryBuffers
+     * to the output files.
+     * @return the list of started threads
+     */
+    synchronized void startWriterThreads(EntryBuffers entryBuffers) {
+      // More threads could potentially write faster at the expense
+      // of causing more disk seeks as the logs are split.
+      // Beyond a certain number of threads (probably around 3) the
+      // process will be bound on the reader in the current
+      // implementation anyway.
+      int numThreads = conf.getInt(
+          "hbase.regionserver.hlog.splitlog.writer.threads", 3);
+
+      for (int i = 0; i < numThreads; i++) {
+        WriterThread t = new WriterThread(i);
+        t.start();
+        writerThreads.add(t);
+      }
+    }
+    
+    List<Path> finishWritingAndClose() throws IOException {
+      LOG.info("Waiting for split writer threads to finish");
+      for (WriterThread t : writerThreads) {
+        t.finish();
+      }
+      for (WriterThread t: writerThreads) {
+        try {
+          t.join();
+        } catch (InterruptedException ie) {
+          throw new IOException(ie);
+        }
+        checkForErrors();
+      }
+      LOG.info("Split writers finished");
+      
+      return closeStreams();
+    }
+
+    /**
+     * Close all of the output streams.
+     * @return the list of paths written.
+     */
+    private List<Path> closeStreams() throws IOException {
+      Preconditions.checkState(!hasClosed);
+      
+      List<Path> paths = new ArrayList<Path>();
+      List<IOException> thrown = Lists.newArrayList();
+      
+      for (WriterAndPath wap : logWriters.values()) {
+        try {
+          wap.w.close();
+        } catch (IOException ioe) {
+          LOG.error("Couldn't close log at " + wap.p, ioe);
+          thrown.add(ioe);
+          continue;
+        }
+        paths.add(wap.p);
+        LOG.info("Closed path " + wap.p +" (wrote " + wap.editsWritten + " edits in "
+            + (wap.nanosSpent / 1000/ 1000) + "ms)");
+      }
+      if (!thrown.isEmpty()) {
+        throw MultipleIOException.createIOException(thrown);
+      }
+      
+      hasClosed = true;
+      return paths;
+    }
+
+    /**
+     * Get a writer and path for a log starting at the given entry.
+     * 
+     * This function is threadsafe so long as multiple threads are always
+     * acting on different regions.
+     * 
+     * @return null if this region shouldn't output any logs
+     */
+    WriterAndPath getWriterAndPath(Entry entry) throws IOException {
+    
+      byte region[] = entry.getKey().getEncodedRegionName();
+      WriterAndPath ret = logWriters.get(region);
+      if (ret != null) {
+        return ret;
+      }
+      
+      // If we already decided that this region doesn't get any output
+      // we don't need to check again.
+      if (blacklistedRegions.contains(region)) {
+        return null;
+      }
+      
+      // Need to create writer
+      Path regionedits = getRegionSplitEditsPath(fs,
+          entry, rootDir);
+      if (regionedits == null) {
+        // Edits dir doesn't exist
+        blacklistedRegions.add(region);
+        return null;
+      }
+      deletePreexistingOldEdits(regionedits);
+      Writer w = createWriter(fs, regionedits, conf);
+      ret = new WriterAndPath(regionedits, w);
+      logWriters.put(region, ret);
+      LOG.debug("Creating writer path=" + regionedits + " region="
+          + Bytes.toStringBinary(region));
+
+      return ret;
+    }
+
+    /**
+     * If the specified path exists, issue a warning and delete it.
+     */
+    private void deletePreexistingOldEdits(Path regionedits) throws IOException {
+      if (fs.exists(regionedits)) {
+        LOG.warn("Found existing old edits file. It could be the "
+            + "result of a previous failed split attempt. Deleting "
+            + regionedits + ", length="
+            + fs.getFileStatus(regionedits).getLen());
+        if (!fs.delete(regionedits, false)) {
+          LOG.warn("Failed delete of old " + regionedits);
+        }
+      }
+    }
+
+    /**
+     * @return a map from encoded region ID to the number of edits written out
+     * for that region.
+     */
+    private Map<byte[], Long> getOutputCounts() {
+      TreeMap<byte[], Long> ret = new TreeMap<byte[], Long>(
+          Bytes.BYTES_COMPARATOR);
+      synchronized (logWriters) {
+        for (Map.Entry<byte[], WriterAndPath> entry : logWriters.entrySet()) {
+          ret.put(entry.getKey(), entry.getValue().editsWritten);
+        }
+      }
+      return ret;
+    }
+  }
+
+  /**
+   *  Private data structure that wraps a Writer and its Path,
+   *  also collecting statistics about the data written to this
+   *  output.
+   */
+  private final static class WriterAndPath {
+    final Path p;
+    final Writer w;
+
+    /* Count of edits written to this path */
+    long editsWritten = 0;
+    /* Number of nanos spent writing to this log */
+    long nanosSpent = 0;
+
+    WriterAndPath(final Path p, final Writer w) {
+      this.p = p;
+      this.w = w;
+    }
+
+    void incrementEdits(int edits) {
+      editsWritten += edits;
+    }
+
+    void incrementNanoTime(long nanos) {
+      nanosSpent += nanos;
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/OrphanHLogAfterSplitException.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/OrphanHLogAfterSplitException.java
new file mode 100644
index 0000000..1c93def
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/OrphanHLogAfterSplitException.java
@@ -0,0 +1,40 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.IOException;
+
+public class OrphanHLogAfterSplitException extends IOException {
+
+  /**
+   * Create this exception without a message
+   */
+  public OrphanHLogAfterSplitException() {
+    super();
+  }
+
+  /**
+   * Create this exception with a message
+   * @param message why it failed
+   */
+  public OrphanHLogAfterSplitException(String message) {
+    super(message);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
new file mode 100644
index 0000000..497c5d0
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogReader.java
@@ -0,0 +1,252 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.FilterInputStream;
+import java.io.IOException;
+import java.lang.Class;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.Field;
+import java.lang.reflect.Method;
+ 
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.SequenceFile;
+
+public class SequenceFileLogReader implements HLog.Reader {
+  private static final Log LOG = LogFactory.getLog(SequenceFileLogReader.class);
+
+  /**
+   * Hack just to set the correct file length up in SequenceFile.Reader.
+   * See HADOOP-6307.  The below is all about setting the right length on the
+   * file we are reading.  fs.getFileStatus(file).getLen() is passed down to
+   * a private SequenceFile.Reader constructor.  That length can be stale, so
+   * we need the length as seen on the stream itself.  The below is ugly.  It
+   * makes getPos, the first time it's called, return an adjusted position
+   * -- i.e. tell a lie -- just so this line up in SF.Reader's constructor
+   * ends up with the right answer:
+   *
+   *         this.end = in.getPos() + length;
+   *
+   */
+  static class WALReader extends SequenceFile.Reader {
+
+    WALReader(final FileSystem fs, final Path p, final Configuration c)
+    throws IOException {
+      super(fs, p, c);
+
+    }
+
+    @Override
+    protected FSDataInputStream openFile(FileSystem fs, Path file,
+      int bufferSize, long length)
+    throws IOException {
+      return new WALReaderFSDataInputStream(super.openFile(fs, file,
+        bufferSize, length), length);
+    }
+
+    /**
+     * Override just so can intercept first call to getPos.
+     */
+    static class WALReaderFSDataInputStream extends FSDataInputStream {
+      private boolean firstGetPosInvocation = true;
+      private long length;
+
+      WALReaderFSDataInputStream(final FSDataInputStream is, final long l)
+      throws IOException {
+        super(is);
+        this.length = l;
+      }
+
+      // This section can be confusing.  It is specific to how HDFS works.
+      // Let me try to break it down.  This is the problem:
+      //
+      //  1. HDFS DataNodes update the NameNode about a filename's length 
+      //     on block boundaries or when a file is closed. Therefore, 
+      //     if an RS dies, then the NN's fs.getLength() can be out of date
+      //  2. this.in.available() would work, but it returns int &
+      //     therefore breaks for files > 2GB (happens on big clusters)
+      //  3. DFSInputStream.getFileLength() gets the actual length from the DNs
+      //  4. DFSInputStream is wrapped 2 levels deep : this.in.in
+      //
+      // So, here we adjust getPos() using getFileLength() so the
+      // SequenceFile.Reader constructor (aka: first invocation) comes out 
+      // with the correct end of the file:
+      //         this.end = in.getPos() + length;
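+      //
+      // A worked example with made-up numbers: if the NameNode still reports
+      // length = 64MB for an unclosed file whose DataNodes already hold
+      // 64MB + 4096 bytes, then adjust = 4096 and the single inflated getPos()
+      // below makes SF.Reader's "end" come out as the real file length rather
+      // than the stale NameNode value.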
+      @Override
+      public long getPos() throws IOException {
+        if (this.firstGetPosInvocation) {
+          this.firstGetPosInvocation = false;
+          long adjust = 0;
+
+          try {
+            Field fIn = FilterInputStream.class.getDeclaredField("in");
+            fIn.setAccessible(true);
+            Object realIn = fIn.get(this.in);
+            Method getFileLength = realIn.getClass().
+              getMethod("getFileLength", new Class<?> []{});
+            getFileLength.setAccessible(true);
+            long realLength = ((Long)getFileLength.
+              invoke(realIn, new Object []{})).longValue();
+            assert(realLength >= this.length);
+            adjust = realLength - this.length;
+          } catch(Exception e) {
+            SequenceFileLogReader.LOG.warn(
+              "Error while trying to get accurate file length.  " +
+              "Truncation / data loss may occur if RegionServers die.", e);
+          }
+
+          return adjust + super.getPos();
+        }
+        return super.getPos();
+      }
+    }
+  }
+
+  Configuration conf;
+  WALReader reader;
+  // Needed for logging exceptions
+  Path path;
+  int edit = 0;
+  long entryStart = 0;
+
+  protected Class<? extends HLogKey> keyClass;
+
+  /**
+   * Default constructor.
+   */
+  public SequenceFileLogReader() {
+  }
+
+  /**
+   * This constructor allows a specific HLogKey implementation to override that
+   * which would otherwise be chosen via configuration property.
+   * 
+   * @param keyClass
+   */
+  public SequenceFileLogReader(Class<? extends HLogKey> keyClass) {
+    this.keyClass = keyClass;
+  }
+
+
+  @Override
+  public void init(FileSystem fs, Path path, Configuration conf)
+      throws IOException {
+    this.conf = conf;
+    this.path = path;
+    reader = new WALReader(fs, path, conf);
+  }
+
+  @Override
+  public void close() throws IOException {
+    try {
+      reader.close();
+    } catch (IOException ioe) {
+      throw addFileInfoToException(ioe);
+    }
+  }
+
+  @Override
+  public HLog.Entry next() throws IOException {
+    return next(null);
+  }
+
+  @Override
+  public HLog.Entry next(HLog.Entry reuse) throws IOException {
+    this.entryStart = this.reader.getPosition();
+    HLog.Entry e = reuse;
+    if (e == null) {
+      HLogKey key;
+      if (keyClass == null) {
+        key = HLog.newKey(conf);
+      } else {
+        try {
+          key = keyClass.newInstance();
+        } catch (InstantiationException ie) {
+          throw new IOException(ie);
+        } catch (IllegalAccessException iae) {
+          throw new IOException(iae);
+        }
+      }
+      
+      WALEdit val = new WALEdit();
+      e = new HLog.Entry(key, val);
+    }
+    boolean b = false;
+    try {
+      b = this.reader.next(e.getKey(), e.getEdit());
+    } catch (IOException ioe) {
+      throw addFileInfoToException(ioe);
+    }
+    edit++;
+    return b? e: null;
+  }
+
+  @Override
+  public void seek(long pos) throws IOException {
+    try {
+      reader.seek(pos);
+    } catch (IOException ioe) {
+      throw addFileInfoToException(ioe);
+    }
+  }
+
+  @Override
+  public long getPosition() throws IOException {
+    return reader.getPosition();
+  }
+
+  protected IOException addFileInfoToException(final IOException ioe)
+  throws IOException {
+    long pos = -1;
+    try {
+      pos = getPosition();
+    } catch (IOException e) {
+      LOG.warn("Failed getting position to add to throw", e);
+    }
+
+    // See what SequenceFile.Reader thinks is the end of the file
+    long end = Long.MAX_VALUE;
+    try {
+      Field fEnd = SequenceFile.Reader.class.getDeclaredField("end");
+      fEnd.setAccessible(true);
+      end = fEnd.getLong(this.reader);
+    } catch(Exception e) { /* reflection fail. keep going */ }
+
+    String msg = (this.path == null? "": this.path.toString()) +
+      ", entryStart=" + entryStart + ", pos=" + pos + 
+      ((end == Long.MAX_VALUE) ? "" : ", end=" + end) + 
+      ", edit=" + this.edit;
+
+    // Enhance via reflection so we don't change the original class type
+    try {
+      return (IOException) ioe.getClass()
+        .getConstructor(String.class)
+        .newInstance(msg)
+        .initCause(ioe);
+    } catch(Exception e) { /* reflection fail. keep going */ }
+    
+    return ioe;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
new file mode 100644
index 0000000..8dc9a5e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
@@ -0,0 +1,164 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.reflect.Field;
+import java.lang.reflect.Method;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.SequenceFile.Metadata;
+import org.apache.hadoop.io.compress.DefaultCodec;
+
+/**
+ * Implementation of {@link HLog.Writer} that delegates to
+ * SequenceFile.Writer.
+ */
+public class SequenceFileLogWriter implements HLog.Writer {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  // The sequence file we delegate to.
+  private SequenceFile.Writer writer;
+  // The DFSClient output stream, made accessible via reflection, or null if
+  // not available.
+  private OutputStream dfsClient_out;
+  // The syncFs method from HDFS-200, or null if not available.
+  private Method syncFs;
+
+  private Class<? extends HLogKey> keyClass;
+
+  /**
+   * Default constructor.
+   */
+  public SequenceFileLogWriter() {
+    super();
+  }
+
+  /**
+   * This constructor allows a specific HLogKey implementation to override that
+   * which would otherwise be chosen via configuration property.
+   * 
+   * @param keyClass
+   */
+  public SequenceFileLogWriter(Class<? extends HLogKey> keyClass) {
+    this.keyClass = keyClass;
+  }
+  
+  @Override
+  public void init(FileSystem fs, Path path, Configuration conf)
+      throws IOException {
+
+    if (null == keyClass) {
+      keyClass = HLog.getKeyClass(conf);
+    }
+
+    // Create a SF.Writer instance.
+    this.writer = SequenceFile.createWriter(fs, conf, path,
+      keyClass, WALEdit.class,
+      fs.getConf().getInt("io.file.buffer.size", 4096),
+      (short) conf.getInt("hbase.regionserver.hlog.replication",
+      fs.getDefaultReplication()),
+      conf.getLong("hbase.regionserver.hlog.blocksize",
+      fs.getDefaultBlockSize()),
+      SequenceFile.CompressionType.NONE,
+      new DefaultCodec(),
+      null,
+      new Metadata());
+
+    // Get at the private FSDataOutputStream inside SequenceFile so we can
+    // call sync on it.  Make it accessible.  Stash it aside for use in
+    // the sync method.
+    final Field fields [] = this.writer.getClass().getDeclaredFields();
+    final String fieldName = "out";
+    for (int i = 0; i < fields.length; ++i) {
+      if (fieldName.equals(fields[i].getName())) {
+        try {
+          // Make the 'out' field up in SF.Writer accessible.
+          fields[i].setAccessible(true);
+          FSDataOutputStream out =
+            (FSDataOutputStream)fields[i].get(this.writer);
+          this.dfsClient_out = out.getWrappedStream();
+          break;
+        } catch (IllegalAccessException ex) {
+          throw new IOException("Accessing " + fieldName, ex);
+        }
+      }
+    }
+
+    // Now do the dirty work of testing whether syncFs is available.
+    Method m = null;
+    boolean append = conf.getBoolean("dfs.support.append", false);
+    if (append) {
+      try {
+        // function pointer to writer.syncFs()
+        m = this.writer.getClass().getMethod("syncFs", new Class<?> []{});
+      } catch (SecurityException e) {
+        throw new IOException("Failed test for syncfs", e);
+      } catch (NoSuchMethodException e) {
+        // Not available
+      }
+    }
+    this.syncFs = m;
+    LOG.info((this.syncFs != null)?
+      "Using syncFs -- HDFS-200":
+      ("syncFs -- HDFS-200 -- not available, dfs.support.append=" + append));
+  }
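+
+  // A minimal usage sketch (the path and entry here are hypothetical), using
+  // only the HLog.Writer methods implemented in this class:
+  //   HLog.Writer writer = new SequenceFileLogWriter();
+  //   writer.init(fs, new Path("/hbase/.logs/example-hlog"), conf);
+  //   writer.append(entry);   // entry is an HLog.Entry(HLogKey, WALEdit)
+  //   writer.sync();          // delegates to syncFs when it was found in init
+  //   long length = writer.getLength();
+  //   writer.close();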
+
+  @Override
+  public void append(HLog.Entry entry) throws IOException {
+    this.writer.append(entry.getKey(), entry.getEdit());
+  }
+
+  @Override
+  public void close() throws IOException {
+    this.writer.close();
+  }
+
+  @Override
+  public void sync() throws IOException {
+    if (this.syncFs != null) {
+      try {
+       this.syncFs.invoke(this.writer, HLog.NO_ARGS);
+      } catch (Exception e) {
+        throw new IOException("Reflection", e);
+      }
+    }
+  }
+
+  @Override
+  public long getLength() throws IOException {
+    return this.writer.getLength();
+  }
+
+  /**
+   * @return The DFSClient output stream inside SF.Writer, made accessible via
+   * reflection, or null if not available.
+   */
+  public OutputStream getDFSCOutputStream() {
+    return this.dfsClient_out;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
new file mode 100644
index 0000000..e1117ef
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEdit.java
@@ -0,0 +1,188 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * WALEdit: Used in HBase's transaction log (WAL) to represent
+ * the collection of edits (KeyValue objects) corresponding to a
+ * single transaction. The class implements "Writable" interface
+ * for serializing/deserializing a set of KeyValue items.
+ *
+ * Previously, if a transaction contained 3 edits to c1, c2, c3 for a row R,
+ * the HLog would have three log entries as follows:
+ *
+ *    <logseq1-for-edit1>:<KeyValue-for-edit-c1>
+ *    <logseq2-for-edit2>:<KeyValue-for-edit-c2>
+ *    <logseq3-for-edit3>:<KeyValue-for-edit-c3>
+ *
+ * This presents problems because row level atomicity of transactions
+ * was not guaranteed. If we crashed after only a few of the above appends
+ * had made it in, recovery would restore a partial transaction.
+ *
+ * In the new world, all the edits for a given transaction are written
+ * out as a single record, for example:
+ *
+ *   <logseq#-for-entire-txn>:<WALEdit-for-entire-txn>
+ *
+ * where the WALEdit is serialized as:
+ *   <-1, # of edits, <KeyValue>, <KeyValue>, ... >
+ * For example:
+ *   <-1, 3, <Keyvalue-for-edit-c1>, <KeyValue-for-edit-c2>, <KeyValue-for-edit-c3>>
+ *
+ * The -1 marker is just a special way of being backward compatible with
+ * an old HLog which would have contained a single <KeyValue>.
+ *
+ * The deserializer for WALEdit backward compatibly detects if the record
+ * is an old style KeyValue or the new style WALEdit.
+ *
+ */
+public class WALEdit implements Writable, HeapSize {
+
+  private final int VERSION_2 = -1;
+
+  private final ArrayList<KeyValue> kvs = new ArrayList<KeyValue>();
+  private NavigableMap<byte[], Integer> scopes;
+
+  public WALEdit() {
+  }
+
+  public void add(KeyValue kv) {
+    this.kvs.add(kv);
+  }
+
+  public boolean isEmpty() {
+    return kvs.isEmpty();
+  }
+
+  public int size() {
+    return kvs.size();
+  }
+
+  public List<KeyValue> getKeyValues() {
+    return kvs;
+  }
+
+  public NavigableMap<byte[], Integer> getScopes() {
+    return scopes;
+  }
+
+
+  public void setScopes (NavigableMap<byte[], Integer> scopes) {
+    // We currently process the map outside of WALEdit,
+    // TODO revisit when replication is part of core
+    this.scopes = scopes;
+  }
+
+  public void readFields(DataInput in) throws IOException {
+    kvs.clear();
+    if (scopes != null) {
+      scopes.clear();
+    }
+    int versionOrLength = in.readInt();
+    if (versionOrLength == VERSION_2) {
+      // this is new style HLog entry containing multiple KeyValues.
+      int numEdits = in.readInt();
+      for (int idx = 0; idx < numEdits; idx++) {
+        KeyValue kv = new KeyValue();
+        kv.readFields(in);
+        this.add(kv);
+      }
+      int numFamilies = in.readInt();
+      if (numFamilies > 0) {
+        if (scopes == null) {
+          scopes = new TreeMap<byte[], Integer>(Bytes.BYTES_COMPARATOR);
+        }
+        for (int i = 0; i < numFamilies; i++) {
+          byte[] fam = Bytes.readByteArray(in);
+          int scope = in.readInt();
+          scopes.put(fam, scope);
+        }
+      }
+    } else {
+      // this is an old style HLog entry. The int that we just
+      // read is actually the length of a single KeyValue.
+      KeyValue kv = new KeyValue();
+      kv.readFields(versionOrLength, in);
+      this.add(kv);
+    }
+
+  }
+
+  public void write(DataOutput out) throws IOException {
+    out.writeInt(VERSION_2);
+    out.writeInt(kvs.size());
+    // Write out the KeyValues first, then the scope map (if any)
+    for (KeyValue kv : kvs) {
+      kv.write(out);
+    }
+    if (scopes == null) {
+      out.writeInt(0);
+    } else {
+      out.writeInt(scopes.size());
+      for (byte[] key : scopes.keySet()) {
+        Bytes.writeByteArray(out, key);
+        out.writeInt(scopes.get(key));
+      }
+    }
+  }
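+
+  // Rough sketch of what write() above puts on the wire (labels are
+  // descriptive only, not part of the format):
+  //   int      VERSION_2 marker (-1)
+  //   int      number of KeyValues
+  //   KeyValue x n, each via KeyValue.write()
+  //   int      number of scoped families (0 when scopes == null)
+  //   [ byte[] family, int scope ] x m
+  // readFields() tells this apart from an old single-KeyValue record by the
+  // leading -1, as described in the class comment.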
+
+  public long heapSize() {
+    long ret = 0;
+    for (KeyValue kv : kvs) {
+      ret += kv.heapSize();
+    }
+    if (scopes != null) {
+      ret += ClassSize.TREEMAP;
+      ret += ClassSize.align(scopes.size() * ClassSize.MAP_ENTRY);
+      // TODO this isn't quite right, need help here
+    }
+    return ret;
+  }
+
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+
+    sb.append("[#edits: " + kvs.size() + " = <");
+    for (KeyValue kv : kvs) {
+      sb.append(kv.toString());
+      sb.append("; ");
+    }
+    if (scopes != null) {
+      sb.append(" scopes: " + scopes.toString());
+    }
+    sb.append(">]");
+    return sb.toString();
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALObserver.java b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALObserver.java
new file mode 100644
index 0000000..3def4b6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALObserver.java
@@ -0,0 +1,54 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+
+/**
+ * Get notification of {@link HLog}/WAL log events. The invocations are inline,
+ * so make sure your implementation is fast or you will slow down HBase.
+ */
+public interface WALObserver {
+  /**
+   * The WAL was rolled.
+   * @param newFile the path to the new hlog
+   */
+  public void logRolled(Path newFile);
+
+  /**
+   * A request was made that the WAL be rolled.
+   */
+  public void logRollRequested();
+
+  /**
+   * The WAL is about to close.
+   */
+  public void logCloseRequested();
+
+  /**
+   * Called before each write.
+   * @param info
+   * @param logKey
+   * @param logEdit
+   */
+  public void visitLogEntryBeforeWrite(HRegionInfo info, HLogKey logKey,
+      WALEdit logEdit);
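+
+  // A no-op implementer sketch (the class name is hypothetical), using only
+  // the callbacks declared above:
+  //   class NoopWALObserver implements WALObserver {
+  //     public void logRolled(Path newFile) { }
+  //     public void logRollRequested() { }
+  //     public void logCloseRequested() { }
+  //     public void visitLogEntryBeforeWrite(HRegionInfo info, HLogKey logKey,
+  //         WALEdit logEdit) { }
+  //   }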
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java
new file mode 100644
index 0000000..548c8eb
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java
@@ -0,0 +1,120 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+
+/**
+ * This class acts as a wrapper for all the objects used to identify and
+ * communicate with remote peers. It does not encapsulate any functionality of
+ * its own; everything it holds has to be created and handed to it, i.e. it is
+ * a plain container class.
+ */
+public class ReplicationPeer {
+
+  private final String clusterKey;
+  private final String id;
+  private List<HServerAddress> regionServers =
+      new ArrayList<HServerAddress>(0);
+  private final AtomicBoolean peerEnabled = new AtomicBoolean();
+  // Cannot be final since a new object needs to be recreated when session fails
+  private ZooKeeperWatcher zkw;
+  private final Configuration conf;
+
+  /**
+   * Constructor that takes all the objects required to communicate with the
+   * specified peer, except for the region server addresses.
+   * @param conf configuration object of this peer
+   * @param key cluster key used to locate the peer
+   * @param id string representation of this peer's identifier
+   * @param zkw zookeeper connection to the peer
+   */
+  public ReplicationPeer(Configuration conf, String key,
+      String id, ZooKeeperWatcher zkw) {
+    this.conf = conf;
+    this.clusterKey = key;
+    this.id = id;
+    this.zkw = zkw;
+  }
+
+  /**
+   * Get the cluster key of that peer
+   * @return string consisting of zk ensemble addresses, client port
+   * and root znode
+   */
+  public String getClusterKey() {
+    return clusterKey;
+  }
+
+  /**
+   * Get the state of this peer
+   * @return atomic boolean that holds the status
+   */
+  public AtomicBoolean getPeerEnabled() {
+    return peerEnabled;
+  }
+
+  /**
+   * Get a list of all the addresses of all the region servers
+   * for this peer cluster
+   * @return list of addresses
+   */
+  public List<HServerAddress> getRegionServers() {
+    return regionServers;
+  }
+
+  /**
+   * Set the list of region servers for that peer
+   * @param regionServers list of addresses for the region servers
+   */
+  public void setRegionServers(List<HServerAddress> regionServers) {
+    this.regionServers = regionServers;
+  }
+
+  /**
+   * Get the ZK connection to this peer
+   * @return zk connection
+   */
+  public ZooKeeperWatcher getZkw() {
+    return zkw;
+  }
+
+  /**
+   * Get the identifier of this peer
+   * @return string representation of the id (short)
+   */
+  public String getId() {
+    return id;
+  }
+
+  /**
+   * Get the configuration object required to communicate with this peer
+   * @return configuration object
+   */
+  public Configuration getConfiguration() {
+    return conf;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java
new file mode 100644
index 0000000..f4ae3c3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java
@@ -0,0 +1,704 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * This class serves as a helper for all things related to zookeeper
+ * in replication.
+ * <p/>
+ * The layout looks something like this under zookeeper.znode.parent
+ * for the master cluster:
+ * <p/>
+ * <pre>
+ * replication/
+ *  state      {contains true or false}
+ *  clusterId  {contains a byte}
+ *  peers/
+ *    1/   {contains a full cluster address}
+ *    2/
+ *    ...
+ *  rs/ {lists all RS that replicate}
+ *    startcode1/ {lists all peer clusters}
+ *      1/ {lists hlogs to process}
+ *        10.10.1.76%3A53488.123456789 {contains nothing or a position}
+ *        10.10.1.76%3A53488.123456790
+ *        ...
+ *      2/
+ *      ...
+ *    startcode2/
+ *    ...
+ * </pre>
+ */
+public class ReplicationZookeeper {
+  private static final Log LOG =
+    LogFactory.getLog(ReplicationZookeeper.class);
+  // Name of the znode we use to lock during failover
+  private final static String RS_LOCK_ZNODE = "lock";
+  // Our handle on zookeeper
+  private final ZooKeeperWatcher zookeeper;
+  // Map of peer clusters keyed by their id
+  private Map<String, ReplicationPeer> peerClusters;
+  // Path to the root replication znode
+  private String replicationZNode;
+  // Path to the peer clusters znode
+  private String peersZNode;
+  // Path to the znode that contains all RS that replicate
+  private String rsZNode;
+  // Path to this region server's name under rsZNode
+  private String rsServerNameZnode;
+  // Node name of the replicationState znode
+  private String replicationStateNodeName;
+  private final Configuration conf;
+  // Is this cluster replicating at the moment?
+  private AtomicBoolean replicating;
+  // Byte (stored as string here) that identifies this cluster
+  private String clusterId;
+  // The key to our own cluster
+  private String ourClusterKey;
+  // Abortable
+  private Abortable abortable;
+  private ReplicationStatusTracker statusTracker;
+
+  /**
+   * Constructor used by clients of replication (like master and HBase clients)
+   * @param conf  conf to use
+   * @param zk    zk connection to use
+   * @throws KeeperException
+   */
+  public ReplicationZookeeper(final Abortable abortable, final Configuration conf,
+                              final ZooKeeperWatcher zk)
+    throws KeeperException {
+
+    this.conf = conf;
+    this.zookeeper = zk;
+    this.replicating = new AtomicBoolean();
+    setZNodes(abortable);
+  }
+
+  /**
+   * Constructor used by region servers, connects to the peer cluster right away.
+   *
+   * @param server
+   * @param replicating    atomic boolean to start/stop replication
+   * @throws IOException
+   * @throws KeeperException 
+   */
+  public ReplicationZookeeper(final Server server, final AtomicBoolean replicating)
+  throws IOException, KeeperException {
+    this.abortable = server;
+    this.zookeeper = server.getZooKeeper();
+    this.conf = server.getConfiguration();
+    this.replicating = replicating;
+    setZNodes(server);
+
+    this.peerClusters = new HashMap<String, ReplicationPeer>();
+    ZKUtil.createWithParents(this.zookeeper,
+        ZKUtil.joinZNode(this.replicationZNode, this.replicationStateNodeName));
+    this.rsServerNameZnode = ZKUtil.joinZNode(rsZNode, server.getServerName());
+    ZKUtil.createWithParents(this.zookeeper, this.rsServerNameZnode);
+    connectExistingPeers();
+  }
+
+  private void setZNodes(Abortable abortable) throws KeeperException {
+    String replicationZNodeName =
+        conf.get("zookeeper.znode.replication", "replication");
+    String peersZNodeName =
+        conf.get("zookeeper.znode.replication.peers", "peers");
+    String repMasterZNodeName =
+        conf.get("zookeeper.znode.replication.master", "master");
+    this.replicationStateNodeName =
+        conf.get("zookeeper.znode.replication.state", "state");
+    String clusterIdZNodeName =
+        conf.get("zookeeper.znode.replication.clusterId", "clusterId");
+    String rsZNodeName =
+        conf.get("zookeeper.znode.replication.rs", "rs");
+    this.ourClusterKey = ZKUtil.getZooKeeperClusterKey(this.conf);
+    this.replicationZNode =
+      ZKUtil.joinZNode(this.zookeeper.baseZNode, replicationZNodeName);
+    this.peersZNode = ZKUtil.joinZNode(replicationZNode, peersZNodeName);
+    ZKUtil.createWithParents(this.zookeeper, this.peersZNode);
+    this.rsZNode = ZKUtil.joinZNode(replicationZNode, rsZNodeName);
+    ZKUtil.createWithParents(this.zookeeper, this.rsZNode);
+
+    String znode = ZKUtil.joinZNode(this.replicationZNode, clusterIdZNodeName);
+    byte [] data = ZKUtil.getData(this.zookeeper, znode);
+    String idResult = Bytes.toString(data);
+    this.clusterId = idResult == null?
+      Byte.toString(HConstants.DEFAULT_CLUSTER_ID): idResult;
+    // Set a tracker on the replicationState znode
+    this.statusTracker =
+        new ReplicationStatusTracker(this.zookeeper, abortable);
+    statusTracker.start();
+    readReplicationStateZnode();
+  }
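+
+  // With the default property values used above, and assuming the usual
+  // "/hbase" value for zookeeper.znode.parent, setZNodes() ends up with:
+  //   /hbase/replication            (replicationZNode)
+  //   /hbase/replication/peers      (peersZNode)
+  //   /hbase/replication/rs         (rsZNode)
+  //   /hbase/replication/state      (replication state znode)
+  //   /hbase/replication/clusterId  (cluster id znode)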
+
+  private void connectExistingPeers() throws IOException, KeeperException {
+    List<String> znodes = ZKUtil.listChildrenNoWatch(this.zookeeper, this.peersZNode);
+    if (znodes != null) {
+      for (String z : znodes) {
+        connectToPeer(z);
+      }
+    }
+  }
+
+  /**
+   * List this cluster's peers' IDs
+   * @return list of all peers' identifiers
+   */
+  public List<String> listPeersIdsAndWatch() {
+    List<String> ids = null;
+    try {
+      ids = ZKUtil.listChildrenAndWatchThem(this.zookeeper, this.peersZNode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Cannot get the list of peers ", e);
+    }
+    return ids;
+  }
+
+  /**
+   * Returns all region servers from given peer
+   *
+   * @param peerClusterId (byte) the cluster to interrogate
+   * @return addresses of all region servers
+   */
+  public List<HServerAddress> getSlavesAddresses(String peerClusterId)
+      throws KeeperException {
+    if (this.peerClusters.size() == 0) {
+      return new ArrayList<HServerAddress>(0);
+    }
+    ReplicationPeer peer = this.peerClusters.get(peerClusterId);
+    if (peer == null) {
+      return new ArrayList<HServerAddress>(0);
+    }
+    peer.setRegionServers(fetchSlavesAddresses(peer.getZkw()));
+    return peer.getRegionServers();
+  }
+
+  /**
+   * Get the list of all the region servers from the specified peer
+   * @param zkw zk connection to use
+   * @return list of region server addresses
+   */
+  private List<HServerAddress> fetchSlavesAddresses(ZooKeeperWatcher zkw) {
+    List<HServerAddress> rss = null;
+    try {
+      rss = ZKUtil.listChildrenAndGetAsAddresses(zkw, zkw.rsZNode);
+    } catch (KeeperException e) {
+      LOG.warn("Cannot get peer's region server addresses", e);
+    }
+    return rss;
+  }
+
+  /**
+   * This method connects this cluster to another one and registers it
+   * in this region server's replication znode
+   * @param peerId id of the peer cluster
+   * @throws KeeperException 
+   */
+  public boolean connectToPeer(String peerId)
+      throws IOException, KeeperException {
+    if (peerClusters == null) {
+      return false;
+    }
+    if (this.peerClusters.containsKey(peerId)) {
+      return false;
+      // TODO remove when we support it
+    } else if (this.peerClusters.size() > 0) {
+      LOG.warn("Multiple slaves feature not supported");
+      return false;
+    }
+    ReplicationPeer peer = getPeer(peerId);
+    if (peer == null) {
+      return false;
+    }
+    this.peerClusters.put(peerId, peer);
+    ZKUtil.createWithParents(this.zookeeper, ZKUtil.joinZNode(
+        this.rsServerNameZnode, peerId));
+    LOG.info("Added new peer cluster " + peer.getClusterKey());
+    return true;
+  }
+
+  /**
+   * Helper method to connect to a peer
+   * @param peerId peer's identifier
+   * @return object representing the peer
+   * @throws IOException
+   * @throws KeeperException
+   */
+  public ReplicationPeer getPeer(String peerId) throws IOException, KeeperException{
+    String znode = ZKUtil.joinZNode(this.peersZNode, peerId);
+    byte [] data = ZKUtil.getData(this.zookeeper, znode);
+    String otherClusterKey = Bytes.toString(data);
+    if (this.ourClusterKey.equals(otherClusterKey)) {
+      LOG.debug("Not connecting to " + peerId + " because it's us");
+      return null;
+    }
+    // Construct the connection to the new peer
+    Configuration otherConf = new Configuration(this.conf);
+    try {
+      ZKUtil.applyClusterKeyToConf(otherConf, otherClusterKey);
+    } catch (IOException e) {
+      LOG.error("Can't get peer because:", e);
+      return null;
+    }
+
+    ZooKeeperWatcher zkw = new ZooKeeperWatcher(otherConf,
+        "connection to cluster: " + peerId, this.abortable);
+    return new ReplicationPeer(otherConf, otherClusterKey,
+        peerId, zkw);
+  }
+
+  /**
+   * Set the new replication state for this cluster
+   * @param newState
+   */
+  public void setReplicating(boolean newState) throws KeeperException {
+    ZKUtil.createWithParents(this.zookeeper,
+        ZKUtil.joinZNode(this.replicationZNode, this.replicationStateNodeName));
+    ZKUtil.setData(this.zookeeper,
+        ZKUtil.joinZNode(this.replicationZNode, this.replicationStateNodeName),
+        Bytes.toBytes(Boolean.toString(newState)));
+  }
+
+  /**
+   * Remove the peer from zookeeper, which will trigger the watchers on every
+   * region server and close their sources
+   * @param id
+   * @throws IllegalArgumentException Thrown when the peer doesn't exist
+   */
+  public void removePeer(String id) throws IOException {
+    try {
+      if (!peerExists(id)) {
+        throw new IllegalArgumentException("Cannot remove inexisting peer");
+      }
+      ZKUtil.deleteNode(this.zookeeper, ZKUtil.joinZNode(this.peersZNode, id));
+    } catch (KeeperException e) {
+      throw new IOException("Unable to remove a peer", e);
+    }
+  }
+
+  /**
+   * Add a new peer to this cluster
+   * @param id peer's identifier
+   * @param clusterKey ZK ensemble's addresses, client port and root znode
+   * @throws IllegalArgumentException Thrown when a peer with that id already exists
+   * @throws IllegalStateException Thrown when a peer already exists, since
+   *         multi-slave isn't supported yet.
+   */
+  public void addPeer(String id, String clusterKey) throws IOException {
+    try {
+      if (peerExists(id)) {
+        throw new IllegalArgumentException("Cannot add existing peer");
+      } else if (countPeers() > 0) {
+        throw new IllegalStateException("Multi-slave isn't supported yet");
+      }
+      ZKUtil.createWithParents(this.zookeeper, this.peersZNode);
+      ZKUtil.createAndWatch(this.zookeeper,
+          ZKUtil.joinZNode(this.peersZNode, id), Bytes.toBytes(clusterKey));
+    } catch (KeeperException e) {
+      throw new IOException("Unable to add peer", e);
+    }
+  }
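+
+  // For example (hostnames are hypothetical), a cluster key is expected to
+  // look like "zk1.example.org,zk2.example.org,zk3.example.org:2181:/hbase",
+  // i.e. the peer's ZK ensemble addresses, its client port and its root
+  // znode, matching the clusterKey javadoc above:
+  //   addPeer("1", "zk1.example.org,zk2.example.org,zk3.example.org:2181:/hbase");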
+
+  private boolean peerExists(String id) throws KeeperException {
+    return ZKUtil.checkExists(this.zookeeper,
+          ZKUtil.joinZNode(this.peersZNode, id)) >= 0;
+  }
+
+  private int countPeers() throws KeeperException {
+    List<String> peers =
+        ZKUtil.listChildrenNoWatch(this.zookeeper, this.peersZNode);
+    return peers == null ? 0 : peers.size();
+  }
+
+  /**
+   * This reads the state znode for replication and sets the atomic boolean
+   */
+  private void readReplicationStateZnode() {
+    try {
+      this.replicating.set(getReplication());
+      LOG.info("Replication is now " + (this.replicating.get()?
+        "started" : "stopped"));
+    } catch (KeeperException e) {
+      this.abortable.abort("Failed getting data on from " + getRepStateNode(), e);
+    }
+  }
+
+  /**
+   * Get the replication status of this cluster. If the state znode doesn't
+   * exist it will also create it and set it true.
+   * @return returns true when it's enabled, else false
+   * @throws KeeperException
+   */
+  public boolean getReplication() throws KeeperException {
+    byte [] data = this.statusTracker.getData();
+    if (data == null || data.length == 0) {
+      setReplicating(true);
+      return true;
+    }
+    return Boolean.parseBoolean(Bytes.toString(data));
+  }
+
+  private String getRepStateNode() {
+    return ZKUtil.joinZNode(this.replicationZNode, this.replicationStateNodeName);
+  }
+
+  /**
+   * Add a new log to the list of hlogs in zookeeper
+   * @param filename name of the hlog's znode
+   * @param clusterId name of the cluster's znode
+   */
+  public void addLogToList(String filename, String clusterId) {
+    try {
+      String znode = ZKUtil.joinZNode(this.rsServerNameZnode, clusterId);
+      znode = ZKUtil.joinZNode(znode, filename);
+      ZKUtil.createWithParents(this.zookeeper, znode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Failed add log to list", e);
+    }
+  }
+
+  /**
+   * Remove a log from the list of hlogs in zookeeper
+   * @param filename name of the hlog's znode
+   * @param clusterId name of the cluster's znode
+   */
+  public void removeLogFromList(String filename, String clusterId) {
+    try {
+      String znode = ZKUtil.joinZNode(rsServerNameZnode, clusterId);
+      znode = ZKUtil.joinZNode(znode, filename);
+      ZKUtil.deleteNode(this.zookeeper, znode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Failed remove from list", e);
+    }
+  }
+
+  /**
+   * Set the current position of the specified cluster in the current hlog
+   * @param filename name of the hlog's znode
+   * @param clusterId name of the cluster's znode
+   * @param position the position in the file
+   * @throws IOException
+   */
+  public void writeReplicationStatus(String filename, String clusterId,
+      long position) {
+    try {
+      String znode = ZKUtil.joinZNode(this.rsServerNameZnode, clusterId);
+      znode = ZKUtil.joinZNode(znode, filename);
+      // Why serialize a String of the Long and not the Long as bytes?
+      ZKUtil.setData(this.zookeeper, znode,
+        Bytes.toBytes(Long.toString(position)));
+    } catch (KeeperException e) {
+      this.abortable.abort("Writing replication status", e);
+    }
+  }
+
+  /**
+   * Get a list of all the other region servers in this cluster
+   * and set a watch
+   * @return a list of server names
+   */
+  public List<String> getRegisteredRegionServers() {
+    List<String> result = null;
+    try {
+      result = ZKUtil.listChildrenAndWatchThem(
+          this.zookeeper, this.zookeeper.rsZNode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Get list of registered region servers", e);
+    }
+    return result;
+  }
+
+  /**
+   * Get the list of the replicators that have queues; they can be alive, dead
+   * or simply left over from a previous run
+   * @return a list of server names
+   */
+  public List<String> getListOfReplicators() {
+    List<String> result = null;
+    try {
+      result = ZKUtil.listChildrenNoWatch(this.zookeeper, rsZNode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Get list of replicators", e);
+    }
+    return result;
+  }
+
+  /**
+   * Get the list of peer clusters for the specified region server
+   * @param rs server name of the rs
+   * @return a list of peer clusters
+   */
+  public List<String> getListPeersForRS(String rs) {
+    String znode = ZKUtil.joinZNode(rsZNode, rs);
+    List<String> result = null;
+    try {
+      result = ZKUtil.listChildrenNoWatch(this.zookeeper, znode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Get list of peers for rs", e);
+    }
+    return result;
+  }
+
+  /**
+   * Get the list of hlogs for the specified region server and peer cluster
+   * @param rs server name of the rs
+   * @param id peer cluster
+   * @return a list of hlogs
+   */
+  public List<String> getListHLogsForPeerForRS(String rs, String id) {
+    String znode = ZKUtil.joinZNode(rsZNode, rs);
+    znode = ZKUtil.joinZNode(znode, id);
+    List<String> result = null;
+    try {
+      result = ZKUtil.listChildrenNoWatch(this.zookeeper, znode);
+    } catch (KeeperException e) {
+      this.abortable.abort("Get list of hlogs for peer", e);
+    }
+    return result;
+  }
+
+  /**
+   * Try to set a lock in another server's znode.
+   * @param znode the server name of the other server
+   * @return true if the lock was acquired, false in every other case
+   */
+  public boolean lockOtherRS(String znode) {
+    try {
+      String parent = ZKUtil.joinZNode(this.rsZNode, znode);
+      if (parent.equals(rsServerNameZnode)) {
+        LOG.warn("Won't lock because this is us, we're dead!");
+        return false;
+      }
+      String p = ZKUtil.joinZNode(parent, RS_LOCK_ZNODE);
+      ZKUtil.createAndWatch(this.zookeeper, p, Bytes.toBytes(rsServerNameZnode));
+    } catch (KeeperException e) {
+      LOG.info("Failed lock other rs", e);
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * This method copies all the hlog queues from another region server
+   * and returns them all sorted per peer cluster (appended with the dead
+   * server's znode)
+   * @param znode server name whose queues should be copied
+   * @return all hlogs for all peers of that cluster, null if an error occurred
+   */
+  public SortedMap<String, SortedSet<String>> copyQueuesFromRS(String znode) {
+    // TODO this method isn't atomic enough, we could start copying and then
+    // TODO fail for some reason and we would end up with znodes we don't want.
+    SortedMap<String,SortedSet<String>> queues =
+        new TreeMap<String,SortedSet<String>>();
+    try {
+      String nodePath = ZKUtil.joinZNode(rsZNode, znode);
+      List<String> clusters =
+        ZKUtil.listChildrenNoWatch(this.zookeeper, nodePath);
+      // We have a lock znode in there, it will count as one.
+      if (clusters == null || clusters.size() <= 1) {
+        return queues;
+      }
+      // The lock isn't a peer cluster, remove it
+      clusters.remove(RS_LOCK_ZNODE);
+      for (String cluster : clusters) {
+        // We add the name of the recovered RS to the new znode; we can even
+        // do that for queues that were recovered 10 times, giving a znode like
+        // number-startcode-number-otherstartcode-number-anotherstartcode-etc
+        String newCluster = cluster+"-"+znode;
+        String newClusterZnode = ZKUtil.joinZNode(rsServerNameZnode, newCluster);
+        ZKUtil.createNodeIfNotExistsAndWatch(this.zookeeper, newClusterZnode,
+          HConstants.EMPTY_BYTE_ARRAY);
+        String clusterPath = ZKUtil.joinZNode(nodePath, cluster);
+        List<String> hlogs = ZKUtil.listChildrenNoWatch(this.zookeeper, clusterPath);
+        // That region server didn't have anything to replicate for this cluster
+        if (hlogs == null || hlogs.size() == 0) {
+          continue;
+        }
+        SortedSet<String> logQueue = new TreeSet<String>();
+        queues.put(newCluster, logQueue);
+        for (String hlog : hlogs) {
+          String z = ZKUtil.joinZNode(clusterPath, hlog);
+          byte [] position = ZKUtil.getData(this.zookeeper, z);
+          LOG.debug("Creating " + hlog + " with data " + Bytes.toString(position));
+          String child = ZKUtil.joinZNode(newClusterZnode, hlog);
+          ZKUtil.createAndWatch(this.zookeeper, child, position);
+          logQueue.add(hlog);
+        }
+      }
+    } catch (KeeperException e) {
+      this.abortable.abort("Copy queues from rs", e);
+    }
+    return queues;
+  }
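+
+  // For example (server name is hypothetical): queue "1" copied from the dead
+  // region server "host1,60020,1289493121758" is recreated under this region
+  // server as
+  //   <rsServerNameZnode>/1-host1,60020,1289493121758/<hlog znodes...>
+  // so a queue recovered several times keeps accumulating dead-server
+  // suffixes, as noted in the comment inside the loop above.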
+
+  /**
+   * Delete a complete queue of hlogs
+   * @param peerZnode znode of the peer cluster queue of hlogs to delete
+   */
+  public void deleteSource(String peerZnode, boolean closeConnection) {
+    try {
+      ZKUtil.deleteNodeRecursively(this.zookeeper,
+          ZKUtil.joinZNode(rsServerNameZnode, peerZnode));
+      if (closeConnection) {
+        this.peerClusters.get(peerZnode).getZkw().close();
+        this.peerClusters.remove(peerZnode);
+      }
+    } catch (KeeperException e) {
+      this.abortable.abort("Failed delete of " + peerZnode, e);
+    }
+  }
+
+  /**
+   * Recursive deletion of all znodes in specified rs' znode
+   * @param znode
+   */
+  public void deleteRsQueues(String znode) {
+    try {
+      ZKUtil.deleteNodeRecursively(this.zookeeper,
+          ZKUtil.joinZNode(rsZNode, znode));
+    } catch (KeeperException e) {
+      this.abortable.abort("Failed delete of " + znode, e);
+    }
+  }
+
+  /**
+   * Delete this cluster's queues
+   */
+  public void deleteOwnRSZNode() {
+    try {
+      ZKUtil.deleteNodeRecursively(this.zookeeper,
+          this.rsServerNameZnode);
+    } catch (KeeperException e) {
+      // if the session has already expired, don't bother going further
+      if (e instanceof KeeperException.SessionExpiredException) {
+        return;
+      }
+      this.abortable.abort("Failed delete of " + this.rsServerNameZnode, e);
+    }
+  }
+
+  /**
+   * Get the position of the specified hlog in the specified peer znode
+   * @param peerId znode of the peer cluster
+   * @param hlog name of the hlog
+   * @return the position in that hlog
+   * @throws KeeperException 
+   */
+  public long getHLogRepPosition(String peerId, String hlog)
+  throws KeeperException {
+    String clusterZnode = ZKUtil.joinZNode(rsServerNameZnode, peerId);
+    String znode = ZKUtil.joinZNode(clusterZnode, hlog);
+    String data = Bytes.toString(ZKUtil.getData(this.zookeeper, znode));
+    return data == null || data.length() == 0 ? 0 : Long.parseLong(data);
+  }
+
+  public void registerRegionServerListener(ZooKeeperListener listener) {
+    this.zookeeper.registerListener(listener);
+  }
+
+  /**
+   * Get the identification of the cluster
+   *
+   * @return the id for the cluster
+   */
+  public String getClusterId() {
+    return this.clusterId;
+  }
+
+  /**
+   * Get a map of all peer clusters
+   * @return map of peer cluster keyed by id
+   */
+  public Map<String, ReplicationPeer> getPeerClusters() {
+    return this.peerClusters;
+  }
+
+  /**
+   * Extracts the znode name of a peer cluster from a ZK path
+   * @param fullPath Path to extract the id from
+   * @return the id or an empty string if path is invalid
+   */
+  public static String getZNodeName(String fullPath) {
+    String[] parts = fullPath.split("/");
+    return parts.length > 0 ? parts[parts.length-1] : "";
+  }
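+
+  // Example (illustrative path, not prescribed by this class):
+  //   getZNodeName("/hbase/replication/peers/1") returns "1";
+  // only the last path component is kept.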
+
+  /**
+   * Get this cluster's zk connection
+   * @return zk connection
+   */
+  public ZooKeeperWatcher getZookeeperWatcher() {
+    return this.zookeeper;
+  }
+
+
+  /**
+   * Get the full path to the peers' znode
+   * @return path to peers in zk
+   */
+  public String getPeersZNode() {
+    return peersZNode;
+  }
+
+  /**
+   * Tracker for status of the replication
+   */
+  public class ReplicationStatusTracker extends ZooKeeperNodeTracker {
+    public ReplicationStatusTracker(ZooKeeperWatcher watcher,
+        Abortable abortable) {
+      super(watcher, getRepStateNode(), abortable);
+    }
+
+    @Override
+    public synchronized void nodeDataChanged(String path) {
+      if (path.equals(node)) {
+        super.nodeDataChanged(path);
+        readReplicationStateZnode();
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationLogCleaner.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationLogCleaner.java
new file mode 100644
index 0000000..133da33
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/master/ReplicationLogCleaner.java
@@ -0,0 +1,170 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.master;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.master.LogCleanerDelegate;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+/**
+ * Implementation of a log cleaner that checks if a log is still scheduled for
+ * replication before deleting it when its TTL is over.
+ */
+public class ReplicationLogCleaner implements LogCleanerDelegate, Abortable {
+  private static final Log LOG = LogFactory.getLog(ReplicationLogCleaner.class);
+  private Configuration conf;
+  private ReplicationZookeeper zkHelper;
+  private Set<String> hlogs = new HashSet<String>();
+  private boolean stopped = false;
+
+  /**
+   * Instantiates the cleaner, does nothing more.
+   */
+  public ReplicationLogCleaner() {}
+
+  @Override
+  public boolean isLogDeletable(Path filePath) {
+
+    try {
+      if (!zkHelper.getReplication()) {
+        return false;
+      }
+    } catch (KeeperException e) {
+      abort("Cannot get the state of replication", e);
+      return false;
+    }
+
+    // all members of this class are null if replication is disabled, and we
+    // return true since false would render the LogsCleaner useless
+    if (this.conf == null) {
+      return true;
+    }
+    String log = filePath.getName();
+    // If we saw the hlog previously, assume it is still in use;
+    // at some point in the future we will refresh the list and it will be gone
+    if (this.hlogs.contains(log)) {
+      return false;
+    }
+
+    // Let's see if it's still there
+    // This solution makes every miss very expensive to process since we
+    // almost completely refresh the cache each time
+    return !refreshHLogsAndSearch(log);
+  }
+
+  /**
+   * Search through all the hlogs we have in ZK to refresh the cache
+   * If a log is specified and found, then we early out and return true
+   * @param searchedLog log we are searching for, pass null to cache everything
+   *                    that's in zookeeper.
+   * @return false until a specified log is found.
+   */
+  private boolean refreshHLogsAndSearch(String searchedLog) {
+    this.hlogs.clear();
+    final boolean lookForLog = searchedLog != null;
+    List<String> rss = zkHelper.getListOfReplicators();
+    if (rss == null) {
+      LOG.debug("Didn't find any region server that replicates, deleting: " +
+          searchedLog);
+      return false;
+    }
+    for (String rs: rss) {
+      List<String> listOfPeers = zkHelper.getListPeersForRS(rs);
+      // if rs just died, this will be null
+      if (listOfPeers == null) {
+        continue;
+      }
+      for (String id : listOfPeers) {
+        List<String> peersHlogs = zkHelper.getListHLogsForPeerForRS(rs, id);
+        if (peersHlogs != null) {
+          this.hlogs.addAll(peersHlogs);
+        }
+        // early exit if we found the log
+        if(lookForLog && this.hlogs.contains(searchedLog)) {
+          LOG.debug("Found log in ZK, keeping: " + searchedLog);
+          return true;
+        }
+      }
+    }
+    LOG.debug("Didn't find this log in ZK, deleting: " + searchedLog);
+    return false;
+  }
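+
+  // Sketch of the znode layout walked above (names abbreviated; see
+  // ReplicationZookeeper for the authoritative layout):
+  //   .../rs/<region server>/<peer id>/<hlog name>  ->  replication position
+  // After a full refresh the cache holds every hlog name that at least one
+  // region server still has queued for at least one peer.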
+
+  @Override
+  public void setConf(Configuration conf) {
+    // If replication is disabled, keep all members null
+    if (!conf.getBoolean(HConstants.REPLICATION_ENABLE_KEY, false)) {
+      return;
+    }
+    // Make my own Configuration.  Then I'll have my own connection to zk that
+    // I can close myself when the time comes.
+    this.conf = new Configuration(conf);
+    try {
+      ZooKeeperWatcher zkw =
+          new ZooKeeperWatcher(this.conf, "replicationLogCleaner", null);
+      this.zkHelper = new ReplicationZookeeper(this, this.conf, zkw);
+    } catch (KeeperException e) {
+      LOG.error("Error while configuring " + this.getClass().getName(), e);
+    } catch (IOException e) {
+      LOG.error("Error while configuring " + this.getClass().getName(), e);
+    }
+    refreshHLogsAndSearch(null);
+  }
+
+  @Override
+  public Configuration getConf() {
+    return conf;
+  }
+
+  @Override
+  public void stop(String why) {
+    if (this.stopped) return;
+    this.stopped = true;
+    if (this.zkHelper != null) {
+      LOG.info("Stopping " + this.zkHelper.getZookeeperWatcher());
+      this.zkHelper.getZookeeperWatcher().close();
+    }
+    HConnectionManager.deleteConnection(this.conf, true);
+  }
+
+  @Override
+  public boolean isStopped() {
+    return this.stopped;
+  }
+
+  @Override
+  public void abort(String why, Throwable e) {
+    LOG.warn("Aborting ReplicationLogCleaner because " + why, e);
+    stop(why);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java
new file mode 100644
index 0000000..1a87947
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java
@@ -0,0 +1,181 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import java.io.IOException;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.regionserver.wal.WALObserver;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.KeeperException;
+
+import static org.apache.hadoop.hbase.HConstants.HBASE_MASTER_LOGCLEANER_PLUGINS;
+import static org.apache.hadoop.hbase.HConstants.REPLICATION_ENABLE_KEY;
+import static org.apache.hadoop.hbase.HConstants.REPLICATION_SCOPE_LOCAL;
+
+/**
+ * Gateway to Replication.  Used by {@link org.apache.hadoop.hbase.regionserver.HRegionServer}.
+ */
+public class Replication implements WALObserver {
+  private final boolean replication;
+  private final ReplicationSourceManager replicationManager;
+  private final AtomicBoolean replicating = new AtomicBoolean(true);
+  private final ReplicationZookeeper zkHelper;
+  private final Configuration conf;
+  private ReplicationSink replicationSink;
+  // Hosting server
+  private final Server server;
+
+  /**
+   * Instantiate the replication management (if rep is enabled).
+   * @param server Hosting server
+   * @param fs handle to the filesystem
+   * @param logDir
+   * @param oldLogDir directory where logs are archived
+   * @throws IOException
+   * @throws KeeperException 
+   */
+  public Replication(final Server server, final FileSystem fs,
+      final Path logDir, final Path oldLogDir)
+  throws IOException, KeeperException {
+    this.server = server;
+    this.conf = this.server.getConfiguration();
+    this.replication = isReplication(this.conf);
+    if (replication) {
+      this.zkHelper = new ReplicationZookeeper(server, this.replicating);
+      this.replicationManager = new ReplicationSourceManager(zkHelper, conf,
+          this.server, fs, this.replicating, logDir, oldLogDir) ;
+    } else {
+      this.replicationManager = null;
+      this.zkHelper = null;
+    }
+  }
+
+  /**
+   * @param c Configuration to look at
+   * @return True if replication is enabled.
+   */
+  public static boolean isReplication(final Configuration c) {
+    return c.getBoolean(REPLICATION_ENABLE_KEY, false);
+  }
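+
+  // Minimal sketch of turning replication on programmatically; in practice
+  // the boolean key is set in hbase-site.xml on every node:
+  //   Configuration conf = HBaseConfiguration.create();
+  //   conf.setBoolean(REPLICATION_ENABLE_KEY, true);
+  //   assert Replication.isReplication(conf);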
+
+  /**
+   * Join with the replication threads
+   */
+  public void join() {
+    if (this.replication) {
+      this.replicationManager.join();
+    }
+  }
+
+  /**
+   * Carry the list of log entries down to the sink
+   * @param entries list of entries to replicate
+   * @throws IOException
+   */
+  public void replicateLogEntries(HLog.Entry[] entries) throws IOException {
+    if (this.replication) {
+      this.replicationSink.replicateEntries(entries);
+    }
+  }
+
+  /**
+   * If replication is enabled, this starts the sources manager
+   * and the replication sink.
+   * @throws IOException
+   */
+  public void startReplicationServices() throws IOException {
+    if (this.replication) {
+      this.replicationManager.init();
+      this.replicationSink = new ReplicationSink(this.conf, this.server);
+    }
+  }
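+
+  // Lifecycle sketch from the point of view of the hosting region server
+  // (the Server, FileSystem and log directories are assumed to be the RS's own):
+  //   Replication rep = new Replication(server, fs, logDir, oldLogDir);
+  //   rep.startReplicationServices();  // starts the sources manager and the sink
+  //   ...                              // WAL appends pass through visitLogEntryBeforeWrite
+  //   rep.join();                      // wait for replication threads on shutdown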
+
+  /**
+   * Get the replication sources manager
+   * @return the manager if replication is enabled, else returns null
+   */
+  public ReplicationSourceManager getReplicationManager() {
+    return this.replicationManager;
+  }
+
+  @Override
+  public void visitLogEntryBeforeWrite(HRegionInfo info, HLogKey logKey,
+      WALEdit logEdit) {
+    NavigableMap<byte[], Integer> scopes =
+        new TreeMap<byte[], Integer>(Bytes.BYTES_COMPARATOR);
+    byte[] family;
+    for (KeyValue kv : logEdit.getKeyValues()) {
+      family = kv.getFamily();
+      int scope = info.getTableDesc().getFamily(family).getScope();
+      if (scope != REPLICATION_SCOPE_LOCAL &&
+          !scopes.containsKey(family)) {
+        scopes.put(family, scope);
+      }
+    }
+    if (!scopes.isEmpty()) {
+      logEdit.setScopes(scopes);
+    }
+  }
+
+  @Override
+  public void logRolled(Path p) {
+    getReplicationManager().logRolled(p);
+  }
+
+  /**
+   * This method modifies the master's configuration in order to inject
+   * replication-related features
+   * @param conf
+   */
+  public static void decorateMasterConfiguration(Configuration conf) {
+    if (!isReplication(conf)) {
+      return;
+    }
+    String plugins = conf.get(HBASE_MASTER_LOGCLEANER_PLUGINS);
+    if (!plugins.contains(ReplicationLogCleaner.class.getCanonicalName())) {
+      conf.set(HBASE_MASTER_LOGCLEANER_PLUGINS,
+          plugins + "," + ReplicationLogCleaner.class.getCanonicalName());
+    }
+  }
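+
+  // Example effect (sketch): if the master's log cleaner plugin list (the
+  // value behind HBASE_MASTER_LOGCLEANER_PLUGINS) initially contains only the
+  // default TTL-based cleaner and replication is enabled, the decorated value
+  // additionally carries
+  //   org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
+  // so that hlogs still queued for replication are not deleted.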
+
+  @Override
+  public void logRollRequested() {
+    // Not interested
+  }
+
+  @Override
+  public void logCloseRequested() {
+    // not interested
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
new file mode 100644
index 0000000..5343403
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java
@@ -0,0 +1,194 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.Stoppable;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * This class is responsible for replicating the edits coming
+ * from another cluster.
+ * <p/>
+ * This replication process currently waits for the edits to be applied
+ * before the method returns. This means that the replication of edits
+ * is synchronous (after reading from HLogs in ReplicationSource) and that a
+ * single region server cannot receive edits from two sources at the same time.
+ * <p/>
+ * This class uses the native HBase client in order to replicate entries.
+ * <p/>
+ *
+ * TODO make this class more like ReplicationSource wrt log handling
+ */
+public class ReplicationSink {
+
+  private static final Log LOG = LogFactory.getLog(ReplicationSink.class);
+  // Name of the HDFS directory that contains the temporary rep logs
+  public static final String REPLICATION_LOG_DIR = ".replogs";
+  private final Configuration conf;
+  // Pool of HTables used to replicate
+  private final HTablePool pool;
+  // Chain to pull on when we want all to stop.
+  private final Stoppable stopper;
+  private final ReplicationSinkMetrics metrics;
+
+  /**
+   * Create a sink for replication
+   *
+   * @param conf                conf object
+   * @param stopper             the stoppable used to tell this sink to stop
+   * @throws IOException thrown when HDFS goes bad or bad file name
+   */
+  public ReplicationSink(Configuration conf, Stoppable stopper)
+      throws IOException {
+    this.conf = conf;
+    this.pool = new HTablePool(this.conf,
+        conf.getInt("replication.sink.htablepool.capacity", 10));
+    this.stopper = stopper;
+    this.metrics = new ReplicationSinkMetrics();
+  }
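+
+  // Usage sketch (the Configuration and Stoppable are assumed to come from
+  // the hosting region server, they are not prescribed here):
+  //   ReplicationSink sink = new ReplicationSink(conf, stopper);
+  //   sink.replicateEntries(entries);  // entries shipped by a remote ReplicationSource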
+
+  /**
+   * Replicate this array of entries directly into the local cluster
+   * using the native client.
+   *
+   * @param entries
+   * @throws IOException
+   */
+  public void replicateEntries(HLog.Entry[] entries)
+      throws IOException {
+    if (entries.length == 0) {
+      return;
+    }
+    // Very simple optimization where we batch sequences of rows going
+    // to the same table.
+    try {
+      long totalReplicated = 0;
+      // Map of table => list of puts, we only want to flushCommits once per
+      // invocation of this method per table.
+      Map<byte[], List<Put>> puts = new TreeMap<byte[], List<Put>>(Bytes.BYTES_COMPARATOR);
+      for (HLog.Entry entry : entries) {
+        WALEdit edit = entry.getEdit();
+        List<KeyValue> kvs = edit.getKeyValues();
+        if (kvs.get(0).isDelete()) {
+          Delete delete = new Delete(kvs.get(0).getRow(),
+              kvs.get(0).getTimestamp(), null);
+          for (KeyValue kv : kvs) {
+            if (kv.isDeleteFamily()) {
+              delete.deleteFamily(kv.getFamily());
+            } else if (!kv.isEmptyColumn()) {
+              delete.deleteColumn(kv.getFamily(),
+                  kv.getQualifier());
+            }
+          }
+          delete(entry.getKey().getTablename(), delete);
+        } else {
+          byte[] table = entry.getKey().getTablename();
+          List<Put> tableList = puts.get(table);
+          if (tableList == null) {
+            tableList = new ArrayList<Put>();
+            puts.put(table, tableList);
+          }
+          // With mini-batching, we need to expect multiple rows per edit
+          byte[] lastKey = kvs.get(0).getRow();
+          Put put = new Put(kvs.get(0).getRow(),
+              kvs.get(0).getTimestamp());
+          for (KeyValue kv : kvs) {
+            if (!Bytes.equals(lastKey, kv.getRow())) {
+              tableList.add(put);
+              put = new Put(kv.getRow(), kv.getTimestamp());
+            }
+            put.add(kv.getFamily(), kv.getQualifier(), kv.getValue());
+            lastKey = kv.getRow();
+          }
+          tableList.add(put);
+        }
+        totalReplicated++;
+      }
+      for(byte [] table : puts.keySet()) {
+        put(table, puts.get(table));
+      }
+      this.metrics.setAgeOfLastAppliedOp(
+          entries[entries.length-1].getKey().getWriteTime());
+      this.metrics.appliedBatchesRate.inc(1);
+      LOG.info("Total replicated: " + totalReplicated);
+    } catch (IOException ex) {
+      LOG.error("Unable to accept edit because:", ex);
+      throw ex;
+    }
+  }
+
+  /**
+   * Do the puts and handle the pool
+   * @param tableName table to insert into
+   * @param puts list of puts
+   * @throws IOException
+   */
+  private void put(byte[] tableName, List<Put> puts) throws IOException {
+    if (puts.isEmpty()) {
+      return;
+    }
+    HTableInterface table = null;
+    try {
+      table = this.pool.getTable(tableName);
+      table.put(puts);
+      this.metrics.appliedOpsRate.inc(puts.size());
+    } finally {
+      if (table != null) {
+        this.pool.putTable(table);
+      }
+    }
+  }
+
+  /**
+   * Do the delete and handle the pool
+   * @param tableName table to delete in
+   * @param delete the delete to use
+   * @throws IOException
+   */
+  private void delete(byte[] tableName, Delete delete) throws IOException {
+    HTableInterface table = null;
+    try {
+      table = this.pool.getTable(tableName);
+      table.delete(delete);
+      this.metrics.appliedOpsRate.inc(1);
+    } finally {
+      if (table != null) {
+        this.pool.putTable(table);
+      }
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkMetrics.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkMetrics.java
new file mode 100644
index 0000000..ae14375
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSinkMetrics.java
@@ -0,0 +1,81 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+import org.apache.hadoop.hbase.metrics.MetricsRate;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsIntValue;
+import org.apache.hadoop.metrics.util.MetricsLongValue;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+/**
+ * This class is for maintaining the various replication statistics
+ * for a sink and publishing them through the metrics interfaces.
+ */
+public class ReplicationSinkMetrics implements Updater {
+  private final MetricsRecord metricsRecord;
+  private MetricsRegistry registry = new MetricsRegistry();
+  private static ReplicationSinkMetrics instance;
+
+  /** Rate of operations applied by the sink */
+  public final MetricsRate appliedOpsRate =
+      new MetricsRate("appliedOpsRate", registry);
+
+  /** Rate of batches (of operations) applied by the sink */
+  public final MetricsRate appliedBatchesRate =
+      new MetricsRate("appliedBatchesRate", registry);
+
+  /** Age of the last operation that was applied by the sink */
+  private final MetricsLongValue ageOfLastAppliedOp =
+      new MetricsLongValue("ageOfLastAppliedOp", registry);
+
+  /**
+   * Constructor used to register the metrics
+   */
+  public ReplicationSinkMetrics() {
+    MetricsContext context = MetricsUtil.getContext("hbase");
+    String name = Thread.currentThread().getName();
+    metricsRecord = MetricsUtil.createRecord(context, "replication");
+    metricsRecord.setTag("RegionServer", name);
+    context.registerUpdater(this);
+    // export for JMX
+    new ReplicationStatistics(this.registry, "ReplicationSink");
+  }
+
+  /**
+   * Set the age of the last edit that was applied
+   * @param timestamp write time of the edit
+   */
+  public void setAgeOfLastAppliedOp(long timestamp) {
+    ageOfLastAppliedOp.set(System.currentTimeMillis() - timestamp);
+  }
+  @Override
+  public void doUpdates(MetricsContext metricsContext) {
+    synchronized (this) {
+      this.appliedOpsRate.pushMetric(this.metricsRecord);
+      this.appliedBatchesRate.pushMetric(this.metricsRecord);
+      this.ageOfLastAppliedOp.pushMetric(this.metricsRecord);
+    }
+    this.metricsRecord.update();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
new file mode 100644
index 0000000..ac9bb77
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
@@ -0,0 +1,729 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import java.io.EOFException;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.NavigableMap;
+import java.util.Random;
+import java.util.Set;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.PriorityBlockingQueue;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Class that handles the source of a replication stream.
+ * Currently does not handle more than one slave.
+ * For each slave cluster it selects a random subset of peers
+ * using a replication ratio. For example, if the replication ratio is 0.1
+ * and the slave cluster has 100 region servers, 10 will be selected.
+ * <p/>
+ * A stream is considered down when we cannot contact a region server on the
+ * peer cluster for more than 55 seconds by default.
+ * <p/>
+ *
+ */
+public class ReplicationSource extends Thread
+    implements ReplicationSourceInterface {
+
+  private static final Log LOG = LogFactory.getLog(ReplicationSource.class);
+  // Queue of logs to process
+  private PriorityBlockingQueue<Path> queue;
+  // container of entries to replicate
+  private HLog.Entry[] entriesArray;
+  private HConnection conn;
+  // Helper class for zookeeper
+  private ReplicationZookeeper zkHelper;
+  private Configuration conf;
+  // ratio of region servers to choose from a slave cluster
+  private float ratio;
+  private Random random;
+  // should we replicate or not?
+  private AtomicBoolean replicating;
+  // id of the peer cluster this source replicates to
+  private String peerClusterId;
+  // The manager of all sources to which we ping back our progress
+  private ReplicationSourceManager manager;
+  // Should we stop everything?
+  private Stoppable stopper;
+  // List of chosen sinks (region servers)
+  private List<HServerAddress> currentPeers;
+  // How long should we sleep for each retry
+  private long sleepForRetries;
+  // Max size in bytes of entriesArray
+  private long replicationQueueSizeCapacity;
+  // Max number of entries in entriesArray
+  private int replicationQueueNbCapacity;
+  // Our reader for the current log
+  private HLog.Reader reader;
+  // Current position in the log
+  private long position = 0;
+  // Path of the current log
+  private volatile Path currentPath;
+  private FileSystem fs;
+  // id of this cluster
+  private byte clusterId;
+  // total number of edits we replicated
+  private long totalReplicatedEdits = 0;
+  // The znode we currently play with
+  private String peerClusterZnode;
+  // Indicates if this queue is recovered (and will be deleted when depleted)
+  private boolean queueRecovered;
+  // List of all the dead region servers that had this queue (if recovered)
+  private String[] deadRegionServers;
+  // Maximum number of retries before taking bold actions
+  private long maxRetriesMultiplier;
+  // Current number of entries that we need to replicate
+  private int currentNbEntries = 0;
+  // Current number of operations (Put/Delete) that we need to replicate
+  private int currentNbOperations = 0;
+  // Indicates if this particular source is running
+  private volatile boolean running = true;
+  // Metrics for this source
+  private ReplicationSourceMetrics metrics;
+  // If source is enabled, replication happens. If disabled, nothing will be
+  // replicated but HLogs will still be queued
+  private AtomicBoolean sourceEnabled = new AtomicBoolean();
+
+  /**
+   * Instantiation method used by region servers
+   *
+   * @param conf configuration to use
+   * @param fs file system to use
+   * @param manager replication manager to ping to
+   * @param stopper     the stopper object for this region server
+   * @param replicating the atomic boolean that starts/stops replication
+   * @param peerClusterZnode the name of our znode
+   * @throws IOException
+   */
+  public void init(final Configuration conf,
+                   final FileSystem fs,
+                   final ReplicationSourceManager manager,
+                   final Stoppable stopper,
+                   final AtomicBoolean replicating,
+                   final String peerClusterZnode)
+      throws IOException {
+    this.stopper = stopper;
+    this.conf = conf;
+    this.replicationQueueSizeCapacity =
+        this.conf.getLong("replication.source.size.capacity", 1024*1024*64);
+    this.replicationQueueNbCapacity =
+        this.conf.getInt("replication.source.nb.capacity", 25000);
+    this.entriesArray = new HLog.Entry[this.replicationQueueNbCapacity];
+    for (int i = 0; i < this.replicationQueueNbCapacity; i++) {
+      this.entriesArray[i] = new HLog.Entry();
+    }
+    this.maxRetriesMultiplier =
+        this.conf.getLong("replication.source.maxretriesmultiplier", 10);
+    this.queue =
+        new PriorityBlockingQueue<Path>(
+            conf.getInt("hbase.regionserver.maxlogs", 32),
+            new LogsComparator());
+    this.conn = HConnectionManager.getConnection(conf);
+    this.zkHelper = manager.getRepZkWrapper();
+    this.ratio = this.conf.getFloat("replication.source.ratio", 0.1f);
+    this.currentPeers = new ArrayList<HServerAddress>();
+    this.random = new Random();
+    this.replicating = replicating;
+    this.manager = manager;
+    this.sleepForRetries =
+        this.conf.getLong("replication.source.sleepforretries", 1000);
+    this.fs = fs;
+    this.clusterId = Byte.valueOf(zkHelper.getClusterId());
+    this.metrics = new ReplicationSourceMetrics(peerClusterZnode);
+
+    // Finally look if this is a recovered queue
+    this.checkIfQueueRecovered(peerClusterZnode);
+  }
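+
+  // Tuning sketch: the keys read above can be overridden in hbase-site.xml;
+  // the values shown are the defaults from this method, repeated for reference:
+  //   replication.source.size.capacity        = 67108864  (64MB per shipment)
+  //   replication.source.nb.capacity          = 25000     (max entries per shipment)
+  //   replication.source.maxretriesmultiplier = 10        (max backoff multiplier)
+  //   replication.source.ratio                = 0.1       (fraction of slave RSs used)
+  //   replication.source.sleepforretries      = 1000      (base retry sleep, ms)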
+
+  // The passed znode will be either the id of the peer cluster or,
+  // for a recovered queue, the id followed by the chain of region servers
+  // that handled it, in the form id-servername-*
+  private void checkIfQueueRecovered(String peerClusterZnode) {
+    String[] parts = peerClusterZnode.split("-");
+    this.queueRecovered = parts.length != 1;
+    this.peerClusterId = this.queueRecovered ?
+        parts[0] : peerClusterZnode;
+    this.peerClusterZnode = peerClusterZnode;
+    this.deadRegionServers = new String[parts.length-1];
+    // Extract all the places where we could find the hlogs
+    for (int i = 1; i < parts.length; i++) {
+      this.deadRegionServers[i-1] = parts[i];
+    }
+  }
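+
+  // Example (server names are illustrative): a normal queue znode is just the
+  // peer id, e.g. "1", so queueRecovered=false and peerClusterId="1". A
+  // recovered queue carries the chain of dead servers, e.g.
+  // "1-server1,60020,1290524123456", giving peerClusterId="1" and
+  // deadRegionServers=["server1,60020,1290524123456"].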
+
+  /**
+   * Select a number of peers at random using the ratio. Minimum 1.
+   */
+  private void chooseSinks() throws KeeperException {
+    this.currentPeers.clear();
+    List<HServerAddress> addresses =
+        this.zkHelper.getSlavesAddresses(peerClusterId);
+    Set<HServerAddress> setOfAddr = new HashSet<HServerAddress>();
+    int nbPeers = (int) (Math.ceil(addresses.size() * ratio));
+    LOG.info("Getting " + nbPeers +
+        " rs from peer cluster # " + peerClusterId);
+    for (int i = 0; i < nbPeers; i++) {
+      HServerAddress address;
+      // Make sure we get one address that we don't already have
+      do {
+        address = addresses.get(this.random.nextInt(addresses.size()));
+      } while (setOfAddr.contains(address));
+      LOG.info("Choosing peer " + address);
+      setOfAddr.add(address);
+    }
+    this.currentPeers.addAll(setOfAddr);
+  }
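+
+  // Worked example: with the default ratio of 0.1, a slave cluster of 100
+  // region servers yields ceil(100 * 0.1) = 10 sinks, while a cluster of 5
+  // still yields ceil(5 * 0.1) = 1.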
+
+  @Override
+  public void enqueueLog(Path log) {
+    this.queue.put(log);
+    this.metrics.sizeOfLogQueue.set(queue.size());
+  }
+
+  @Override
+  public void run() {
+    connectToPeers();
+    // We were stopped while looping to connect to sinks, just abort
+    if (this.stopper.isStopped()) {
+      return;
+    }
+    // If this is recovered, the queue is already full and the first log
+    // normally has a position (unless the RS failed between 2 logs)
+    if (this.queueRecovered) {
+      try {
+        this.position = this.zkHelper.getHLogRepPosition(
+            this.peerClusterZnode, this.queue.peek().getName());
+      } catch (KeeperException e) {
+        this.terminate("Couldn't get the position of this recovered queue " +
+            peerClusterZnode, e);
+      }
+    }
+    int sleepMultiplier = 1;
+    // Loop until we close down
+    while (!stopper.isStopped() && this.running) {
+      // Sleep until replication is enabled again
+      if (!this.replicating.get() || !this.sourceEnabled.get()) {
+        if (sleepForRetries("Replication is disabled", sleepMultiplier)) {
+          sleepMultiplier++;
+        }
+        continue;
+      }
+      // Get a new path
+      if (!getNextPath()) {
+        if (sleepForRetries("No log to process", sleepMultiplier)) {
+          sleepMultiplier++;
+        }
+        continue;
+      }
+      // Open a reader on it
+      if (!openReader(sleepMultiplier)) {
+        // Reset the sleep multiplier, else it'd be reused for the next file
+        sleepMultiplier = 1;
+        continue;
+      }
+
+      // If we got a null reader but didn't continue, then sleep and continue
+      if (this.reader == null) {
+        if (sleepForRetries("Unable to open a reader", sleepMultiplier)) {
+          sleepMultiplier++;
+        }
+        continue;
+      }
+
+      boolean gotIOE = false;
+      currentNbEntries = 0;
+      try {
+        if(readAllEntriesToReplicateOrNextFile()) {
+          continue;
+        }
+      } catch (IOException ioe) {
+        LOG.warn(peerClusterZnode + " Got: ", ioe);
+        gotIOE = true;
+        if (ioe.getCause() instanceof EOFException) {
+
+          boolean considerDumping = false;
+          if (this.queueRecovered) {
+            try {
+              FileStatus stat = this.fs.getFileStatus(this.currentPath);
+              if (stat.getLen() == 0) {
+                LOG.warn(peerClusterZnode + " Got EOF and the file was empty");
+              }
+              considerDumping = true;
+            } catch (IOException e) {
+              LOG.warn(peerClusterZnode + " Got while getting file size: ", e);
+            }
+          } else if (currentNbEntries != 0) {
+            LOG.warn(peerClusterZnode + " Got EOF while reading, " +
+                "looks like this file is broken? " + currentPath);
+            considerDumping = true;
+            currentNbEntries = 0;
+          }
+
+          if (considerDumping &&
+              sleepMultiplier == this.maxRetriesMultiplier &&
+              processEndOfFile()) {
+            continue;
+          }
+        }
+      } finally {
+        try {
+          // if the current path is null, it means processEndOfFile was
+          // called, so there is no position to record for it
+          if (this.currentPath != null && !gotIOE) {
+            this.position = this.reader.getPosition();
+          }
+          if (this.reader != null) {
+            this.reader.close();
+          }
+        } catch (IOException e) {
+          gotIOE = true;
+          LOG.warn("Unable to finalize the tailing of a file", e);
+        }
+      }
+
+      // If we didn't get anything to replicate, or if we hit an IOE,
+      // wait a bit and retry.
+      // But if we need to stop, don't bother sleeping
+      if (!stopper.isStopped() && (gotIOE || currentNbEntries == 0)) {
+        this.manager.logPositionAndCleanOldLogs(this.currentPath,
+            this.peerClusterZnode, this.position, queueRecovered);
+        if (sleepForRetries("Nothing to replicate", sleepMultiplier)) {
+          sleepMultiplier++;
+        }
+        continue;
+      }
+      sleepMultiplier = 1;
+      shipEdits();
+
+    }
+    LOG.debug("Source exiting " + peerClusterId);
+  }
+
+  /**
+   * Read all the entries from the current log file and retain those
+   * that need to be replicated; otherwise, process the end of the current file.
+   * @return true if we got nothing and went to the next file, false if we got
+   * entries
+   * @throws IOException
+   */
+  protected boolean readAllEntriesToReplicateOrNextFile() throws IOException{
+    long seenEntries = 0;
+    if (this.position != 0) {
+      this.reader.seek(this.position);
+    }
+    HLog.Entry entry = this.reader.next(this.entriesArray[currentNbEntries]);
+    while (entry != null) {
+      WALEdit edit = entry.getEdit();
+      this.metrics.logEditsReadRate.inc(1);
+      seenEntries++;
+      // Remove all KVs that should not be replicated
+      removeNonReplicableEdits(edit);
+      HLogKey logKey = entry.getKey();
+      // Don't replicate catalog entries, WALEdits that contain nothing
+      // to replicate, or anything while replication is disabled
+      if (!(Bytes.equals(logKey.getTablename(), HConstants.ROOT_TABLE_NAME) ||
+          Bytes.equals(logKey.getTablename(), HConstants.META_TABLE_NAME)) &&
+          edit.size() != 0 && replicating.get()) {
+        logKey.setClusterId(this.clusterId);
+        currentNbOperations += countDistinctRowKeys(edit);
+        currentNbEntries++;
+      } else {
+        this.metrics.logEditsFilteredRate.inc(1);
+      }
+      // Stop if too many entries or too big
+      if ((this.reader.getPosition() - this.position)
+          >= this.replicationQueueSizeCapacity ||
+          currentNbEntries >= this.replicationQueueNbCapacity) {
+        break;
+      }
+      entry = this.reader.next(entriesArray[currentNbEntries]);
+    }
+    LOG.debug("currentNbOperations:" + currentNbOperations +
+        " and seenEntries:" + seenEntries +
+        " and size: " + (this.reader.getPosition() - this.position));
+    // If we didn't get anything and the queue has an object, it means we
+    // hit the end of the file for sure
+    return seenEntries == 0 && processEndOfFile();
+  }
+
+  private void connectToPeers() {
+    // Connect to peer cluster first, unless we have to stop
+    while (!this.stopper.isStopped() && this.currentPeers.size() == 0) {
+      try {
+        chooseSinks();
+        Thread.sleep(this.sleepForRetries);
+      } catch (InterruptedException e) {
+        LOG.error("Interrupted while trying to connect to sinks", e);
+      } catch (KeeperException e) {
+        LOG.error("Error talking to zookeeper, retrying", e);
+      }
+    }
+  }
+
+  /**
+   * Poll for the next path
+   * @return true if a path was obtained, false if not
+   */
+  protected boolean getNextPath() {
+    try {
+      if (this.currentPath == null) {
+        this.currentPath = queue.poll(this.sleepForRetries, TimeUnit.MILLISECONDS);
+        this.metrics.sizeOfLogQueue.set(queue.size());
+      }
+    } catch (InterruptedException e) {
+      LOG.warn("Interrupted while reading edits", e);
+    }
+    return this.currentPath != null;
+  }
+
+  /**
+   * Open a reader on the current path
+   *
+   * @param sleepMultiplier by how many times the default sleeping time is augmented
+   * @return true if we should continue with that file, false if we are over with it
+   */
+  protected boolean openReader(int sleepMultiplier) {
+    try {
+      LOG.debug("Opening log for replication " + this.currentPath.getName() +
+          " at " + this.position);
+      try {
+       this.reader = null;
+       this.reader = HLog.getReader(this.fs, this.currentPath, this.conf);
+      } catch (FileNotFoundException fnfe) {
+        if (this.queueRecovered) {
+          // We didn't find the log in the archive directory, look if it still
+          // exists in the dead RS folder (there could be a chain of failures
+          // to look at)
+          LOG.info("NB dead servers : " + deadRegionServers.length);
+          for (int i = this.deadRegionServers.length - 1; i >= 0; i--) {
+
+            Path deadRsDirectory =
+                new Path(manager.getLogDir().getParent(), this.deadRegionServers[i]);
+            Path possibleLogLocation =
+                new Path(deadRsDirectory, currentPath.getName());
+            LOG.info("Possible location " + possibleLogLocation.toUri().toString());
+            if (this.manager.getFs().exists(possibleLogLocation)) {
+              // We found the right new location
+              LOG.info("Log " + this.currentPath + " still exists at " +
+                  possibleLogLocation);
+              // Returning true here with a null reader will make the caller sleep
+              return true;
+            }
+          }
+          // TODO What happens if the log was missing from every single location?
+          // Although we need to check a couple of times as the log could have
+          // been moved by the master between the checks
+          // It can also happen if a recovered queue wasn't properly cleaned,
+          // such that the znode pointing to a log exists but the log was
+          // deleted a long time ago.
+          // For the moment, we'll throw the IO and processEndOfFile
+          throw new IOException("File from recovered queue is " +
+              "nowhere to be found", fnfe);
+        } else {
+          // If the log was archived, continue reading from there
+          Path archivedLogLocation =
+              new Path(manager.getOldLogDir(), currentPath.getName());
+          if (this.manager.getFs().exists(archivedLogLocation)) {
+            currentPath = archivedLogLocation;
+            LOG.info("Log " + this.currentPath + " was moved to " +
+                archivedLogLocation);
+            // Open the log at the new location
+            this.openReader(sleepMultiplier);
+
+          }
+          // TODO What happens if the log is missing in both places?
+        }
+      }
+    } catch (IOException ioe) {
+      LOG.warn(peerClusterZnode + " Got: ", ioe);
+      // TODO Need a better way to determine if a file is really gone,
+      // TODO without scanning the whole logs directory
+      if (sleepMultiplier == this.maxRetriesMultiplier) {
+        LOG.warn("Waited too long for this file, considering dumping");
+        return !processEndOfFile();
+      }
+    }
+    return true;
+  }
+
+  /**
+   * Do the sleeping logic
+   * @param msg Why we sleep
+   * @param sleepMultiplier by how many times the default sleeping time is augmented
+   * @return True if <code>sleepMultiplier</code> is &lt; <code>maxRetriesMultiplier</code>
+   */
+  protected boolean sleepForRetries(String msg, int sleepMultiplier) {
+    try {
+      LOG.debug(msg + ", sleeping " + sleepForRetries + " times " + sleepMultiplier);
+      Thread.sleep(this.sleepForRetries * sleepMultiplier);
+    } catch (InterruptedException e) {
+      LOG.debug("Interrupted while sleeping between retries");
+    }
+    return sleepMultiplier < maxRetriesMultiplier;
+  }
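+
+  // Worked example: with the default base sleep of 1000 ms and
+  // maxRetriesMultiplier of 10, successive failures sleep 1s, 2s, 3s, ...
+  // capping out at 10s between attempts.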
+
+  /**
+   * We only want KVs that are scoped other than local
+   * @param edit The KV to check for replication
+   */
+  protected void removeNonReplicableEdits(WALEdit edit) {
+    NavigableMap<byte[], Integer> scopes = edit.getScopes();
+    List<KeyValue> kvs = edit.getKeyValues();
+    for (int i = 0; i < edit.size(); i++) {
+      KeyValue kv = kvs.get(i);
+      // The scope will be null or empty if
+      // there's nothing to replicate in that WALEdit
+      if (scopes == null || !scopes.containsKey(kv.getFamily())) {
+        kvs.remove(i);
+        i--;
+      }
+    }
+  }
+
+  /**
+   * Count the number of different row keys in the given edit because of
+   * mini-batching. We assume that there's at least one KV in the WALEdit.
+   * @param edit edit to count row keys from
+   * @return number of different row keys
+   */
+  private int countDistinctRowKeys(WALEdit edit) {
+    List<KeyValue> kvs = edit.getKeyValues();
+    int distinctRowKeys = 1;
+    KeyValue lastKV = kvs.get(0);
+    for (int i = 0; i < edit.size(); i++) {
+      // count only row boundaries, advancing lastKV as we go
+      if (!kvs.get(i).matchingRow(lastKV)) {
+        distinctRowKeys++;
+      }
+      lastKV = kvs.get(i);
+    }
+    return distinctRowKeys;
+  }
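+
+  // Worked example: an edit whose KVs carry rows [r1, r1, r2, r2, r3]
+  // counts as 3 distinct rows, hence 3 operations for the shipped-ops metric.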
+
+  /**
+   * Do the shipping logic
+   */
+  protected void shipEdits() {
+    int sleepMultiplier = 1;
+    if (this.currentNbEntries == 0) {
+      LOG.warn("Was given 0 edits to ship");
+      return;
+    }
+    while (!this.stopper.isStopped()) {
+      try {
+        HRegionInterface rrs = getRS();
+        LOG.debug("Replicating " + currentNbEntries);
+        rrs.replicateLogEntries(Arrays.copyOf(this.entriesArray, currentNbEntries));
+        this.manager.logPositionAndCleanOldLogs(this.currentPath,
+            this.peerClusterZnode, this.position, queueRecovered);
+        this.totalReplicatedEdits += currentNbEntries;
+        this.metrics.shippedBatchesRate.inc(1);
+        this.metrics.shippedOpsRate.inc(
+            this.currentNbOperations);
+        this.metrics.setAgeOfLastShippedOp(
+            this.entriesArray[currentNbEntries-1].getKey().getWriteTime());
+        LOG.debug("Replicated in total: " + this.totalReplicatedEdits);
+        break;
+
+      } catch (IOException ioe) {
+        LOG.warn("Unable to replicate because ", ioe);
+        try {
+          boolean down;
+          do {
+            down = isSlaveDown();
+            if (down) {
+              LOG.debug("The region server we tried to ping didn't answer, " +
+                  "sleeping " + sleepForRetries + " times " + sleepMultiplier);
+              Thread.sleep(this.sleepForRetries * sleepMultiplier);
+              if (sleepMultiplier < maxRetriesMultiplier) {
+                sleepMultiplier++;
+              } else {
+                chooseSinks();
+              }
+            }
+          } while (!this.stopper.isStopped() && down);
+        } catch (InterruptedException e) {
+          LOG.debug("Interrupted while trying to contact the peer cluster");
+        } catch (KeeperException e) {
+          LOG.error("Error talking to zookeeper, retrying", e);
+        }
+
+      }
+    }
+  }
+
+  /**
+   * If the queue isn't empty, switch to the next one
+   * Else if this is a recovered queue, it means we're done!
+   * Else we'll just continue to try reading the log file
+   * @return true if we're done with the current file, false if we should
+   * continue trying to read from it
+   */
+  protected boolean processEndOfFile() {
+    if (this.queue.size() != 0) {
+      this.currentPath = null;
+      this.position = 0;
+      return true;
+    } else if (this.queueRecovered) {
+      this.manager.closeRecoveredQueue(this);
+      LOG.info("Finished recovering the queue");
+      this.running = false;
+      return true;
+    }
+    return false;
+  }
+
+  public void startup() {
+    String n = Thread.currentThread().getName();
+    Thread.UncaughtExceptionHandler handler =
+        new Thread.UncaughtExceptionHandler() {
+          public void uncaughtException(final Thread t, final Throwable e) {
+            terminate("Uncaught exception during runtime", new Exception(e));
+          }
+        };
+    Threads.setDaemonThreadRunning(
+        this, n + ".replicationSource," + peerClusterZnode, handler);
+  }
+
+  public void terminate(String reason) {
+    terminate(reason, null);
+  }
+
+  public void terminate(String reason, Exception cause) {
+    if (cause == null) {
+      LOG.info("Closing source "
+          + this.peerClusterZnode + " because: " + reason);
+
+    } else {
+      LOG.error("Closing source " + this.peerClusterZnode
+          + " because an error occurred: " + reason, cause);
+    }
+    this.running = false;
+    Threads.shutdown(this, this.sleepForRetries);
+  }
+
+  /**
+   * Get a new region server at random from this peer
+   * @return a connection to a randomly chosen region server of this peer
+   * @throws IOException
+   */
+  private HRegionInterface getRS() throws IOException {
+    if (this.currentPeers.size() == 0) {
+      throw new IOException(this.peerClusterZnode + " has 0 region servers");
+    }
+    HServerAddress address =
+        currentPeers.get(random.nextInt(this.currentPeers.size()));
+    return this.conn.getHRegionConnection(address);
+  }
+
+  /**
+   * Check if the slave is down by trying to establish a connection
+   * @return true if down, false if up
+   * @throws InterruptedException
+   */
+  public boolean isSlaveDown() throws InterruptedException {
+    final CountDownLatch latch = new CountDownLatch(1);
+    Thread pingThread = new Thread() {
+      public void run() {
+        try {
+          HRegionInterface rrs = getRS();
+          // Dummy call which should fail
+          rrs.getHServerInfo();
+          latch.countDown();
+        } catch (IOException ex) {
+          LOG.info("Slave cluster looks down: " + ex.getMessage());
+        }
+      }
+    };
+    pingThread.start();
+    // await returns true if countDown happened
+    boolean down = ! latch.await(this.sleepForRetries, TimeUnit.MILLISECONDS);
+    pingThread.interrupt();
+    return down;
+  }
+
+  public String getPeerClusterZnode() {
+    return this.peerClusterZnode;
+  }
+
+  public String getPeerClusterId() {
+    return this.peerClusterId;
+  }
+
+  public Path getCurrentPath() {
+    return this.currentPath;
+  }
+
+  public void setSourceEnabled(boolean status) {
+    this.sourceEnabled.set(status);
+  }
+
+  /**
+   * Comparator used to compare logs together based on their start time
+   */
+  public static class LogsComparator implements Comparator<Path> {
+
+    @Override
+    public int compare(Path o1, Path o2) {
+      return Long.valueOf(getTS(o1)).compareTo(getTS(o2));
+    }
+
+    @Override
+    public boolean equals(Object o) {
+      return true;
+    }
+
+    /**
+     * Split a path to get the start time
+     * For example: 10.20.20.171%3A60020.1277499063250
+     * @param p path to split
+     * @return start time
+     */
+    private long getTS(Path p) {
+      String[] parts = p.getName().split("\\.");
+      return Long.parseLong(parts[parts.length-1]);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java
new file mode 100644
index 0000000..62a4bee
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceInterface.java
@@ -0,0 +1,101 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Stoppable;
+
+/**
+ * Interface that defines a replication source
+ */
+public interface ReplicationSourceInterface {
+
+  /**
+   * Initializer for the source
+   * @param conf the configuration to use
+   * @param fs the file system to use
+   * @param manager the manager to use
+   * @param stopper the stopper object for this region server
+   * @param replicating the status of the replication on this cluster
+   * @param peerClusterId the id of the peer cluster
+   * @throws IOException
+   */
+  public void init(final Configuration conf,
+                   final FileSystem fs,
+                   final ReplicationSourceManager manager,
+                   final Stoppable stopper,
+                   final AtomicBoolean replicating,
+                   final String peerClusterId) throws IOException;
+
+  /**
+   * Add a log to the list of logs to replicate
+   * @param log path to the log to replicate
+   */
+  public void enqueueLog(Path log);
+
+  /**
+   * Get the current log that's replicated
+   * @return the current log
+   */
+  public Path getCurrentPath();
+
+  /**
+   * Start the replication
+   */
+  public void startup();
+
+  /**
+   * End the replication
+   * @param reason why it's terminating
+   */
+  public void terminate(String reason);
+
+  /**
+   * End the replication
+   * @param reason why it's terminating
+   * @param cause the error that's causing it
+   */
+  public void terminate(String reason, Exception cause);
+
+  /**
+   * Get the znode of the peer cluster queue this source handles
+   *
+   * @return znode name of the peer cluster queue
+   */
+  public String getPeerClusterZnode();
+
+  /**
+   * Get the id that the source is replicating to.
+   *
+   * @return peer cluster id
+   */
+  public String getPeerClusterId();
+
+  /**
+   * Set if this source is enabled or disabled
+   * @param status the new status
+   */
+  public void setSourceEnabled(boolean status);
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
new file mode 100644
index 0000000..3adb290
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
@@ -0,0 +1,535 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedMap;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * This class is responsible for managing all the replication
+ * sources. There are two kinds of sources:
+ * <li> Normal sources are persistent, one per peer cluster</li>
+ * <li> Old sources are recovered from a failed region server; their
+ * only goal is to finish replicating the HLog queue that server left in ZK</li>
+ *
+ * When a region server dies, this class uses a watcher to get notified and
+ * tries to grab a lock in order to transfer all of that server's queues into
+ * local old sources.
+ */
+public class ReplicationSourceManager {
+  private static final Log LOG =
+      LogFactory.getLog(ReplicationSourceManager.class);
+  // List of all the sources that read this RS's logs
+  private final List<ReplicationSourceInterface> sources;
+  // List of all the sources we got from dead RSs
+  private final List<ReplicationSourceInterface> oldsources;
+  // Indicates if we are currently replicating
+  private final AtomicBoolean replicating;
+  // Helper for zookeeper
+  private final ReplicationZookeeper zkHelper;
+  // All about stopping
+  private final Stoppable stopper;
+  // All logs we are currently tracking
+  private final SortedSet<String> hlogs;
+  private final Configuration conf;
+  private final FileSystem fs;
+  // The path to the latest log we saw, used to seed newly added sources
+  private Path latestPath;
+  // List of all the other region servers in this cluster
+  private final List<String> otherRegionServers;
+  // Path to the hlogs directories
+  private final Path logDir;
+  // Path to the hlog archive
+  private final Path oldLogDir;
+
+  /**
+   * Creates a replication manager and sets the watch on all the other
+   * registered region servers
+   * @param zkHelper the zk helper for replication
+   * @param conf the configuration to use
+   * @param stopper the stopper object for this region server
+   * @param fs the file system to use
+   * @param replicating the status of the replication on this cluster
+   * @param logDir the directory that contains all hlog directories of live RSs
+   * @param oldLogDir the directory where old logs are archived
+   */
+  public ReplicationSourceManager(final ReplicationZookeeper zkHelper,
+                                  final Configuration conf,
+                                  final Stoppable stopper,
+                                  final FileSystem fs,
+                                  final AtomicBoolean replicating,
+                                  final Path logDir,
+                                  final Path oldLogDir) {
+    this.sources = new ArrayList<ReplicationSourceInterface>();
+    this.replicating = replicating;
+    this.zkHelper = zkHelper;
+    this.stopper = stopper;
+    this.hlogs = new TreeSet<String>();
+    this.oldsources = new ArrayList<ReplicationSourceInterface>();
+    this.conf = conf;
+    this.fs = fs;
+    this.logDir = logDir;
+    this.oldLogDir = oldLogDir;
+    this.zkHelper.registerRegionServerListener(
+        new OtherRegionServerWatcher(this.zkHelper.getZookeeperWatcher()));
+    List<String> otherRSs =
+        this.zkHelper.getRegisteredRegionServers();
+    this.zkHelper.registerRegionServerListener(
+        new PeersWatcher(this.zkHelper.getZookeeperWatcher()));
+    this.zkHelper.listPeersIdsAndWatch();
+    this.otherRegionServers = otherRSs == null ? new ArrayList<String>() : otherRSs;
+  }
+
+  /**
+   * Record, in zookeeper, the current position of this region server in the
+   * given hlog for the given peer. It also removes from the peer's queue any
+   * logs that are older than the one being reported.
+   * @param log Path to the log currently being replicated
+   * @param id id of the peer cluster
+   * @param position current position in the log
+   * @param queueRecovered indicates if this queue comes from another region server
+   */
+  public void logPositionAndCleanOldLogs(Path log, String id, long position, boolean queueRecovered) {
+    String key = log.getName();
+    LOG.info("Going to report log #" + key + " for position " + position + " in " + log);
+    this.zkHelper.writeReplicationStatus(key, id, position);
+    synchronized (this.hlogs) {
+      if (!queueRecovered && !this.hlogs.first().equals(key)) {
+        SortedSet<String> hlogSet = this.hlogs.headSet(key);
+        LOG.info("Removing " + hlogSet.size() +
+            " logs in the list: " + hlogSet);
+        for (String hlog : hlogSet) {
+          this.zkHelper.removeLogFromList(hlog, id);
+        }
+        hlogSet.clear();
+      }
+    }
+  }
+
+  /**
+   * Adds a normal source per registered peer cluster and tries to process all
+   * old region server hlog queues
+   */
+  public void init() throws IOException {
+    for (String id : this.zkHelper.getPeerClusters().keySet()) {
+      addSource(id);
+    }
+    List<String> currentReplicators = this.zkHelper.getListOfReplicators();
+    if (currentReplicators == null || currentReplicators.size() == 0) {
+      return;
+    }
+    synchronized (otherRegionServers) {
+      LOG.info("Current list of replicators: " + currentReplicators
+          + " other RSs: " + otherRegionServers);
+    }
+    // Look if there's anything to process after a restart
+    for (String rs : currentReplicators) {
+      synchronized (otherRegionServers) {
+        if (!this.otherRegionServers.contains(rs)) {
+          transferQueues(rs);
+        }
+      }
+    }
+  }
+
+  /**
+   * Add a new normal source to this region server
+   * @param id the id of the peer cluster
+   * @return the source that was created
+   * @throws IOException
+   */
+  public ReplicationSourceInterface addSource(String id) throws IOException {
+    ReplicationSourceInterface src =
+        getReplicationSource(this.conf, this.fs, this, stopper, replicating, id);
+    // TODO set it to what's in ZK
+    src.setSourceEnabled(true);
+    synchronized (this.hlogs) {
+      this.sources.add(src);
+      if (this.hlogs.size() > 0) {
+        // Add the latest hlog to that source's queue
+        this.zkHelper.addLogToList(this.hlogs.last(),
+            this.sources.get(0).getPeerClusterZnode());
+        src.enqueueLog(this.latestPath);
+      }
+    }
+    src.startup();
+    return src;
+  }
+
+  /**
+   * Terminate the replication on this region server
+   */
+  public void join() {
+    if (this.sources.size() == 0) {
+      this.zkHelper.deleteOwnRSZNode();
+    }
+    for (ReplicationSourceInterface source : this.sources) {
+      source.terminate("Region server is closing");
+    }
+  }
+
+  /**
+   * Get a copy of the hlogs this manager is currently tracking for this region server
+   * @return a sorted set of hlog names
+   */
+  protected SortedSet<String> getHLogs() {
+    return new TreeSet<String>(this.hlogs);
+  }
+
+  /**
+   * Get a list of all the normal sources of this rs
+   * @return list of all normal sources
+   */
+  public List<ReplicationSourceInterface> getSources() {
+    return this.sources;
+  }
+
+  void logRolled(Path newLog) {
+    if (!this.replicating.get()) {
+      LOG.warn("Replication stopped, won't add new log");
+      return;
+    }
+    
+    if (this.sources.size() > 0) {
+      this.zkHelper.addLogToList(newLog.getName(),
+          this.sources.get(0).getPeerClusterZnode());
+    }
+    synchronized (this.hlogs) {
+      this.hlogs.add(newLog.getName());
+    }
+    this.latestPath = newLog;
+    // This only updates the sources we own, not the recovered ones
+    for (ReplicationSourceInterface source : this.sources) {
+      source.enqueueLog(newLog);
+    }
+  }
+
+  /**
+   * Get the ZK helper of this manager
+   * @return the helper
+   */
+  public ReplicationZookeeper getRepZkWrapper() {
+    return zkHelper;
+  }
+
+  /**
+   * Factory method to create a replication source
+   * @param conf the configuration to use
+   * @param fs the file system to use
+   * @param manager the manager to use
+   * @param stopper the stopper object for this region server
+   * @param replicating the status of the replication on this cluster
+   * @param peerClusterId the id of the peer cluster
+   * @return the created source
+   * @throws IOException
+   */
+  public ReplicationSourceInterface getReplicationSource(
+      final Configuration conf,
+      final FileSystem fs,
+      final ReplicationSourceManager manager,
+      final Stoppable stopper,
+      final AtomicBoolean replicating,
+      final String peerClusterId) throws IOException {
+    ReplicationSourceInterface src;
+    try {
+      @SuppressWarnings("rawtypes")
+      Class c = Class.forName(conf.get("replication.replicationsource.implementation",
+          ReplicationSource.class.getCanonicalName()));
+      src = (ReplicationSourceInterface) c.newInstance();
+    } catch (Exception e) {
+      LOG.warn("Passed replication source implemention throws errors, " +
+          "defaulting to ReplicationSource", e);
+      src = new ReplicationSource();
+    }
+    src.init(conf, fs, manager, stopper, replicating, peerClusterId);
+    return src;
+  }
+
+  /**
+   * Transfer all the queues of the specified region server to this one.
+   * First it tries to grab a lock; if that succeeds it copies the
+   * znodes and finally deletes the old ones.
+   *
+   * It creates one old source for each queue of the failed region server,
+   * whatever kind of source that queue belonged to.
+   * @param rsZnode znode of the region server whose queues are being claimed
+   */
+  public void transferQueues(String rsZnode) {
+    // We try to lock that rs' queue directory
+    if (this.stopper.isStopped()) {
+      LOG.info("Not transferring queue since we are shutting down");
+      return;
+    }
+    if (!this.zkHelper.lockOtherRS(rsZnode)) {
+      return;
+    }
+    LOG.info("Moving " + rsZnode + "'s hlogs to my queue");
+    SortedMap<String, SortedSet<String>> newQueues =
+        this.zkHelper.copyQueuesFromRS(rsZnode);
+    this.zkHelper.deleteRsQueues(rsZnode);
+    if (newQueues == null || newQueues.size() == 0) {
+      return;
+    }
+
+    for (Map.Entry<String, SortedSet<String>> entry : newQueues.entrySet()) {
+      String peerId = entry.getKey();
+      try {
+        ReplicationSourceInterface src = getReplicationSource(this.conf,
+            this.fs, this, this.stopper, this.replicating, peerId);
+        if (!zkHelper.getPeerClusters().containsKey(src.getPeerClusterId())) {
+          src.terminate("Recovered queue doesn't belong to any current peer");
+          break;
+        }
+        this.oldsources.add(src);
+        for (String hlog : entry.getValue()) {
+          src.enqueueLog(new Path(this.oldLogDir, hlog));
+        }
+        // TODO set it to what's in ZK
+        src.setSourceEnabled(true);
+        src.startup();
+      } catch (IOException e) {
+        // TODO manage it
+        LOG.error("Failed creating a source", e);
+      }
+    }
+  }
+
+  /**
+   * Clear the references to the specified old source
+   * @param src source to clear
+   */
+  public void closeRecoveredQueue(ReplicationSourceInterface src) {
+    LOG.info("Done with the recovered queue " + src.getPeerClusterZnode());
+    this.oldsources.remove(src);
+    this.zkHelper.deleteSource(src.getPeerClusterZnode(), false);
+  }
+
+  /**
+   * This method first deletes all the recovered sources for the specified
+   * id, then deletes the normal source (deleting all related data in ZK).
+   * @param id The id of the peer cluster
+   */
+  public void removePeer(String id) {
+    LOG.info("Closing the following queue " + id + ", currently have "
+        + sources.size() + " and another "
+        + oldsources.size() + " that were recovered");
+    ReplicationSourceInterface srcToRemove = null;
+    List<ReplicationSourceInterface> oldSourcesToDelete =
+        new ArrayList<ReplicationSourceInterface>();
+    // First close all the recovered sources for this peer
+    for (ReplicationSourceInterface src : oldsources) {
+      if (id.equals(src.getPeerClusterId())) {
+        oldSourcesToDelete.add(src);
+      }
+    }
+    for (ReplicationSourceInterface src : oldSourcesToDelete) {
+      closeRecoveredQueue((src));
+    }
+    LOG.info("Number of deleted recovered sources for " + id + ": "
+        + oldSourcesToDelete.size());
+    // Now look for the one on this cluster
+    for (ReplicationSourceInterface src : this.sources) {
+      if (id.equals(src.getPeerClusterId())) {
+        srcToRemove = src;
+        break;
+      }
+    }
+    if (srcToRemove == null) {
+      LOG.error("The queue we wanted to close is missing " + id);
+      return;
+    }
+    srcToRemove.terminate("Replication stream was removed by a user");
+    this.sources.remove(srcToRemove);
+    this.zkHelper.deleteSource(id, true);
+  }
+
+  /**
+   * Watcher used to be notified of other region servers' deaths
+   * in the local cluster. It initiates the process to transfer the queues
+   * if it is able to grab the lock.
+   */
+  public class OtherRegionServerWatcher extends ZooKeeperListener {
+
+    /**
+     * Construct a ZooKeeper event listener.
+     */
+    public OtherRegionServerWatcher(ZooKeeperWatcher watcher) {
+      super(watcher);
+    }
+
+    /**
+     * Called when a new node has been created.
+     * @param path full path of the new node
+     */
+    public void nodeCreated(String path) {
+      refreshRegionServersList(path);
+    }
+
+    /**
+     * Called when a node has been deleted
+     * @param path full path of the deleted node
+     */
+    public void nodeDeleted(String path) {
+      if (stopper.isStopped()) {
+        return;
+      }
+      boolean cont = refreshRegionServersList(path);
+      if (!cont) {
+        return;
+      }
+      LOG.info(path + " znode expired, trying to lock it");
+      transferQueues(zkHelper.getZNodeName(path));
+    }
+
+    /**
+     * Called when an existing node has a child node added or removed.
+     * @param path full path of the node whose children have changed
+     */
+    public void nodeChildrenChanged(String path) {
+      if (stopper.isStopped()) {
+        return;
+      }
+      refreshRegionServersList(path);
+    }
+
+    private boolean refreshRegionServersList(String path) {
+      if (!path.startsWith(zkHelper.getZookeeperWatcher().rsZNode)) {
+        return false;
+      }
+      List<String> newRsList = zkHelper.getRegisteredRegionServers();
+      if (newRsList == null) {
+        return false;
+      } else {
+        synchronized (otherRegionServers) {
+          otherRegionServers.clear();
+          otherRegionServers.addAll(newRsList);
+        }
+      }
+      return true;
+    }
+  }
+
+  /**
+   * Watcher used to follow the creation and deletion of peer clusters.
+   */
+  public class PeersWatcher extends ZooKeeperListener {
+
+    /**
+     * Construct a ZooKeeper event listener.
+     */
+    public PeersWatcher(ZooKeeperWatcher watcher) {
+      super(watcher);
+    }
+
+    /**
+     * Called when a node has been deleted
+     * @param path full path of the deleted node
+     */
+    public void nodeDeleted(String path) {
+      List<String> peers = refreshPeersList(path);
+      if (peers == null) {
+        return;
+      }
+      String id = zkHelper.getZNodeName(path);
+      removePeer(id);
+    }
+
+    /**
+     * Called when an existing node has a child node added or removed.
+     * @param path full path of the node whose children have changed
+     */
+    public void nodeChildrenChanged(String path) {
+      List<String> peers = refreshPeersList(path);
+      if (peers == null) {
+        return;
+      }
+      for (String id : peers) {
+        try {
+          boolean added = zkHelper.connectToPeer(id);
+          if (added) {
+            addSource(id);
+          }
+        } catch (IOException e) {
+          // TODO manage better than that ?
+          LOG.error("Error while adding a new peer", e);
+        } catch (KeeperException e) {
+          LOG.error("Error while adding a new peer", e);
+        }
+      }
+    }
+
+    /**
+     * Verify if this event is meant for us, and if so then get the latest
+     * peers' list from ZK. Also reset the watches.
+     * @param path path to check against
+     * @return A list of peers' identifiers if the event concerns this watcher,
+     * else null.
+     */
+    private List<String> refreshPeersList(String path) {
+      if (!path.startsWith(zkHelper.getPeersZNode())) {
+        return null;
+      }
+      return zkHelper.listPeersIdsAndWatch();
+    }
+  }
+
+  /**
+   * Get the directory where hlogs are archived
+   * @return the directory where hlogs are archived
+   */
+  public Path getOldLogDir() {
+    return this.oldLogDir;
+  }
+
+  /**
+   * Get the directory where hlogs are stored by their RSs
+   * @return the directory where hlogs are stored by their RSs
+   */
+  public Path getLogDir() {
+    return this.logDir;
+  }
+
+  /**
+   * Get the handle on the local file system
+   * @return Handle on the local file system
+   */
+  public FileSystem getFs() {
+    return this.fs;
+  }
+}
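
Note: the queue trimming in logPositionAndCleanOldLogs() leans on the fact
that TreeSet.headSet() returns a view backed by the original set, so clearing
the view also drops the older log names from the tracked queue. A standalone
sketch with made-up log names:

import java.util.SortedSet;
import java.util.TreeSet;

public class HeadSetSketch {
  public static void main(String[] args) {
    SortedSet<String> hlogs = new TreeSet<String>();
    hlogs.add("server%2C60020.1001");
    hlogs.add("server%2C60020.1002");
    hlogs.add("server%2C60020.1003");

    // Everything strictly before the log being reported can be forgotten.
    SortedSet<String> older = hlogs.headSet("server%2C60020.1003");
    System.out.println("Dropping " + older);  // [server%2C60020.1001, server%2C60020.1002]
    older.clear();                            // also removes them from hlogs
    System.out.println("Left with " + hlogs); // [server%2C60020.1003]
  }
}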
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceMetrics.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceMetrics.java
new file mode 100644
index 0000000..c90eb22
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceMetrics.java
@@ -0,0 +1,108 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+import java.io.UnsupportedEncodingException;
+import java.net.URLEncoder;
+
+import org.apache.hadoop.hbase.metrics.MetricsRate;
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsIntValue;
+import org.apache.hadoop.metrics.util.MetricsLongValue;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+/**
+ * This class is for maintaining the various replication statistics
+ * for a source and publishing them through the metrics interfaces.
+ */
+public class ReplicationSourceMetrics implements Updater {
+  private final MetricsRecord metricsRecord;
+  private MetricsRegistry registry = new MetricsRegistry();
+
+  /** Rate of shipped operations by the source */
+  public final MetricsRate shippedOpsRate =
+      new MetricsRate("shippedOpsRate", registry);
+
+  /** Rate of shipped batches by the source */
+  public final MetricsRate shippedBatchesRate =
+      new MetricsRate("shippedBatchesRate", registry);
+
+  /** Rate of log entries (can be multiple Puts) read from the logs */
+  public final MetricsRate logEditsReadRate =
+      new MetricsRate("logEditsReadRate", registry);
+
+  /** Rate of log entries filtered by the source */
+  public final MetricsRate logEditsFilteredRate =
+      new MetricsRate("logEditsFilteredRate", registry);
+
+  /** Age of the last operation that was shipped by the source */
+  private final MetricsLongValue ageOfLastShippedOp =
+      new MetricsLongValue("ageOfLastShippedOp", registry);
+
+  /**
+   * Current size of the queue of logs to replicate,
+   * excluding the one being processed at the moment
+   */
+  public final MetricsIntValue sizeOfLogQueue =
+      new MetricsIntValue("sizeOfLogQueue", registry);
+
+  /**
+   * Constructor used to register the metrics
+   * @param id Name of the source this class is monitoring
+   */
+  public ReplicationSourceMetrics(String id) {
+    MetricsContext context = MetricsUtil.getContext("hbase");
+    String name = Thread.currentThread().getName();
+    metricsRecord = MetricsUtil.createRecord(context, "replication");
+    metricsRecord.setTag("RegionServer", name);
+    context.registerUpdater(this);
+    try {
+      id = URLEncoder.encode(id, "UTF8");
+    } catch (UnsupportedEncodingException e) {
+      id = "CAN'T ENCODE UTF8";
+    }
+    // export for JMX
+    new ReplicationStatistics(this.registry, "ReplicationSource for " + id);
+  }
+
+  /**
+   * Set the age of the last edit that was shipped
+   * @param timestamp write time of the edit
+   */
+  public void setAgeOfLastShippedOp(long timestamp) {
+    ageOfLastShippedOp.set(System.currentTimeMillis() - timestamp);
+  }
+
+  @Override
+  public void doUpdates(MetricsContext metricsContext) {
+    synchronized (this) {
+      this.shippedOpsRate.pushMetric(this.metricsRecord);
+      this.shippedBatchesRate.pushMetric(this.metricsRecord);
+      this.logEditsReadRate.pushMetric(this.metricsRecord);
+      this.logEditsFilteredRate.pushMetric(this.metricsRecord);
+      this.ageOfLastShippedOp.pushMetric(this.metricsRecord);
+      this.sizeOfLogQueue.pushMetric(this.metricsRecord);
+    }
+    this.metricsRecord.update();
+  }
+}
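
Note: a source is expected to drive these counters itself. A rough sketch of
how a shipping loop might feed them; the peer id and the numbers are
placeholders, a metrics context must be configured for anything to be
published, and MetricsRate.inc(int) is assumed to be available as in the rest
of the replication code:

package org.apache.hadoop.hbase.replication.regionserver;

public class MetricsSketch {
  public static void main(String[] args) {
    // "1" is a placeholder peer id.
    ReplicationSourceMetrics metrics = new ReplicationSourceMetrics("1");

    long writeTime = System.currentTimeMillis() - 5000;  // pretend the edit is 5s old

    // After shipping a batch of 42 edits whose newest edit was written at writeTime:
    metrics.shippedBatchesRate.inc(1);
    metrics.shippedOpsRate.inc(42);
    metrics.setAgeOfLastShippedOp(writeTime);

    // Whenever the backlog of logs to replicate changes:
    metrics.sizeOfLogQueue.set(3);
  }
}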
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationStatistics.java b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationStatistics.java
new file mode 100644
index 0000000..54ca3df
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationStatistics.java
@@ -0,0 +1,45 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import org.apache.hadoop.hbase.metrics.MetricsMBeanBase;
+import org.apache.hadoop.metrics.util.MBeanUtil;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+import javax.management.ObjectName;
+
+/**
+ * Exports metrics recorded by {@link ReplicationSourceMetrics} as an MBean
+ * for JMX monitoring.
+ */
+public class ReplicationStatistics extends MetricsMBeanBase {
+
+  private final ObjectName mbeanName;
+
+  /**
+   * Constructor to register the MBean
+   * @param registry which registry to use
+   * @param name name under which to register this bean
+   */
+  public ReplicationStatistics(MetricsRegistry registry, String name) {
+    super(registry, name);
+    mbeanName = MBeanUtil.registerMBean("Replication", name, this);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/Constants.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
new file mode 100644
index 0000000..55ca1c6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/Constants.java
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+/**
+ * Common constants for org.apache.hadoop.hbase.rest
+ */
+public interface Constants {
+  public static final String VERSION_STRING = "0.0.2";
+
+  public static final int DEFAULT_MAX_AGE = 60 * 60 * 4;  // 4 hours
+
+  public static final int DEFAULT_LISTEN_PORT = 8080;
+
+  public static final String MIMETYPE_TEXT = "text/plain";
+  public static final String MIMETYPE_HTML = "text/html";
+  public static final String MIMETYPE_XML = "text/xml";
+  public static final String MIMETYPE_BINARY = "application/octet-stream";
+  public static final String MIMETYPE_PROTOBUF = "application/x-protobuf";
+  public static final String MIMETYPE_JSON = "application/json";
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ExistsResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ExistsResource.java
new file mode 100644
index 0000000..435c82b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ExistsResource.java
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+public class ExistsResource extends ResourceBase {
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  TableResource tableResource;
+
+  /**
+   * Constructor
+   * @param tableResource
+   * @throws IOException
+   */
+  public ExistsResource(TableResource tableResource) throws IOException {
+    super();
+    this.tableResource = tableResource;
+  }
+
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF,
+    MIMETYPE_BINARY})
+  public Response get(final @Context UriInfo uriInfo) {
+    try {
+      if (!tableResource.exists()) {
+        throw new WebApplicationException(Response.Status.NOT_FOUND);
+      }
+    } catch (IOException e) {
+      throw new WebApplicationException(Response.Status.SERVICE_UNAVAILABLE);
+    }
+    ResponseBuilder response = Response.ok();
+    response.cacheControl(cacheControl);
+    return response.build();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/Main.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/Main.java
new file mode 100644
index 0000000..722b061
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/Main.java
@@ -0,0 +1,141 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.cli.ParseException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.rest.filter.GzipFilter;
+
+import java.util.List;
+import java.util.ArrayList;
+
+import org.mortbay.jetty.Server;
+import org.mortbay.jetty.servlet.Context;
+import org.mortbay.jetty.servlet.ServletHolder;
+
+import com.sun.jersey.spi.container.servlet.ServletContainer;
+
+/**
+ * Main class for launching REST gateway as a servlet hosted by Jetty.
+ * <p>
+ * The following options are supported:
+ * <ul>
+ * <li>-p --port : service port</li>
+ * <li>-ro --readonly : respond only to GET requests (read-only mode)</li>
+ * </ul>
+ */
+public class Main implements Constants {
+
+  private static void printUsageAndExit(Options options, int exitCode) {
+    HelpFormatter formatter = new HelpFormatter();
+    formatter.printHelp("bin/hbase rest start", "", options,
+      "\nTo run the REST server as a daemon, execute " +
+      "bin/hbase-daemon.sh start|stop rest [-p <port>] [-ro]\n", true);
+    System.exit(exitCode);
+  }
+
+  /**
+   * The main method for the HBase rest server.
+   * @param args command-line arguments
+   * @throws Exception exception
+   */
+  public static void main(String[] args) throws Exception {
+    Log LOG = LogFactory.getLog("RESTServer");
+
+    Configuration conf = HBaseConfiguration.create();
+    RESTServlet servlet = RESTServlet.getInstance(conf);
+
+    Options options = new Options();
+    options.addOption("p", "port", true, "Port to bind to [default: 8080]");
+    options.addOption("ro", "readonly", false, "Respond only to GET HTTP " +
+      "method requests [default: false]");
+
+    CommandLine commandLine = null;
+    try {
+      commandLine = new PosixParser().parse(options, args);
+    } catch (ParseException e) {
+      LOG.error("Could not parse: ", e);
+      printUsageAndExit(options, -1);
+    }
+
+    // check for user-defined port setting, if so override the conf
+    if (commandLine != null && commandLine.hasOption("port")) {
+      String val = commandLine.getOptionValue("port");
+      servlet.getConfiguration()
+          .setInt("hbase.rest.port", Integer.valueOf(val));
+      LOG.debug("port set to " + val);
+    }
+
+    // check if server should only process GET requests, if so override the conf
+    if (commandLine != null && commandLine.hasOption("readonly")) {
+      servlet.getConfiguration().setBoolean("hbase.rest.readonly", true);
+      LOG.debug("readonly set to true");
+    }
+
+    @SuppressWarnings("unchecked")
+    List<String> remainingArgs = commandLine != null ?
+        commandLine.getArgList() : new ArrayList<String>();
+    if (remainingArgs.size() != 1) {
+      printUsageAndExit(options, 1);
+    }
+
+    String command = remainingArgs.get(0);
+    if ("start".equals(command)) {
+      // continue and start container
+    } else if ("stop".equals(command)) {
+      System.exit(1);
+    } else {
+      printUsageAndExit(options, 1);
+    }
+
+    // set up the Jersey servlet container for Jetty
+    ServletHolder sh = new ServletHolder(ServletContainer.class);
+    sh.setInitParameter(
+      "com.sun.jersey.config.property.resourceConfigClass",
+      ResourceConfig.class.getCanonicalName());
+    sh.setInitParameter("com.sun.jersey.config.property.packages",
+      "jetty");
+
+    // set up Jetty and run the embedded server
+
+    int port = servlet.getConfiguration().getInt("hbase.rest.port", 8080);
+
+    Server server = new Server(port);
+    server.setSendServerVersion(false);
+    server.setSendDateHeader(false);
+    server.setStopAtShutdown(true);
+    // set up context
+    Context context = new Context(server, "/", Context.SESSIONS);
+    context.addServlet(sh, "/*");
+    context.addFilter(GzipFilter.class, "/*", 0);
+
+    server.start();
+    server.join();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ProtobufMessageHandler.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ProtobufMessageHandler.java
new file mode 100644
index 0000000..405cace
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ProtobufMessageHandler.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+/**
+ * Common interface for models capable of supporting protobuf marshalling
+ * and unmarshalling. Hooks up to the ProtobufMessageBodyConsumer and
+ * ProtobufMessageBodyProducer adapters. 
+ */
+public interface ProtobufMessageHandler {
+  /**
+   * @return the protobuf representation of the model
+   */
+  public byte[] createProtobufOutput();
+
+  /**
+   * Initialize the model from a protobuf representation.
+   * @param message the raw bytes of the protobuf message
+   * @return reference to self for convenience
+   * @throws IOException
+   */
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+    throws IOException;
+}
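
Note: models implement both directions of the conversion. A trivial,
hypothetical handler that just wraps the raw message bytes looks like this:

package org.apache.hadoop.hbase.rest;

import java.io.IOException;

public class RawBytesModel implements ProtobufMessageHandler {
  private byte[] data = new byte[0];

  public byte[] createProtobufOutput() {
    return data;
  }

  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
      throws IOException {
    this.data = message;
    return this;  // return self so callers can chain
  }
}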
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/RESTServlet.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RESTServlet.java
new file mode 100644
index 0000000..1b83f47
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RESTServlet.java
@@ -0,0 +1,94 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.rest.metrics.RESTMetrics;
+
+/**
+ * Singleton class encapsulating global REST servlet state and functions.
+ */
+public class RESTServlet implements Constants {
+  private static RESTServlet INSTANCE;
+  private final Configuration conf;
+  private final HTablePool pool;
+  private final RESTMetrics metrics = new RESTMetrics();
+
+  /**
+   * @return the RESTServlet singleton instance
+   * @throws IOException
+   */
+  public synchronized static RESTServlet getInstance() throws IOException {
+    assert(INSTANCE != null);
+    return INSTANCE;
+  }
+
+  /**
+   * @param conf Existing configuration to use in rest servlet
+   * @return the RESTServlet singleton instance
+   * @throws IOException
+   */
+  public synchronized static RESTServlet getInstance(Configuration conf)
+  throws IOException {
+    if (INSTANCE == null) {
+      INSTANCE = new RESTServlet(conf);
+    }
+    return INSTANCE;
+  }
+
+  public synchronized static void stop() {
+    if (INSTANCE != null)  INSTANCE = null;
+  }
+
+  /**
+   * Constructor with existing configuration
+   * @param conf existing configuration
+   * @throws IOException
+   */
+  RESTServlet(Configuration conf) throws IOException {
+    this.conf = conf;
+    this.pool = new HTablePool(conf, 10);
+  }
+
+  HTablePool getTablePool() {
+    return pool;
+  }
+
+  Configuration getConfiguration() {
+    return conf;
+  }
+
+  RESTMetrics getMetrics() {
+    return metrics;
+  }
+
+  /**
+   * Helper method to determine if server should
+   * only respond to GET HTTP method requests.
+   * @return boolean for server read-only state
+   */
+  boolean isReadOnly() {
+    return getConfiguration().getBoolean("hbase.rest.readonly", false);
+  }
+}
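
Note: both the port and the read-only flag can also be set programmatically
on the Configuration before the singleton is created, mirroring what Main
does with its command line options. A short sketch; the port value is
arbitrary:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.rest.RESTServlet;

public class ConfigureRest {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.rest.port", 8085);          // instead of DEFAULT_LISTEN_PORT
    conf.setBoolean("hbase.rest.readonly", true);  // only GET requests will be served
    RESTServlet servlet = RESTServlet.getInstance(conf);
    System.out.println("read-only: " + conf.getBoolean("hbase.rest.readonly", false));
  }
}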
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java
new file mode 100644
index 0000000..bf85bc1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java
@@ -0,0 +1,111 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Map;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.rest.model.TableInfoModel;
+import org.apache.hadoop.hbase.rest.model.TableRegionModel;
+
+public class RegionsResource extends ResourceBase {
+  private static final Log LOG = LogFactory.getLog(RegionsResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  TableResource tableResource;
+
+  /**
+   * Constructor
+   * @param tableResource
+   * @throws IOException
+   */
+  public RegionsResource(TableResource tableResource) throws IOException {
+    super();
+    this.tableResource = tableResource;
+  }
+
+  private Map<HRegionInfo,HServerAddress> getTableRegions()
+      throws IOException {
+    HTablePool pool = servlet.getTablePool();
+    HTableInterface table = pool.getTable(tableResource.getName());
+    try {
+      return ((HTable)table).getRegionsInfo();
+    } finally {
+      pool.putTable(table);
+    }
+  }
+
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      String tableName = tableResource.getName();
+      TableInfoModel model = new TableInfoModel(tableName);
+      Map<HRegionInfo,HServerAddress> regions = getTableRegions();
+      for (Map.Entry<HRegionInfo,HServerAddress> e: regions.entrySet()) {
+        HRegionInfo hri = e.getKey();
+        HServerAddress addr = e.getValue();
+        InetSocketAddress sa = addr.getInetSocketAddress();
+        model.add(
+          new TableRegionModel(tableName, hri.getRegionId(),
+            hri.getStartKey(), hri.getEndKey(),
+            sa.getHostName() + ":" + Integer.valueOf(sa.getPort())));
+      }
+      ResponseBuilder response = Response.ok(model);
+      response.cacheControl(cacheControl);
+      return response.build();
+    } catch (TableNotFoundException e) {
+      throw new WebApplicationException(Response.Status.NOT_FOUND);
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+}
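
Note: the same region listing can be pulled directly with the client API the
resource uses. A sketch against a hypothetical table named "mytable":

import java.net.InetSocketAddress;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.client.HTable;

public class ListRegions {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");  // placeholder table name
    Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
    for (Map.Entry<HRegionInfo, HServerAddress> e : regions.entrySet()) {
      InetSocketAddress sa = e.getValue().getInetSocketAddress();
      System.out.println(e.getKey().getRegionNameAsString() + " -> "
          + sa.getHostName() + ":" + sa.getPort());
    }
  }
}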
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java
new file mode 100644
index 0000000..6167ccc
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResourceBase.java
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+public class ResourceBase implements Constants {
+
+  RESTServlet servlet;
+
+  public ResourceBase() throws IOException {
+    servlet = RESTServlet.getInstance();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResourceConfig.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResourceConfig.java
new file mode 100644
index 0000000..19c99e7
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResourceConfig.java
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import com.sun.jersey.api.core.PackagesResourceConfig;
+
+public class ResourceConfig extends PackagesResourceConfig {
+  public ResourceConfig() {
+    super("org.apache.hadoop.hbase.rest");
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResultGenerator.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResultGenerator.java
new file mode 100644
index 0000000..4e7edf9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ResultGenerator.java
@@ -0,0 +1,48 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.Iterator;
+ 
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+
+public abstract class ResultGenerator implements Iterator<KeyValue> {
+
+  public static ResultGenerator fromRowSpec(final String table, 
+      final RowSpec rowspec, final Filter filter) throws IOException {
+    if (rowspec.isSingleRow()) {
+      return new RowResultGenerator(table, rowspec, filter);
+    } else {
+      return new ScannerResultGenerator(table, rowspec, filter);
+    }
+  }
+
+  public static Filter buildFilter(final String filter) throws Exception {
+    return ScannerModel.buildFilter(filter);
+  }
+
+  public abstract void putBack(KeyValue kv);
+
+  public abstract void close();
+}
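
Note: RowResource (below) builds a RowSpec from the request path and hands it
to this factory. Used directly, the dispatch looks roughly like the sketch
below; the table name and the row-spec path value are illustrative, and it is
assumed the generators obtain their table through the RESTServlet singleton,
so that singleton is initialized first:

package org.apache.hadoop.hbase.rest;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;

public class DumpRow {
  public static void main(String[] args) throws Exception {
    // Assumption: generators fetch tables via the RESTServlet singleton.
    RESTServlet.getInstance(HBaseConfiguration.create());
    // "row1/cf:qual" mimics the row-spec path RowResource receives from the
    // URL (illustrative); a spec covering more than one row would make
    // fromRowSpec return a ScannerResultGenerator instead.
    RowSpec spec = new RowSpec("row1/cf:qual");
    ResultGenerator gen = ResultGenerator.fromRowSpec("mytable", spec, null);
    try {
      while (gen.hasNext()) {
        KeyValue kv = gen.next();
        System.out.println(kv);
      }
    } finally {
      gen.close();
    }
  }
}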
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java
new file mode 100644
index 0000000..4cf37a8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RootResource.java
@@ -0,0 +1,106 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.model.TableListModel;
+import org.apache.hadoop.hbase.rest.model.TableModel;
+
+@Path("/")
+public class RootResource extends ResourceBase {
+  private static final Log LOG = LogFactory.getLog(RootResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  /**
+   * Constructor
+   * @throws IOException
+   */
+  public RootResource() throws IOException {
+    super();
+  }
+
+  private final TableListModel getTableList() throws IOException {
+    TableListModel tableList = new TableListModel();
+    HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+    HTableDescriptor[] list = admin.listTables();
+    for (HTableDescriptor htd: list) {
+      tableList.add(new TableModel(htd.getNameAsString()));
+    }
+    return tableList;
+  }
+
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      ResponseBuilder response = Response.ok(getTableList());
+      response.cacheControl(cacheControl);
+      return response.build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e, 
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  @Path("status/cluster")
+  public StorageClusterStatusResource getClusterStatusResource()
+      throws IOException {
+    return new StorageClusterStatusResource();
+  }
+
+  @Path("version")
+  public VersionResource getVersionResource() throws IOException {
+    return new VersionResource();
+  }
+
+  @Path("{table}")
+  public TableResource getTableResource(
+      final @PathParam("table") String table) throws IOException {
+    return new TableResource(table);
+  }
+}
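
Note: the root resource is what a plain GET against the gateway hits. With a
gateway running on the default port, a client can list tables as sketched
below; the host and port are assumptions about the deployment:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListTables {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8080/");            // DEFAULT_LISTEN_PORT
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");  // MIMETYPE_JSON
    BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {
      System.out.println(line);                             // table list as JSON
    }
    in.close();
  }
}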
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
new file mode 100644
index 0000000..31e111b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowResource.java
@@ -0,0 +1,345 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.List;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DELETE;
+import javax.ws.rs.GET;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.HttpHeaders;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.rest.transform.Transform;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class RowResource extends ResourceBase {
+  private static final Log LOG = LogFactory.getLog(RowResource.class);
+
+  TableResource tableResource;
+  RowSpec rowspec;
+
+  /**
+   * Constructor
+   * @param tableResource
+   * @param rowspec
+   * @param versions
+   * @throws IOException
+   */
+  public RowResource(TableResource tableResource, String rowspec,
+      String versions) throws IOException {
+    super();
+    this.tableResource = tableResource;
+    this.rowspec = new RowSpec(rowspec);
+    if (versions != null) {
+      this.rowspec.setMaxVersions(Integer.valueOf(versions));
+    }
+  }
+
+  @GET
+  @Produces({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      ResultGenerator generator =
+        ResultGenerator.fromRowSpec(tableResource.getName(), rowspec, null);
+      if (!generator.hasNext()) {
+        throw new WebApplicationException(Response.Status.NOT_FOUND);
+      }
+      int count = 0;
+      CellSetModel model = new CellSetModel();
+      KeyValue value = generator.next();
+      byte[] rowKey = value.getRow();
+      RowModel rowModel = new RowModel(rowKey);
+      do {
+        if (!Bytes.equals(value.getRow(), rowKey)) {
+          model.addRow(rowModel);
+          rowKey = value.getRow();
+          rowModel = new RowModel(rowKey);
+        }
+        byte[] family = value.getFamily();
+        byte[] qualifier = value.getQualifier();
+        byte[] data = tableResource.transform(family, qualifier,
+          value.getValue(), Transform.Direction.OUT);
+        rowModel.addCell(new CellModel(family, qualifier,
+          value.getTimestamp(), data));
+        if (++count > rowspec.getMaxValues()) {
+          break;
+        }
+        value = generator.next();
+      } while (value != null);
+      model.addRow(rowModel);
+      return Response.ok(model).build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  @GET
+  @Produces(MIMETYPE_BINARY)
+  public Response getBinary(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath() + " as "+ MIMETYPE_BINARY);
+    }
+    servlet.getMetrics().incrementRequests(1);
+    // it doesn't make sense to use a non-specific coordinate here, as this
+    // method can only return a single cell
+    if (!rowspec.hasColumns() || rowspec.getColumns().length > 1) {
+      throw new WebApplicationException(Response.Status.BAD_REQUEST);
+    }
+    try {
+      ResultGenerator generator =
+        ResultGenerator.fromRowSpec(tableResource.getName(), rowspec, null);
+      if (!generator.hasNext()) {
+        throw new WebApplicationException(Response.Status.NOT_FOUND);
+      }
+      KeyValue value = generator.next();
+      byte[] family = value.getFamily();
+      byte[] qualifier = value.getQualifier();
+      byte[] data = tableResource.transform(family, qualifier,
+        value.getValue(), Transform.Direction.OUT);
+      ResponseBuilder response = Response.ok(data);
+      response.header("X-Timestamp", value.getTimestamp());
+      return response.build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  Response update(final CellSetModel model, final boolean replace) {
+    servlet.getMetrics().incrementRequests(1);
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    HTablePool pool = servlet.getTablePool();
+    HTableInterface table = null;
+    try {
+      List<RowModel> rows = model.getRows();
+      table = pool.getTable(tableResource.getName());
+      ((HTable)table).setAutoFlush(false);
+      for (RowModel row: rows) {
+        byte[] key = row.getKey();
+        Put put = new Put(key);
+        for (CellModel cell: row.getCells()) {
+          byte [][] parts = KeyValue.parseColumn(cell.getColumn());
+          if (parts.length == 2 && parts[1].length > 0) {
+            put.add(parts[0], parts[1], cell.getTimestamp(),
+              tableResource.transform(parts[0], parts[1], cell.getValue(),
+                Transform.Direction.IN));
+          } else {
+            put.add(parts[0], null, cell.getTimestamp(),
+              tableResource.transform(parts[0], null, cell.getValue(),
+                Transform.Direction.IN));
+          }
+        }
+        table.put(put);
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("PUT " + put.toString());
+        }
+      }
+      ((HTable)table).setAutoFlush(true);
+      table.flushCommits();
+      ResponseBuilder response = Response.ok();
+      return response.build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+                  Response.Status.SERVICE_UNAVAILABLE);
+    } finally {
+      if (table != null) {
+        pool.putTable(table);
+      }
+    }
+  }
+
+  // This currently supports only update of one row at a time.
+  Response updateBinary(final byte[] message, final HttpHeaders headers,
+      final boolean replace) {
+    servlet.getMetrics().incrementRequests(1);
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    HTablePool pool = servlet.getTablePool();
+    HTableInterface table = null;
+    try {
+      byte[] row = rowspec.getRow();
+      byte[][] columns = rowspec.getColumns();
+      byte[] column = null;
+      if (columns != null) {
+        column = columns[0];
+      }
+      long timestamp = HConstants.LATEST_TIMESTAMP;
+      List<String> vals = headers.getRequestHeader("X-Row");
+      if (vals != null && !vals.isEmpty()) {
+        row = Bytes.toBytes(vals.get(0));
+      }
+      vals = headers.getRequestHeader("X-Column");
+      if (vals != null && !vals.isEmpty()) {
+        column = Bytes.toBytes(vals.get(0));
+      }
+      vals = headers.getRequestHeader("X-Timestamp");
+      if (vals != null && !vals.isEmpty()) {
+        timestamp = Long.valueOf(vals.get(0));
+      }
+      if (column == null) {
+        throw new WebApplicationException(Response.Status.BAD_REQUEST);
+      }
+      Put put = new Put(row);
+      byte parts[][] = KeyValue.parseColumn(column);
+      if (parts.length == 2 && parts[1].length > 0) {
+        put.add(parts[0], parts[1], timestamp,
+          tableResource.transform(parts[0], parts[1], message,
+            Transform.Direction.IN));
+      } else {
+        put.add(parts[0], null, timestamp,
+          tableResource.transform(parts[0], null, message,
+            Transform.Direction.IN));
+      }
+      table = pool.getTable(tableResource.getName());
+      table.put(put);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("PUT " + put.toString());
+      }
+      return Response.ok().build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+                  Response.Status.SERVICE_UNAVAILABLE);
+    } finally {
+      if (table != null) {
+        pool.putTable(table);
+      }
+    }
+  }
+
+  @PUT
+  @Consumes({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response put(final CellSetModel model,
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("PUT " + uriInfo.getAbsolutePath());
+    }
+    return update(model, true);
+  }
+
+  @PUT
+  @Consumes(MIMETYPE_BINARY)
+  public Response putBinary(final byte[] message,
+      final @Context UriInfo uriInfo, final @Context HttpHeaders headers) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("PUT " + uriInfo.getAbsolutePath() + " as "+ MIMETYPE_BINARY);
+    }
+    return updateBinary(message, headers, true);
+  }
+
+  @POST
+  @Consumes({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response post(final CellSetModel model,
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("POST " + uriInfo.getAbsolutePath());
+    }
+    return update(model, false);
+  }
+
+  @POST
+  @Consumes(MIMETYPE_BINARY)
+  public Response postBinary(final byte[] message,
+      final @Context UriInfo uriInfo, final @Context HttpHeaders headers) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("POST " + uriInfo.getAbsolutePath() + " as "+MIMETYPE_BINARY);
+    }
+    return updateBinary(message, headers, false);
+  }
+
+  @DELETE
+  public Response delete(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("DELETE " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    Delete delete = null;
+    if (rowspec.hasTimestamp()) {
+      delete = new Delete(rowspec.getRow(), rowspec.getTimestamp(), null);
+    } else {
+      delete = new Delete(rowspec.getRow());
+    }
+
+    for (byte[] column: rowspec.getColumns()) {
+      byte[][] split = KeyValue.parseColumn(column);
+      if (rowspec.hasTimestamp()) {
+        if (split.length == 2 && split[1].length != 0) {
+          delete.deleteColumns(split[0], split[1], rowspec.getTimestamp());
+        } else {
+          delete.deleteFamily(split[0], rowspec.getTimestamp());
+        }
+      } else {
+        if (split.length == 2 && split[1].length != 0) {
+          delete.deleteColumns(split[0], split[1]);
+        } else {
+          delete.deleteFamily(split[0]);
+        }
+      }
+    }
+    HTablePool pool = servlet.getTablePool();
+    HTableInterface table = null;
+    try {
+      table = pool.getTable(tableResource.getName());
+      table.delete(delete);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("DELETE " + delete.toString());
+      }
+    } catch (IOException e) {
+      throw new WebApplicationException(e, 
+                  Response.Status.SERVICE_UNAVAILABLE);
+    } finally {
+      if (table != null) {
+        pool.putTable(table);
+      }
+    }
+    return Response.ok().build();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowResultGenerator.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowResultGenerator.java
new file mode 100644
index 0000000..72c6302
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowResultGenerator.java
@@ -0,0 +1,126 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.NoSuchElementException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException;
+
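+/**
+ * Generates cell results for a single-row (Get based) request. The backing
+ * Get is issued eagerly in the constructor and its cells are then exposed
+ * through the iterator-style <tt>hasNext</tt>/<tt>next</tt> methods, with
+ * <tt>putBack</tt> allowing one KeyValue to be pushed back for re-reading.
+ */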
+public class RowResultGenerator extends ResultGenerator {
+  private static final Log LOG = LogFactory.getLog(RowResultGenerator.class);
+
+  private Iterator<KeyValue> valuesI;
+  private KeyValue cache;
+
+  public RowResultGenerator(final String tableName, final RowSpec rowspec,
+      final Filter filter) throws IllegalArgumentException, IOException {
+    HTablePool pool = RESTServlet.getInstance().getTablePool(); 
+    HTableInterface table = pool.getTable(tableName);
+    try {
+      Get get = new Get(rowspec.getRow());
+      if (rowspec.hasColumns()) {
+        for (byte[] col: rowspec.getColumns()) {
+          byte[][] split = KeyValue.parseColumn(col);
+          if (split.length == 2 && split[1].length != 0) {
+            get.addColumn(split[0], split[1]);
+          } else {
+            get.addFamily(split[0]);
+          }
+        }
+      } else {
+        // rowspec does not explicitly specify columns, return them all
+        for (HColumnDescriptor family: 
+            table.getTableDescriptor().getFamilies()) {
+          get.addFamily(family.getName());
+        }
+      }
+      get.setTimeRange(rowspec.getStartTime(), rowspec.getEndTime());
+      get.setMaxVersions(rowspec.getMaxVersions());
+      if (filter != null) {
+        get.setFilter(filter);
+      }
+      Result result = table.get(get);
+      if (result != null && !result.isEmpty()) {
+        valuesI = result.list().iterator();
+      }
+    } catch (NoSuchColumnFamilyException e) {
+      // Warn here because Stargate will return a 404 in the case where
+      // multiple column families were specified but one did not exist --
+      // currently HBase fails the whole Get.
+      // Specifying multiple columns in a URI should be uncommon usage, but
+      // leaving a record in the log of what happened here helps avoid
+      // confusion.
+      LOG.warn(StringUtils.stringifyException(e));
+    } finally {
+      pool.putTable(table);
+    }
+  }
+
+  public void close() {
+  }
+
+  public boolean hasNext() {
+    if (cache != null) {
+      return true;
+    }
+    if (valuesI == null) {
+      return false;
+    }
+    return valuesI.hasNext();
+  }
+
+  public KeyValue next() {
+    if (cache != null) {
+      KeyValue kv = cache;
+      cache = null;
+      return kv;
+    }
+    if (valuesI == null) {
+      return null;
+    }
+    try {
+      return valuesI.next();
+    } catch (NoSuchElementException e) {
+      return null;
+    }
+  }
+
+  public void putBack(KeyValue kv) {
+    this.cache = kv;
+  }
+
+  public void remove() {
+    throw new UnsupportedOperationException("remove not supported");
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java
new file mode 100644
index 0000000..4b1610c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java
@@ -0,0 +1,401 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.UnsupportedEncodingException;
+import java.net.URLDecoder;
+import java.util.Collection;
+import java.util.TreeSet;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Parses a path-based row/column/timestamp specification into its component
+ * elements: a start row (with an optional end row for ranges or a trailing
+ * '*' for a prefix match), an optional comma-separated list of columns, an
+ * optional timestamp or start,end time range, and optional query parameters
+ * (<tt>m</tt> for maximum versions, <tt>n</tt> for maximum values).
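+ * <p>
+ * A hedged, illustrative example (the row, family, and qualifier names are
+ * hypothetical):
+ * <pre>
+ *   myrow/cf:qual/1262304000000,1262390400000
+ * </pre>
+ * selects versions of <tt>cf:qual</tt> in row <tt>myrow</tt> whose timestamps
+ * fall within the given range; query parameters such as <tt>?m=3</tt> may
+ * follow a trailing '/'.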
+ */
+public class RowSpec {
+  public static final long DEFAULT_START_TIMESTAMP = 0;
+  public static final long DEFAULT_END_TIMESTAMP = Long.MAX_VALUE;
+  
+  private byte[] row = HConstants.EMPTY_START_ROW;
+  private byte[] endRow = null;
+  private TreeSet<byte[]> columns =
+    new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+  private long startTime = DEFAULT_START_TIMESTAMP;
+  private long endTime = DEFAULT_END_TIMESTAMP;
+  private int maxVersions = HColumnDescriptor.DEFAULT_VERSIONS;
+  private int maxValues = Integer.MAX_VALUE;
+
+  public RowSpec(String path) throws IllegalArgumentException {
+    int i = 0;
+    while (path.charAt(i) == '/') {
+      i++;
+    }
+    i = parseRowKeys(path, i);
+    i = parseColumns(path, i);
+    i = parseTimestamp(path, i);
+    i = parseQueryParams(path, i);
+  }
+
+  private int parseRowKeys(final String path, int i)
+      throws IllegalArgumentException {
+    String startRow = null, endRow = null;
+    try {
+      StringBuilder sb = new StringBuilder();
+      char c;
+      while (i < path.length() && (c = path.charAt(i)) != '/') {
+        sb.append(c);
+        i++;
+      }
+      i++;
+      startRow = sb.toString();
+      int idx = startRow.indexOf(',');
+      if (idx != -1) {
+        // decode the end row first: once startRow is overwritten with its
+        // decoded prefix, the substring offsets no longer apply to it
+        endRow = URLDecoder.decode(startRow.substring(idx + 1),
+          HConstants.UTF8_ENCODING);
+        startRow = URLDecoder.decode(startRow.substring(0, idx),
+          HConstants.UTF8_ENCODING);
+      } else {
+        startRow = URLDecoder.decode(startRow, HConstants.UTF8_ENCODING);
+      }
+    } catch (IndexOutOfBoundsException e) {
+      throw new IllegalArgumentException(e);
+    } catch (UnsupportedEncodingException e) {
+      throw new RuntimeException(e);
+    }
+    // HBase does not support wildcards on row keys so we will emulate a
+    // suffix glob by synthesizing appropriate start and end row keys for
+    // table scanning
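+    // (e.g. a hypothetical start row of "abc*" yields a scan start row of
+    // "abc" and an end row of "abc" followed by a 0xff byte)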
+    if (startRow.charAt(startRow.length() - 1) == '*') {
+      if (endRow != null)
+        throw new IllegalArgumentException("invalid path: end row " +
+          "specified together with a wildcard start row");
+      this.row = Bytes.toBytes(startRow.substring(0, 
+        startRow.lastIndexOf("*")));
+      this.endRow = new byte[this.row.length + 1];
+      System.arraycopy(this.row, 0, this.endRow, 0, this.row.length);
+      this.endRow[this.row.length] = (byte)255;
+    } else {
+      this.row = Bytes.toBytes(startRow);
+      if (endRow != null) {
+        this.endRow = Bytes.toBytes(endRow);
+      }
+    }
+    return i;
+  }
+
+  private int parseColumns(final String path, int i)
+      throws IllegalArgumentException {
+    if (i >= path.length()) {
+      return i;
+    }
+    try {
+      char c;
+      StringBuilder column = new StringBuilder();
+      while (i < path.length() && (c = path.charAt(i)) != '/') {
+        if (c == ',') {
+          if (column.length() < 1) {
+            throw new IllegalArgumentException("invalid path");
+          }
+          String s = URLDecoder.decode(column.toString(),
+            HConstants.UTF8_ENCODING);
+          if (!s.contains(":")) {
+            this.columns.add(Bytes.toBytes(s + ":"));
+          } else {
+            this.columns.add(Bytes.toBytes(s));
+          }
+          column.setLength(0);
+          i++;
+          continue;
+        }
+        column.append(c);
+        i++;
+      }
+      i++;
+      // trailing list entry (even a single-character family name counts)
+      if (column.length() > 0) {
+        String s = URLDecoder.decode(column.toString(),
+          HConstants.UTF8_ENCODING);
+        if (!s.contains(":")) {
+          this.columns.add(Bytes.toBytes(s + ":"));
+        } else {
+          this.columns.add(Bytes.toBytes(s));
+        }
+      }
+    } catch (IndexOutOfBoundsException e) {
+      throw new IllegalArgumentException(e);
+    } catch (UnsupportedEncodingException e) {
+      // shouldn't happen
+      throw new RuntimeException(e);
+    }
+    return i;
+  }
+
+  private int parseTimestamp(final String path, int i)
+      throws IllegalArgumentException {
+    if (i >= path.length()) {
+      return i;
+    }
+    long time0 = 0, time1 = 0;
+    try {
+      char c = 0;
+      StringBuilder stamp = new StringBuilder();
+      while (i < path.length()) {
+        c = path.charAt(i);
+        if (c == '/' || c == ',') {
+          break;
+        }
+        stamp.append(c);
+        i++;
+      }
+      try {
+        time0 = Long.valueOf(URLDecoder.decode(stamp.toString(),
+          HConstants.UTF8_ENCODING));
+      } catch (NumberFormatException e) {
+        throw new IllegalArgumentException(e);
+      }
+      if (c == ',') {
+        stamp = new StringBuilder();
+        i++;
+        while (i < path.length() && ((c = path.charAt(i)) != '/')) {
+          stamp.append(c);
+          i++;
+        }
+        try {
+          time1 = Long.valueOf(URLDecoder.decode(stamp.toString(),
+            HConstants.UTF8_ENCODING));
+        } catch (NumberFormatException e) {
+          throw new IllegalArgumentException(e);
+        }
+      }
+      if (c == '/') {
+        i++;
+      }
+    } catch (IndexOutOfBoundsException e) {
+      throw new IllegalArgumentException(e);
+    } catch (UnsupportedEncodingException e) {
+      // shouldn't happen
+      throw new RuntimeException(e);
+    }
+    if (time1 != 0) {
+      startTime = time0;
+      endTime = time1;
+    } else {
+      endTime = time0;
+    }
+    return i;
+  }
+
+  private int parseQueryParams(final String path, int i) {
+    if (i >= path.length()) {
+      return i;
+    }
+    StringBuilder query = new StringBuilder();
+    try {
+      query.append(URLDecoder.decode(path.substring(i), 
+        HConstants.UTF8_ENCODING));
+    } catch (UnsupportedEncodingException e) {
+      // should not happen
+      throw new RuntimeException(e);
+    }
+    i += query.length();
+    int j = 0;
+    while (j < query.length()) {
+      char c = query.charAt(j);
+      if (c != '?' && c != '&') {
+        break;
+      }
+      // step past the '?' or '&' to the parameter name
+      if (++j >= query.length()) {
+        throw new IllegalArgumentException("malformed query parameter");
+      }
+      char what = query.charAt(j);
+      if (++j >= query.length()) {
+        break;
+      }
+      c = query.charAt(j);
+      if (c != '=') {
+        throw new IllegalArgumentException("malformed query parameter");
+      }
+      if (++j >= query.length()) {
+        break;
+      }
+      switch (what) {
+      case 'm': {
+        // maximum number of versions to return
+        StringBuilder sb = new StringBuilder();
+        while (j < query.length()) {
+          c = query.charAt(j);
+          if (c < '0' || c > '9') {
+            break;
+          }
+          sb.append(c);
+          j++;
+        }
+        maxVersions = Integer.valueOf(sb.toString());
+      } break;
+      case 'n': {
+        // maximum number of values (cells) to return
+        StringBuilder sb = new StringBuilder();
+        while (j < query.length()) {
+          c = query.charAt(j);
+          if (c < '0' || c > '9') {
+            break;
+          }
+          sb.append(c);
+          j++;
+        }
+        maxValues = Integer.valueOf(sb.toString());
+      } break;
+      default:
+        throw new IllegalArgumentException("unknown parameter '" + what + "'");
+      }
+    }
+    return i;
+  }
+
+  public RowSpec(byte[] startRow, byte[] endRow, byte[][] columns,
+      long startTime, long endTime, int maxVersions) {
+    this.row = startRow;
+    this.endRow = endRow;
+    if (columns != null) {
+      for (byte[] col: columns) {
+        this.columns.add(col);
+      }
+    }
+    this.startTime = startTime;
+    this.endTime = endTime;
+    this.maxVersions = maxVersions;
+  }
+
+  public RowSpec(byte[] startRow, byte[] endRow, Collection<byte[]> columns,
+      long startTime, long endTime, int maxVersions) {
+    this.row = startRow;
+    this.endRow = endRow;
+    if (columns != null) {
+      this.columns.addAll(columns);
+    }
+    this.startTime = startTime;
+    this.endTime = endTime;
+    this.maxVersions = maxVersions;
+  }
+
+  public boolean isSingleRow() {
+    return endRow == null;
+  }
+
+  public int getMaxVersions() {
+    return maxVersions;
+  }
+
+  public void setMaxVersions(final int maxVersions) {
+    this.maxVersions = maxVersions;
+  }
+
+  public int getMaxValues() {
+    return maxValues;
+  }
+
+  public void setMaxValues(final int maxValues) {
+    this.maxValues = maxValues;
+  }
+
+  public boolean hasColumns() {
+    return !columns.isEmpty();
+  }
+
+  public byte[] getRow() {
+    return row;
+  }
+
+  public byte[] getStartRow() {
+    return row;
+  }
+
+  public boolean hasEndRow() {
+    return endRow != null;
+  }
+
+  public byte[] getEndRow() {
+    return endRow;
+  }
+
+  public void addColumn(final byte[] column) {
+    columns.add(column);
+  }
+
+  public byte[][] getColumns() {
+    return columns.toArray(new byte[columns.size()][]);
+  }
+
+  public boolean hasTimestamp() {
+    // a specification with a single timestamp stores it as the end time only
+    return (startTime == DEFAULT_START_TIMESTAMP) &&
+      (endTime != DEFAULT_END_TIMESTAMP);
+  }
+
+  public long getTimestamp() {
+    return endTime;
+  }
+
+  public long getStartTime() {
+    return startTime;
+  }
+
+  public void setStartTime(final long startTime) {
+    this.startTime = startTime;
+  }
+
+  public long getEndTime() {
+    return endTime;
+  }
+
+  public void setEndTime(long endTime) {
+    this.endTime = endTime;
+  }
+
+  public String toString() {
+    StringBuilder result = new StringBuilder();
+    result.append("{startRow => '");
+    if (row != null) {
+      result.append(Bytes.toString(row));
+    }
+    result.append("', endRow => '");
+    if (endRow != null)  {
+      result.append(Bytes.toString(endRow));
+    }
+    result.append("', columns => [");
+    for (byte[] col: columns) {
+      result.append(" '");
+      result.append(Bytes.toString(col));
+      result.append("'");
+    }
+    result.append(" ], startTime => ");
+    result.append(Long.toString(startTime));
+    result.append(", endTime => ");
+    result.append(Long.toString(endTime));
+    result.append(", maxVersions => ");
+    result.append(Integer.toString(maxVersions));
+    result.append(", maxValues => ");
+    result.append(Integer.toString(maxValues));
+    result.append("}");
+    return result.toString();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerInstanceResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerInstanceResource.java
new file mode 100644
index 0000000..75f1065
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerInstanceResource.java
@@ -0,0 +1,168 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import javax.ws.rs.DELETE;
+import javax.ws.rs.GET;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.Response.ResponseBuilder;
+import javax.ws.rs.core.UriInfo;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class ScannerInstanceResource extends ResourceBase {
+  private static final Log LOG =
+    LogFactory.getLog(ScannerInstanceResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  ResultGenerator generator;
+  String id;
+  int batch = 1;
+
+  public ScannerInstanceResource(String table, String id, 
+      ResultGenerator generator, int batch) throws IOException {
+    this.id = id;
+    this.generator = generator;
+    this.batch = batch;
+  }
+
+  @GET
+  @Produces({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context UriInfo uriInfo, 
+      @QueryParam("n") int maxRows, final @QueryParam("c") int maxValues) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    CellSetModel model = new CellSetModel();
+    RowModel rowModel = null;
+    byte[] rowKey = null;
+    int limit = batch;
+    if (maxValues > 0) {
+      limit = maxValues;
+    }
+    int count = limit;
+    do {
+      KeyValue value = null;
+      try {
+        value = generator.next();
+      } catch (IllegalStateException e) {
+        ScannerResource.delete(id);
+        throw new WebApplicationException(Response.Status.GONE);
+      }
+      if (value == null) {
+        LOG.info("generator exhausted");
+        // respond with 204 (No Content) if an empty cell set would be
+        // returned
+        if (count == limit) {
+          return Response.noContent().build();
+        }
+        break;
+      }
+      if (rowKey == null) {
+        rowKey = value.getRow();
+        rowModel = new RowModel(rowKey);
+      }
+      if (!Bytes.equals(value.getRow(), rowKey)) {
+        // if maxRows was given as a query param, stop if we would exceed the
+        // specified number of rows
+        if (maxRows > 0) { 
+          if (--maxRows == 0) {
+            generator.putBack(value);
+            break;
+          }
+        }
+        model.addRow(rowModel);
+        rowKey = value.getRow();
+        rowModel = new RowModel(rowKey);
+      }
+      rowModel.addCell(
+        new CellModel(value.getFamily(), value.getQualifier(), 
+          value.getTimestamp(), value.getValue()));
+    } while (--count > 0);
+    model.addRow(rowModel);
+    ResponseBuilder response = Response.ok(model);
+    response.cacheControl(cacheControl);
+    return response.build();
+  }
+
+  @GET
+  @Produces(MIMETYPE_BINARY)
+  public Response getBinary(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath() + " as " +
+        MIMETYPE_BINARY);
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      KeyValue value = generator.next();
+      if (value == null) {
+        LOG.info("generator exhausted");
+        return Response.noContent().build();
+      }
+      ResponseBuilder response = Response.ok(value.getValue());
+      response.cacheControl(cacheControl);
+      response.header("X-Row", Base64.encodeBytes(value.getRow()));      
+      response.header("X-Column", 
+        Base64.encodeBytes(
+          KeyValue.makeColumn(value.getFamily(), value.getQualifier())));
+      response.header("X-Timestamp", value.getTimestamp());
+      return response.build();
+    } catch (IllegalStateException e) {
+      ScannerResource.delete(id);
+      throw new WebApplicationException(Response.Status.GONE);
+    }
+  }
+
+  @DELETE
+  public Response delete(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("DELETE " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    ScannerResource.delete(id);
+    return Response.ok().build();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerResource.java
new file mode 100644
index 0000000..fe59023
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerResource.java
@@ -0,0 +1,139 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.net.URI;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriBuilder;
+import javax.ws.rs.core.UriInfo;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+
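+/**
+ * Resource managing the collection of server-side scanners.
+ * <p>
+ * A hedged sketch of the lifecycle as implemented below: a client PUTs or
+ * POSTs a ScannerModel to create a scanner and receives a 201 (Created)
+ * response whose Location header points at the new scanner instance; GETs
+ * against that location page through results; a DELETE against it discards
+ * the scanner. Scanner ids live in the static <tt>scanners</tt> map, so they
+ * are only valid on the REST server instance that created them.
+ */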
+public class ScannerResource extends ResourceBase {
+
+  private static final Log LOG = LogFactory.getLog(ScannerResource.class);
+
+  static final Map<String,ScannerInstanceResource> scanners =
+   Collections.synchronizedMap(new HashMap<String,ScannerInstanceResource>());
+
+  TableResource tableResource;
+
+  /**
+   * Constructor
+   * @param tableResource the parent table resource
+   * @throws IOException
+   */
+  public ScannerResource(TableResource tableResource) throws IOException {
+    super();
+    this.tableResource = tableResource;
+  }
+
+  static void delete(final String id) {
+    ScannerInstanceResource instance = scanners.remove(id);
+    if (instance != null) {
+      instance.generator.close();
+    }
+  }
+
+  Response update(final ScannerModel model, final boolean replace,
+      final UriInfo uriInfo) {
+    servlet.getMetrics().incrementRequests(1);
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    byte[] endRow = model.hasEndRow() ? model.getEndRow() : null;
+    RowSpec spec = new RowSpec(model.getStartRow(), endRow,
+      model.getColumns(), model.getStartTime(), model.getEndTime(), 1);
+    try {
+      Filter filter = ScannerResultGenerator.buildFilterFromModel(model);
+      String tableName = tableResource.getName();
+      ScannerResultGenerator gen =
+        new ScannerResultGenerator(tableName, spec, filter);
+      String id = gen.getID();
+      ScannerInstanceResource instance =
+        new ScannerInstanceResource(tableName, id, gen, model.getBatch());
+      scanners.put(id, instance);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("new scanner: " + id);
+      }
+      UriBuilder builder = uriInfo.getAbsolutePathBuilder();
+      URI uri = builder.path(id).build();
+      return Response.created(uri).build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+              Response.Status.SERVICE_UNAVAILABLE);
+    } catch (RuntimeException e) {
+      if (e.getCause() instanceof TableNotFoundException) {
+        throw new WebApplicationException(e, Response.Status.NOT_FOUND);
+      }
+      throw new WebApplicationException(e, Response.Status.BAD_REQUEST);
+    } catch (Exception e) {
+      throw new WebApplicationException(e, Response.Status.BAD_REQUEST);
+    }
+  }
+
+  @PUT
+  @Consumes({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response put(final ScannerModel model, 
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("PUT " + uriInfo.getAbsolutePath());
+    }
+    return update(model, true, uriInfo);
+  }
+
+  @POST
+  @Consumes({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response post(final ScannerModel model,
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("POST " + uriInfo.getAbsolutePath());
+    }
+    return update(model, false, uriInfo);
+  }
+
+  @Path("{scanner: .+}")
+  public ScannerInstanceResource getScannerInstanceResource(
+      final @PathParam("scanner") String id) {
+    ScannerInstanceResource instance = scanners.get(id);
+    if (instance == null) {
+      throw new WebApplicationException(Response.Status.NOT_FOUND);
+    }
+    return instance;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java
new file mode 100644
index 0000000..d4f1dfc
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java
@@ -0,0 +1,178 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.Iterator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+import org.apache.hadoop.util.StringUtils;
+
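+/**
+ * Generates cell results by driving a client-side ResultScanner. Rows are
+ * fetched one Result at a time and flattened into a stream of KeyValues;
+ * <tt>putBack</tt> lets a caller push a single KeyValue back so it can peek
+ * across row boundaries, as ScannerInstanceResource does when honoring a
+ * row limit.
+ */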
+public class ScannerResultGenerator extends ResultGenerator {
+
+  private static final Log LOG =
+    LogFactory.getLog(ScannerResultGenerator.class);
+
+  public static Filter buildFilterFromModel(final ScannerModel model) 
+      throws Exception {
+    String filter = model.getFilter();
+    if (filter == null || filter.length() == 0) {
+      return null;
+    }
+    return buildFilter(filter);
+  }
+
+  private String id;
+  private Iterator<KeyValue> rowI;
+  private KeyValue cache;
+  private ResultScanner scanner;
+  private Result cached;
+
+  public ScannerResultGenerator(final String tableName, final RowSpec rowspec,
+      final Filter filter) throws IllegalArgumentException, IOException {
+    HTablePool pool = RESTServlet.getInstance().getTablePool(); 
+    HTableInterface table = pool.getTable(tableName);
+    try {
+      Scan scan;
+      if (rowspec.hasEndRow()) {
+        scan = new Scan(rowspec.getStartRow(), rowspec.getEndRow());
+      } else {
+        scan = new Scan(rowspec.getStartRow());
+      }
+      if (rowspec.hasColumns()) {
+        byte[][] columns = rowspec.getColumns();
+        for (byte[] column: columns) {
+          byte[][] split = KeyValue.parseColumn(column);
+          if (split.length > 1 && (split[1] != null && split[1].length != 0)) {
+            scan.addColumn(split[0], split[1]);
+          } else {
+            scan.addFamily(split[0]);
+          }
+        }
+      } else {
+        for (HColumnDescriptor family: 
+            table.getTableDescriptor().getFamilies()) {
+          scan.addFamily(family.getName());
+        }
+      }
+      scan.setTimeRange(rowspec.getStartTime(), rowspec.getEndTime());          
+      scan.setMaxVersions(rowspec.getMaxVersions());
+      if (filter != null) {
+        scan.setFilter(filter);
+      }
+      // always disable block caching on the cluster when scanning
+      scan.setCacheBlocks(false);
+      scanner = table.getScanner(scan);
+      cached = null;
+      id = Long.toString(System.currentTimeMillis()) +
+             Integer.toHexString(scanner.hashCode());
+    } finally {
+      pool.putTable(table);
+    }
+  }
+
+  public String getID() {
+    return id;
+  }
+
+  public void close() {
+  }
+
+  public boolean hasNext() {
+    if (cache != null) {
+      return true;
+    }
+    if (rowI != null && rowI.hasNext()) {
+      return true;
+    }
+    if (cached != null) {
+      return true;
+    }
+    try {
+      Result result = scanner.next();
+      if (result != null && !result.isEmpty()) {
+        cached = result;
+      }
+    } catch (UnknownScannerException e) {
+      throw new IllegalArgumentException(e);
+    } catch (IOException e) {
+      LOG.error(StringUtils.stringifyException(e));
+    }
+    return cached != null;
+  }
+
+  public KeyValue next() {
+    if (cache != null) {
+      KeyValue kv = cache;
+      cache = null;
+      return kv;
+    }
+    boolean loop;
+    do {
+      loop = false;
+      if (rowI != null) {
+        if (rowI.hasNext()) {
+          return rowI.next();
+        } else {
+          rowI = null;
+        }
+      }
+      if (cached != null) {
+        rowI = cached.list().iterator();
+        loop = true;
+        cached = null;
+      } else {
+        Result result = null;
+        try {
+          result = scanner.next();
+        } catch (UnknownScannerException e) {
+          throw new IllegalArgumentException(e);
+        } catch (IOException e) {
+          LOG.error(StringUtils.stringifyException(e));
+        }
+        if (result != null && !result.isEmpty()) {
+          rowI = result.list().iterator();
+          loop = true;
+        }
+      }
+    } while (loop);
+    return null;
+  }
+
+  public void putBack(KeyValue kv) {
+    this.cache = kv;
+  }
+
+  public void remove() {
+    throw new UnsupportedOperationException("remove not supported");
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
new file mode 100644
index 0000000..5e9a5be
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/SchemaResource.java
@@ -0,0 +1,240 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.Map;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DELETE;
+import javax.ws.rs.GET;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+import javax.xml.namespace.QName;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTablePool;
+import org.apache.hadoop.hbase.rest.model.ColumnSchemaModel;
+import org.apache.hadoop.hbase.rest.model.TableSchemaModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
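+/**
+ * Resource exposing table schema operations: GET returns the current schema
+ * as a TableSchemaModel, PUT replaces the schema (creating the table if it
+ * does not exist), POST incrementally adds or modifies column families, and
+ * DELETE disables and drops the table.
+ */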
+public class SchemaResource extends ResourceBase {
+  private static final Log LOG = LogFactory.getLog(SchemaResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  TableResource tableResource;
+
+  /**
+   * Constructor
+   * @param tableResource the parent table resource
+   * @throws IOException
+   */
+  public SchemaResource(TableResource tableResource) throws IOException {
+    super();
+    this.tableResource = tableResource;
+  }
+
+  private HTableDescriptor getTableSchema() throws IOException,
+      TableNotFoundException {
+    HTablePool pool = servlet.getTablePool();
+    HTableInterface table = pool.getTable(tableResource.getName());
+    try {
+      return table.getTableDescriptor();
+    } finally {
+      pool.putTable(table);
+    }
+  }
+
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      ResponseBuilder response =
+        Response.ok(new TableSchemaModel(getTableSchema()));
+      response.cacheControl(cacheControl);
+      return response.build();
+    } catch (TableNotFoundException e) {
+      throw new WebApplicationException(Response.Status.NOT_FOUND);
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  private Response replace(final byte[] name, final TableSchemaModel model,
+      final UriInfo uriInfo, final HBaseAdmin admin) {
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    try {
+      HTableDescriptor htd = new HTableDescriptor(name);
+      for (Map.Entry<QName,Object> e: model.getAny().entrySet()) {
+        htd.setValue(e.getKey().getLocalPart(), e.getValue().toString());
+      }
+      for (ColumnSchemaModel family: model.getColumns()) {
+        HColumnDescriptor hcd = new HColumnDescriptor(family.getName());
+        for (Map.Entry<QName,Object> e: family.getAny().entrySet()) {
+          hcd.setValue(e.getKey().getLocalPart(), e.getValue().toString());
+        }
+        htd.addFamily(hcd);
+      }
+      if (admin.tableExists(name)) {
+        admin.disableTable(name);
+        admin.modifyTable(name, htd);
+        admin.enableTable(name);
+      } else {
+        try {
+          admin.createTable(htd);
+        } catch (TableExistsException e) {
+          // race, someone else created a table with the same name
+          throw new WebApplicationException(e, Response.Status.NOT_MODIFIED);
+        }
+      }
+      return Response.created(uriInfo.getAbsolutePath()).build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+            Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  private Response update(final byte[] name, final TableSchemaModel model,
+      final UriInfo uriInfo, final HBaseAdmin admin) {
+    if (servlet.isReadOnly()) {
+      throw new WebApplicationException(Response.Status.FORBIDDEN);
+    }
+    try {
+      HTableDescriptor htd = admin.getTableDescriptor(name);
+      admin.disableTable(name);
+      try {
+        for (ColumnSchemaModel family: model.getColumns()) {
+          HColumnDescriptor hcd = new HColumnDescriptor(family.getName());
+          for (Map.Entry<QName,Object> e: family.getAny().entrySet()) {
+            hcd.setValue(e.getKey().getLocalPart(), e.getValue().toString());
+          }
+          if (htd.hasFamily(hcd.getName())) {
+            admin.modifyColumn(name, hcd);
+          } else {
+            admin.addColumn(name, hcd);
+          }
+        }
+      } catch (IOException e) {
+        throw new WebApplicationException(e,
+            Response.Status.INTERNAL_SERVER_ERROR);
+      } finally {
+        admin.enableTable(tableResource.getName());
+      }
+      return Response.ok().build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e,
+          Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  private Response update(final TableSchemaModel model, final boolean replace,
+      final UriInfo uriInfo) {
+    try {
+      byte[] name = Bytes.toBytes(tableResource.getName());
+      HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+      if (replace || !admin.tableExists(name)) {
+        return replace(name, model, uriInfo, admin);
+      } else {
+        return update(name, model, uriInfo, admin);
+      }
+    } catch (IOException e) {
+      throw new WebApplicationException(e, 
+            Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+
+  @PUT
+  @Consumes({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response put(final TableSchemaModel model, 
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("PUT " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    return update(model, true, uriInfo);
+  }
+
+  @POST
+  @Consumes({MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response post(final TableSchemaModel model, 
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("POST " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    return update(model, false, uriInfo);
+  }
+
+  @DELETE
+  public Response delete(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("DELETE " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+      boolean success = false;
+      // disabling may fail transiently, so retry a bounded number of times
+      for (int i = 0; i < 10; i++) {
+        try {
+          admin.disableTable(tableResource.getName());
+          success = true;
+          break;
+        } catch (IOException e) {
+          // ignore and retry
+        }
+      }
+      if (!success) {
+        throw new IOException("could not disable table");
+      }
+      admin.deleteTable(tableResource.getName());
+      return Response.ok().build();
+    } catch (TableNotFoundException e) {
+      throw new WebApplicationException(Response.Status.NOT_FOUND);
+    } catch (IOException e) {
+      throw new WebApplicationException(e, 
+            Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/StorageClusterStatusResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/StorageClusterStatusResource.java
new file mode 100644
index 0000000..69ed646
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/StorageClusterStatusResource.java
@@ -0,0 +1,102 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.Response.ResponseBuilder;
+import javax.ws.rs.core.UriInfo;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HServerLoad;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.model.StorageClusterStatusModel;
+
+public class StorageClusterStatusResource extends ResourceBase {
+  private static final Log LOG =
+    LogFactory.getLog(StorageClusterStatusResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  /**
+   * Constructor
+   * @throws IOException
+   */
+  public StorageClusterStatusResource() throws IOException {
+    super();
+  }
+
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+      ClusterStatus status = admin.getClusterStatus();
+      StorageClusterStatusModel model = new StorageClusterStatusModel();
+      model.setRegions(status.getRegionsCount());
+      model.setRequests(status.getRequestsCount());
+      model.setAverageLoad(status.getAverageLoad());
+      for (HServerInfo info: status.getServerInfo()) {
+        HServerLoad load = info.getLoad();
+        StorageClusterStatusModel.Node node = 
+          model.addLiveNode(
+            info.getServerAddress().getHostname() + ":" + 
+            Integer.toString(info.getServerAddress().getPort()),
+            info.getStartCode(), load.getUsedHeapMB(),
+            load.getMaxHeapMB());
+        node.setRequests(load.getNumberOfRequests());
+        for (HServerLoad.RegionLoad region: load.getRegionsLoad()) {
+          node.addRegion(region.getName(), region.getStores(),
+            region.getStorefiles(), region.getStorefileSizeMB(),
+            region.getMemStoreSizeMB(), region.getStorefileIndexSizeMB());
+        }
+      }
+      for (String name: status.getDeadServerNames()) {
+        model.addDeadNode(name);
+      }
+      ResponseBuilder response = Response.ok(model);
+      response.cacheControl(cacheControl);
+      return response.build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e, 
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/StorageClusterVersionResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/StorageClusterVersionResource.java
new file mode 100644
index 0000000..106c9dc
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/StorageClusterVersionResource.java
@@ -0,0 +1,78 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.model.StorageClusterVersionModel;
+
+public class StorageClusterVersionResource extends ResourceBase {
+  private static final Log LOG =
+    LogFactory.getLog(StorageClusterVersionResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  /**
+   * Constructor
+   * @throws IOException
+   */
+  public StorageClusterVersionResource() throws IOException {
+    super();
+  }
+
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON})
+  public Response get(final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    try {
+      HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+      StorageClusterVersionModel model = new StorageClusterVersionModel();
+      model.setVersion(admin.getClusterStatus().getHBaseVersion());
+      ResponseBuilder response = Response.ok(model);
+      response.cacheControl(cacheControl);
+      return response.build();
+    } catch (IOException e) {
+      throw new WebApplicationException(e, 
+                  Response.Status.SERVICE_UNAVAILABLE);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
new file mode 100644
index 0000000..e19893d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/TableResource.java
@@ -0,0 +1,280 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import javax.ws.rs.Encoded;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.QueryParam;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.rest.transform.NullTransform;
+import org.apache.hadoop.hbase.rest.transform.Transform;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.StringUtils;
+
+public class TableResource extends ResourceBase {
+  private static final Log LOG = LogFactory.getLog(TableResource.class);
+
+  /**
+   * HCD attributes starting with this string are considered transform
+   * directives
+   */
+  private static final String DIRECTIVE_KEY = "Transform$";
+
+  /**
+   * Transform directives are of the form <tt>&lt;qualifier&gt;:&lt;class&gt;</tt>
+   * where <tt>qualifier</tt> is a string for exact matching or '*' as a wildcard
+   * that will match anything, and <tt>class</tt> is either the fully qualified
+   * class name of a transform implementation or the short name of a transform
+   * in the <tt>org.apache.hadoop.hbase.rest.transform</tt> package.
+   */
+  private static final Pattern DIRECTIVE_PATTERN =
+    Pattern.compile("([^\\:]+)\\:([^\\,]+)\\,?");
+  private static final Transform defaultTransform = new NullTransform();
+  private static final
+    Map<String,Map<byte[],Map<byte[],Transform>>> transformMap =
+      new ConcurrentHashMap<String,Map<byte[],Map<byte[],Transform>>>();
+  private static final Map<String,Long> lastCheckedMap =
+    new ConcurrentHashMap<String,Long>();
+
+  /**
+   * @param table the table
+   * @param family the column family
+   * @param qualifier the column qualifier, or null
+   * @return the transformation specified for the given family or qualifier, if
+   * any, otherwise the default
+   */
+  static Transform getTransform(String table, byte[] family, byte[] qualifier) {
+    if (qualifier == null) {
+      qualifier = HConstants.EMPTY_BYTE_ARRAY;
+    }
+    Map<byte[],Map<byte[],Transform>> familyMap = transformMap.get(table);
+    if (familyMap != null) {
+      Map<byte[],Transform> columnMap = familyMap.get(family);
+      if (columnMap != null) {
+        Transform t = columnMap.get(qualifier);
+        // check as necessary if there is a wildcard entry
+        if (t == null) {
+          t = columnMap.get(HConstants.EMPTY_BYTE_ARRAY);
+        }
+        // if we found something, return it, otherwise we will return the
+        // default by falling through
+        if (t != null) {
+          return t;
+        }
+      }
+    }
+    return defaultTransform;
+  }
+
+  synchronized static void setTransform(String table, byte[] family,
+      byte[] qualifier, Transform transform) {
+    Map<byte[],Map<byte[],Transform>> familyMap = transformMap.get(table);
+    if (familyMap == null) {
+      familyMap =  new ConcurrentSkipListMap<byte[],Map<byte[],Transform>>(
+          Bytes.BYTES_COMPARATOR);
+      transformMap.put(table, familyMap);
+    }
+    Map<byte[],Transform> columnMap = familyMap.get(family);
+    if (columnMap == null) {
+      columnMap = new ConcurrentSkipListMap<byte[],Transform>(
+          Bytes.BYTES_COMPARATOR);
+      familyMap.put(family, columnMap);
+    }
+    // if transform is null, remove any existing entry
+    if (transform != null) {
+      columnMap.put(qualifier, transform);
+    } else {
+      columnMap.remove(qualifier);
+    }
+  }
+
+  String table;
+
+  /**
+   * Scan the table schema for transform directives. These are column family
+   * attributes containing a comma-separated list of elements of the form
+   * <tt>&lt;qualifier&gt;:&lt;transform-class&gt;</tt>, where qualifier
+   * can be a string for exact matching or '*' as a wildcard to match anything.
+   * The attribute key must begin with the string "Transform$".
+   */
+  void scanTransformAttrs() throws IOException {
+    try {
+      HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+      HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes(table));
+      for (HColumnDescriptor hcd: htd.getFamilies()) {
+        for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+            hcd.getValues().entrySet()) {
+          // does the key start with the transform directive tag?
+          String key = Bytes.toString(e.getKey().get());
+          if (!key.startsWith(DIRECTIVE_KEY)) {
+            // no, skip
+            continue;
+          }
+          // match a comma separated list of one or more directives
+          byte[] value = e.getValue().get();
+          Matcher m = DIRECTIVE_PATTERN.matcher(Bytes.toString(value));
+          while (m.find()) {
+            byte[] qualifier = HConstants.EMPTY_BYTE_ARRAY;
+            String s = m.group(1);
+            if (s.length() > 0 && !s.equals("*")) {
+              qualifier = Bytes.toBytes(s);
+            }
+            boolean retry = false;
+            String className = m.group(2);
+            while (true) {
+              try {
+                // if a transform was previously configured for the qualifier,
+                // this will simply replace it
+                setTransform(table, hcd.getName(), qualifier,
+                  (Transform)Class.forName(className).newInstance());
+                break;
+              } catch (InstantiationException ex) {
+                LOG.error(StringUtils.stringifyException(ex));
+                if (retry) {
+                  break;
+                }
+                retry = true;
+              } catch (IllegalAccessException ex) {
+                LOG.error(StringUtils.stringifyException(ex));
+                if (retry) {
+                  break;
+                }
+                retry = true;
+              } catch (ClassNotFoundException ex) {
+                if (retry) {
+                  LOG.error(StringUtils.stringifyException(ex));
+                  break;
+                }
+                className = "org.apache.hadoop.hbase.rest.transform." + className;
+                retry = true;
+              }
+            }
+          }
+        }
+      }
+    } catch (TableNotFoundException e) {
+      // ignore
+    }
+  }
+
+  /**
+   * Constructor
+   * @param table the table name
+   * @throws IOException
+   */
+  public TableResource(String table) throws IOException {
+    super();
+    this.table = table;
+    // Scanning the table schema is too expensive to do for every operation.
+    // Do it once per minute by default.
+    // Setting hbase.rest.transform.check.interval to <= 0 disables rescanning.
+    long now = System.currentTimeMillis();
+    Long lastChecked = lastCheckedMap.get(table);
+    if (lastChecked != null) {
+      long interval = servlet.getConfiguration()
+        .getLong("hbase.rest.transform.check.interval", 60000);
+      if (interval > 0 && (now - lastChecked.longValue()) > interval) {
+        scanTransformAttrs();
+        lastCheckedMap.put(table, now);
+      }
+    } else {
+      scanTransformAttrs();
+      lastCheckedMap.put(table, now);
+    }
+  }
+
+  /** @return the table name */
+  String getName() {
+    return table;
+  }
+
+  /**
+   * @return true if the table exists
+   * @throws IOException
+   */
+  boolean exists() throws IOException {
+    HBaseAdmin admin = new HBaseAdmin(servlet.getConfiguration());
+    return admin.tableExists(table);
+  }
+
+  /**
+   * Apply any configured transformations to the value
+   * @param family the column family
+   * @param qualifier the column qualifier
+   * @param value the value to transform
+   * @param direction the transform direction (inbound or outbound)
+   * @return the transformed value, or the original value if no transform is
+   * configured
+   * @throws IOException
+   */
+  byte[] transform(byte[] family, byte[] qualifier, byte[] value,
+      Transform.Direction direction) throws IOException {
+    Transform t = getTransform(table, family, qualifier);
+    if (t != null) {
+      return t.transform(value, direction);
+    }
+    return value;
+  }
+
+  @Path("exists")
+  public ExistsResource getExistsResource() throws IOException {
+    return new ExistsResource(this);
+  }
+
+  @Path("regions")
+  public RegionsResource getRegionsResource() throws IOException {
+    return new RegionsResource(this);
+  }
+
+  @Path("scanner")
+  public ScannerResource getScannerResource() throws IOException {
+    return new ScannerResource(this);
+  }
+
+  @Path("schema")
+  public SchemaResource getSchemaResource() throws IOException {
+    return new SchemaResource(this);
+  }
+
+  @Path("{rowspec: .+}")
+  public RowResource getRowResource(
+      // We need the @Encoded decorator so Jersey won't urldecode before
+      // the RowSpec constructor has a chance to parse
+      final @PathParam("rowspec") @Encoded String rowspec,
+      final @QueryParam("v") String versions) throws IOException {
+    return new RowResource(this, rowspec, versions);
+  }
+}
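
The transform lookup above is driven entirely by column family attributes whose keys begin with "Transform$". As an illustrative sketch, not part of this patch, the snippet below tags a hypothetical 'content' family of a hypothetical 'mytable' so that every qualifier is run through the Base64 transform (assumed to be the one shipped in the org.apache.hadoop.hbase.rest.transform package) as it passes through the gateway:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class TransformDirectiveExample {
  public static void main(String[] args) throws Exception {
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    HTableDescriptor htd = new HTableDescriptor("mytable");
    HColumnDescriptor hcd = new HColumnDescriptor("content");
    // '*' matches any qualifier; the short name 'Base64' is resolved via the
    // package-prefix fallback in scanTransformAttrs()
    hcd.setValue("Transform$1", "*:Base64");
    htd.addFamily(hcd);
    admin.createTable(htd);
  }
}
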
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/VersionResource.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/VersionResource.java
new file mode 100644
index 0000000..3d0a9b3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/VersionResource.java
@@ -0,0 +1,101 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.IOException;
+
+import javax.servlet.ServletContext;
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.CacheControl;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.ws.rs.core.Response.ResponseBuilder;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.rest.model.VersionModel;
+
+/**
+ * Implements REST software version reporting
+ * <p>
+ * <tt>/version/rest</tt>
+ * <p>
+ * <tt>/version</tt> (alias for <tt>/version/rest</tt>)
+ */
+public class VersionResource extends ResourceBase {
+
+  private static final Log LOG = LogFactory.getLog(VersionResource.class);
+
+  static CacheControl cacheControl;
+  static {
+    cacheControl = new CacheControl();
+    cacheControl.setNoCache(true);
+    cacheControl.setNoTransform(false);
+  }
+
+  /**
+   * Constructor
+   * @throws IOException
+   */
+  public VersionResource() throws IOException {
+    super();
+  }
+
+  /**
+   * Build a response for a version request.
+   * @param context servlet context
+   * @param uriInfo (JAX-RS context variable) request URL
+   * @return a response for a version request 
+   */
+  @GET
+  @Produces({MIMETYPE_TEXT, MIMETYPE_XML, MIMETYPE_JSON, MIMETYPE_PROTOBUF})
+  public Response get(final @Context ServletContext context, 
+      final @Context UriInfo uriInfo) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("GET " + uriInfo.getAbsolutePath());
+    }
+    servlet.getMetrics().incrementRequests(1);
+    ResponseBuilder response = Response.ok(new VersionModel(context));
+    response.cacheControl(cacheControl);
+    return response.build();
+  }
+
+  /**
+   * Dispatch to StorageClusterVersionResource
+   */
+  @Path("cluster")
+  public StorageClusterVersionResource getClusterVersionResource() 
+      throws IOException {
+    return new StorageClusterVersionResource();
+  }
+
+  /**
+   * Dispatch <tt>/version/rest</tt> to self.
+   */
+  @Path("rest")
+  public VersionResource getVersionResource() {
+    return this;
+  }
+}
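
Both dispatch paths above are reachable with the REST client added later in this patch. A minimal sketch, not part of the patch itself, assuming a gateway at localhost:8080 and that MIMETYPE_TEXT maps to text/plain:

import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.Response;

public class VersionExample {
  public static void main(String[] args) throws Exception {
    Client client = new Client(new Cluster().add("localhost", 8080));
    try {
      // Software version of the REST gateway itself (/version, alias of /version/rest)
      Response rest = client.get("/version", "text/plain");
      System.out.println(new String(rest.getBody()));
      // HBase version of the backing storage cluster, dispatched via @Path("cluster")
      Response cluster = client.get("/version/cluster", "text/plain");
      System.out.println(new String(cluster.getBody()));
    } finally {
      client.shutdown();
    }
  }
}
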
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java
new file mode 100644
index 0000000..4ecbfa8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Client.java
@@ -0,0 +1,456 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import java.io.IOException;
+
+import org.apache.commons.httpclient.Header;
+import org.apache.commons.httpclient.HttpClient;
+import org.apache.commons.httpclient.HttpMethod;
+import org.apache.commons.httpclient.HttpVersion;
+import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
+import org.apache.commons.httpclient.URI;
+import org.apache.commons.httpclient.methods.ByteArrayRequestEntity;
+import org.apache.commons.httpclient.methods.DeleteMethod;
+import org.apache.commons.httpclient.methods.GetMethod;
+import org.apache.commons.httpclient.methods.HeadMethod;
+import org.apache.commons.httpclient.methods.PostMethod;
+import org.apache.commons.httpclient.methods.PutMethod;
+import org.apache.commons.httpclient.params.HttpClientParams;
+import org.apache.commons.httpclient.params.HttpConnectionManagerParams;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * A wrapper around HttpClient which provides useful functions and semantics
+ * for interacting with the REST gateway.
+ */
+public class Client {
+  public static final Header[] EMPTY_HEADER_ARRAY = new Header[0];
+
+  private static final Log LOG = LogFactory.getLog(Client.class);
+
+  private HttpClient httpClient;
+  private Cluster cluster;
+
+  /**
+   * Default Constructor
+   */
+  public Client() {
+    this(null);
+  }
+
+  /**
+   * Constructor
+   * @param cluster the cluster definition
+   */
+  public Client(Cluster cluster) {
+    this.cluster = cluster;
+    MultiThreadedHttpConnectionManager manager = 
+      new MultiThreadedHttpConnectionManager();
+    HttpConnectionManagerParams managerParams = manager.getParams();
+    managerParams.setConnectionTimeout(2000); // 2 s
+    managerParams.setDefaultMaxConnectionsPerHost(10);
+    managerParams.setMaxTotalConnections(100);
+    this.httpClient = new HttpClient(manager);
+    HttpClientParams clientParams = httpClient.getParams();
+    clientParams.setVersion(HttpVersion.HTTP_1_1);
+  }
+
+  /**
+   * Shut down the client. Close any open persistent connections. 
+   */
+  public void shutdown() {
+    MultiThreadedHttpConnectionManager manager = 
+      (MultiThreadedHttpConnectionManager) httpClient.getHttpConnectionManager();
+    manager.shutdown();
+  }
+
+  /**
+   * Execute a transaction method given only the path. Will select at random
+   * one of the members of the supplied cluster definition and iterate through
+   * the list until a transaction can be successfully completed. The
+   * definition of success here is a complete HTTP transaction, irrespective
+   * of result code.  
+   * @param cluster the cluster definition
+   * @param method the transaction method
+   * @param headers HTTP header values to send
+   * @param path the properly urlencoded path
+   * @return the HTTP response code
+   * @throws IOException
+   */
+  public int executePathOnly(Cluster cluster, HttpMethod method,
+      Header[] headers, String path) throws IOException {
+    IOException lastException;
+    if (cluster.nodes.size() < 1) {
+      throw new IOException("Cluster is empty");
+    }
+    int start = (int)Math.round((cluster.nodes.size() - 1) * Math.random());
+    int i = start;
+    do {
+      cluster.lastHost = cluster.nodes.get(i);
+      try {
+        StringBuilder sb = new StringBuilder();
+        sb.append("http://");
+        sb.append(cluster.lastHost);
+        sb.append(path);
+        URI uri = new URI(sb.toString(), true);
+        return executeURI(method, headers, uri.toString());
+      } catch (IOException e) {
+        lastException = e;
+      }
+      // advance to the next node, wrapping around so every node is tried once
+    } while ((i = (i + 1) % cluster.nodes.size()) != start);
+    throw lastException;
+  }
+
+  /**
+   * Execute a transaction method given a complete URI.
+   * @param method the transaction method
+   * @param headers HTTP header values to send
+   * @param uri a properly urlencoded URI
+   * @return the HTTP response code
+   * @throws IOException
+   */
+  public int executeURI(HttpMethod method, Header[] headers, String uri)
+      throws IOException {
+    method.setURI(new URI(uri, true));
+    if (headers != null) {
+      for (Header header: headers) {
+        method.addRequestHeader(header);
+      }
+    }
+    long startTime = System.currentTimeMillis();
+    int code = httpClient.executeMethod(method);
+    long endTime = System.currentTimeMillis();
+    if (LOG.isDebugEnabled()) {
+      LOG.debug(method.getName() + " " + uri + " " + code + " " +
+        method.getStatusText() + " in " + (endTime - startTime) + " ms");
+    }
+    return code;
+  }
+
+  /**
+   * Execute a transaction method. Will call either <tt>executePathOnly</tt>
+   * or <tt>executeURI</tt> depending on whether a path only is supplied in
+   * 'path', or if a complete URI is passed instead, respectively.
+   * @param cluster the cluster definition
+   * @param method the HTTP method
+   * @param headers HTTP header values to send
+   * @param path the properly urlencoded path or URI
+   * @return the HTTP response code
+   * @throws IOException
+   */
+  public int execute(Cluster cluster, HttpMethod method, Header[] headers,
+      String path) throws IOException {
+    if (path.startsWith("/")) {
+      return executePathOnly(cluster, method, headers, path);
+    }
+    return executeURI(method, headers, path);
+  }
+
+  /**
+   * @return the cluster definition
+   */
+  public Cluster getCluster() {
+    return cluster;
+  }
+
+  /**
+   * @param cluster the cluster definition
+   */
+  public void setCluster(Cluster cluster) {
+    this.cluster = cluster;
+  }
+
+  /**
+   * Send a HEAD request 
+   * @param path the path or URI
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response head(String path) throws IOException {
+    return head(cluster, path, null);
+  }
+
+  /**
+   * Send a HEAD request 
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @param headers the HTTP headers to include in the request
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response head(Cluster cluster, String path, Header[] headers) 
+      throws IOException {
+    HeadMethod method = new HeadMethod();
+    try {
+      int code = execute(cluster, method, headers, path);
+      headers = method.getResponseHeaders();
+      return new Response(code, headers, null);
+    } finally {
+      method.releaseConnection();
+    }
+  }
+
+  /**
+   * Send a GET request 
+   * @param path the path or URI
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response get(String path) throws IOException {
+    return get(cluster, path);
+  }
+
+  /**
+   * Send a GET request 
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response get(Cluster cluster, String path) throws IOException {
+    return get(cluster, path, EMPTY_HEADER_ARRAY);
+  }
+
+  /**
+   * Send a GET request 
+   * @param path the path or URI
+   * @param accept Accept header value
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response get(String path, String accept) throws IOException {
+    return get(cluster, path, accept);
+  }
+
+  /**
+   * Send a GET request 
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @param accept Accept header value
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response get(Cluster cluster, String path, String accept)
+      throws IOException {
+    Header[] headers = new Header[1];
+    headers[0] = new Header("Accept", accept);
+    return get(cluster, path, headers);
+  }
+
+  /**
+   * Send a GET request
+   * @param path the path or URI
+   * @param headers the HTTP headers to include in the request, 
+   * <tt>Accept</tt> must be supplied
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response get(String path, Header[] headers) throws IOException {
+    return get(cluster, path, headers);
+  }
+
+  /**
+   * Send a GET request
+   * @param c the cluster definition
+   * @param path the path or URI
+   * @param headers the HTTP headers to include in the request
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response get(Cluster c, String path, Header[] headers) 
+      throws IOException {
+    GetMethod method = new GetMethod();
+    try {
+      int code = execute(c, method, headers, path);
+      headers = method.getResponseHeaders();
+      byte[] body = method.getResponseBody();
+      return new Response(code, headers, body);
+    } finally {
+      method.releaseConnection();
+    }
+  }
+
+  /**
+   * Send a PUT request
+   * @param path the path or URI
+   * @param contentType the content MIME type
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response put(String path, String contentType, byte[] content)
+      throws IOException {
+    return put(cluster, path, contentType, content);
+  }
+
+  /**
+   * Send a PUT request
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @param contentType the content MIME type
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response put(Cluster cluster, String path, String contentType, 
+      byte[] content) throws IOException {
+    Header[] headers = new Header[1];
+    headers[0] = new Header("Content-Type", contentType);
+    return put(cluster, path, headers, content);
+  }
+
+  /**
+   * Send a PUT request
+   * @param path the path or URI
+   * @param headers the HTTP headers to include, <tt>Content-Type</tt> must be
+   * supplied
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response put(String path, Header[] headers, byte[] content) 
+      throws IOException {
+    return put(cluster, path, headers, content);
+  }
+
+  /**
+   * Send a PUT request
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @param headers the HTTP headers to include, <tt>Content-Type</tt> must be
+   * supplied
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response put(Cluster cluster, String path, Header[] headers, 
+      byte[] content) throws IOException {
+    PutMethod method = new PutMethod();
+    try {
+      method.setRequestEntity(new ByteArrayRequestEntity(content));
+      int code = execute(cluster, method, headers, path);
+      headers = method.getResponseHeaders();
+      content = method.getResponseBody();
+      return new Response(code, headers, content);
+    } finally {
+      method.releaseConnection();
+    }
+  }
+
+  /**
+   * Send a POST request
+   * @param path the path or URI
+   * @param contentType the content MIME type
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response post(String path, String contentType, byte[] content)
+      throws IOException {
+    return post(cluster, path, contentType, content);
+  }
+
+  /**
+   * Send a POST request
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @param contentType the content MIME type
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response post(Cluster cluster, String path, String contentType, 
+      byte[] content) throws IOException {
+    Header[] headers = new Header[1];
+    headers[0] = new Header("Content-Type", contentType);
+    return post(cluster, path, headers, content);
+  }
+
+  /**
+   * Send a POST request
+   * @param path the path or URI
+   * @param headers the HTTP headers to include, <tt>Content-Type</tt> must be
+   * supplied
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response post(String path, Header[] headers, byte[] content) 
+      throws IOException {
+    return post(cluster, path, headers, content);
+  }
+
+  /**
+   * Send a POST request
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @param headers the HTTP headers to include, <tt>Content-Type</tt> must be
+   * supplied
+   * @param content the content bytes
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response post(Cluster cluster, String path, Header[] headers, 
+      byte[] content) throws IOException {
+    PostMethod method = new PostMethod();
+    try {
+      method.setRequestEntity(new ByteArrayRequestEntity(content));
+      int code = execute(cluster, method, headers, path);
+      headers = method.getResponseHeaders();
+      content = method.getResponseBody();
+      return new Response(code, headers, content);
+    } finally {
+      method.releaseConnection();
+    }
+  }
+
+  /**
+   * Send a DELETE request
+   * @param path the path or URI
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response delete(String path) throws IOException {
+    return delete(cluster, path);
+  }
+
+  /**
+   * Send a DELETE request
+   * @param cluster the cluster definition
+   * @param path the path or URI
+   * @return a Response object with response detail
+   * @throws IOException
+   */
+  public Response delete(Cluster cluster, String path) throws IOException {
+    DeleteMethod method = new DeleteMethod();
+    try {
+      int code = execute(cluster, method, null, path);
+      Header[] headers = method.getResponseHeaders();
+      byte[] content = method.getResponseBody();
+      return new Response(code, headers, content);
+    } finally {
+      method.releaseConnection();
+    }
+  }
+
+}
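
The execute() dispatch means callers can pass either a gateway-relative path, which is load-balanced across the cluster definition, or a complete URI, which is sent to exactly that host. A brief sketch of both forms, not part of this patch, assuming a gateway at localhost:8080:

import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.Response;

public class ClientDispatchExample {
  public static void main(String[] args) throws Exception {
    Client client = new Client(new Cluster().add("localhost", 8080));
    try {
      // Starts with '/', so execute() routes through executePathOnly and
      // picks a node from the cluster definition.
      Response viaCluster = client.get("/version", "text/xml");
      // A complete URI goes straight to executeURI, bypassing the node list.
      Response direct = client.get("http://localhost:8080/version", "text/xml");
      System.out.println(viaCluster.getCode() + " " + direct.getCode());
    } finally {
      client.shutdown();
    }
  }
}
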
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java
new file mode 100644
index 0000000..4672447
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java
@@ -0,0 +1,99 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * A list of 'host:port' addresses of HTTP servers operating as a single
+ * entity, for example multiple redundant web service gateways.
+ */
+public class Cluster {
+  protected List<String> nodes = 
+    Collections.synchronizedList(new ArrayList<String>());
+  protected String lastHost;
+
+  /**
+   * Constructor
+   */
+  public Cluster() {}
+
+  /**
+   * Constructor
+   * @param nodes a list of service locations, in 'host:port' format
+   */
+  public Cluster(List<String> nodes) {
+    this.nodes.addAll(nodes);
+  }
+
+  /**
+   * @return true if no locations have been added, false otherwise
+   */
+  public boolean isEmpty() {
+    return nodes.isEmpty();
+  }
+
+  /**
+   * Add a node to the cluster
+   * @param node the service location in 'host:port' format
+   */
+  public Cluster add(String node) {
+    nodes.add(node);
+    return this;
+  }
+
+  /**
+   * Add a node to the cluster
+   * @param name host name
+   * @param port service port
+   */
+  public Cluster add(String name, int port) {
+    StringBuilder sb = new StringBuilder();
+    sb.append(name);
+    sb.append(':');
+    sb.append(port);
+    return add(sb.toString());
+  }
+
+  /**
+   * Remove a node from the cluster
+   * @param node the service location in 'host:port' format
+   */
+  public Cluster remove(String node) {
+    nodes.remove(node);
+    return this;
+  }
+
+  /**
+   * Remove a node from the cluster
+   * @param name host name
+   * @param port service port
+   */
+  public Cluster remove(String name, int port) {
+    StringBuilder sb = new StringBuilder();
+    sb.append(name);
+    sb.append(':');
+    sb.append(port);
+    return remove(sb.toString());
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java
new file mode 100644
index 0000000..f7c0394
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java
@@ -0,0 +1,188 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.rest.Constants;
+import org.apache.hadoop.hbase.rest.model.TableSchemaModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class RemoteAdmin {
+
+  final Client client;
+  final Configuration conf;
+  final String accessToken;
+  final int maxRetries;
+  final long sleepTime;
+
+  /**
+   * Constructor
+   * @param client
+   * @param conf
+   */
+  public RemoteAdmin(Client client, Configuration conf) {
+    this(client, conf, null);
+  }
+
+  /**
+   * Constructor
+   * @param client the REST client instance
+   * @param conf the HBase configuration
+   * @param accessToken the access token, or null if not needed
+   */
+  public RemoteAdmin(Client client, Configuration conf, String accessToken) {
+    this.client = client;
+    this.conf = conf;
+    this.accessToken = accessToken;
+    this.maxRetries = conf.getInt("hbase.rest.client.max.retries", 10);
+    this.sleepTime = conf.getLong("hbase.rest.client.sleep", 1000);
+  }
+
+  /**
+   * @param tableName name of table to check
+   * @return true if all regions of the table are available
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableAvailable(String tableName) throws IOException {
+    return isTableAvailable(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * @param tableName name of table to check
+   * @return true if all regions of the table are available
+   * @throws IOException if a remote or network exception occurs
+   */
+  public boolean isTableAvailable(byte[] tableName) throws IOException {
+    StringBuilder sb = new StringBuilder();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');
+    }
+    sb.append(Bytes.toStringBinary(tableName));
+    sb.append('/');
+    sb.append("exists");
+    int code = 0;
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.get(sb.toString());
+      code = response.getCode();
+      switch (code) {
+      case 200:
+        return true;
+      case 404:
+        return false;
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("exists request returned " + code);
+      }
+    }
+    throw new IOException("exists request timed out");
+  }
+
+  /**
+   * Creates a new table.
+   * @param desc table descriptor for table
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void createTable(HTableDescriptor desc)
+      throws IOException {
+    TableSchemaModel model = new TableSchemaModel(desc);
+    StringBuilder sb = new StringBuilder();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');
+    }
+    sb.append(Bytes.toStringBinary(desc.getName()));
+    sb.append('/');
+    sb.append("schema");
+    int code = 0;
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.put(sb.toString(), Constants.MIMETYPE_PROTOBUF,
+        model.createProtobufOutput());
+      code = response.getCode();
+      switch (code) {
+      case 201:
+        return;
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("create request returned " + code);
+      }
+    }
+    throw new IOException("create request timed out");
+  }
+
+  /**
+   * Deletes a table.
+   * @param tableName name of table to delete
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void deleteTable(final String tableName) throws IOException {
+    deleteTable(Bytes.toBytes(tableName));
+  }
+
+  /**
+   * Deletes a table.
+   * @param tableName name of table to delete
+   * @throws IOException if a remote or network exception occurs
+   */
+  public void deleteTable(final byte [] tableName) throws IOException {
+    StringBuilder sb = new StringBuilder();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');
+    }
+    sb.append(Bytes.toStringBinary(tableName));
+    sb.append('/');
+    sb.append("schema");
+    int code = 0;
+    for (int i = 0; i < maxRetries; i++) { 
+      Response response = client.delete(sb.toString());
+      code = response.getCode();
+      switch (code) {
+      case 200:
+        return;
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("delete request returned " + code);
+      }
+    }
+    throw new IOException("delete request timed out");
+  }
+
+}
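
A short usage sketch for the administrative wrapper above, not part of this patch; the host, port, and table name are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteAdmin;

public class RemoteAdminExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Client client = new Client(new Cluster().add("localhost", 8080));
    RemoteAdmin admin = new RemoteAdmin(client, conf);
    HTableDescriptor htd = new HTableDescriptor("remote_table");
    htd.addFamily(new HColumnDescriptor("f1"));
    // isTableAvailable probes .../remote_table/exists; createTable PUTs the
    // schema as protobuf to .../remote_table/schema
    if (!admin.isTableAvailable("remote_table")) {
      admin.createTable(htd);
    }
    client.shutdown();
  }
}
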
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
new file mode 100644
index 0000000..526ba89
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/RemoteHTable.java
@@ -0,0 +1,621 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.util.StringUtils;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.Increment;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.RowLock;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.rest.Constants;
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+import org.apache.hadoop.hbase.rest.model.TableSchemaModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * HTable interface to remote tables accessed via REST gateway
+ */
+public class RemoteHTable implements HTableInterface {
+
+  private static final Log LOG = LogFactory.getLog(RemoteHTable.class);
+
+  final Client client;
+  final Configuration conf;
+  final byte[] name;
+  final String accessToken;
+  final int maxRetries;
+  final long sleepTime;
+
+  @SuppressWarnings("unchecked")
+  protected String buildRowSpec(final byte[] row, final Map familyMap, 
+      final long startTime, final long endTime, final int maxVersions) {
+    StringBuffer sb = new StringBuffer();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');
+    }
+    sb.append(Bytes.toStringBinary(name));
+    sb.append('/');
+    sb.append(Bytes.toStringBinary(row));
+    Set families = familyMap.entrySet();
+    if (families != null) {
+      Iterator i = familyMap.entrySet().iterator();
+      if (i.hasNext()) {
+        sb.append('/');
+      }
+      while (i.hasNext()) {
+        Map.Entry e = (Map.Entry)i.next();
+        Collection quals = (Collection)e.getValue();
+        if (quals != null && !quals.isEmpty()) {
+          Iterator ii = quals.iterator();
+          while (ii.hasNext()) {
+            sb.append(Bytes.toStringBinary((byte[])e.getKey()));
+            sb.append(':');
+            Object o = ii.next();
+            // Puts use byte[] but Deletes use KeyValue
+            if (o instanceof byte[]) {
+              sb.append(Bytes.toStringBinary((byte[])o));
+            } else if (o instanceof KeyValue) {
+              sb.append(Bytes.toStringBinary(((KeyValue)o).getQualifier()));
+            } else {
+              throw new RuntimeException("object type not handled");
+            }
+            if (ii.hasNext()) {
+              sb.append(',');
+            }
+          }
+        } else {
+          sb.append(Bytes.toStringBinary((byte[])e.getKey()));
+          sb.append(':');
+        }
+        if (i.hasNext()) {
+          sb.append(',');
+        }
+      }
+    }
+    if (startTime != 0 && endTime != Long.MAX_VALUE) {
+      sb.append('/');
+      sb.append(startTime);
+      if (startTime != endTime) {
+        sb.append(',');
+        sb.append(endTime);
+      }
+    } else if (endTime != Long.MAX_VALUE) {
+      sb.append('/');
+      sb.append(endTime);
+    }
+    if (maxVersions > 1) {
+      sb.append("?v=");
+      sb.append(maxVersions);
+    }
+    return sb.toString();
+  }
+
+  protected Result[] buildResultFromModel(final CellSetModel model) {
+    List<Result> results = new ArrayList<Result>();
+    for (RowModel row: model.getRows()) {
+      List<KeyValue> kvs = new ArrayList<KeyValue>();
+      for (CellModel cell: row.getCells()) {
+        byte[][] split = KeyValue.parseColumn(cell.getColumn());
+        byte[] column = split[0];
+        byte[] qualifier = split.length > 1 ? split[1] : null;
+        kvs.add(new KeyValue(row.getKey(), column, qualifier, 
+          cell.getTimestamp(), cell.getValue()));
+      }
+      results.add(new Result(kvs));
+    }
+    return results.toArray(new Result[results.size()]);
+  }
+
+  protected CellSetModel buildModelFromPut(Put put) {
+    RowModel row = new RowModel(put.getRow());
+    long ts = put.getTimeStamp();
+    for (List<KeyValue> kvs: put.getFamilyMap().values()) {
+      for (KeyValue kv: kvs) {
+        row.addCell(new CellModel(kv.getFamily(), kv.getQualifier(),
+          ts != HConstants.LATEST_TIMESTAMP ? ts : kv.getTimestamp(),
+          kv.getValue()));
+      }
+    }
+    CellSetModel model = new CellSetModel();
+    model.addRow(row);
+    return model;
+  }
+
+  /**
+   * Constructor
+   * @param client
+   * @param name
+   */
+  public RemoteHTable(Client client, String name) {
+    this(client, HBaseConfiguration.create(), Bytes.toBytes(name), null);
+  }
+
+  /**
+   * Constructor
+   * @param client
+   * @param name
+   * @param accessToken
+   */
+  public RemoteHTable(Client client, String name, String accessToken) {
+    this(client, HBaseConfiguration.create(), Bytes.toBytes(name), accessToken);
+  }
+
+  /**
+   * Constructor
+   * @param client
+   * @param conf
+   * @param name
+   * @param accessToken
+   */
+  public RemoteHTable(Client client, Configuration conf, String name,
+      String accessToken) {
+    this(client, conf, Bytes.toBytes(name), accessToken);
+  }
+
+  /**
+   * Constructor
+   * @param client the REST client instance
+   * @param conf the HBase configuration
+   * @param name the table name
+   * @param accessToken the access token, or null if not needed
+   */
+  public RemoteHTable(Client client, Configuration conf, byte[] name,
+      String accessToken) {
+    this.client = client;
+    this.conf = conf;
+    this.name = name;
+    this.accessToken = accessToken;
+    this.maxRetries = conf.getInt("hbase.rest.client.max.retries", 10);
+    this.sleepTime = conf.getLong("hbase.rest.client.sleep", 1000);
+  }
+
+  public byte[] getTableName() {
+    return name.clone();
+  }
+
+  public Configuration getConfiguration() {
+    return conf;
+  }
+
+  public HTableDescriptor getTableDescriptor() throws IOException {
+    StringBuilder sb = new StringBuilder();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');
+    }
+    sb.append(Bytes.toStringBinary(name));
+    sb.append('/');
+    sb.append("schema");
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.get(sb.toString(), Constants.MIMETYPE_PROTOBUF);
+      int code = response.getCode();
+      switch (code) {
+      case 200:
+        TableSchemaModel schema = new TableSchemaModel();
+        schema.getObjectFromMessage(response.getBody());
+        return schema.getTableDescriptor();
+      case 509: 
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("schema request returned " + code);
+      }
+    }
+    throw new IOException("schema request timed out");
+  }
+
+  public void close() throws IOException {
+    client.shutdown();
+  }
+
+  public Result get(Get get) throws IOException {
+    TimeRange range = get.getTimeRange();
+    String spec = buildRowSpec(get.getRow(), get.getFamilyMap(),
+      range.getMin(), range.getMax(), get.getMaxVersions());
+    if (get.getFilter() != null) {
+      LOG.warn("filters not supported on gets");
+    }
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.get(spec, Constants.MIMETYPE_PROTOBUF);
+      int code = response.getCode();
+      switch (code) {
+      case 200:
+        CellSetModel model = new CellSetModel();
+        model.getObjectFromMessage(response.getBody());
+        Result[] results = buildResultFromModel(model);
+        if (results.length > 0) {
+          if (results.length > 1) {
+            LOG.warn("too many results for get (" + results.length + ")");
+          }
+          return results[0];
+        }
+        // fall through
+      case 404:
+        return new Result();
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("get request returned " + code);
+      }
+    }
+    throw new IOException("get request timed out");
+  }
+
+  public boolean exists(Get get) throws IOException {
+    LOG.warn("exists() is really get(), just use get()");
+    Result result = get(get);
+    return (result != null && !(result.isEmpty()));
+  }
+
+  public void put(Put put) throws IOException {
+    CellSetModel model = buildModelFromPut(put);
+    StringBuilder sb = new StringBuilder();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');      
+    }
+    sb.append(Bytes.toStringBinary(name));
+    sb.append('/');
+    sb.append(Bytes.toStringBinary(put.getRow()));
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.put(sb.toString(), Constants.MIMETYPE_PROTOBUF,
+        model.createProtobufOutput());
+      int code = response.getCode();
+      switch (code) {
+      case 200:
+        return;
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("put request failed with " + code);
+      }
+    }
+    throw new IOException("put request timed out");
+  }
+
+  public void put(List<Put> puts) throws IOException {
+    // this is a trick: The gateway accepts multiple rows in a cell set and
+    // ignores the row specification in the URI
+
+    // separate puts by row
+    TreeMap<byte[],List<KeyValue>> map =
+      new TreeMap<byte[],List<KeyValue>>(Bytes.BYTES_COMPARATOR);
+    for (Put put: puts) {
+      byte[] row = put.getRow();
+      List<KeyValue> kvs = map.get(row);
+      if (kvs == null) {
+        kvs = new ArrayList<KeyValue>();
+        map.put(row, kvs);
+      }
+      for (List<KeyValue> l: put.getFamilyMap().values()) {
+        kvs.addAll(l);
+      }
+    }
+
+    // build the cell set
+    CellSetModel model = new CellSetModel();
+    for (Map.Entry<byte[], List<KeyValue>> e: map.entrySet()) {
+      RowModel row = new RowModel(e.getKey());
+      for (KeyValue kv: e.getValue()) {
+        row.addCell(new CellModel(kv));
+      }
+      model.addRow(row);
+    }
+
+    // build path for multiput
+    StringBuilder sb = new StringBuilder();
+    sb.append('/');
+    if (accessToken != null) {
+      sb.append(accessToken);
+      sb.append('/');      
+    }
+    sb.append(Bytes.toStringBinary(name));
+    sb.append("/$multiput"); // can be any nonexistent row
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.put(sb.toString(), Constants.MIMETYPE_PROTOBUF,
+        model.createProtobufOutput());
+      int code = response.getCode();
+      switch (code) {
+      case 200:
+        return;
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("multiput request failed with " + code);
+      }
+    }
+    throw new IOException("multiput request timed out");
+  }
+
+  public void delete(Delete delete) throws IOException {
+    String spec = buildRowSpec(delete.getRow(), delete.getFamilyMap(),
+      delete.getTimeStamp(), delete.getTimeStamp(), 1);
+    for (int i = 0; i < maxRetries; i++) {
+      Response response = client.delete(spec);
+      int code = response.getCode();
+      switch (code) {
+      case 200:
+        return;
+      case 509:
+        try {
+          Thread.sleep(sleepTime);
+        } catch (InterruptedException e) { }
+        break;
+      default:
+        throw new IOException("delete request failed with " + code);
+      }
+    }
+    throw new IOException("delete request timed out");
+  }
+
+  public void delete(List<Delete> deletes) throws IOException {
+    for (Delete delete: deletes) {
+      delete(delete);
+    }
+  }
+
+  public void flushCommits() throws IOException {
+    // no-op
+  }
+
+  class Scanner implements ResultScanner {
+
+    String uri;
+
+    public Scanner(Scan scan) throws IOException {
+      ScannerModel model;
+      try {
+        model = ScannerModel.fromScan(scan);
+      } catch (Exception e) {
+        throw new IOException(e);
+      }
+      StringBuffer sb = new StringBuffer();
+      sb.append('/');
+      if (accessToken != null) {
+        sb.append(accessToken);
+        sb.append('/');
+      }
+      sb.append(Bytes.toStringBinary(name));
+      sb.append('/');
+      sb.append("scanner");
+      for (int i = 0; i < maxRetries; i++) {
+        Response response = client.post(sb.toString(),
+          Constants.MIMETYPE_PROTOBUF, model.createProtobufOutput());
+        int code = response.getCode();
+        switch (code) {
+        case 201:
+          uri = response.getLocation();
+          return;
+        case 509:
+          try {
+            Thread.sleep(sleepTime);
+          } catch (InterruptedException e) { }
+          break;
+        default:
+          throw new IOException("scan request failed with " + code);
+        }
+      }
+      throw new IOException("scan request timed out");
+    }
+
+    @Override
+    public Result[] next(int nbRows) throws IOException {
+      StringBuilder sb = new StringBuilder(uri);
+      sb.append("?n=");
+      sb.append(nbRows);
+      for (int i = 0; i < maxRetries; i++) {
+        Response response = client.get(sb.toString(),
+          Constants.MIMETYPE_PROTOBUF);
+        int code = response.getCode();
+        switch (code) {
+        case 200:
+          CellSetModel model = new CellSetModel();
+          model.getObjectFromMessage(response.getBody());
+          return buildResultFromModel(model);
+        case 204:
+        case 206:
+          return null;
+        case 509:
+          try {
+            Thread.sleep(sleepTime);
+          } catch (InterruptedException e) { }
+          break;
+        default:
+          throw new IOException("scanner.next request failed with " + code);
+        }
+      }
+      throw new IOException("scanner.next request timed out");
+    }
+
+    @Override
+    public Result next() throws IOException {
+      Result[] results = next(1);
+      if (results == null || results.length < 1) {
+        return null;
+      }
+      return results[0];
+    }
+    
+    class Iter implements Iterator<Result> {
+
+      Result cache;
+
+      public Iter() {
+        try {
+          cache = Scanner.this.next();
+        } catch (IOException e) {
+          LOG.warn(StringUtils.stringifyException(e));
+        }
+      }
+
+      @Override
+      public boolean hasNext() {
+        return cache != null;
+      }
+
+      @Override
+      public Result next() {
+        Result result = cache;
+        try {
+          cache = Scanner.this.next();
+        } catch (IOException e) {
+          LOG.warn(StringUtils.stringifyException(e));
+          cache = null;
+        }
+        return result;
+      }
+
+      @Override
+      public void remove() {
+        throw new RuntimeException("remove() not supported");
+      }
+      
+    }
+
+    @Override
+    public Iterator<Result> iterator() {
+      return new Iter();
+    }
+
+    @Override
+    public void close() {
+      try {
+        client.delete(uri);
+      } catch (IOException e) {
+        LOG.warn(StringUtils.stringifyException(e));
+      }
+    }
+
+  }
+
+  public ResultScanner getScanner(Scan scan) throws IOException {
+    return new Scanner(scan);
+  }
+
+  public ResultScanner getScanner(byte[] family) throws IOException {
+    Scan scan = new Scan();
+    scan.addFamily(family);
+    return new Scanner(scan);
+  }
+
+  public ResultScanner getScanner(byte[] family, byte[] qualifier)
+      throws IOException {
+    Scan scan = new Scan();
+    scan.addColumn(family, qualifier);
+    return new Scanner(scan);
+  }
+
+  public boolean isAutoFlush() {
+    return true;
+  }
+
+  public Result getRowOrBefore(byte[] row, byte[] family) throws IOException {
+    throw new IOException("getRowOrBefore not supported");
+  }
+
+  public RowLock lockRow(byte[] row) throws IOException {
+    throw new IOException("lockRow not implemented");
+  }
+
+  public void unlockRow(RowLock rl) throws IOException {
+    throw new IOException("unlockRow not implemented");
+  }
+
+  public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, Put put) throws IOException {
+    throw new IOException("checkAndPut not supported");
+  }
+
+  public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, Delete delete) throws IOException {
+    throw new IOException("checkAndDelete not supported");
+  }
+
+  public Result increment(Increment increment) throws IOException {
+    throw new IOException("Increment not supported");
+  }
+
+  public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
+      long amount) throws IOException {
+    throw new IOException("incrementColumnValue not supported");
+  }
+
+  public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier,
+      long amount, boolean writeToWAL) throws IOException {
+    throw new IOException("incrementColumnValue not supported");
+  }
+
+  @Override
+  public void batch(List<Row> actions, Object[] results) throws IOException {
+    throw new IOException("batch not supported");
+  }
+
+  @Override
+  public Object[] batch(List<Row> actions) throws IOException {
+    throw new IOException("batch not supported");
+  }
+
+  @Override
+  public Result[] get(List<Get> gets) throws IOException {
+    throw new IOException("get(List<Get>) not supported");
+  }
+
+}
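
The Scanner class above implements ResultScanner against the REST gateway's scanner protocol: a POST of the serialized Scan creates the scanner (201 Created, with the scanner URI in the Location header), GET with ?n= fetches row batches, 509 is retried after a sleep, and close() issues a DELETE on the scanner URI. A minimal usage sketch follows; it assumes the enclosing class is this package's RemoteHTable, that a REST gateway is listening on localhost:8080, and that the Cluster/Client/RemoteHTable constructor signatures used here match this package (treat them as assumptions).

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.RemoteHTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RemoteScanExample {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("localhost", 8080);                  // assumed gateway endpoint
    Client client = new Client(cluster);
    RemoteHTable table = new RemoteHTable(client, "mytable");
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("cf"));
    ResultScanner scanner = table.getScanner(scan);  // POSTs the scanner spec
    try {
      for (Result result : scanner) {                // GETs row batches
        System.out.println(Bytes.toStringBinary(result.getRow()));
      }
    } finally {
      scanner.close();                               // DELETEs the scanner URI
    }
    client.shutdown();
  }
}
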
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java
new file mode 100644
index 0000000..421065b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/client/Response.java
@@ -0,0 +1,126 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import org.apache.commons.httpclient.Header;
+
+/**
+ * The HTTP result code, response headers, and body of an HTTP response.
+ */
+public class Response {
+  private int code;
+  private Header[] headers;
+  private byte[] body;
+
+  /**
+   * Constructor
+   * @param code the HTTP response code
+   */
+  public Response(int code) {
+    this(code, null, null);
+  }
+
+  /**
+   * Constructor
+   * @param code the HTTP response code
+   * @param headers the HTTP response headers
+   */
+  public Response(int code, Header[] headers) {
+    this(code, headers, null);
+  }
+
+  /**
+   * Constructor
+   * @param code the HTTP response code
+   * @param headers the HTTP response headers
+   * @param body the response body, can be null
+   */
+  public Response(int code, Header[] headers, byte[] body) {
+    this.code = code;
+    this.headers = headers;
+    this.body = body;
+  }
+
+  /**
+   * @return the HTTP response code
+   */
+  public int getCode() {
+    return code;
+  }
+
+  /**
+   * @return the HTTP response headers
+   */
+  public Header[] getHeaders() {
+    return headers;
+  }
+
+  /**
+   * @param key the header name (case-insensitive)
+   * @return the value of the first header with a matching name, or null if
+   * no such header is present
+   */
+  public String getHeader(String key) {
+    for (Header header: headers) {
+      if (header.getName().equalsIgnoreCase(key)) {
+        return header.getValue();
+      }
+    }
+    return null;
+  }
+
+  /**
+   * @return the value of the Location header
+   */
+  public String getLocation() {
+    return getHeader("Location");
+  }
+
+  /**
+   * @return true if a response body was sent
+   */
+  public boolean hasBody() {
+    return body != null;
+  }
+
+  /**
+   * @return the HTTP response body
+   */
+  public byte[] getBody() {
+    return body;
+  }
+
+  /**
+   * @param code the HTTP response code
+   */
+  public void setCode(int code) {
+    this.code = code;
+  }
+
+  /**
+   * @param headers the HTTP response headers
+   */
+  public void setHeaders(Header[] headers) {
+    this.headers = headers;
+  }
+
+  /**
+   * @param body the response body
+   */
+  public void setBody(byte[] body) {
+    this.body = body;
+  }
+}
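
Response is a plain value holder for what Client hands back: status code, httpclient headers, and the raw body. A small sketch of typical use, assuming this package's Client#get(String, String) overload and a gateway on localhost:8080:

import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.Response;
import org.apache.hadoop.hbase.util.Bytes;

public class ResponseExample {
  public static void main(String[] args) throws Exception {
    Cluster cluster = new Cluster();
    cluster.add("localhost", 8080);
    Client client = new Client(cluster);
    // fetch the gateway's version resource as plain text
    Response response = client.get("/version", "text/plain");
    if (response.getCode() == 200 && response.hasBody()) {
      System.out.println(Bytes.toString(response.getBody()));
    } else {
      System.err.println("request failed with " + response.getCode());
    }
    client.shutdown();
  }
}
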
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPRequestStream.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPRequestStream.java
new file mode 100644
index 0000000..0bd5f65
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPRequestStream.java
@@ -0,0 +1,56 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.filter;
+
+import java.io.IOException;
+import java.util.zip.GZIPInputStream;
+
+import javax.servlet.ServletInputStream;
+import javax.servlet.http.HttpServletRequest;
+
+public class GZIPRequestStream extends ServletInputStream {
+  private GZIPInputStream in;
+
+  public GZIPRequestStream(HttpServletRequest request) throws IOException {
+    this.in = new GZIPInputStream(request.getInputStream());
+  }
+
+  @Override
+  public int read() throws IOException {
+    return in.read();
+  }
+
+  @Override
+  public int read(byte[] b) throws IOException {
+    return in.read(b);
+  }
+
+  @Override
+  public int read(byte[] b, int off, int len) throws IOException {
+    return in.read(b, off, len);
+  }
+
+  @Override
+  public void close() throws IOException {
+    in.close();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPRequestWrapper.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPRequestWrapper.java
new file mode 100644
index 0000000..764576c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPRequestWrapper.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.filter;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+
+import javax.servlet.ServletInputStream;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletRequestWrapper;
+
+public class GZIPRequestWrapper extends HttpServletRequestWrapper {
+  private ServletInputStream is;
+  private BufferedReader reader;
+
+  public GZIPRequestWrapper(HttpServletRequest request) throws IOException {
+    super(request);
+    this.is = new GZIPRequestStream(request);
+    this.reader = new BufferedReader(new InputStreamReader(this.is));
+  }
+
+  @Override
+  public ServletInputStream getInputStream() throws IOException {
+    return is;
+  }
+
+  @Override
+  public BufferedReader getReader() throws IOException {
+    return reader;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseStream.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseStream.java
new file mode 100644
index 0000000..d27b37b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseStream.java
@@ -0,0 +1,76 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.filter;
+
+import java.io.IOException;
+import java.util.zip.GZIPOutputStream;
+
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServletResponse;
+
+public class GZIPResponseStream extends ServletOutputStream {
+  private HttpServletResponse response;
+  private GZIPOutputStream out;
+
+  public GZIPResponseStream(HttpServletResponse response) throws IOException {
+    this.response = response;
+    this.out = new GZIPOutputStream(response.getOutputStream());
+    response.addHeader("Content-Encoding", "gzip");
+  }
+
+  public void resetBuffer() {
+    if (out != null && !response.isCommitted()) {
+      response.setHeader("Content-Encoding", null);
+    }
+    out = null;
+  }
+
+  @Override
+  public void write(int b) throws IOException {
+    out.write(b);
+  }
+
+  @Override
+  public void write(byte[] b) throws IOException {
+    out.write(b);
+  }
+
+  @Override
+  public void write(byte[] b, int off, int len) throws IOException {
+    out.write(b, off, len);
+  }
+
+  @Override
+  public void close() throws IOException {
+    finish();
+    out.close();
+  }
+
+  @Override
+  public void flush() throws IOException {
+    out.flush();
+  }
+
+  public void finish() throws IOException {
+    out.finish();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseWrapper.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseWrapper.java
new file mode 100644
index 0000000..84184e9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseWrapper.java
@@ -0,0 +1,144 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.filter;
+
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import javax.servlet.ServletOutputStream;
+import javax.servlet.http.HttpServletResponse;
+import javax.servlet.http.HttpServletResponseWrapper;
+
+public class GZIPResponseWrapper extends HttpServletResponseWrapper {
+  private HttpServletResponse response;
+  private GZIPResponseStream os;
+  private PrintWriter writer;
+  private boolean compress = true;
+
+  public GZIPResponseWrapper(HttpServletResponse response) {
+    super(response);
+    this.response = response;
+  }
+
+  @Override
+  public void setStatus(int status) {
+    super.setStatus(status);
+    // only compress successful (2xx) responses
+    if (status < 200 || status >= 300) {
+      compress = false;
+    }
+  }
+
+  @Override
+  public void addHeader(String name, String value) {
+    if (!"content-length".equalsIgnoreCase(name)) {
+      super.addHeader(name, value);
+    }
+  }
+
+  @Override
+  public void setContentLength(int length) {
+    // do nothing
+  }
+
+  @Override
+  public void setIntHeader(String name, int value) {
+    if (!"content-length".equalsIgnoreCase(name)) {
+      super.setIntHeader(name, value);
+    }
+  }
+
+  @Override
+  public void setHeader(String name, String value) {
+    if (!"content-length".equalsIgnoreCase(name)) {
+      super.setHeader(name, value);
+    }
+  }
+
+  @Override
+  public void flushBuffer() throws IOException {
+    if (writer != null) {
+      writer.flush();
+    }
+    if (os != null) {
+      os.finish();
+    } else {
+      getResponse().flushBuffer();
+    }
+  }
+
+  @Override
+  public void reset() {
+    super.reset();
+    if (os != null) {
+      os.resetBuffer();
+    }
+    writer = null;
+    os = null;
+    compress = true;
+  }
+
+  @Override
+  public void resetBuffer() {
+    super.resetBuffer();
+    if (os != null) {
+      os.resetBuffer();
+    }
+    writer = null;
+    os = null;
+  }
+
+  @Override
+  public void sendError(int status, String msg) throws IOException {
+    resetBuffer();
+    super.sendError(status, msg);
+  }
+
+  @Override
+  public void sendError(int status) throws IOException {
+    resetBuffer();
+    super.sendError(status);
+  }
+
+  @Override
+  public void sendRedirect(String location) throws IOException {
+    resetBuffer();
+    super.sendRedirect(location);
+  }
+
+  @Override
+  public ServletOutputStream getOutputStream() throws IOException {
+    if (!response.isCommitted() && compress) {
+      if (os == null) {
+        os = new GZIPResponseStream(response);
+      }
+      return os;
+    } else {
+      return response.getOutputStream();
+    }
+  }
+
+  @Override
+  public PrintWriter getWriter() throws IOException {
+    if (writer == null) {
+      writer = new PrintWriter(getOutputStream());
+    }
+    return writer;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GzipFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GzipFilter.java
new file mode 100644
index 0000000..3cc35bc
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/filter/GzipFilter.java
@@ -0,0 +1,57 @@
+package org.apache.hadoop.hbase.rest.filter;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.StringTokenizer;
+
+import javax.servlet.Filter;
+import javax.servlet.FilterChain;
+import javax.servlet.FilterConfig;
+import javax.servlet.ServletException;
+import javax.servlet.ServletOutputStream;
+import javax.servlet.ServletRequest;
+import javax.servlet.ServletResponse;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+public class GzipFilter implements Filter {
+  private Set<String> mimeTypes = new HashSet<String>();
+
+  @Override
+  public void init(FilterConfig filterConfig) throws ServletException {
+    String s = filterConfig.getInitParameter("mimeTypes");
+    if (s != null) {
+      StringTokenizer tok = new StringTokenizer(s, ",", false);
+      while (tok.hasMoreTokens()) {
+        mimeTypes.add(tok.nextToken());
+      }
+    }
+  }
+
+  @Override
+  public void destroy() {
+  }
+
+  @Override
+  public void doFilter(ServletRequest req, ServletResponse rsp,
+      FilterChain chain) throws IOException, ServletException {
+    HttpServletRequest request = (HttpServletRequest)req;
+    HttpServletResponse response = (HttpServletResponse)rsp;
+    String contentEncoding = request.getHeader("content-encoding");
+    String acceptEncoding = request.getHeader("accept-encoding");
+    String contentType = request.getHeader("content-type");
+    if ((contentEncoding != null) &&
+        (contentEncoding.toLowerCase().indexOf("gzip") > -1)) {
+      request = new GZIPRequestWrapper(request);
+    }
+    if (((acceptEncoding != null) &&
+          (acceptEncoding.toLowerCase().indexOf("gzip") > -1)) ||
+        ((contentType != null) && mimeTypes.contains(contentType))) {
+      response = new GZIPResponseWrapper(response);
+    }
+    chain.doFilter(request, response);
+    if (response instanceof GZIPResponseWrapper) {
+      // the wrapper returns the container's own stream when compression was
+      // disabled or the response is already committed, so guard the cast
+      ServletOutputStream out = response.getOutputStream();
+      if (out instanceof GZIPResponseStream) {
+        ((GZIPResponseStream)out).finish();
+      }
+    }
+  }
+}
\ No newline at end of file
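
GzipFilter decompresses gzip-encoded request bodies and compresses responses when the client sends Accept-Encoding: gzip or the request Content-Type is in the configured mimeTypes list. A sketch of wiring it into an embedded Jetty 6 context, in the spirit of the REST gateway's startup code (the server setup below is an assumption for illustration, not taken from this patch):

import org.mortbay.jetty.Server;
import org.mortbay.jetty.servlet.Context;

import org.apache.hadoop.hbase.rest.filter.GzipFilter;

public class GzipFilterWiring {
  public static void main(String[] args) throws Exception {
    Server server = new Server(8080);
    Context context = new Context(server, "/", Context.SESSIONS);
    // apply gzip request/response handling to everything under this context;
    // the gateway would also register its Jersey servlet on "/*"
    context.addFilter(GzipFilter.class, "/*", 0);
    server.start();
    server.join();
  }
}
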
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/metrics/RESTMetrics.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/metrics/RESTMetrics.java
new file mode 100644
index 0000000..284bbc5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/metrics/RESTMetrics.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.metrics;
+
+import org.apache.hadoop.hbase.metrics.MetricsRate;
+
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.Updater;
+import org.apache.hadoop.metrics.jvm.JvmMetrics;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+public class RESTMetrics implements Updater {
+  private final MetricsRecord metricsRecord;
+  private final MetricsRegistry registry = new MetricsRegistry();
+  private final RESTStatistics restStatistics;
+
+  private MetricsRate requests = new MetricsRate("requests", registry);
+
+  public RESTMetrics() {
+    MetricsContext context = MetricsUtil.getContext("rest");
+    metricsRecord = MetricsUtil.createRecord(context, "rest");
+    String name = Thread.currentThread().getName();
+    metricsRecord.setTag("REST", name);
+    context.registerUpdater(this);
+    JvmMetrics.init("rest", name);
+    // expose the MBean for metrics
+    restStatistics = new RESTStatistics(registry);
+
+  }
+
+  public void shutdown() {
+    if (restStatistics != null) {
+      restStatistics.shutdown();
+    }
+  }
+
+  /**
+   * Since this object is a registered updater, this method will be called
+   * periodically, e.g. every 5 seconds.
+   * @param unused 
+   */
+  public void doUpdates(MetricsContext unused) {
+    synchronized (this) {
+      requests.pushMetric(metricsRecord);
+    }
+    this.metricsRecord.update();
+  }
+  
+  public void resetAllMinMax() {
+    // Nothing to do
+  }
+
+  /**
+   * @return requests per second over the previous metrics interval.
+   */
+  public float getRequests() {
+    return requests.getPreviousIntervalValue();
+  }
+  
+  /**
+   * @param inc How much to add to requests.
+   */
+  public void incrementRequests(final int inc) {
+    requests.inc(inc);
+  }
+
+}
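
RESTMetrics registers itself with the Hadoop metrics framework and exposes a single request rate plus the restStatistics MBean. A minimal sketch using only the methods above:

import org.apache.hadoop.hbase.rest.metrics.RESTMetrics;

public class RESTMetricsExample {
  public static void main(String[] args) {
    RESTMetrics metrics = new RESTMetrics();
    // each handled request bumps the rate; doUpdates() is driven by the
    // metrics framework on its own schedule
    metrics.incrementRequests(1);
    System.out.println("previous interval rate: " + metrics.getRequests());
    metrics.shutdown();
  }
}
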
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/metrics/RESTStatistics.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/metrics/RESTStatistics.java
new file mode 100644
index 0000000..d29d50d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/metrics/RESTStatistics.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.metrics;
+
+import javax.management.ObjectName;
+
+import org.apache.hadoop.hbase.metrics.MetricsMBeanBase;
+
+import org.apache.hadoop.metrics.util.MBeanUtil;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+
+public class RESTStatistics  extends MetricsMBeanBase {
+  private final ObjectName mbeanName;
+
+  public RESTStatistics(MetricsRegistry registry) {
+    super(registry, "restStatistics");
+    mbeanName = MBeanUtil.registerMBean("rest", "restStatistics", this);
+  }
+
+  public void shutdown() {
+    if (mbeanName != null) {
+      MBeanUtil.unregisterMBean(mbeanName);
+    }
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/CellModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/CellModel.java
new file mode 100644
index 0000000..3413d00
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/CellModel.java
@@ -0,0 +1,198 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlValue;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell;
+
+import com.google.protobuf.ByteString;
+
+/**
+ * Representation of a cell. A cell is a single value associated a column and
+ * optional qualifier, and either the timestamp when it was stored or the user-
+ * provided timestamp if one was explicitly supplied.
+ *
+ * <pre>
+ * &lt;complexType name="Cell"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="value" maxOccurs="1" minOccurs="1"&gt;
+ *       &lt;simpleType&gt;
+ *         &lt;restriction base="base64Binary"/&gt;
+ *       &lt;/simpleType&gt;
+ *     &lt;/element&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;attribute name="column" type="base64Binary" /&gt;
+ *   &lt;attribute name="timestamp" type="int" /&gt;
+ * &lt;/complexType&gt;
+ * </pre>
+ */
+@XmlRootElement(name="Cell")
+public class CellModel implements ProtobufMessageHandler, Serializable {
+  private static final long serialVersionUID = 1L;
+  
+  private long timestamp = HConstants.LATEST_TIMESTAMP;
+  private byte[] column;
+  private byte[] value;
+
+  /**
+   * Default constructor
+   */
+  public CellModel() {}
+
+  /**
+   * Constructor
+   * @param column
+   * @param value
+   */
+  public CellModel(byte[] column, byte[] value) {
+    this(column, HConstants.LATEST_TIMESTAMP, value);
+  }
+
+  /**
+   * Constructor
+   * @param column
+   * @param qualifier
+   * @param value
+   */
+  public CellModel(byte[] column, byte[] qualifier, byte[] value) {
+    this(column, qualifier, HConstants.LATEST_TIMESTAMP, value);
+  }
+
+  /**
+   * Constructor from KeyValue
+   * @param kv
+   */
+  public CellModel(KeyValue kv) {
+    this(kv.getFamily(), kv.getQualifier(), kv.getTimestamp(), kv.getValue());
+  }
+
+  /**
+   * Constructor
+   * @param column
+   * @param timestamp
+   * @param value
+   */
+  public CellModel(byte[] column, long timestamp, byte[] value) {
+    this.column = column;
+    this.timestamp = timestamp;
+    this.value = value;
+  }
+
+  /**
+   * Constructor
+   * @param column
+   * @param qualifier
+   * @param timestamp
+   * @param value
+   */
+  public CellModel(byte[] column, byte[] qualifier, long timestamp,
+      byte[] value) {
+    this.column = KeyValue.makeColumn(column, qualifier);
+    this.timestamp = timestamp;
+    this.value = value;
+  }
+  
+  /**
+   * @return the column
+   */
+  @XmlAttribute
+  public byte[] getColumn() {
+    return column;
+  }
+
+  /**
+   * @param column the column to set
+   */
+  public void setColumn(byte[] column) {
+    this.column = column;
+  }
+
+  /**
+   * @return true if the timestamp property has been specified by the
+   * user
+   */
+  public boolean hasUserTimestamp() {
+    return timestamp != HConstants.LATEST_TIMESTAMP;
+  }
+
+  /**
+   * @return the timestamp
+   */
+  @XmlAttribute
+  public long getTimestamp() {
+    return timestamp;
+  }
+
+  /**
+   * @param timestamp the timestamp to set
+   */
+  public void setTimestamp(long timestamp) {
+    this.timestamp = timestamp;
+  }
+
+  /**
+   * @return the value
+   */
+  @XmlValue
+  public byte[] getValue() {
+    return value;
+  }
+
+  /**
+   * @param value the value to set
+   */
+  public void setValue(byte[] value) {
+    this.value = value;
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    Cell.Builder builder = Cell.newBuilder();
+    builder.setColumn(ByteString.copyFrom(getColumn()));
+    builder.setData(ByteString.copyFrom(getValue()));
+    if (hasUserTimestamp()) {
+      builder.setTimestamp(getTimestamp());
+    }
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    Cell.Builder builder = Cell.newBuilder();
+    builder.mergeFrom(message);
+    setColumn(builder.getColumn().toByteArray());
+    setValue(builder.getData().toByteArray());
+    if (builder.hasTimestamp()) {
+      setTimestamp(builder.getTimestamp());
+    }
+    return this;
+  }
+}
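
A short round-trip sketch using only the CellModel API above: build a cell, serialize it to its protobuf wire form, and decode it back.

import org.apache.hadoop.hbase.rest.model.CellModel;
import org.apache.hadoop.hbase.util.Bytes;

public class CellModelExample {
  public static void main(String[] args) throws Exception {
    CellModel cell = new CellModel(Bytes.toBytes("cf"), Bytes.toBytes("q"),
      1234567890L, Bytes.toBytes("value"));
    byte[] wire = cell.createProtobufOutput();

    CellModel decoded = new CellModel();
    decoded.getObjectFromMessage(wire);
    // the column comes back as "cf:q" since family and qualifier are joined
    System.out.println(Bytes.toStringBinary(decoded.getColumn()) + " @ "
      + decoded.getTimestamp() + " = " + Bytes.toString(decoded.getValue()));
  }
}
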
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/CellSetModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/CellSetModel.java
new file mode 100644
index 0000000..7e7073c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/CellSetModel.java
@@ -0,0 +1,149 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlElement;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell;
+import org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet;
+
+import com.google.protobuf.ByteString;
+
+/**
+ * Representation of a grouping of cells. May contain cells from more than
+ * one row. Encapsulates RowModel and CellModel models.
+ * 
+ * <pre>
+ * &lt;complexType name="CellSet"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="row" type="tns:Row" maxOccurs="unbounded" 
+ *       minOccurs="1"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ * &lt;/complexType&gt;
+ * 
+ * &lt;complexType name="Row"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="key" type="base64Binary"&gt;&lt;/element&gt;
+ *     &lt;element name="cell" type="tns:Cell" 
+ *       maxOccurs="unbounded" minOccurs="1"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ * &lt;/complexType&gt;
+ *
+ * &lt;complexType name="Cell"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="value" maxOccurs="1" minOccurs="1"&gt;
+ *       &lt;simpleType&gt;
+ *         &lt;restriction base="base64Binary"/&gt;
+ *       &lt;/simpleType&gt;
+ *     &lt;/element&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;attribute name="column" type="base64Binary" /&gt;
+ *   &lt;attribute name="timestamp" type="int" /&gt;
+ * &lt;/complexType&gt;
+ * </pre>
+ */
+@XmlRootElement(name="CellSet")
+public class CellSetModel implements Serializable, ProtobufMessageHandler {
+
+  private static final long serialVersionUID = 1L;
+  
+  private List<RowModel> rows;
+
+  /**  
+   * Constructor
+   */
+  public CellSetModel() {
+    this.rows = new ArrayList<RowModel>();
+  }
+  
+  /**
+   * @param rows the rows
+   */
+  public CellSetModel(List<RowModel> rows) {
+    super();
+    this.rows = rows;
+  }
+  
+  /**
+   * Add a row to this cell set
+   * @param row the row
+   */
+  public void addRow(RowModel row) {
+    rows.add(row);
+  }
+
+  /**
+   * @return the rows
+   */
+  @XmlElement(name="Row")
+  public List<RowModel> getRows() {
+    return rows;
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    CellSet.Builder builder = CellSet.newBuilder();
+    for (RowModel row: getRows()) {
+      CellSet.Row.Builder rowBuilder = CellSet.Row.newBuilder();
+      rowBuilder.setKey(ByteString.copyFrom(row.getKey()));
+      for (CellModel cell: row.getCells()) {
+        Cell.Builder cellBuilder = Cell.newBuilder();
+        cellBuilder.setColumn(ByteString.copyFrom(cell.getColumn()));
+        cellBuilder.setData(ByteString.copyFrom(cell.getValue()));
+        if (cell.hasUserTimestamp()) {
+          cellBuilder.setTimestamp(cell.getTimestamp());
+        }
+        rowBuilder.addValues(cellBuilder);
+      }
+      builder.addRows(rowBuilder);
+    }
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    CellSet.Builder builder = CellSet.newBuilder();
+    builder.mergeFrom(message);
+    for (CellSet.Row row: builder.getRowsList()) {
+      RowModel rowModel = new RowModel(row.getKey().toByteArray());
+      for (Cell cell: row.getValuesList()) {
+        long timestamp = HConstants.LATEST_TIMESTAMP;
+        if (cell.hasTimestamp()) {
+          timestamp = cell.getTimestamp();
+        }
+        rowModel.addCell(
+            new CellModel(cell.getColumn().toByteArray(), timestamp,
+                  cell.getData().toByteArray()));
+      }
+      addRow(rowModel);
+    }
+    return this;
+  }
+}
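
CellSetModel is the container the REST resources and the remote client exchange for multi-row results. A sketch of assembling one from the RowModel and CellModel classes in this package and round-tripping it through protobuf:

import org.apache.hadoop.hbase.rest.model.CellModel;
import org.apache.hadoop.hbase.rest.model.CellSetModel;
import org.apache.hadoop.hbase.rest.model.RowModel;
import org.apache.hadoop.hbase.util.Bytes;

public class CellSetModelExample {
  public static void main(String[] args) throws Exception {
    RowModel row = new RowModel(Bytes.toBytes("row1"));
    row.addCell(new CellModel(Bytes.toBytes("cf"), Bytes.toBytes("a"),
      Bytes.toBytes("1")));
    row.addCell(new CellModel(Bytes.toBytes("cf"), Bytes.toBytes("b"),
      Bytes.toBytes("2")));

    CellSetModel cellSet = new CellSetModel();
    cellSet.addRow(row);
    byte[] wire = cellSet.createProtobufOutput();   // protobuf wire form

    CellSetModel decoded = new CellSetModel();
    decoded.getObjectFromMessage(wire);
    System.out.println(decoded.getRows().size() + " row(s) decoded");
  }
}
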
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/ColumnSchemaModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/ColumnSchemaModel.java
new file mode 100644
index 0000000..caf5368
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/ColumnSchemaModel.java
@@ -0,0 +1,236 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.Serializable;
+import java.util.HashMap;
+import java.util.Map;
+
+import javax.xml.bind.annotation.XmlAnyAttribute;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.namespace.QName;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+
+/**
+ * Representation of a column family schema.
+ * 
+ * <pre>
+ * &lt;complexType name="ColumnSchema"&gt;
+ *   &lt;attribute name="name" type="string"&gt;&lt;/attribute&gt;
+ *   &lt;anyAttribute&gt;&lt;/anyAttribute&gt;
+ * &lt;/complexType&gt;
+ * </pre>
+ */
+@XmlRootElement(name="ColumnSchema")
+public class ColumnSchemaModel implements Serializable {
+  private static final long serialVersionUID = 1L;
+  private static QName BLOCKCACHE = new QName(HColumnDescriptor.BLOCKCACHE);
+  private static QName BLOCKSIZE = new QName(HColumnDescriptor.BLOCKSIZE);
+  private static QName BLOOMFILTER = new QName(HColumnDescriptor.BLOOMFILTER);
+  private static QName COMPRESSION = new QName(HColumnDescriptor.COMPRESSION);
+  private static QName IN_MEMORY = new QName(HConstants.IN_MEMORY);
+  private static QName TTL = new QName(HColumnDescriptor.TTL);
+  private static QName VERSIONS = new QName(HConstants.VERSIONS);
+
+  private String name;
+  private Map<QName,Object> attrs = new HashMap<QName,Object>();
+
+  /**
+   * Default constructor
+   */
+  public ColumnSchemaModel() {}
+
+  /**
+   * Add an attribute to the column family schema
+   * @param name the attribute name
+   * @param value the attribute value
+   */
+  public void addAttribute(String name, Object value) {
+    attrs.put(new QName(name), value);
+  }
+
+  /**
+   * @param name the attribute name
+   * @return the attribute value
+   */
+  public String getAttribute(String name) {
+    Object o = attrs.get(new QName(name));
+    return o != null ? o.toString(): null;
+  }
+
+  /**
+   * @return the column name
+   */
+  @XmlAttribute
+  public String getName() {
+    return name;
+  }
+
+  /**
+   * @return the map for holding unspecified (user) attributes
+   */
+  @XmlAnyAttribute
+  public Map<QName,Object> getAny() {
+    return attrs;
+  }
+
+  /**
+   * @param name the column name
+   */
+  public void setName(String name) {
+    this.name = name;
+  }
+
+  /* (non-Javadoc)
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("{ NAME => '");
+    sb.append(name);
+    sb.append('\'');
+    for (Map.Entry<QName,Object> e: attrs.entrySet()) {
+      sb.append(", ");
+      sb.append(e.getKey().getLocalPart());
+      sb.append(" => '");
+      sb.append(e.getValue().toString());
+      sb.append('\'');
+    }
+    sb.append(" }");
+    return sb.toString();
+  }
+
+  // getters and setters for common schema attributes
+
+  // cannot be standard bean type getters and setters, otherwise this would
+  // confuse JAXB
+
+  /**
+   * @return the value of the BLOCKCACHE attribute or its default if unset
+   */
+  public boolean __getBlockcache() {
+    Object o = attrs.get(BLOCKCACHE);
+    return o != null ? 
+      Boolean.valueOf(o.toString()) : HColumnDescriptor.DEFAULT_BLOCKCACHE;
+  }
+
+  /**
+   * @return the value of the BLOCKSIZE attribute or its default if it is unset
+   */
+  public int __getBlocksize() {
+    Object o = attrs.get(BLOCKSIZE);
+    return o != null ? 
+      Integer.valueOf(o.toString()) : HColumnDescriptor.DEFAULT_BLOCKSIZE;
+  }
+
+  /**
+   * @return the value of the BLOOMFILTER attribute or its default if unset
+   */
+  public String __getBloomfilter() {
+    Object o = attrs.get(BLOOMFILTER);
+    return o != null ? o.toString() : HColumnDescriptor.DEFAULT_BLOOMFILTER;
+  }
+
+  /**
+   * @return the value of the COMPRESSION attribute or its default if unset
+   */
+  public String __getCompression() {
+    Object o = attrs.get(COMPRESSION);
+    return o != null ? o.toString() : HColumnDescriptor.DEFAULT_COMPRESSION;
+  }
+
+  /**
+   * @return the value of the IN_MEMORY attribute or its default if unset
+   */
+  public boolean __getInMemory() {
+    Object o = attrs.get(IN_MEMORY);
+    return o != null ? 
+      Boolean.valueOf(o.toString()) : HColumnDescriptor.DEFAULT_IN_MEMORY;
+  }
+
+  /**
+   * @return the value of the TTL attribute or its default if it is unset
+   */
+  public int __getTTL() {
+    Object o = attrs.get(TTL);
+    return o != null ? 
+      Integer.valueOf(o.toString()) : HColumnDescriptor.DEFAULT_TTL;
+  }
+
+  /**
+   * @return the value of the VERSIONS attribute or its default if it is unset
+   */
+  public int __getVersions() {
+    Object o = attrs.get(VERSIONS);
+    return o != null ? 
+      Integer.valueOf(o.toString()) : HColumnDescriptor.DEFAULT_VERSIONS;
+  }
+
+  /**
+   * @param value the desired value of the BLOCKSIZE attribute
+   */
+  public void __setBlocksize(int value) {
+    attrs.put(BLOCKSIZE, Integer.toString(value));
+  }
+
+  /**
+   * @param value the desired value of the BLOCKCACHE attribute
+   */
+  public void __setBlockcache(boolean value) {
+    attrs.put(BLOCKCACHE, Boolean.toString(value));
+  }
+
+  /**
+   * @param value the desired value of the BLOOMFILTER attribute
+   */
+  public void __setBloomfilter(String value) {
+    attrs.put(BLOOMFILTER, value);
+  }
+
+  /**
+   * @param value the desired value of the COMPRESSION attribute
+   */
+  public void __setCompression(String value) {
+    attrs.put(COMPRESSION, value); 
+  }
+
+  /**
+   * @param value the desired value of the IN_MEMORY attribute
+   */
+  public void __setInMemory(boolean value) {
+    attrs.put(IN_MEMORY, Boolean.toString(value));
+  }
+
+  /**
+   * @param value the desired value of the TTL attribute
+   */
+  public void __setTTL(int value) {
+    attrs.put(TTL, Integer.toString(value));
+  }
+
+  /**
+   * @param value the desired value of the VERSIONS attribute
+   */
+  public void __setVersions(int value) {
+    attrs.put(VERSIONS, Integer.toString(value));
+  }
+}
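
ColumnSchemaModel is essentially a named bag of HColumnDescriptor attributes; the double-underscore accessors exist so JAXB does not treat them as bean properties. A small sketch:

import org.apache.hadoop.hbase.rest.model.ColumnSchemaModel;

public class ColumnSchemaModelExample {
  public static void main(String[] args) {
    ColumnSchemaModel family = new ColumnSchemaModel();
    family.setName("cf");
    family.__setVersions(3);
    family.__setCompression("GZ");
    family.addAttribute("BLOCKSIZE", "65536");

    // shell-style output, e.g. { NAME => 'cf', VERSIONS => '3', ... }
    System.out.println(family);
    System.out.println("versions: " + family.__getVersions());
  }
}
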
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/RowModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/RowModel.java
new file mode 100644
index 0000000..a987695
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/RowModel.java
@@ -0,0 +1,142 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+
+/**
+ * Representation of a row. A row is a related set of cells, grouped by common
+ * row key. RowModels do not appear in results by themselves. They are always
+ * encapsulated within CellSetModels.
+ * 
+ * <pre>
+ * &lt;complexType name="Row"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="key" type="base64Binary"&gt;&lt;/element&gt;
+ *     &lt;element name="cell" type="tns:Cell" 
+ *       maxOccurs="unbounded" minOccurs="1"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ * &lt;/complexType&gt;
+ * </pre>
+ */
+@XmlRootElement(name="Row")
+public class RowModel implements ProtobufMessageHandler, Serializable {
+  private static final long serialVersionUID = 1L;
+
+  private byte[] key;
+  private List<CellModel> cells = new ArrayList<CellModel>();
+
+  /**
+   * Default constructor
+   */
+  public RowModel() { }
+
+  /**
+   * Constructor
+   * @param key the row key
+   */
+  public RowModel(final String key) {
+    this(key.getBytes());
+  }
+  
+  /**
+   * Constructor
+   * @param key the row key
+   */
+  public RowModel(final byte[] key) {
+    this.key = key;
+    cells = new ArrayList<CellModel>();
+  }
+
+  /**
+   * Constructor
+   * @param key the row key
+   * @param cells the cells
+   */
+  public RowModel(final String key, final List<CellModel> cells) {
+    this(key.getBytes(), cells);
+  }
+  
+  /**
+   * Constructor
+   * @param key the row key
+   * @param cells the cells
+   */
+  public RowModel(final byte[] key, final List<CellModel> cells) {
+    this.key = key;
+    this.cells = cells;
+  }
+  
+  /**
+   * Adds a cell to the list of cells for this row
+   * @param cell the cell
+   */
+  public void addCell(CellModel cell) {
+    cells.add(cell);
+  }
+
+  /**
+   * @return the row key
+   */
+  @XmlAttribute
+  public byte[] getKey() {
+    return key;
+  }
+
+  /**
+   * @param key the row key
+   */
+  public void setKey(byte[] key) {
+    this.key = key;
+  }
+
+  /**
+   * @return the cells
+   */
+  @XmlElement(name="Cell")
+  public List<CellModel> getCells() {
+    return cells;
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    // there is no standalone row protobuf message
+    throw new UnsupportedOperationException(
+        "no protobuf equivalent to RowModel");
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    // there is no standalone row protobuf message
+    throw new UnsupportedOperationException(
+        "no protobuf equivalent to RowModel");
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java
new file mode 100644
index 0000000..f97781e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java
@@ -0,0 +1,628 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.BinaryPrefixComparator;
+import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
+import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
+import org.apache.hadoop.hbase.filter.PageFilter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.QualifierFilter;
+import org.apache.hadoop.hbase.filter.RegexStringComparator;
+import org.apache.hadoop.hbase.filter.RowFilter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.filter.SkipFilter;
+import org.apache.hadoop.hbase.filter.SubstringComparator;
+import org.apache.hadoop.hbase.filter.ValueFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner;
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import com.google.protobuf.ByteString;
+
+import com.sun.jersey.api.json.JSONConfiguration;
+import com.sun.jersey.api.json.JSONJAXBContext;
+import com.sun.jersey.api.json.JSONMarshaller;
+import com.sun.jersey.api.json.JSONUnmarshaller;
+
+/**
+ * A representation of Scanner parameters.
+ * 
+ * <pre>
+ * &lt;complexType name="Scanner"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="column" type="base64Binary" minOccurs="0" maxOccurs="unbounded"/&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;element name="filter" type="string" minOccurs="0" maxOccurs="1"&gt;&lt;/element&gt;
+ *   &lt;attribute name="startRow" type="base64Binary"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="endRow" type="base64Binary"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="batch" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="startTime" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="endTime" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="maxVersions" type="int"&gt;&lt;/attribute&gt;
+ * &lt;/complexType&gt;
+ * </pre>
+ */
+@XmlRootElement(name="Scanner")
+public class ScannerModel implements ProtobufMessageHandler, Serializable {
+
+  private static final long serialVersionUID = 1L;
+
+  private byte[] startRow = HConstants.EMPTY_START_ROW;
+  private byte[] endRow = HConstants.EMPTY_END_ROW;
+  private List<byte[]> columns = new ArrayList<byte[]>();
+  private int batch = Integer.MAX_VALUE;
+  private long startTime = 0;
+  private long endTime = Long.MAX_VALUE;
+  private String filter = null;
+  private int maxVersions = Integer.MAX_VALUE;
+
+  @XmlRootElement
+  static class FilterModel {
+    
+    @XmlRootElement
+    static class WritableByteArrayComparableModel {
+      @XmlAttribute public String type;
+      @XmlAttribute public String value;
+
+      static enum ComparatorType {
+        BinaryComparator,
+        BinaryPrefixComparator,
+        RegexStringComparator,
+        SubstringComparator    
+      }
+
+      public WritableByteArrayComparableModel() { }
+
+      public WritableByteArrayComparableModel(
+          WritableByteArrayComparable comparator) {
+        String typeName = comparator.getClass().getSimpleName();
+        ComparatorType type = ComparatorType.valueOf(typeName);
+        this.type = typeName;
+        switch (type) {
+          case BinaryComparator:
+          case BinaryPrefixComparator:
+            this.value = Base64.encodeBytes(comparator.getValue());
+            break;
+          case RegexStringComparator:
+          case SubstringComparator:
+            this.value = Bytes.toString(comparator.getValue());
+            break;
+          default:
+            throw new RuntimeException("unhandled filter type: " + type);
+        }
+      }
+
+      public WritableByteArrayComparable build() {
+        WritableByteArrayComparable comparator;
+        switch (ComparatorType.valueOf(type)) {
+          case BinaryComparator: {
+            comparator = new BinaryComparator(Base64.decode(value));
+          } break;
+          case BinaryPrefixComparator: {
+            comparator = new BinaryPrefixComparator(Base64.decode(value));
+          } break;
+          case RegexStringComparator: {
+            comparator = new RegexStringComparator(value);
+          } break;
+          case SubstringComparator: {
+            comparator = new SubstringComparator(value);
+          } break;
+          default: {
+            throw new RuntimeException("unhandled comparator type: " + type);
+          }
+        }
+        return comparator;
+      }
+
+    }
+
+    // a grab bag of fields, would have been a union if this were C
+    @XmlAttribute public String type = null;
+    @XmlAttribute public String op = null;
+    @XmlElement WritableByteArrayComparableModel comparator = null;
+    @XmlAttribute public String value = null;
+    @XmlElement public List<FilterModel> filters = null;
+    @XmlAttribute public Integer limit = null;
+    @XmlAttribute public String family = null;
+    @XmlAttribute public String qualifier = null;
+    @XmlAttribute public Boolean ifMissing = null;
+    @XmlAttribute public Boolean latestVersion = null;
+
+    static enum FilterType {
+      ColumnCountGetFilter,
+      FilterList,
+      FirstKeyOnlyFilter,
+      InclusiveStopFilter,
+      PageFilter,
+      PrefixFilter,
+      QualifierFilter,
+      RowFilter,
+      SingleColumnValueFilter,
+      SkipFilter,
+      ValueFilter,
+      WhileMatchFilter    
+    }
+
+    public FilterModel() { }
+    
+    public FilterModel(Filter filter) { 
+      String typeName = filter.getClass().getSimpleName();
+      FilterType type = FilterType.valueOf(typeName);
+      this.type = typeName;
+      switch (type) {
+        case ColumnCountGetFilter:
+          this.limit = ((ColumnCountGetFilter)filter).getLimit();
+          break;
+        case FilterList:
+          this.op = ((FilterList)filter).getOperator().toString();
+          this.filters = new ArrayList<FilterModel>();
+          for (Filter child: ((FilterList)filter).getFilters()) {
+            this.filters.add(new FilterModel(child));
+          }
+          break;
+        case FirstKeyOnlyFilter:
+          break;
+        case InclusiveStopFilter:
+          this.value = 
+            Base64.encodeBytes(((InclusiveStopFilter)filter).getStopRowKey());
+          break;
+        case PageFilter:
+          this.value = Long.toString(((PageFilter)filter).getPageSize());
+          break;
+        case PrefixFilter:
+          this.value = Base64.encodeBytes(((PrefixFilter)filter).getPrefix());
+          break;
+        case QualifierFilter:
+        case RowFilter:
+        case ValueFilter:
+          this.op = ((CompareFilter)filter).getOperator().toString();
+          this.comparator = 
+            new WritableByteArrayComparableModel(
+              ((CompareFilter)filter).getComparator());
+          break;
+        case SingleColumnValueFilter: {
+          SingleColumnValueFilter scvf = (SingleColumnValueFilter) filter;
+          this.family = Base64.encodeBytes(scvf.getFamily());
+          byte[] qualifier = scvf.getQualifier();
+          if (qualifier != null) {
+            this.qualifier = Base64.encodeBytes(qualifier);
+          }
+          this.op = scvf.getOperator().toString();
+          this.comparator = 
+            new WritableByteArrayComparableModel(scvf.getComparator());
+          if (scvf.getFilterIfMissing()) {
+            this.ifMissing = true;
+          }
+          if (scvf.getLatestVersionOnly()) {
+            this.latestVersion = true;
+          }
+        } break;
+        case SkipFilter:
+          this.filters = new ArrayList<FilterModel>();
+          this.filters.add(new FilterModel(((SkipFilter)filter).getFilter()));
+          break;
+        case WhileMatchFilter:
+          this.filters = new ArrayList<FilterModel>();
+          this.filters.add(
+            new FilterModel(((WhileMatchFilter)filter).getFilter()));
+          break;
+        default:
+          throw new RuntimeException("unhandled filter type " + type);
+      }
+    }
+
+    public Filter build() {
+      Filter filter;
+      switch (FilterType.valueOf(type)) {
+      case ColumnCountGetFilter: {
+        filter = new ColumnCountGetFilter(limit);
+      } break;
+      case FilterList: {
+        List<Filter> list = new ArrayList<Filter>();
+        for (FilterModel model: filters) {
+          list.add(model.build());
+        }
+        filter = new FilterList(FilterList.Operator.valueOf(op), list);
+      } break;
+      case FirstKeyOnlyFilter: {
+        filter = new FirstKeyOnlyFilter();
+      } break;
+      case InclusiveStopFilter: {
+        filter = new InclusiveStopFilter(Base64.decode(value));
+      } break;
+      case PageFilter: {
+        filter = new PageFilter(Long.valueOf(value));
+      } break;
+      case PrefixFilter: {
+        filter = new PrefixFilter(Base64.decode(value));
+      } break;
+      case QualifierFilter: {
+        filter = new QualifierFilter(CompareOp.valueOf(op), comparator.build());
+      } break;
+      case RowFilter: {
+        filter = new RowFilter(CompareOp.valueOf(op), comparator.build());
+      } break;
+      case SingleColumnValueFilter: {
+        filter = new SingleColumnValueFilter(Base64.decode(family),
+          qualifier != null ? Base64.decode(qualifier) : null,
+          CompareOp.valueOf(op), comparator.build());
+        if (ifMissing != null) {
+          ((SingleColumnValueFilter)filter).setFilterIfMissing(ifMissing);
+        }
+        if (latestVersion != null) {
+          ((SingleColumnValueFilter)filter).setLatestVersionOnly(latestVersion);
+        }
+      } break;
+      case SkipFilter: {
+        filter = new SkipFilter(filters.get(0).build());
+      } break;
+      case ValueFilter: {
+        filter = new ValueFilter(CompareOp.valueOf(op), comparator.build());
+      } break;
+      case WhileMatchFilter: {
+        filter = new WhileMatchFilter(filters.get(0).build());
+      } break;
+      default:
+        throw new RuntimeException("unhandled filter type: " + type);
+      }
+      return filter;
+    }
+
+  }
+
+  /**
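+   * Builds a filter from the JSON representation produced by
+   * {@link #stringifyFilter(Filter)}. As an illustrative example (the exact
+   * JSON follows the field mapping in this class; the value is the
+   * Base64-encoded prefix "testrow"), a PrefixFilter would look roughly like:
+   * <pre>
+   * {"type":"PrefixFilter","value":"dGVzdHJvdw=="}
+   * </pre>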
+   * @param s the JSON representation of the filter
+   * @return the filter
+   * @throws Exception
+   */
+  public static Filter buildFilter(String s) throws Exception {
+    JSONJAXBContext context =
+      new JSONJAXBContext(JSONConfiguration.natural().build(),
+        FilterModel.class);
+    JSONUnmarshaller unmarshaller = context.createJSONUnmarshaller();
+    FilterModel model = unmarshaller.unmarshalFromJSON(new StringReader(s),
+      FilterModel.class);
+    return model.build();
+  }
+
+  /**
+   * @param filter the filter
+   * @return the JSON representation of the filter
+   * @throws Exception 
+   */
+  public static String stringifyFilter(final Filter filter) throws Exception {
+    JSONJAXBContext context =
+      new JSONJAXBContext(JSONConfiguration.natural().build(),
+        FilterModel.class);
+    JSONMarshaller marshaller = context.createJSONMarshaller();
+    StringWriter writer = new StringWriter();
+    marshaller.marshallToJSON(new FilterModel(filter), writer);
+    return writer.toString();
+  }
+
+  /**
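+   * Builds a scanner model from a client-side {@link Scan}. Illustrative
+   * usage (a minimal sketch; the row keys, family and filter are assumed):
+   * <pre>
+   * Scan scan = new Scan(Bytes.toBytes("row0"), Bytes.toBytes("row9"));
+   * scan.addFamily(Bytes.toBytes("cf"));
+   * scan.setFilter(new PrefixFilter(Bytes.toBytes("row")));
+   * ScannerModel model = ScannerModel.fromScan(scan);
+   * </pre>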
+   * @param scan the scan specification
+   * @return the corresponding scanner model
+   * @throws Exception
+   */
+  public static ScannerModel fromScan(Scan scan) throws Exception {
+    ScannerModel model = new ScannerModel();
+    model.setStartRow(scan.getStartRow());
+    model.setEndRow(scan.getStopRow());
+    byte[][] families = scan.getFamilies();
+    if (families != null) {
+      for (byte[] column: families) {
+        model.addColumn(column);
+      }
+    }
+    model.setStartTime(scan.getTimeRange().getMin());
+    model.setEndTime(scan.getTimeRange().getMax());
+    int caching = scan.getCaching();
+    if (caching > 0) {
+      model.setBatch(caching);
+    }
+    int maxVersions = scan.getMaxVersions();
+    if (maxVersions > 0) {
+      model.setMaxVersions(maxVersions);
+    }
+    Filter filter = scan.getFilter();
+    if (filter != null) {
+      model.setFilter(stringifyFilter(filter));
+    }
+    return model;
+  }
+
+  /**
+   * Default constructor
+   */
+  public ScannerModel() {}
+
+  /**
+   * Constructor
+   * @param startRow the start key of the row-range
+   * @param endRow the end key of the row-range
+   * @param columns the columns to scan
+   * @param batch the number of values to return in batch
+   * @param endTime the upper bound on timestamps of values of interest
+   * (values with timestamps later than this are excluded)
+   * @param maxVersions the maximum number of versions to return
+   * @param filter a filter specification
+   */
+  public ScannerModel(byte[] startRow, byte[] endRow, List<byte[]> columns,
+      int batch, long endTime, int maxVersions, String filter) {
+    super();
+    this.startRow = startRow;
+    this.endRow = endRow;
+    this.columns = columns;
+    this.batch = batch;
+    this.endTime = endTime;
+    this.maxVersions = maxVersions;
+    this.filter = filter;
+  }
+
+  /**
+   * Constructor 
+   * @param startRow the start key of the row-range
+   * @param endRow the end key of the row-range
+   * @param columns the columns to scan
+   * @param batch the number of values to return in batch
+   * @param startTime the lower bound on timestamps of values of interest
+   * (values with timestamps earlier than this are excluded)
+   * @param endTime the upper bound on timestamps of values of interest
+   * (values with timestamps later than this are excluded)
+   * @param filter a filter specification
+   */
+  public ScannerModel(byte[] startRow, byte[] endRow, List<byte[]> columns,
+      int batch, long startTime, long endTime, String filter) {
+    super();
+    this.startRow = startRow;
+    this.endRow = endRow;
+    this.columns = columns;
+    this.batch = batch;
+    this.startTime = startTime;
+    this.endTime = endTime;
+    this.filter = filter;
+  }
+
+  /**
+   * Add a column to the column set
+   * @param column the column name, as &lt;column&gt;(:&lt;qualifier&gt;)?
+   */
+  public void addColumn(byte[] column) {
+    columns.add(column);
+  }
+
+  /**
+   * @return true if a start row was specified
+   */
+  public boolean hasStartRow() {
+    return !Bytes.equals(startRow, HConstants.EMPTY_START_ROW);
+  }
+
+  /**
+   * @return start row
+   */
+  @XmlAttribute
+  public byte[] getStartRow() {
+    return startRow;
+  }
+
+  /**
+   * @return true if an end row was specified
+   */
+  public boolean hasEndRow() {
+    return !Bytes.equals(endRow, HConstants.EMPTY_END_ROW);
+  }
+
+  /**
+   * @return end row
+   */
+  @XmlAttribute
+  public byte[] getEndRow() {
+    return endRow;
+  }
+
+  /**
+   * @return list of columns of interest in column:qualifier format, or empty for all
+   */
+  @XmlElement(name="column")
+  public List<byte[]> getColumns() {
+    return columns;
+  }
+  
+  /**
+   * @return the number of cells to return in batch
+   */
+  @XmlAttribute
+  public int getBatch() {
+    return batch;
+  }
+
+  /**
+   * @return the lower bound on timestamps of items of interest
+   */
+  @XmlAttribute
+  public long getStartTime() {
+    return startTime;
+  }
+
+  /**
+   * @return the upper bound on timestamps of items of interest
+   */
+  @XmlAttribute
+  public long getEndTime() {
+    return endTime;
+  }
+
+  /**
+   * @return maximum number of versions to return
+   */
+  @XmlAttribute
+  public int getMaxVersions() {
+    return maxVersions;
+  }
+
+  /**
+   * @return the filter specification
+   */
+  @XmlElement
+  public String getFilter() {
+    return filter;
+  }
+
+  /**
+   * @param startRow start row
+   */
+  public void setStartRow(byte[] startRow) {
+    this.startRow = startRow;
+  }
+
+  /**
+   * @param endRow end row
+   */
+  public void setEndRow(byte[] endRow) {
+    this.endRow = endRow;
+  }
+
+  /**
+   * @param columns list of columns of interest in column:qualifier format, or empty for all
+   */
+  public void setColumns(List<byte[]> columns) {
+    this.columns = columns;
+  }
+
+  /**
+   * @param batch the number of cells to return in batch
+   */
+  public void setBatch(int batch) {
+    this.batch = batch;
+  }
+
+  /**
+   * @param maxVersions maximum number of versions to return
+   */
+  public void setMaxVersions(int maxVersions) {
+    this.maxVersions = maxVersions;
+  }
+
+  /**
+   * @param startTime the lower bound on timestamps of values of interest
+   */
+  public void setStartTime(long startTime) {
+    this.startTime = startTime;
+  }
+
+  /**
+   * @param endTime the upper bound on timestamps of values of interest
+   */
+  public void setEndTime(long endTime) {
+    this.endTime = endTime;
+  }
+
+  /**
+   * @param filter the filter specification
+   */
+  public void setFilter(String filter) {
+    this.filter = filter;
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    Scanner.Builder builder = Scanner.newBuilder();
+    if (!Bytes.equals(startRow, HConstants.EMPTY_START_ROW)) {
+      builder.setStartRow(ByteString.copyFrom(startRow));
+    }
+    if (!Bytes.equals(endRow, HConstants.EMPTY_END_ROW)) {
+      builder.setEndRow(ByteString.copyFrom(endRow));
+    }
+    for (byte[] column: columns) {
+      builder.addColumns(ByteString.copyFrom(column));
+    }
+    builder.setBatch(batch);
+    if (startTime != 0) {
+      builder.setStartTime(startTime);
+    }
+    if (endTime != 0) {
+      builder.setEndTime(endTime);
+    }
+    builder.setMaxVersions(maxVersions);
+    if (filter != null) {
+      builder.setFilter(filter);
+    }
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    Scanner.Builder builder = Scanner.newBuilder();
+    builder.mergeFrom(message);
+    if (builder.hasStartRow()) {
+      startRow = builder.getStartRow().toByteArray();
+    }
+    if (builder.hasEndRow()) {
+      endRow = builder.getEndRow().toByteArray();
+    }
+    for (ByteString column: builder.getColumnsList()) {
+      addColumn(column.toByteArray());
+    }
+    if (builder.hasBatch()) {
+      batch = builder.getBatch();
+    }
+    if (builder.hasStartTime()) {
+      startTime = builder.getStartTime();
+    }
+    if (builder.hasEndTime()) {
+      endTime = builder.getEndTime();
+    }
+    if (builder.hasMaxVersions()) {
+      maxVersions = builder.getMaxVersions();
+    }
+    if (builder.hasFilter()) {
+      filter = builder.getFilter();
+    }
+    return this;
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterStatusModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterStatusModel.java
new file mode 100644
index 0000000..f45e902
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterStatusModel.java
@@ -0,0 +1,620 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElementWrapper;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import com.google.protobuf.ByteString;
+
+/**
+ * Representation of the status of a storage cluster:
+ * <p>
+ * <ul>
+ * <li>regions: the total number of regions served by the cluster</li>
+ * <li>requests: the total number of requests per second handled by the
+ * cluster in the last reporting interval</li>
+ * <li>averageLoad: the average load of the region servers in the cluster</li>
+ * <li>liveNodes: detailed status of the live region servers</li>
+ * <li>deadNodes: the names of region servers declared dead</li>
+ * </ul>
+ * 
+ * <pre>
+ * &lt;complexType name="StorageClusterStatus"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="liveNode" type="tns:Node"
+ *       maxOccurs="unbounded" minOccurs="0"&gt;
+ *     &lt;/element&gt;
+ *     &lt;element name="deadNode" type="string" maxOccurs="unbounded"
+ *       minOccurs="0"&gt;
+ *     &lt;/element&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;attribute name="regions" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="requests" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="averageLoad" type="float"&gt;&lt;/attribute&gt;
+ * &lt;/complexType&gt;
+ *
+ * &lt;complexType name="Node"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="region" type="tns:Region" 
+ *       maxOccurs="unbounded" minOccurs="0"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;attribute name="name" type="string"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="startCode" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="requests" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="heapSizeMB" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="maxHeapSizeMB" type="int"&gt;&lt;/attribute&gt;
+ * &lt;/complexType&gt;
+ *
+ * &lt;complexType name="Region"&gt;
+ *   &lt;attribute name="name" type="base64Binary"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="stores" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="storefiles" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="storefileSizeMB" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="memstoreSizeMB" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="storefileIndexSizeMB" type="int"&gt;&lt;/attribute&gt;
+ * &lt;/complexType&gt;
+ * </pre>
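+ *
+ * <p>
+ * Illustrative construction (a sketch; server names, start code, and sizes
+ * are assumed):
+ * <pre>
+ * StorageClusterStatusModel status = new StorageClusterStatusModel();
+ * StorageClusterStatusModel.Node node =
+ *   status.addLiveNode("regionserver1:60030", 1263253089862L, 128, 1024);
+ * node.addRegion(Bytes.toBytes("mytable,,1263253089862"), 1, 1, 0, 0, 0);
+ * status.addDeadNode("regionserver2:60030");
+ * </pre>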
+ */
+@XmlRootElement(name="ClusterStatus")
+public class StorageClusterStatusModel 
+    implements Serializable, ProtobufMessageHandler {
+	private static final long serialVersionUID = 1L;
+
+	/**
+	 * Represents a region server.
+	 */
+	public static class Node {
+	  
+	  /**
+	   * Represents a region hosted on a region server.
+	   */
+	  public static class Region {
+	    private byte[] name;
+	    private int stores;
+	    private int storefiles;
+	    private int storefileSizeMB;
+	    private int memstoreSizeMB;
+	    private int storefileIndexSizeMB;
+
+	    /**
+	     * Default constructor
+	     */
+	    public Region() {}
+
+	    /**
+	     * Constructor
+	     * @param name the region name
+	     */
+	    public Region(byte[] name) {
+	      this.name = name;
+	    }
+
+	    /**
+	     * Constructor
+	     * @param name the region name
+	     * @param stores the number of stores
+	     * @param storefiles the number of store files
+	     * @param storefileSizeMB total size of store files, in MB
+	     * @param memstoreSizeMB total size of memstore, in MB
+	     * @param storefileIndexSizeMB total size of store file indexes, in MB
+	     */
+	    public Region(byte[] name, int stores, int storefiles,
+          int storefileSizeMB, int memstoreSizeMB, int storefileIndexSizeMB) {
+        this.name = name;
+        this.stores = stores;
+        this.storefiles = storefiles;
+        this.storefileSizeMB = storefileSizeMB;
+        this.memstoreSizeMB = memstoreSizeMB;
+        this.storefileIndexSizeMB = storefileIndexSizeMB;
+      }
+
+      /**
+	     * @return the region name
+	     */
+	    @XmlAttribute
+	    public byte[] getName() {
+	      return name;
+	    }
+
+	    /**
+	     * @return the number of stores 
+	     */
+	    @XmlAttribute
+	    public int getStores() {
+        return stores;
+      }
+
+      /**
+       * @return the number of store files 
+       */
+      @XmlAttribute
+      public int getStorefiles() {
+        return storefiles;
+      }
+
+      /**
+       * @return the total size of store files, in MB
+       */
+      @XmlAttribute
+      public int getStorefileSizeMB() {
+        return storefileSizeMB;
+      }
+
+      /**
+       * @return memstore size, in MB
+       */
+      @XmlAttribute
+      public int getMemstoreSizeMB() {
+        return memstoreSizeMB;
+      }
+
+      /**
+       * @return the total size of store file indexes, in MB
+       */
+      @XmlAttribute
+      public int getStorefileIndexSizeMB() {
+        return storefileIndexSizeMB;
+      }
+
+      /**
+	     * @param name the region name
+	     */
+	    public void setName(byte[] name) {
+	      this.name = name;
+	    }
+
+	    /**
+	     * @param stores the number of stores
+	     */
+      public void setStores(int stores) {
+        this.stores = stores;
+      }
+
+      /**
+       * @param storefiles the number of store files
+       */
+      public void setStorefiles(int storefiles) {
+        this.storefiles = storefiles;
+      }
+
+      /**
+       * @param storefileSizeMB total size of store files, in MB
+       */
+      public void setStorefileSizeMB(int storefileSizeMB) {
+        this.storefileSizeMB = storefileSizeMB;
+      }
+
+      /**
+       * @param memstoreSizeMB memstore size, in MB
+       */
+      public void setMemstoreSizeMB(int memstoreSizeMB) {
+        this.memstoreSizeMB = memstoreSizeMB;
+      }
+
+      /**
+       * @param storefileIndexSizeMB total size of store file indexes, in MB
+       */
+      public void setStorefileIndexSizeMB(int storefileIndexSizeMB) {
+        this.storefileIndexSizeMB = storefileIndexSizeMB;
+      }
+	  }
+
+	  private String name;
+    private long startCode;
+    private int requests;
+    private int heapSizeMB;
+    private int maxHeapSizeMB;
+    private List<Region> regions = new ArrayList<Region>();
+
+    /**
+     * Add a region to the list
+     * @param name the region name
+     * @param stores the number of stores
+     * @param storefiles the number of store files
+     * @param storefileSizeMB total size of store files, in MB
+     * @param memstoreSizeMB total size of memstore, in MB
+     * @param storefileIndexSizeMB total size of store file indexes, in MB
+     */
+    public void addRegion(byte[] name, int stores, int storefiles,
+        int storefileSizeMB, int memstoreSizeMB, int storefileIndexSizeMB) {
+      regions.add(new Region(name, stores, storefiles, storefileSizeMB,
+        memstoreSizeMB, storefileIndexSizeMB));
+    }
+
+    /**
+     * @param index the index
+     * @return the region
+     */
+    public Region getRegion(int index) {
+      return regions.get(index);
+    }
+
+    /**
+     * Default constructor
+     */
+    public Node() {}
+
+    /**
+     * Constructor
+     * @param name the region server name
+     * @param startCode the region server's start code
+     */
+    public Node(String name, long startCode) {
+      this.name = name;
+      this.startCode = startCode;
+    }
+
+    /**
+     * @return the region server's name
+     */
+    @XmlAttribute
+    public String getName() {
+      return name;
+    }
+
+    /**
+     * @return the region server's start code
+     */
+    @XmlAttribute
+    public long getStartCode() {
+      return startCode;
+    }
+
+    /**
+     * @return the current heap size, in MB
+     */
+    @XmlAttribute
+    public int getHeapSizeMB() {
+      return heapSizeMB;
+    }
+
+    /**
+     * @return the maximum heap size, in MB
+     */
+    @XmlAttribute
+    public int getMaxHeapSizeMB() {
+      return maxHeapSizeMB;
+    }
+
+    /**
+     * @return the list of regions served by the region server
+     */
+    @XmlElement(name="Region")
+    public List<Region> getRegions() {
+      return regions;
+    }
+
+    /**
+     * @return the number of requests per second processed by the region server
+     */
+    @XmlAttribute
+    public int getRequests() {
+      return requests;
+    }
+
+    /**
+     * @param name the region server's hostname
+     */
+    public void setName(String name) {
+      this.name = name;
+    }
+
+    /**
+     * @param startCode the region server's start code
+     */
+    public void setStartCode(long startCode) {
+      this.startCode = startCode;
+    }
+
+    /**
+     * @param heapSizeMB the current heap size, in MB
+     */
+    public void setHeapSizeMB(int heapSizeMB) {
+      this.heapSizeMB = heapSizeMB;
+    }
+
+    /**
+     * @param maxHeapSizeMB the maximum heap size, in MB
+     */
+    public void setMaxHeapSizeMB(int maxHeapSizeMB) {
+      this.maxHeapSizeMB = maxHeapSizeMB;
+    }
+
+    /**
+     * @param regions a list of regions served by the region server
+     */
+    public void setRegions(List<Region> regions) {
+      this.regions = regions;
+    }
+
+    /**
+     * @param requests the number of requests per second processed by the
+     * region server
+     */
+    public void setRequests(int requests) {
+      this.requests = requests;
+    }
+	}
+
+	private List<Node> liveNodes = new ArrayList<Node>();
+	private List<String> deadNodes = new ArrayList<String>();
+	private int regions;
+	private int requests;
+	private double averageLoad;
+
+	/**
+	 * Add a live node to the cluster representation.
+	 * @param name the region server name
+	 * @param startCode the region server's start code
+	 * @param heapSizeMB the current heap size, in MB
+	 * @param maxHeapSizeMB the maximum heap size, in MB
+	 */
+	public Node addLiveNode(String name, long startCode, int heapSizeMB,
+	    int maxHeapSizeMB) {
+	  Node node = new Node(name, startCode);
+	  node.setHeapSizeMB(heapSizeMB);
+	  node.setMaxHeapSizeMB(maxHeapSizeMB);
+	  liveNodes.add(node);
+	  return node;
+	}
+
+	/**
+	 * @param index the index
+	 * @return the region server model
+	 */
+	public Node getLiveNode(int index) {
+	  return liveNodes.get(index);
+	}
+
+	/**
+	 * Add a dead node to the cluster representation.
+	 * @param node the dead region server's name
+	 */
+	public void addDeadNode(String node) {
+	  deadNodes.add(node);
+	}
+	
+	/**
+	 * @param index the index
+	 * @return the dead region server's name
+	 */
+	public String getDeadNode(int index) {
+	  return deadNodes.get(index);
+	}
+
+	/**
+	 * Default constructor
+	 */
+	public StorageClusterStatusModel() {}
+
+	/**
+	 * @return the list of live nodes
+	 */
+	@XmlElement(name="Node")
+	@XmlElementWrapper(name="LiveNodes")
+	public List<Node> getLiveNodes() {
+	  return liveNodes;
+	}
+
+	/**
+	 * @return the list of dead nodes
+	 */
+  @XmlElement(name="Node")
+  @XmlElementWrapper(name="DeadNodes")
+  public List<String> getDeadNodes() {
+    return deadNodes;
+  }
+
+  /**
+   * @return the total number of regions served by the cluster
+   */
+  @XmlAttribute
+  public int getRegions() {
+    return regions;
+  }
+
+  /**
+   * @return the total number of requests per second handled by the cluster in
+   * the last reporting interval
+   */
+  @XmlAttribute
+  public int getRequests() {
+    return requests;
+  }
+
+  /**
+   * @return the average load of the region servers in the cluster
+   */
+  @XmlAttribute
+  public double getAverageLoad() {
+    return averageLoad;
+  }
+
+  /**
+   * @param nodes the list of live node models
+   */
+  public void setLiveNodes(List<Node> nodes) {
+    this.liveNodes = nodes;
+  }
+
+  /**
+   * @param nodes the list of dead node names
+   */
+  public void setDeadNodes(List<String> nodes) {
+    this.deadNodes = nodes;
+  }
+
+  /**
+   * @param regions the total number of regions served by the cluster
+   */
+  public void setRegions(int regions) {
+    this.regions = regions;
+  }
+
+  /**
+   * @param requests the total number of requests per second handled by the
+   * cluster
+   */
+  public void setRequests(int requests) {
+    this.requests = requests;
+  }
+
+  /**
+   * @param averageLoad the average load of region servers in the cluster
+   */
+  public void setAverageLoad(double averageLoad) {
+    this.averageLoad = averageLoad;
+  }
+
+	/* (non-Javadoc)
+	 * @see java.lang.Object#toString()
+	 */
+	@Override
+	public String toString() {
+	  StringBuilder sb = new StringBuilder();
+	  sb.append(String.format("%d live servers, %d dead servers, " + 
+      "%.4f average load\n\n", liveNodes.size(), deadNodes.size(),
+      averageLoad));
+    if (!liveNodes.isEmpty()) {
+      sb.append(liveNodes.size());
+      sb.append(" live servers\n");
+      for (Node node: liveNodes) {
+        sb.append("    ");
+        sb.append(node.name);
+        sb.append(' ');
+        sb.append(node.startCode);
+        sb.append("\n        requests=");
+        sb.append(node.requests);
+        sb.append(", regions=");
+        sb.append(node.regions.size());
+        sb.append("\n        heapSizeMB=");
+        sb.append(node.heapSizeMB);
+        sb.append("\n        maxHeapSizeMB=");
+        sb.append(node.maxHeapSizeMB);
+        sb.append("\n\n");
+        for (Node.Region region: node.regions) {
+          sb.append("        ");
+          sb.append(Bytes.toString(region.name));
+          sb.append("\n            stores=");
+          sb.append(region.stores);
+          sb.append("\n            storefiless=");
+          sb.append(region.storefiles);
+          sb.append("\n            storefileSizeMB=");
+          sb.append(region.storefileSizeMB);
+          sb.append("\n            memstoreSizeMB=");
+          sb.append(region.memstoreSizeMB);
+          sb.append("\n            storefileIndexSizeMB=");
+          sb.append(region.storefileIndexSizeMB);
+          sb.append('\n');
+        }
+        sb.append('\n');
+      }
+    }
+    if (!deadNodes.isEmpty()) {
+      sb.append('\n');
+      sb.append(deadNodes.size());
+      sb.append(" dead servers\n");
+      for (String node: deadNodes) {
+        sb.append("    ");
+        sb.append(node);
+        sb.append('\n');
+      }
+    }
+	  return sb.toString();
+	}
+
+  @Override
+  public byte[] createProtobufOutput() {
+    StorageClusterStatus.Builder builder = StorageClusterStatus.newBuilder();
+    builder.setRegions(regions);
+    builder.setRequests(requests);
+    builder.setAverageLoad(averageLoad);
+    for (Node node: liveNodes) {
+      StorageClusterStatus.Node.Builder nodeBuilder = 
+        StorageClusterStatus.Node.newBuilder();
+      nodeBuilder.setName(node.name);
+      nodeBuilder.setStartCode(node.startCode);
+      nodeBuilder.setRequests(node.requests);
+      nodeBuilder.setHeapSizeMB(node.heapSizeMB);
+      nodeBuilder.setMaxHeapSizeMB(node.maxHeapSizeMB);
+      for (Node.Region region: node.regions) {
+        StorageClusterStatus.Region.Builder regionBuilder =
+          StorageClusterStatus.Region.newBuilder();
+        regionBuilder.setName(ByteString.copyFrom(region.name));
+        regionBuilder.setStores(region.stores);
+        regionBuilder.setStorefiles(region.storefiles);
+        regionBuilder.setStorefileSizeMB(region.storefileSizeMB);
+        regionBuilder.setMemstoreSizeMB(region.memstoreSizeMB);
+        regionBuilder.setStorefileIndexSizeMB(region.storefileIndexSizeMB);
+        nodeBuilder.addRegions(regionBuilder);
+      }
+      builder.addLiveNodes(nodeBuilder);
+    }
+    for (String node: deadNodes) {
+      builder.addDeadNodes(node);
+    }
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    StorageClusterStatus.Builder builder = StorageClusterStatus.newBuilder();
+    builder.mergeFrom(message);
+    if (builder.hasRegions()) {
+      regions = builder.getRegions();
+    }
+    if (builder.hasRequests()) {
+      requests = builder.getRequests();
+    }
+    if (builder.hasAverageLoad()) {
+      averageLoad = builder.getAverageLoad();
+    }
+    for (StorageClusterStatus.Node node: builder.getLiveNodesList()) {
+      long startCode = node.hasStartCode() ? node.getStartCode() : -1;
+      StorageClusterStatusModel.Node nodeModel = 
+        addLiveNode(node.getName(), startCode, node.getHeapSizeMB(),
+          node.getMaxHeapSizeMB());
+      int requests = node.hasRequests() ? node.getRequests() : 0;
+      nodeModel.setRequests(requests);
+      for (StorageClusterStatus.Region region: node.getRegionsList()) {
+        nodeModel.addRegion(
+          region.getName().toByteArray(),
+          region.getStores(),
+          region.getStorefiles(),
+          region.getStorefileSizeMB(),
+          region.getMemstoreSizeMB(),
+          region.getStorefileIndexSizeMB());
+      }
+    }
+    for (String node: builder.getDeadNodesList()) {
+      addDeadNode(node);
+    }
+    return this;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java
new file mode 100644
index 0000000..0563479
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java
@@ -0,0 +1,65 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.Serializable;
+
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlValue;
+
+/**
+ * Simple representation of the version of the storage cluster
+ * 
+ * <pre>
+ * &lt;complexType name="StorageClusterVersion"&gt;
+ *   &lt;attribute name="version" type="string"&gt;&lt;/attribute&gt;
+ * &lt;/complexType&gt;
+ * </pre>
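+ *
+ * <p>
+ * For example, an instance with version "0.90.0" marshals to XML roughly as
+ * (illustrative):
+ * <pre>
+ * &lt;ClusterVersion&gt;0.90.0&lt;/ClusterVersion&gt;
+ * </pre>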
+ */
+@XmlRootElement(name="ClusterVersion")
+public class StorageClusterVersionModel implements Serializable {
+	private static final long serialVersionUID = 1L;
+
+	private String version;
+
+	/**
+	 * @return the storage cluster version
+	 */
+	@XmlValue
+	public String getVersion() {
+	  return version;
+	}
+
+	/**
+	 * @param version the storage cluster version
+	 */
+	public void setVersion(String version) {
+	  this.version = version;
+	}
+
+	/* (non-Javadoc)
+	 * @see java.lang.Object#toString()
+	 */
+	@Override
+	public String toString() {
+	  return version;
+	}
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableInfoModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableInfoModel.java
new file mode 100644
index 0000000..ce6fb96
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableInfoModel.java
@@ -0,0 +1,159 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo;
+
+import com.google.protobuf.ByteString;
+
+/**
+ * Representation of a list of table regions. 
+ * 
+ * <pre>
+ * &lt;complexType name="TableInfo"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="region" type="tns:TableRegion" 
+ *       maxOccurs="unbounded" minOccurs="1"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;attribute name="name" type="string"&gt;&lt;/attribute&gt;
+ * &lt;/complexType&gt;
+ * </pre>
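+ *
+ * <p>
+ * Illustrative construction (a sketch; the table name, keys, and location
+ * are assumed):
+ * <pre>
+ * TableInfoModel info = new TableInfoModel("mytable");
+ * info.add(new TableRegionModel("mytable", 1263253089862L,
+ *   Bytes.toBytes(""), Bytes.toBytes("rowZ"), "regionserver1:60020"));
+ * </pre>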
+ */
+@XmlRootElement(name="TableInfo")
+public class TableInfoModel implements Serializable, ProtobufMessageHandler {
+  private static final long serialVersionUID = 1L;
+
+  private String name;
+  private List<TableRegionModel> regions = new ArrayList<TableRegionModel>();
+
+  /**
+   * Default constructor
+   */
+  public TableInfoModel() {}
+
+  /**
+   * Constructor
+   * @param name the table name
+   */
+  public TableInfoModel(String name) {
+    this.name = name;
+  }
+
+  /**
+   * Add a region model to the list
+   * @param region the region
+   */
+  public void add(TableRegionModel region) {
+    regions.add(region);
+  }
+
+  /**
+   * @param index the index
+   * @return the region model
+   */
+  public TableRegionModel get(int index) {
+    return regions.get(index);
+  }
+
+  /**
+   * @return the table name
+   */
+  @XmlAttribute
+  public String getName() {
+    return name;
+  }
+
+  /**
+   * @return the regions
+   */
+  @XmlElement(name="Region")
+  public List<TableRegionModel> getRegions() {
+    return regions;
+  }
+
+  /**
+   * @param name the table name
+   */
+  public void setName(String name) {
+    this.name = name;
+  }
+
+  /**
+   * @param regions the regions to set
+   */
+  public void setRegions(List<TableRegionModel> regions) {
+    this.regions = regions;
+  }
+
+  /* (non-Javadoc)
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    for(TableRegionModel aRegion : regions) {
+      sb.append(aRegion.toString());
+      sb.append('\n');
+    }
+    return sb.toString();
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    TableInfo.Builder builder = TableInfo.newBuilder();
+    builder.setName(name);
+    for (TableRegionModel aRegion: regions) {
+      TableInfo.Region.Builder regionBuilder = TableInfo.Region.newBuilder();
+      regionBuilder.setName(aRegion.getName());
+      regionBuilder.setId(aRegion.getId());
+      regionBuilder.setStartKey(ByteString.copyFrom(aRegion.getStartKey()));
+      regionBuilder.setEndKey(ByteString.copyFrom(aRegion.getEndKey()));
+      regionBuilder.setLocation(aRegion.getLocation());
+      builder.addRegions(regionBuilder);
+    }
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message) 
+      throws IOException {
+    TableInfo.Builder builder = TableInfo.newBuilder();
+    builder.mergeFrom(message);
+    setName(builder.getName());
+    for (TableInfo.Region region: builder.getRegionsList()) {
+      add(new TableRegionModel(builder.getName(), region.getId(), 
+          region.getStartKey().toByteArray(),
+          region.getEndKey().toByteArray(),
+          region.getLocation()));
+    }
+    return this;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableListModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableListModel.java
new file mode 100644
index 0000000..1c276c2
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableListModel.java
@@ -0,0 +1,112 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlElementRef;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList;
+
+/**
+ * Simple representation of a list of table names.
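+ * <p>
+ * For example (sketch):
+ * <pre>
+ * TableListModel list = new TableListModel();
+ * list.add(new TableModel("mytable"));
+ * list.add(new TableModel("othertable"));
+ * </pre>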
+ */
+@XmlRootElement(name="TableList")
+public class TableListModel implements Serializable, ProtobufMessageHandler {
+
+	private static final long serialVersionUID = 1L;
+
+	private List<TableModel> tables = new ArrayList<TableModel>();
+
+	/**
+	 * Default constructor
+	 */
+	public TableListModel() {}
+
+	/**
+	 * Add the table name model to the list
+	 * @param table the table model
+	 */
+	public void add(TableModel table) {
+		tables.add(table);
+	}
+	
+	/**
+	 * @param index the index
+	 * @return the table model
+	 */
+	public TableModel get(int index) {
+		return tables.get(index);
+	}
+
+	/**
+	 * @return the tables
+	 */
+	@XmlElementRef(name="table")
+	public List<TableModel> getTables() {
+		return tables;
+	}
+
+	/**
+	 * @param tables the tables to set
+	 */
+	public void setTables(List<TableModel> tables) {
+		this.tables = tables;
+	}
+
+	/* (non-Javadoc)
+	 * @see java.lang.Object#toString()
+	 */
+	@Override
+	public String toString() {
+		StringBuilder sb = new StringBuilder();
+		for(TableModel aTable : tables) {
+			sb.append(aTable.toString());
+			sb.append('\n');
+		}
+		return sb.toString();
+	}
+
+	@Override
+	public byte[] createProtobufOutput() {
+		TableList.Builder builder = TableList.newBuilder();
+		for (TableModel aTable : tables) {
+			builder.addName(aTable.getName());
+		}
+		return builder.build().toByteArray();
+	}
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    TableList.Builder builder = TableList.newBuilder();
+    builder.mergeFrom(message);
+    for (String table: builder.getNameList()) {
+      this.add(new TableModel(table));
+    }
+    return this;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableModel.java
new file mode 100644
index 0000000..e1d33cd
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableModel.java
@@ -0,0 +1,82 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.Serializable;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+
+/**
+ * Simple representation of a table name.
+ * 
+ * <pre>
+ * &lt;complexType name="Table"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="name" type="string"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ * &lt;/complexType&gt;
+ * </pre>
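+ *
+ * <p>
+ * For example (sketch): {@code new TableModel("mytable")}.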
+ */
+@XmlRootElement(name="table")
+public class TableModel implements Serializable {
+
+	private static final long serialVersionUID = 1L;
+	
+	private String name;
+	
+	/**
+	 * Default constructor
+	 */
+	public TableModel() {}
+
+	/**
+	 * Constructor
+	 * @param name the table name
+	 */
+	public TableModel(String name) {
+		super();
+		this.name = name;
+	}
+
+	/**
+	 * @return the name
+	 */
+	@XmlAttribute
+	public String getName() {
+		return name;
+	}
+
+	/**
+	 * @param name the name to set
+	 */
+	public void setName(String name) {
+		this.name = name;
+	}
+
+	/* (non-Javadoc)
+	 * @see java.lang.Object#toString()
+	 */
+	@Override
+	public String toString() {
+		return this.name;
+	}
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableRegionModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableRegionModel.java
new file mode 100644
index 0000000..67e7a04
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableRegionModel.java
@@ -0,0 +1,195 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.Serializable;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Representation of a region of a table and its current location on the
+ * storage cluster.
+ * 
+ * <pre>
+ * &lt;complexType name="TableRegion"&gt;
+ *   &lt;attribute name="name" type="string"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="id" type="int"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="startKey" type="base64Binary"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="endKey" type="base64Binary"&gt;&lt;/attribute&gt;
+ *   &lt;attribute name="location" type="string"&gt;&lt;/attribute&gt;
+ *  &lt;/complexType&gt;
+ * </pre>
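+ *
+ * <p>
+ * Illustrative construction (a sketch; keys and location are assumed):
+ * <pre>
+ * TableRegionModel region = new TableRegionModel("mytable", 1263253089862L,
+ *   Bytes.toBytes("rowA"), Bytes.toBytes("rowZ"), "regionserver1:60020");
+ * </pre>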
+ */
+@XmlRootElement(name="Region")
+public class TableRegionModel implements Serializable {
+
+  private static final long serialVersionUID = 1L;
+
+  private String table;
+  private long id;
+  private byte[] startKey; 
+  private byte[] endKey;
+  private String location;
+
+  /**
+   * Constructor
+   */
+  public TableRegionModel() {}
+
+  /**
+   * Constructor
+   * @param table the table name
+   * @param id the encoded id of the region
+   * @param startKey the start key of the region
+   * @param endKey the end key of the region
+   */
+  public TableRegionModel(String table, long id, byte[] startKey,
+      byte[] endKey) {
+    this(table, id, startKey, endKey, null);
+  }
+
+  /**
+   * Constructor
+   * @param table the table name
+   * @param id the encoded id of the region
+   * @param startKey the start key of the region
+   * @param endKey the end key of the region
+   * @param location the name and port of the region server hosting the region
+   */
+  public TableRegionModel(String table, long id, byte[] startKey,
+      byte[] endKey, String location) {
+    this.table = table;
+    this.id = id;
+    this.startKey = startKey;
+    this.endKey = endKey;
+    this.location = location;
+  }
+
+  /**
+   * @return the region name
+   */
+  @XmlAttribute
+  public String getName() {
+    byte [] tableNameAsBytes = Bytes.toBytes(this.table);
+    byte [] nameAsBytes = HRegionInfo.createRegionName(tableNameAsBytes,
+      this.startKey, this.id,
+      !HTableDescriptor.isMetaTable(tableNameAsBytes));
+    return Bytes.toString(nameAsBytes);
+  }
+
+  /**
+   * @return the encoded region id
+   */
+  @XmlAttribute 
+  public long getId() {
+    return id;
+  }
+
+  /**
+   * @return the start key
+   */
+  @XmlAttribute 
+  public byte[] getStartKey() {
+    return startKey;
+  }
+
+  /**
+   * @return the end key
+   */
+  @XmlAttribute 
+  public byte[] getEndKey() {
+    return endKey;
+  }
+
+  /**
+   * @return the name and port of the region server hosting the region
+   */
+  @XmlAttribute 
+  public String getLocation() {
+    return location;
+  }
+
+  /**
+   * @param name region printable name
+   */
+  public void setName(String name) {
+    String split[] = name.split(",");
+    this.table = split[0];
+    this.startKey = Bytes.toBytes(split[1]);
+    String tail = split[2];
+    split = tail.split("\\.");
+    id = Long.valueOf(split[0]);
+  }
+
+  /**
+   * @param id the region's encoded id
+   */
+  public void setId(long id) {
+    this.id = id;
+  }
+
+  /**
+   * @param startKey the start key
+   */
+  public void setStartKey(byte[] startKey) {
+    this.startKey = startKey;
+  }
+
+  /**
+   * @param endKey the end key
+   */
+  public void setEndKey(byte[] endKey) {
+    this.endKey = endKey;
+  }
+
+  /**
+   * @param location the name and port of the region server hosting the region
+   */
+  public void setLocation(String location) {
+    this.location = location;
+  }
+
+  /* (non-Javadoc)
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append(getName());
+    sb.append(" [\n  id=");
+    sb.append(id);
+    sb.append("\n  startKey='");
+    sb.append(Bytes.toString(startKey));
+    sb.append("'\n  endKey='");
+    sb.append(Bytes.toString(endKey));
+    if (location != null) {
+      sb.append("'\n  location='");
+      sb.append(location);
+    }
+    sb.append("'\n]\n");
+    return sb.toString();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java
new file mode 100644
index 0000000..fa6e3a6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java
@@ -0,0 +1,353 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import javax.xml.bind.annotation.XmlAnyAttribute;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.namespace.QName;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema;
+import org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * A representation of HBase table descriptors.
+ * 
+ * <pre>
+ * &lt;complexType name="TableSchema"&gt;
+ *   &lt;sequence&gt;
+ *     &lt;element name="column" type="tns:ColumnSchema" 
+ *       maxOccurs="unbounded" minOccurs="1"&gt;&lt;/element&gt;
+ *   &lt;/sequence&gt;
+ *   &lt;attribute name="name" type="string"&gt;&lt;/attribute&gt;
+ *   &lt;anyAttribute&gt;&lt;/anyAttribute&gt;
+ * &lt;/complexType&gt;
+ * </pre>
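+ *
+ * <p>
+ * Illustrative construction (a sketch; table and family names are assumed):
+ * <pre>
+ * TableSchemaModel schema = new TableSchemaModel();
+ * schema.setName("mytable");
+ * ColumnSchemaModel family = new ColumnSchemaModel();
+ * family.setName("cf");
+ * family.addAttribute(HColumnDescriptor.TTL, "86400");
+ * schema.addColumnFamily(family);
+ * HTableDescriptor htd = schema.getTableDescriptor();
+ * </pre>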
+ */
+@XmlRootElement(name="TableSchema")
+public class TableSchemaModel implements Serializable, ProtobufMessageHandler {
+  private static final long serialVersionUID = 1L;
+  private static final QName IS_META = new QName(HTableDescriptor.IS_META);
+  private static final QName IS_ROOT = new QName(HTableDescriptor.IS_ROOT);
+  private static final QName READONLY = new QName(HTableDescriptor.READONLY);
+  private static final QName TTL = new QName(HColumnDescriptor.TTL);
+  private static final QName VERSIONS = new QName(HConstants.VERSIONS);
+  private static final QName COMPRESSION = 
+    new QName(HColumnDescriptor.COMPRESSION);
+
+  private String name;
+  private Map<QName,Object> attrs = new HashMap<QName,Object>();
+  private List<ColumnSchemaModel> columns = new ArrayList<ColumnSchemaModel>();
+  
+  /**
+   * Default constructor.
+   */
+  public TableSchemaModel() {}
+
+  /**
+   * Constructor
+   * @param htd the table descriptor
+   */
+  public TableSchemaModel(HTableDescriptor htd) {
+    setName(htd.getNameAsString());
+    for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+        htd.getValues().entrySet()) {
+      addAttribute(Bytes.toString(e.getKey().get()), 
+        Bytes.toString(e.getValue().get()));
+    }
+    for (HColumnDescriptor hcd: htd.getFamilies()) {
+      ColumnSchemaModel columnModel = new ColumnSchemaModel();
+      columnModel.setName(hcd.getNameAsString());
+      for (Map.Entry<ImmutableBytesWritable, ImmutableBytesWritable> e:
+          hcd.getValues().entrySet()) {
+        columnModel.addAttribute(Bytes.toString(e.getKey().get()), 
+            Bytes.toString(e.getValue().get()));
+      }
+      addColumnFamily(columnModel);
+    }
+  }
+
+  /**
+   * Add an attribute to the table descriptor
+   * @param name attribute name
+   * @param value attribute value
+   */
+  public void addAttribute(String name, Object value) {
+    attrs.put(new QName(name), value);
+  }
+
+  /**
+   * Return a table descriptor value as a string. Calls toString() on the
+   * object stored in the descriptor value map.
+   * @param name the attribute name
+   * @return the attribute value
+   */
+  public String getAttribute(String name) {
+    Object o = attrs.get(new QName(name));
+    return o != null ? o.toString() : null;
+  }
+
+  /**
+   * Add a column family to the table descriptor
+   * @param family the column family model
+   */
+  public void addColumnFamily(ColumnSchemaModel family) {
+    columns.add(family);
+  }
+
+  /**
+   * Retrieve the column family at the given index from the table descriptor
+   * @param index the index
+   * @return the column family model
+   */
+  public ColumnSchemaModel getColumnFamily(int index) {
+    return columns.get(index);
+  }
+
+  /**
+   * @return the table name
+   */
+  @XmlAttribute
+  public String getName() {
+    return name;
+  }
+
+  /**
+   * @return the map for holding unspecified (user) attributes
+   */
+  @XmlAnyAttribute
+  public Map<QName,Object> getAny() {
+    return attrs;
+  }
+
+  /**
+   * @return the columns
+   */
+  @XmlElement(name="ColumnSchema")
+  public List<ColumnSchemaModel> getColumns() {
+    return columns;
+  }
+
+  /**
+   * @param name the table name
+   */
+  public void setName(String name) {
+    this.name = name;
+  }
+
+  /**
+   * @param columns the columns to set
+   */
+  public void setColumns(List<ColumnSchemaModel> columns) {
+    this.columns = columns;
+  }
+
+  /* (non-Javadoc)
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("{ NAME=> '");
+    sb.append(name);
+    sb.append('\'');
+    for (Map.Entry<QName,Object> e: attrs.entrySet()) {
+      sb.append(", ");
+      sb.append(e.getKey().getLocalPart());
+      sb.append(" => '");
+      sb.append(e.getValue().toString());
+      sb.append('\'');
+    }
+    sb.append(", COLUMNS => [ ");
+    Iterator<ColumnSchemaModel> i = columns.iterator();
+    while (i.hasNext()) {
+      ColumnSchemaModel family = i.next();
+      sb.append(family.toString());
+      if (i.hasNext()) {
+        sb.append(',');
+      }
+      sb.append(' ');
+    }
+    sb.append("] }");
+    return sb.toString();
+  }
+
+  // getters and setters for common schema attributes
+
+  // cannot be standard bean type getters and setters, otherwise this would
+  // confuse JAXB
+
+  /**
+   * @return true if IS_META attribute exists and is true
+   */
+  public boolean __getIsMeta() {
+    Object o = attrs.get(IS_META);
+    return o != null ? Boolean.valueOf(o.toString()) : false;
+  }
+
+  /**
+   * @return true if IS_ROOT attribute exists and is true
+   */
+  public boolean __getIsRoot() {
+    Object o = attrs.get(IS_ROOT);
+    return o != null ? Boolean.valueOf(o.toString()) : false;
+  }
+
+  /**
+   * @return true if READONLY attribute exists and is true
+   */
+  public boolean __getReadOnly() {
+    Object o = attrs.get(READONLY);
+    return o != null ? 
+      Boolean.valueOf(o.toString()) : HTableDescriptor.DEFAULT_READONLY;
+  }
+
+  /**
+   * @param value desired value of IS_META attribute
+   */
+  public void __setIsMeta(boolean value) {
+    attrs.put(IS_META, Boolean.toString(value));
+  }
+
+  /**
+   * @param value desired value of IS_ROOT attribute
+   */
+  public void __setIsRoot(boolean value) {
+    attrs.put(IS_ROOT, Boolean.toString(value));
+  }
+
+  /**
+   * @param value desired value of READONLY attribute
+   */
+  public void __setReadOnly(boolean value) {
+    attrs.put(READONLY, Boolean.toString(value));
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    TableSchema.Builder builder = TableSchema.newBuilder();
+    builder.setName(name);
+    for (Map.Entry<QName, Object> e: attrs.entrySet()) {
+      TableSchema.Attribute.Builder attrBuilder = 
+        TableSchema.Attribute.newBuilder();
+      attrBuilder.setName(e.getKey().getLocalPart());
+      attrBuilder.setValue(e.getValue().toString());
+      builder.addAttrs(attrBuilder);
+    }
+    for (ColumnSchemaModel family: columns) {
+      Map<QName, Object> familyAttrs = family.getAny();
+      ColumnSchema.Builder familyBuilder = ColumnSchema.newBuilder();
+      familyBuilder.setName(family.getName());
+      for (Map.Entry<QName, Object> e: familyAttrs.entrySet()) {
+        ColumnSchema.Attribute.Builder attrBuilder = 
+          ColumnSchema.Attribute.newBuilder();
+        attrBuilder.setName(e.getKey().getLocalPart());
+        attrBuilder.setValue(e.getValue().toString());
+        familyBuilder.addAttrs(attrBuilder);
+      }
+      if (familyAttrs.containsKey(TTL)) {
+        familyBuilder.setTtl(
+          Integer.valueOf(familyAttrs.get(TTL).toString()));
+      }
+      if (familyAttrs.containsKey(VERSIONS)) {
+        familyBuilder.setMaxVersions(
+          Integer.valueOf(familyAttrs.get(VERSIONS).toString()));
+      }
+      if (familyAttrs.containsKey(COMPRESSION)) {
+        familyBuilder.setCompression(familyAttrs.get(COMPRESSION).toString());
+      }
+      builder.addColumns(familyBuilder);
+    }
+    if (attrs.containsKey(READONLY)) {
+      builder.setReadOnly(
+        Boolean.valueOf(attrs.get(READONLY).toString()));
+    }
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message) 
+      throws IOException {
+    TableSchema.Builder builder = TableSchema.newBuilder();
+    builder.mergeFrom(message);
+    this.setName(builder.getName());
+    for (TableSchema.Attribute attr: builder.getAttrsList()) {
+      this.addAttribute(attr.getName(), attr.getValue());
+    }
+    if (builder.hasReadOnly()) {
+      this.addAttribute(HTableDescriptor.READONLY, builder.getReadOnly());
+    }
+    for (ColumnSchema family: builder.getColumnsList()) {
+      ColumnSchemaModel familyModel = new ColumnSchemaModel();
+      familyModel.setName(family.getName());
+      for (ColumnSchema.Attribute attr: family.getAttrsList()) {
+        familyModel.addAttribute(attr.getName(), attr.getValue());
+      }
+      if (family.hasTtl()) {
+        familyModel.addAttribute(HColumnDescriptor.TTL, family.getTtl());
+      }
+      if (family.hasMaxVersions()) {
+        familyModel.addAttribute(HConstants.VERSIONS,
+          family.getMaxVersions());
+      }
+      if (family.hasCompression()) {
+        familyModel.addAttribute(HColumnDescriptor.COMPRESSION,
+          family.getCompression());
+      }
+      this.addColumnFamily(familyModel);
+    }
+    return this;
+  }
+
+  /**
+   * @return a table descriptor
+   */
+  public HTableDescriptor getTableDescriptor() {
+    HTableDescriptor htd = new HTableDescriptor(getName());
+    for (Map.Entry<QName, Object> e: getAny().entrySet()) {
+      htd.setValue(e.getKey().getLocalPart(), e.getValue().toString());
+    }
+    for (ColumnSchemaModel column: getColumns()) {
+      HColumnDescriptor hcd = new HColumnDescriptor(column.getName());
+      for (Map.Entry<QName, Object> e: column.getAny().entrySet()) {
+        hcd.setValue(e.getKey().getLocalPart(), e.getValue().toString());
+      }
+      htd.addFamily(hcd);
+    }
+    return htd;
+  }
+
+}
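
For readers unfamiliar with the REST model classes, the short sketch below (not part of the patch) shows how TableSchemaModel is meant to be driven: a model is populated, round-tripped through its protobuf form, and converted into an HTableDescriptor. It assumes TableSchemaModel has a public no-arg constructor like ColumnSchemaModel above; the table name, family name, and attribute values are invented for illustration.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.rest.model.ColumnSchemaModel;
import org.apache.hadoop.hbase.rest.model.TableSchemaModel;

public class TableSchemaModelExample {
  public static void main(String[] args) throws Exception {
    // Populate a schema model for a hypothetical table with one column family.
    TableSchemaModel model = new TableSchemaModel();
    model.setName("example_table");
    model.addAttribute(HTableDescriptor.READONLY, Boolean.FALSE);

    ColumnSchemaModel family = new ColumnSchemaModel();
    family.setName("cf");
    family.addAttribute(HColumnDescriptor.COMPRESSION, "NONE");
    model.addColumnFamily(family);

    // Serialize to the protobuf wire format used by the REST gateway ...
    byte[] bytes = model.createProtobufOutput();

    // ... read it back into a fresh instance ...
    TableSchemaModel copy = new TableSchemaModel();
    copy.getObjectFromMessage(bytes);

    // ... and turn it into a regular HBase table descriptor.
    HTableDescriptor htd = copy.getTableDescriptor();
    System.out.println(htd);
  }
}
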
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/VersionModel.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/VersionModel.java
new file mode 100644
index 0000000..e4b6b0f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/model/VersionModel.java
@@ -0,0 +1,208 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.Serializable;
+
+import javax.servlet.ServletContext;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+import org.apache.hadoop.hbase.rest.RESTServlet;
+import org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version;
+
+import com.sun.jersey.spi.container.servlet.ServletContainer;
+
+/**
+ * A representation of the collection of versions of the REST gateway software
+ * components.
+ * <ul>
+ * <li>restVersion: REST gateway revision</li>
+ * <li>jvmVersion: the JVM vendor and version information</li>
+ * <li>osVersion: the OS type, version, and hardware architecture</li>
+ * <li>serverVersion: the name and version of the servlet container</li>
+ * <li>jerseyVersion: the version of the embedded Jersey framework</li>
+ * </ul>
+ */
+@XmlRootElement(name="Version")
+public class VersionModel implements Serializable, ProtobufMessageHandler {
+
+  private static final long serialVersionUID = 1L;
+
+  private String restVersion;
+  private String jvmVersion;
+  private String osVersion;
+  private String serverVersion;
+  private String jerseyVersion;
+
+  /**
+   * Default constructor. Do not use.
+   */
+  public VersionModel() {}
+  
+  /**
+   * Constructor
+   * @param context the servlet context
+   */
+  public VersionModel(ServletContext context) {
+    restVersion = RESTServlet.VERSION_STRING;
+    jvmVersion = System.getProperty("java.vm.vendor") + ' ' +
+      System.getProperty("java.version") + '-' +
+      System.getProperty("java.vm.version");
+    osVersion = System.getProperty("os.name") + ' ' +
+      System.getProperty("os.version") + ' ' +
+      System.getProperty("os.arch");
+    serverVersion = context.getServerInfo();
+    jerseyVersion = ServletContainer.class.getPackage()
+      .getImplementationVersion();
+  }
+
+  /**
+   * @return the REST gateway version
+   */
+  @XmlAttribute(name="REST")
+  public String getRESTVersion() {
+    return restVersion;
+  }
+
+  /**
+   * @return the JVM vendor and version
+   */
+  @XmlAttribute(name="JVM")
+  public String getJVMVersion() {
+    return jvmVersion;
+  }
+
+  /**
+   * @return the OS name, version, and hardware architecture
+   */
+  @XmlAttribute(name="OS")
+  public String getOSVersion() {
+    return osVersion;
+  }
+
+  /**
+   * @return the servlet container version
+   */
+  @XmlAttribute(name="Server")
+  public String getServerVersion() {
+    return serverVersion;
+  }
+
+  /**
+   * @return the version of the embedded Jersey framework
+   */
+  @XmlAttribute(name="Jersey")
+  public String getJerseyVersion() {
+    return jerseyVersion;
+  }
+
+  /**
+   * @param version the REST gateway version string
+   */
+  public void setRESTVersion(String version) {
+    this.restVersion = version;
+  }
+
+  /**
+   * @param version the OS version string
+   */
+  public void setOSVersion(String version) {
+    this.osVersion = version;
+  }
+
+  /**
+   * @param version the JVM version string
+   */
+  public void setJVMVersion(String version) {
+    this.jvmVersion = version;
+  }
+
+  /**
+   * @param version the servlet container version string
+   */
+  public void setServerVersion(String version) {
+    this.serverVersion = version;
+  }
+
+  /**
+   * @param version the Jersey framework version string
+   */
+  public void setJerseyVersion(String version) {
+    this.jerseyVersion = version;
+  }
+
+  /* (non-Javadoc)
+   * @see java.lang.Object#toString()
+   */
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append("rest ");
+    sb.append(restVersion);
+    sb.append(" [JVM: ");
+    sb.append(jvmVersion);
+    sb.append("] [OS: ");
+    sb.append(osVersion);
+    sb.append("] [Server: ");
+    sb.append(serverVersion);
+    sb.append("] [Jersey: ");
+    sb.append(jerseyVersion);
+    sb.append("]\n");
+    return sb.toString();
+  }
+
+  @Override
+  public byte[] createProtobufOutput() {
+    Version.Builder builder = Version.newBuilder();
+    builder.setRestVersion(restVersion);
+    builder.setJvmVersion(jvmVersion);
+    builder.setOsVersion(osVersion);
+    builder.setServerVersion(serverVersion);
+    builder.setJerseyVersion(jerseyVersion);
+    return builder.build().toByteArray();
+  }
+
+  @Override
+  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
+      throws IOException {
+    Version.Builder builder = Version.newBuilder();
+    builder.mergeFrom(message);
+    if (builder.hasRestVersion()) {
+      restVersion = builder.getRestVersion();
+    }
+    if (builder.hasJvmVersion()) {
+      jvmVersion = builder.getJvmVersion();
+    }
+    if (builder.hasOsVersion()) {
+      osVersion = builder.getOsVersion();
+    }
+    if (builder.hasServerVersion()) {
+      serverVersion = builder.getServerVersion();
+    }
+    if (builder.hasJerseyVersion()) {
+      jerseyVersion = builder.getJerseyVersion();
+    }
+    return this;
+  }
+}
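
As a quick illustration (again not part of the patch), the fragment below exercises VersionModel's setters, the protobuf round trip, and toString(). The class comment recommends the ServletContext constructor; the no-arg form and the version literals are used here purely for demonstration.

import org.apache.hadoop.hbase.rest.model.VersionModel;

public class VersionModelExample {
  public static void main(String[] args) throws Exception {
    VersionModel model = new VersionModel();
    model.setRESTVersion("0.0.2");                 // hypothetical gateway version
    model.setJVMVersion(System.getProperty("java.vm.vendor") + ' '
        + System.getProperty("java.version"));
    model.setOSVersion(System.getProperty("os.name") + ' '
        + System.getProperty("os.arch"));
    model.setServerVersion("jetty/6.1.26");        // hypothetical container string
    model.setJerseyVersion("1.4");                 // hypothetical Jersey version

    // Serialize to protobuf and read back into a new instance.
    byte[] bytes = model.createProtobufOutput();
    VersionModel copy = new VersionModel();
    copy.getObjectFromMessage(bytes);

    // toString() renders the bracketed one-line summary built above.
    System.out.print(copy.toString());
  }
}
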
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
new file mode 100644
index 0000000..36065dc
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellMessage.java
@@ -0,0 +1,465 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: CellMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class CellMessage {
+  private CellMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class Cell extends
+      com.google.protobuf.GeneratedMessage {
+    // Use Cell.newBuilder() to construct.
+    private Cell() {
+      initFields();
+    }
+    private Cell(boolean noInit) {}
+    
+    private static final Cell defaultInstance;
+    public static Cell getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public Cell getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_fieldAccessorTable;
+    }
+    
+    // optional bytes row = 1;
+    public static final int ROW_FIELD_NUMBER = 1;
+    private boolean hasRow;
+    private com.google.protobuf.ByteString row_ = com.google.protobuf.ByteString.EMPTY;
+    public boolean hasRow() { return hasRow; }
+    public com.google.protobuf.ByteString getRow() { return row_; }
+    
+    // optional bytes column = 2;
+    public static final int COLUMN_FIELD_NUMBER = 2;
+    private boolean hasColumn;
+    private com.google.protobuf.ByteString column_ = com.google.protobuf.ByteString.EMPTY;
+    public boolean hasColumn() { return hasColumn; }
+    public com.google.protobuf.ByteString getColumn() { return column_; }
+    
+    // optional int64 timestamp = 3;
+    public static final int TIMESTAMP_FIELD_NUMBER = 3;
+    private boolean hasTimestamp;
+    private long timestamp_ = 0L;
+    public boolean hasTimestamp() { return hasTimestamp; }
+    public long getTimestamp() { return timestamp_; }
+    
+    // optional bytes data = 4;
+    public static final int DATA_FIELD_NUMBER = 4;
+    private boolean hasData;
+    private com.google.protobuf.ByteString data_ = com.google.protobuf.ByteString.EMPTY;
+    public boolean hasData() { return hasData; }
+    public com.google.protobuf.ByteString getData() { return data_; }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      if (hasRow()) {
+        output.writeBytes(1, getRow());
+      }
+      if (hasColumn()) {
+        output.writeBytes(2, getColumn());
+      }
+      if (hasTimestamp()) {
+        output.writeInt64(3, getTimestamp());
+      }
+      if (hasData()) {
+        output.writeBytes(4, getData());
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      if (hasRow()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBytesSize(1, getRow());
+      }
+      if (hasColumn()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBytesSize(2, getColumn());
+      }
+      if (hasTimestamp()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt64Size(3, getTimestamp());
+      }
+      if (hasData()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBytesSize(4, getData());
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.getDefaultInstance()) return this;
+        if (other.hasRow()) {
+          setRow(other.getRow());
+        }
+        if (other.hasColumn()) {
+          setColumn(other.getColumn());
+        }
+        if (other.hasTimestamp()) {
+          setTimestamp(other.getTimestamp());
+        }
+        if (other.hasData()) {
+          setData(other.getData());
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              setRow(input.readBytes());
+              break;
+            }
+            case 18: {
+              setColumn(input.readBytes());
+              break;
+            }
+            case 24: {
+              setTimestamp(input.readInt64());
+              break;
+            }
+            case 34: {
+              setData(input.readBytes());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // optional bytes row = 1;
+      public boolean hasRow() {
+        return result.hasRow();
+      }
+      public com.google.protobuf.ByteString getRow() {
+        return result.getRow();
+      }
+      public Builder setRow(com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasRow = true;
+        result.row_ = value;
+        return this;
+      }
+      public Builder clearRow() {
+        result.hasRow = false;
+        result.row_ = getDefaultInstance().getRow();
+        return this;
+      }
+      
+      // optional bytes column = 2;
+      public boolean hasColumn() {
+        return result.hasColumn();
+      }
+      public com.google.protobuf.ByteString getColumn() {
+        return result.getColumn();
+      }
+      public Builder setColumn(com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasColumn = true;
+        result.column_ = value;
+        return this;
+      }
+      public Builder clearColumn() {
+        result.hasColumn = false;
+        result.column_ = getDefaultInstance().getColumn();
+        return this;
+      }
+      
+      // optional int64 timestamp = 3;
+      public boolean hasTimestamp() {
+        return result.hasTimestamp();
+      }
+      public long getTimestamp() {
+        return result.getTimestamp();
+      }
+      public Builder setTimestamp(long value) {
+        result.hasTimestamp = true;
+        result.timestamp_ = value;
+        return this;
+      }
+      public Builder clearTimestamp() {
+        result.hasTimestamp = false;
+        result.timestamp_ = 0L;
+        return this;
+      }
+      
+      // optional bytes data = 4;
+      public boolean hasData() {
+        return result.hasData();
+      }
+      public com.google.protobuf.ByteString getData() {
+        return result.getData();
+      }
+      public Builder setData(com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasData = true;
+        result.data_ = value;
+        return this;
+      }
+      public Builder clearData() {
+        result.hasData = false;
+        result.data_ = getDefaultInstance().getData();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.Cell)
+    }
+    
+    static {
+      defaultInstance = new Cell(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.Cell)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\021CellMessage.proto\022/org.apache.hadoop.h" +
+      "base.rest.protobuf.generated\"D\n\004Cell\022\013\n\003" +
+      "row\030\001 \001(\014\022\016\n\006column\030\002 \001(\014\022\021\n\ttimestamp\030\003" +
+      " \001(\003\022\014\n\004data\030\004 \001(\014"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Cell_descriptor,
+              new java.lang.String[] { "Row", "Column", "Timestamp", "Data", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
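
The generated Cell message is used through its nested Builder: optional fields are set, the message is frozen with build(), serialized with toByteArray(), and parsed back with Cell.parseFrom(). A minimal round-trip sketch follows (the row, column, and value literals are invented for illustration); CellSetMessage below nests these Cell messages under per-row keys in the same fashion.

import com.google.protobuf.ByteString;
import org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell;

public class CellMessageExample {
  public static void main(String[] args) throws Exception {
    // Set the optional fields through the Builder and freeze the message.
    Cell cell = Cell.newBuilder()
        .setRow(ByteString.copyFromUtf8("row1"))
        .setColumn(ByteString.copyFromUtf8("cf:qual"))
        .setTimestamp(System.currentTimeMillis())
        .setData(ByteString.copyFromUtf8("value"))
        .build();

    // Serialize to bytes and parse back; hasXxx() reports which fields were set.
    byte[] wire = cell.toByteArray();
    Cell parsed = Cell.parseFrom(wire);
    System.out.println(parsed.getRow().toStringUtf8() + " @ " + parsed.getTimestamp());
  }
}
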
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
new file mode 100644
index 0000000..735bf56
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/CellSetMessage.java
@@ -0,0 +1,780 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: CellSetMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class CellSetMessage {
+  private CellSetMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class CellSet extends
+      com.google.protobuf.GeneratedMessage {
+    // Use CellSet.newBuilder() to construct.
+    private CellSet() {
+      initFields();
+    }
+    private CellSet(boolean noInit) {}
+    
+    private static final CellSet defaultInstance;
+    public static CellSet getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public CellSet getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_fieldAccessorTable;
+    }
+    
+    public static final class Row extends
+        com.google.protobuf.GeneratedMessage {
+      // Use Row.newBuilder() to construct.
+      private Row() {
+        initFields();
+      }
+      private Row(boolean noInit) {}
+      
+      private static final Row defaultInstance;
+      public static Row getDefaultInstance() {
+        return defaultInstance;
+      }
+      
+      public Row getDefaultInstanceForType() {
+        return defaultInstance;
+      }
+      
+      public static final com.google.protobuf.Descriptors.Descriptor
+          getDescriptor() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_descriptor;
+      }
+      
+      protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+          internalGetFieldAccessorTable() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_fieldAccessorTable;
+      }
+      
+      // required bytes key = 1;
+      public static final int KEY_FIELD_NUMBER = 1;
+      private boolean hasKey;
+      private com.google.protobuf.ByteString key_ = com.google.protobuf.ByteString.EMPTY;
+      public boolean hasKey() { return hasKey; }
+      public com.google.protobuf.ByteString getKey() { return key_; }
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.Cell values = 2;
+      public static final int VALUES_FIELD_NUMBER = 2;
+      private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell> values_ =
+        java.util.Collections.emptyList();
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell> getValuesList() {
+        return values_;
+      }
+      public int getValuesCount() { return values_.size(); }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell getValues(int index) {
+        return values_.get(index);
+      }
+      
+      private void initFields() {
+      }
+      public final boolean isInitialized() {
+        if (!hasKey) return false;
+        return true;
+      }
+      
+      public void writeTo(com.google.protobuf.CodedOutputStream output)
+                          throws java.io.IOException {
+        getSerializedSize();
+        if (hasKey()) {
+          output.writeBytes(1, getKey());
+        }
+        for (org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell element : getValuesList()) {
+          output.writeMessage(2, element);
+        }
+        getUnknownFields().writeTo(output);
+      }
+      
+      private int memoizedSerializedSize = -1;
+      public int getSerializedSize() {
+        int size = memoizedSerializedSize;
+        if (size != -1) return size;
+      
+        size = 0;
+        if (hasKey()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeBytesSize(1, getKey());
+        }
+        for (org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell element : getValuesList()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeMessageSize(2, element);
+        }
+        size += getUnknownFields().getSerializedSize();
+        memoizedSerializedSize = size;
+        return size;
+      }
+      
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(
+          com.google.protobuf.ByteString data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(
+          com.google.protobuf.ByteString data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(byte[] data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(
+          byte[] data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseDelimitedFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseDelimitedFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(
+          com.google.protobuf.CodedInputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row parseFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      
+      public static Builder newBuilder() { return Builder.create(); }
+      public Builder newBuilderForType() { return newBuilder(); }
+      public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row prototype) {
+        return newBuilder().mergeFrom(prototype);
+      }
+      public Builder toBuilder() { return newBuilder(this); }
+      
+      public static final class Builder extends
+          com.google.protobuf.GeneratedMessage.Builder<Builder> {
+        private org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row result;
+        
+        // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.newBuilder()
+        private Builder() {}
+        
+        private static Builder create() {
+          Builder builder = new Builder();
+          builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row();
+          return builder;
+        }
+        
+        protected org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row internalGetResult() {
+          return result;
+        }
+        
+        public Builder clear() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "Cannot call clear() after build().");
+          }
+          result = new org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row();
+          return this;
+        }
+        
+        public Builder clone() {
+          return create().mergeFrom(result);
+        }
+        
+        public com.google.protobuf.Descriptors.Descriptor
+            getDescriptorForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.getDescriptor();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row getDefaultInstanceForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.getDefaultInstance();
+        }
+        
+        public boolean isInitialized() {
+          return result.isInitialized();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row build() {
+          if (result != null && !isInitialized()) {
+            throw newUninitializedMessageException(result);
+          }
+          return buildPartial();
+        }
+        
+        private org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row buildParsed()
+            throws com.google.protobuf.InvalidProtocolBufferException {
+          if (!isInitialized()) {
+            throw newUninitializedMessageException(
+              result).asInvalidProtocolBufferException();
+          }
+          return buildPartial();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row buildPartial() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "build() has already been called on this Builder.");
+          }
+          if (result.values_ != java.util.Collections.EMPTY_LIST) {
+            result.values_ =
+              java.util.Collections.unmodifiableList(result.values_);
+          }
+          org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row returnMe = result;
+          result = null;
+          return returnMe;
+        }
+        
+        public Builder mergeFrom(com.google.protobuf.Message other) {
+          if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row) {
+            return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row)other);
+          } else {
+            super.mergeFrom(other);
+            return this;
+          }
+        }
+        
+        public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row other) {
+          if (other == org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.getDefaultInstance()) return this;
+          if (other.hasKey()) {
+            setKey(other.getKey());
+          }
+          if (!other.values_.isEmpty()) {
+            if (result.values_.isEmpty()) {
+              result.values_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell>();
+            }
+            result.values_.addAll(other.values_);
+          }
+          this.mergeUnknownFields(other.getUnknownFields());
+          return this;
+        }
+        
+        public Builder mergeFrom(
+            com.google.protobuf.CodedInputStream input,
+            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+            throws java.io.IOException {
+          com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+            com.google.protobuf.UnknownFieldSet.newBuilder(
+              this.getUnknownFields());
+          while (true) {
+            int tag = input.readTag();
+            switch (tag) {
+              case 0:
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              default: {
+                if (!parseUnknownField(input, unknownFields,
+                                       extensionRegistry, tag)) {
+                  this.setUnknownFields(unknownFields.build());
+                  return this;
+                }
+                break;
+              }
+              case 10: {
+                setKey(input.readBytes());
+                break;
+              }
+              case 18: {
+                org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.newBuilder();
+                input.readMessage(subBuilder, extensionRegistry);
+                addValues(subBuilder.buildPartial());
+                break;
+              }
+            }
+          }
+        }
+        
+        
+        // required bytes key = 1;
+        public boolean hasKey() {
+          return result.hasKey();
+        }
+        public com.google.protobuf.ByteString getKey() {
+          return result.getKey();
+        }
+        public Builder setKey(com.google.protobuf.ByteString value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasKey = true;
+          result.key_ = value;
+          return this;
+        }
+        public Builder clearKey() {
+          result.hasKey = false;
+          result.key_ = getDefaultInstance().getKey();
+          return this;
+        }
+        
+        // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.Cell values = 2;
+        public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell> getValuesList() {
+          return java.util.Collections.unmodifiableList(result.values_);
+        }
+        public int getValuesCount() {
+          return result.getValuesCount();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell getValues(int index) {
+          return result.getValues(index);
+        }
+        public Builder setValues(int index, org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.values_.set(index, value);
+          return this;
+        }
+        public Builder setValues(int index, org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.Builder builderForValue) {
+          result.values_.set(index, builderForValue.build());
+          return this;
+        }
+        public Builder addValues(org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          if (result.values_.isEmpty()) {
+            result.values_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell>();
+          }
+          result.values_.add(value);
+          return this;
+        }
+        public Builder addValues(org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell.Builder builderForValue) {
+          if (result.values_.isEmpty()) {
+            result.values_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell>();
+          }
+          result.values_.add(builderForValue.build());
+          return this;
+        }
+        public Builder addAllValues(
+            java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell> values) {
+          if (result.values_.isEmpty()) {
+            result.values_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell>();
+          }
+          super.addAll(values, result.values_);
+          return this;
+        }
+        public Builder clearValues() {
+          result.values_ = java.util.Collections.emptyList();
+          return this;
+        }
+        
+        // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.CellSet.Row)
+      }
+      
+      static {
+        defaultInstance = new Row(true);
+        org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.internalForceInit();
+        defaultInstance.initFields();
+      }
+      
+      // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.CellSet.Row)
+    }
+    
+    // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.CellSet.Row rows = 1;
+    public static final int ROWS_FIELD_NUMBER = 1;
+    private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row> rows_ =
+      java.util.Collections.emptyList();
+    public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row> getRowsList() {
+      return rows_;
+    }
+    public int getRowsCount() { return rows_.size(); }
+    public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row getRows(int index) {
+      return rows_.get(index);
+    }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row element : getRowsList()) {
+        if (!element.isInitialized()) return false;
+      }
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row element : getRowsList()) {
+        output.writeMessage(1, element);
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row element : getRowsList()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeMessageSize(1, element);
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.rows_ != java.util.Collections.EMPTY_LIST) {
+          result.rows_ =
+            java.util.Collections.unmodifiableList(result.rows_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.getDefaultInstance()) return this;
+        if (!other.rows_.isEmpty()) {
+          if (result.rows_.isEmpty()) {
+            result.rows_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row>();
+          }
+          result.rows_.addAll(other.rows_);
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.newBuilder();
+              input.readMessage(subBuilder, extensionRegistry);
+              addRows(subBuilder.buildPartial());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.CellSet.Row rows = 1;
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row> getRowsList() {
+        return java.util.Collections.unmodifiableList(result.rows_);
+      }
+      public int getRowsCount() {
+        return result.getRowsCount();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row getRows(int index) {
+        return result.getRows(index);
+      }
+      public Builder setRows(int index, org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.rows_.set(index, value);
+        return this;
+      }
+      public Builder setRows(int index, org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.Builder builderForValue) {
+        result.rows_.set(index, builderForValue.build());
+        return this;
+      }
+      public Builder addRows(org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.rows_.isEmpty()) {
+          result.rows_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row>();
+        }
+        result.rows_.add(value);
+        return this;
+      }
+      public Builder addRows(org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.Builder builderForValue) {
+        if (result.rows_.isEmpty()) {
+          result.rows_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row>();
+        }
+        result.rows_.add(builderForValue.build());
+        return this;
+      }
+      public Builder addAllRows(
+          java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row> values) {
+        if (result.rows_.isEmpty()) {
+          result.rows_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row>();
+        }
+        super.addAll(values, result.rows_);
+        return this;
+      }
+      public Builder clearRows() {
+        result.rows_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.CellSet)
+    }
+    
+    static {
+      defaultInstance = new CellSet(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.CellSet)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_fieldAccessorTable;
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\024CellSetMessage.proto\022/org.apache.hadoo" +
+      "p.hbase.rest.protobuf.generated\032\021CellMes" +
+      "sage.proto\"\260\001\n\007CellSet\022J\n\004rows\030\001 \003(\0132<.o" +
+      "rg.apache.hadoop.hbase.rest.protobuf.gen" +
+      "erated.CellSet.Row\032Y\n\003Row\022\013\n\003key\030\001 \002(\014\022E" +
+      "\n\006values\030\002 \003(\01325.org.apache.hadoop.hbase" +
+      ".rest.protobuf.generated.Cell"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_descriptor,
+              new java.lang.String[] { "Rows", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Builder.class);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_descriptor =
+            internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_descriptor.getNestedTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_CellSet_Row_descriptor,
+              new java.lang.String[] { "Key", "Values", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet.Row.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+          org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.getDescriptor(),
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
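For reference, a minimal usage sketch (not part of the committed patch) of how a client might drive the generated CellSet builder above. It assumes the usual protoc 2.x setKey accessor on CellSet.Row.Builder, which is generated for the required 'key' field declared in the descriptor but defined earlier in this file; the other calls appear in the generated code above.

import com.google.protobuf.ByteString;
import org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet;

public class CellSetBuilderSketch {
  // Builds a one-row CellSet and serializes it to the protobuf wire format.
  public static byte[] encode(byte[] rowKey) {
    CellSet.Row row = CellSet.Row.newBuilder()
        .setKey(ByteString.copyFrom(rowKey)) // assumed generated setter for the required 'key' field
        .buildPartial();                     // buildPartial() is the call used by the parser code above
    return CellSet.newBuilder()
        .addRows(row)                        // addRows(Row) is defined in the Builder above
        .build()
        .toByteArray();                      // inherited from com.google.protobuf.AbstractMessageLite
  }
}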
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
new file mode 100644
index 0000000..d8db71c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ColumnSchemaMessage.java
@@ -0,0 +1,899 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: ColumnSchemaMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class ColumnSchemaMessage {
+  private ColumnSchemaMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class ColumnSchema extends
+      com.google.protobuf.GeneratedMessage {
+    // Use ColumnSchema.newBuilder() to construct.
+    private ColumnSchema() {
+      initFields();
+    }
+    private ColumnSchema(boolean noInit) {}
+    
+    private static final ColumnSchema defaultInstance;
+    public static ColumnSchema getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public ColumnSchema getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_fieldAccessorTable;
+    }
+    
+    public static final class Attribute extends
+        com.google.protobuf.GeneratedMessage {
+      // Use Attribute.newBuilder() to construct.
+      private Attribute() {
+        initFields();
+      }
+      private Attribute(boolean noInit) {}
+      
+      private static final Attribute defaultInstance;
+      public static Attribute getDefaultInstance() {
+        return defaultInstance;
+      }
+      
+      public Attribute getDefaultInstanceForType() {
+        return defaultInstance;
+      }
+      
+      public static final com.google.protobuf.Descriptors.Descriptor
+          getDescriptor() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_descriptor;
+      }
+      
+      protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+          internalGetFieldAccessorTable() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_fieldAccessorTable;
+      }
+      
+      // required string name = 1;
+      public static final int NAME_FIELD_NUMBER = 1;
+      private boolean hasName;
+      private java.lang.String name_ = "";
+      public boolean hasName() { return hasName; }
+      public java.lang.String getName() { return name_; }
+      
+      // required string value = 2;
+      public static final int VALUE_FIELD_NUMBER = 2;
+      private boolean hasValue;
+      private java.lang.String value_ = "";
+      public boolean hasValue() { return hasValue; }
+      public java.lang.String getValue() { return value_; }
+      
+      private void initFields() {
+      }
+      public final boolean isInitialized() {
+        if (!hasName) return false;
+        if (!hasValue) return false;
+        return true;
+      }
+      
+      public void writeTo(com.google.protobuf.CodedOutputStream output)
+                          throws java.io.IOException {
+        getSerializedSize();
+        if (hasName()) {
+          output.writeString(1, getName());
+        }
+        if (hasValue()) {
+          output.writeString(2, getValue());
+        }
+        getUnknownFields().writeTo(output);
+      }
+      
+      private int memoizedSerializedSize = -1;
+      public int getSerializedSize() {
+        int size = memoizedSerializedSize;
+        if (size != -1) return size;
+      
+        size = 0;
+        if (hasName()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(1, getName());
+        }
+        if (hasValue()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(2, getValue());
+        }
+        size += getUnknownFields().getSerializedSize();
+        memoizedSerializedSize = size;
+        return size;
+      }
+      
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(
+          com.google.protobuf.ByteString data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(
+          com.google.protobuf.ByteString data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(byte[] data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(
+          byte[] data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseDelimitedFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseDelimitedFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(
+          com.google.protobuf.CodedInputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute parseFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      
+      public static Builder newBuilder() { return Builder.create(); }
+      public Builder newBuilderForType() { return newBuilder(); }
+      public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute prototype) {
+        return newBuilder().mergeFrom(prototype);
+      }
+      public Builder toBuilder() { return newBuilder(this); }
+      
+      public static final class Builder extends
+          com.google.protobuf.GeneratedMessage.Builder<Builder> {
+        private org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute result;
+        
+        // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.newBuilder()
+        private Builder() {}
+        
+        private static Builder create() {
+          Builder builder = new Builder();
+          builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute();
+          return builder;
+        }
+        
+        protected org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute internalGetResult() {
+          return result;
+        }
+        
+        public Builder clear() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "Cannot call clear() after build().");
+          }
+          result = new org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute();
+          return this;
+        }
+        
+        public Builder clone() {
+          return create().mergeFrom(result);
+        }
+        
+        public com.google.protobuf.Descriptors.Descriptor
+            getDescriptorForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.getDescriptor();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute getDefaultInstanceForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.getDefaultInstance();
+        }
+        
+        public boolean isInitialized() {
+          return result.isInitialized();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute build() {
+          if (result != null && !isInitialized()) {
+            throw newUninitializedMessageException(result);
+          }
+          return buildPartial();
+        }
+        
+        private org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute buildParsed()
+            throws com.google.protobuf.InvalidProtocolBufferException {
+          if (!isInitialized()) {
+            throw newUninitializedMessageException(
+              result).asInvalidProtocolBufferException();
+          }
+          return buildPartial();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute buildPartial() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "build() has already been called on this Builder.");
+          }
+          org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute returnMe = result;
+          result = null;
+          return returnMe;
+        }
+        
+        public Builder mergeFrom(com.google.protobuf.Message other) {
+          if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute) {
+            return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute)other);
+          } else {
+            super.mergeFrom(other);
+            return this;
+          }
+        }
+        
+        public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute other) {
+          if (other == org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.getDefaultInstance()) return this;
+          if (other.hasName()) {
+            setName(other.getName());
+          }
+          if (other.hasValue()) {
+            setValue(other.getValue());
+          }
+          this.mergeUnknownFields(other.getUnknownFields());
+          return this;
+        }
+        
+        public Builder mergeFrom(
+            com.google.protobuf.CodedInputStream input,
+            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+            throws java.io.IOException {
+          com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+            com.google.protobuf.UnknownFieldSet.newBuilder(
+              this.getUnknownFields());
+          while (true) {
+            int tag = input.readTag();
+            switch (tag) {
+              case 0:
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              default: {
+                if (!parseUnknownField(input, unknownFields,
+                                       extensionRegistry, tag)) {
+                  this.setUnknownFields(unknownFields.build());
+                  return this;
+                }
+                break;
+              }
+              case 10: {
+                setName(input.readString());
+                break;
+              }
+              case 18: {
+                setValue(input.readString());
+                break;
+              }
+            }
+          }
+        }
+        
+        
+        // required string name = 1;
+        public boolean hasName() {
+          return result.hasName();
+        }
+        public java.lang.String getName() {
+          return result.getName();
+        }
+        public Builder setName(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasName = true;
+          result.name_ = value;
+          return this;
+        }
+        public Builder clearName() {
+          result.hasName = false;
+          result.name_ = getDefaultInstance().getName();
+          return this;
+        }
+        
+        // required string value = 2;
+        public boolean hasValue() {
+          return result.hasValue();
+        }
+        public java.lang.String getValue() {
+          return result.getValue();
+        }
+        public Builder setValue(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasValue = true;
+          result.value_ = value;
+          return this;
+        }
+        public Builder clearValue() {
+          result.hasValue = false;
+          result.value_ = getDefaultInstance().getValue();
+          return this;
+        }
+        
+        // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema.Attribute)
+      }
+      
+      static {
+        defaultInstance = new Attribute(true);
+        org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.internalForceInit();
+        defaultInstance.initFields();
+      }
+      
+      // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema.Attribute)
+    }
+    
+    // optional string name = 1;
+    public static final int NAME_FIELD_NUMBER = 1;
+    private boolean hasName;
+    private java.lang.String name_ = "";
+    public boolean hasName() { return hasName; }
+    public java.lang.String getName() { return name_; }
+    
+    // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema.Attribute attrs = 2;
+    public static final int ATTRS_FIELD_NUMBER = 2;
+    private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute> attrs_ =
+      java.util.Collections.emptyList();
+    public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute> getAttrsList() {
+      return attrs_;
+    }
+    public int getAttrsCount() { return attrs_.size(); }
+    public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute getAttrs(int index) {
+      return attrs_.get(index);
+    }
+    
+    // optional int32 ttl = 3;
+    public static final int TTL_FIELD_NUMBER = 3;
+    private boolean hasTtl;
+    private int ttl_ = 0;
+    public boolean hasTtl() { return hasTtl; }
+    public int getTtl() { return ttl_; }
+    
+    // optional int32 maxVersions = 4;
+    public static final int MAXVERSIONS_FIELD_NUMBER = 4;
+    private boolean hasMaxVersions;
+    private int maxVersions_ = 0;
+    public boolean hasMaxVersions() { return hasMaxVersions; }
+    public int getMaxVersions() { return maxVersions_; }
+    
+    // optional string compression = 5;
+    public static final int COMPRESSION_FIELD_NUMBER = 5;
+    private boolean hasCompression;
+    private java.lang.String compression_ = "";
+    public boolean hasCompression() { return hasCompression; }
+    public java.lang.String getCompression() { return compression_; }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute element : getAttrsList()) {
+        if (!element.isInitialized()) return false;
+      }
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      if (hasName()) {
+        output.writeString(1, getName());
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute element : getAttrsList()) {
+        output.writeMessage(2, element);
+      }
+      if (hasTtl()) {
+        output.writeInt32(3, getTtl());
+      }
+      if (hasMaxVersions()) {
+        output.writeInt32(4, getMaxVersions());
+      }
+      if (hasCompression()) {
+        output.writeString(5, getCompression());
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      if (hasName()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(1, getName());
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute element : getAttrsList()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeMessageSize(2, element);
+      }
+      if (hasTtl()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt32Size(3, getTtl());
+      }
+      if (hasMaxVersions()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt32Size(4, getMaxVersions());
+      }
+      if (hasCompression()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(5, getCompression());
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.attrs_ != java.util.Collections.EMPTY_LIST) {
+          result.attrs_ =
+            java.util.Collections.unmodifiableList(result.attrs_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.getDefaultInstance()) return this;
+        if (other.hasName()) {
+          setName(other.getName());
+        }
+        if (!other.attrs_.isEmpty()) {
+          if (result.attrs_.isEmpty()) {
+            result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute>();
+          }
+          result.attrs_.addAll(other.attrs_);
+        }
+        if (other.hasTtl()) {
+          setTtl(other.getTtl());
+        }
+        if (other.hasMaxVersions()) {
+          setMaxVersions(other.getMaxVersions());
+        }
+        if (other.hasCompression()) {
+          setCompression(other.getCompression());
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              setName(input.readString());
+              break;
+            }
+            case 18: {
+              org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.newBuilder();
+              input.readMessage(subBuilder, extensionRegistry);
+              addAttrs(subBuilder.buildPartial());
+              break;
+            }
+            case 24: {
+              setTtl(input.readInt32());
+              break;
+            }
+            case 32: {
+              setMaxVersions(input.readInt32());
+              break;
+            }
+            case 42: {
+              setCompression(input.readString());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // optional string name = 1;
+      public boolean hasName() {
+        return result.hasName();
+      }
+      public java.lang.String getName() {
+        return result.getName();
+      }
+      public Builder setName(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasName = true;
+        result.name_ = value;
+        return this;
+      }
+      public Builder clearName() {
+        result.hasName = false;
+        result.name_ = getDefaultInstance().getName();
+        return this;
+      }
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema.Attribute attrs = 2;
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute> getAttrsList() {
+        return java.util.Collections.unmodifiableList(result.attrs_);
+      }
+      public int getAttrsCount() {
+        return result.getAttrsCount();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute getAttrs(int index) {
+        return result.getAttrs(index);
+      }
+      public Builder setAttrs(int index, org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.attrs_.set(index, value);
+        return this;
+      }
+      public Builder setAttrs(int index, org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.Builder builderForValue) {
+        result.attrs_.set(index, builderForValue.build());
+        return this;
+      }
+      public Builder addAttrs(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.attrs_.isEmpty()) {
+          result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute>();
+        }
+        result.attrs_.add(value);
+        return this;
+      }
+      public Builder addAttrs(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.Builder builderForValue) {
+        if (result.attrs_.isEmpty()) {
+          result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute>();
+        }
+        result.attrs_.add(builderForValue.build());
+        return this;
+      }
+      public Builder addAllAttrs(
+          java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute> values) {
+        if (result.attrs_.isEmpty()) {
+          result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute>();
+        }
+        super.addAll(values, result.attrs_);
+        return this;
+      }
+      public Builder clearAttrs() {
+        result.attrs_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // optional int32 ttl = 3;
+      public boolean hasTtl() {
+        return result.hasTtl();
+      }
+      public int getTtl() {
+        return result.getTtl();
+      }
+      public Builder setTtl(int value) {
+        result.hasTtl = true;
+        result.ttl_ = value;
+        return this;
+      }
+      public Builder clearTtl() {
+        result.hasTtl = false;
+        result.ttl_ = 0;
+        return this;
+      }
+      
+      // optional int32 maxVersions = 4;
+      public boolean hasMaxVersions() {
+        return result.hasMaxVersions();
+      }
+      public int getMaxVersions() {
+        return result.getMaxVersions();
+      }
+      public Builder setMaxVersions(int value) {
+        result.hasMaxVersions = true;
+        result.maxVersions_ = value;
+        return this;
+      }
+      public Builder clearMaxVersions() {
+        result.hasMaxVersions = false;
+        result.maxVersions_ = 0;
+        return this;
+      }
+      
+      // optional string compression = 5;
+      public boolean hasCompression() {
+        return result.hasCompression();
+      }
+      public java.lang.String getCompression() {
+        return result.getCompression();
+      }
+      public Builder setCompression(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasCompression = true;
+        result.compression_ = value;
+        return this;
+      }
+      public Builder clearCompression() {
+        result.hasCompression = false;
+        result.compression_ = getDefaultInstance().getCompression();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema)
+    }
+    
+    static {
+      defaultInstance = new ColumnSchema(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_fieldAccessorTable;
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\031ColumnSchemaMessage.proto\022/org.apache." +
+      "hadoop.hbase.rest.protobuf.generated\"\325\001\n" +
+      "\014ColumnSchema\022\014\n\004name\030\001 \001(\t\022V\n\005attrs\030\002 \003" +
+      "(\0132G.org.apache.hadoop.hbase.rest.protob" +
+      "uf.generated.ColumnSchema.Attribute\022\013\n\003t" +
+      "tl\030\003 \001(\005\022\023\n\013maxVersions\030\004 \001(\005\022\023\n\013compres" +
+      "sion\030\005 \001(\t\032(\n\tAttribute\022\014\n\004name\030\001 \002(\t\022\r\n" +
+      "\005value\030\002 \002(\t"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_descriptor,
+              new java.lang.String[] { "Name", "Attrs", "Ttl", "MaxVersions", "Compression", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Builder.class);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_descriptor =
+            internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_descriptor.getNestedTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_ColumnSchema_Attribute_descriptor,
+              new java.lang.String[] { "Name", "Value", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Attribute.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
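Similarly, a short round-trip sketch (again not part of the patch) for the ColumnSchema message above; every builder accessor used here appears in the generated code in this file, and toByteArray() comes from the protobuf base message class.

import org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema;

public class ColumnSchemaSketch {
  public static void main(String[] args) throws Exception {
    // Assemble a column family description using the generated builder.
    ColumnSchema schema = ColumnSchema.newBuilder()
        .setName("info")            // optional string name = 1
        .setTtl(86400)              // optional int32 ttl = 3
        .setMaxVersions(3)          // optional int32 maxVersions = 4
        .setCompression("GZ")       // optional string compression = 5
        .addAttrs(ColumnSchema.Attribute.newBuilder()
            .setName("BLOCKCACHE")  // required string name = 1
            .setValue("true")       // required string value = 2
            .build())
        .build();

    // Serialize and parse back with the generated parseFrom(byte[]) shown above.
    ColumnSchema decoded = ColumnSchema.parseFrom(schema.toByteArray());
    System.out.println(decoded.getName() + " maxVersions=" + decoded.getMaxVersions());
  }
}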
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
new file mode 100644
index 0000000..f3e6b88
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
@@ -0,0 +1,662 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: ScannerMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class ScannerMessage {
+  private ScannerMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class Scanner extends
+      com.google.protobuf.GeneratedMessage {
+    // Use Scanner.newBuilder() to construct.
+    private Scanner() {
+      initFields();
+    }
+    private Scanner(boolean noInit) {}
+    
+    private static final Scanner defaultInstance;
+    public static Scanner getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public Scanner getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_fieldAccessorTable;
+    }
+    
+    // optional bytes startRow = 1;
+    public static final int STARTROW_FIELD_NUMBER = 1;
+    private boolean hasStartRow;
+    private com.google.protobuf.ByteString startRow_ = com.google.protobuf.ByteString.EMPTY;
+    public boolean hasStartRow() { return hasStartRow; }
+    public com.google.protobuf.ByteString getStartRow() { return startRow_; }
+    
+    // optional bytes endRow = 2;
+    public static final int ENDROW_FIELD_NUMBER = 2;
+    private boolean hasEndRow;
+    private com.google.protobuf.ByteString endRow_ = com.google.protobuf.ByteString.EMPTY;
+    public boolean hasEndRow() { return hasEndRow; }
+    public com.google.protobuf.ByteString getEndRow() { return endRow_; }
+    
+    // repeated bytes columns = 3;
+    public static final int COLUMNS_FIELD_NUMBER = 3;
+    private java.util.List<com.google.protobuf.ByteString> columns_ =
+      java.util.Collections.emptyList();
+    public java.util.List<com.google.protobuf.ByteString> getColumnsList() {
+      return columns_;
+    }
+    public int getColumnsCount() { return columns_.size(); }
+    public com.google.protobuf.ByteString getColumns(int index) {
+      return columns_.get(index);
+    }
+    
+    // optional int32 batch = 4;
+    public static final int BATCH_FIELD_NUMBER = 4;
+    private boolean hasBatch;
+    private int batch_ = 0;
+    public boolean hasBatch() { return hasBatch; }
+    public int getBatch() { return batch_; }
+    
+    // optional int64 startTime = 5;
+    public static final int STARTTIME_FIELD_NUMBER = 5;
+    private boolean hasStartTime;
+    private long startTime_ = 0L;
+    public boolean hasStartTime() { return hasStartTime; }
+    public long getStartTime() { return startTime_; }
+    
+    // optional int64 endTime = 6;
+    public static final int ENDTIME_FIELD_NUMBER = 6;
+    private boolean hasEndTime;
+    private long endTime_ = 0L;
+    public boolean hasEndTime() { return hasEndTime; }
+    public long getEndTime() { return endTime_; }
+    
+    // optional int32 maxVersions = 7;
+    public static final int MAXVERSIONS_FIELD_NUMBER = 7;
+    private boolean hasMaxVersions;
+    private int maxVersions_ = 0;
+    public boolean hasMaxVersions() { return hasMaxVersions; }
+    public int getMaxVersions() { return maxVersions_; }
+    
+    // optional string filter = 8;
+    public static final int FILTER_FIELD_NUMBER = 8;
+    private boolean hasFilter;
+    private java.lang.String filter_ = "";
+    public boolean hasFilter() { return hasFilter; }
+    public java.lang.String getFilter() { return filter_; }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      if (hasStartRow()) {
+        output.writeBytes(1, getStartRow());
+      }
+      if (hasEndRow()) {
+        output.writeBytes(2, getEndRow());
+      }
+      for (com.google.protobuf.ByteString element : getColumnsList()) {
+        output.writeBytes(3, element);
+      }
+      if (hasBatch()) {
+        output.writeInt32(4, getBatch());
+      }
+      if (hasStartTime()) {
+        output.writeInt64(5, getStartTime());
+      }
+      if (hasEndTime()) {
+        output.writeInt64(6, getEndTime());
+      }
+      if (hasMaxVersions()) {
+        output.writeInt32(7, getMaxVersions());
+      }
+      if (hasFilter()) {
+        output.writeString(8, getFilter());
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      if (hasStartRow()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBytesSize(1, getStartRow());
+      }
+      if (hasEndRow()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBytesSize(2, getEndRow());
+      }
+      {
+        int dataSize = 0;
+        for (com.google.protobuf.ByteString element : getColumnsList()) {
+          dataSize += com.google.protobuf.CodedOutputStream
+            .computeBytesSizeNoTag(element);
+        }
+        size += dataSize;
+        size += 1 * getColumnsList().size();
+      }
+      if (hasBatch()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt32Size(4, getBatch());
+      }
+      if (hasStartTime()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt64Size(5, getStartTime());
+      }
+      if (hasEndTime()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt64Size(6, getEndTime());
+      }
+      if (hasMaxVersions()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt32Size(7, getMaxVersions());
+      }
+      if (hasFilter()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(8, getFilter());
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.columns_ != java.util.Collections.EMPTY_LIST) {
+          result.columns_ =
+            java.util.Collections.unmodifiableList(result.columns_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner.getDefaultInstance()) return this;
+        if (other.hasStartRow()) {
+          setStartRow(other.getStartRow());
+        }
+        if (other.hasEndRow()) {
+          setEndRow(other.getEndRow());
+        }
+        if (!other.columns_.isEmpty()) {
+          if (result.columns_.isEmpty()) {
+            result.columns_ = new java.util.ArrayList<com.google.protobuf.ByteString>();
+          }
+          result.columns_.addAll(other.columns_);
+        }
+        if (other.hasBatch()) {
+          setBatch(other.getBatch());
+        }
+        if (other.hasStartTime()) {
+          setStartTime(other.getStartTime());
+        }
+        if (other.hasEndTime()) {
+          setEndTime(other.getEndTime());
+        }
+        if (other.hasMaxVersions()) {
+          setMaxVersions(other.getMaxVersions());
+        }
+        if (other.hasFilter()) {
+          setFilter(other.getFilter());
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              setStartRow(input.readBytes());
+              break;
+            }
+            case 18: {
+              setEndRow(input.readBytes());
+              break;
+            }
+            case 26: {
+              addColumns(input.readBytes());
+              break;
+            }
+            case 32: {
+              setBatch(input.readInt32());
+              break;
+            }
+            case 40: {
+              setStartTime(input.readInt64());
+              break;
+            }
+            case 48: {
+              setEndTime(input.readInt64());
+              break;
+            }
+            case 56: {
+              setMaxVersions(input.readInt32());
+              break;
+            }
+            case 66: {
+              setFilter(input.readString());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // optional bytes startRow = 1;
+      public boolean hasStartRow() {
+        return result.hasStartRow();
+      }
+      public com.google.protobuf.ByteString getStartRow() {
+        return result.getStartRow();
+      }
+      public Builder setStartRow(com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasStartRow = true;
+        result.startRow_ = value;
+        return this;
+      }
+      public Builder clearStartRow() {
+        result.hasStartRow = false;
+        result.startRow_ = getDefaultInstance().getStartRow();
+        return this;
+      }
+      
+      // optional bytes endRow = 2;
+      public boolean hasEndRow() {
+        return result.hasEndRow();
+      }
+      public com.google.protobuf.ByteString getEndRow() {
+        return result.getEndRow();
+      }
+      public Builder setEndRow(com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasEndRow = true;
+        result.endRow_ = value;
+        return this;
+      }
+      public Builder clearEndRow() {
+        result.hasEndRow = false;
+        result.endRow_ = getDefaultInstance().getEndRow();
+        return this;
+      }
+      
+      // repeated bytes columns = 3;
+      public java.util.List<com.google.protobuf.ByteString> getColumnsList() {
+        return java.util.Collections.unmodifiableList(result.columns_);
+      }
+      public int getColumnsCount() {
+        return result.getColumnsCount();
+      }
+      public com.google.protobuf.ByteString getColumns(int index) {
+        return result.getColumns(index);
+      }
+      public Builder setColumns(int index, com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.columns_.set(index, value);
+        return this;
+      }
+      public Builder addColumns(com.google.protobuf.ByteString value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.columns_.isEmpty()) {
+          result.columns_ = new java.util.ArrayList<com.google.protobuf.ByteString>();
+        }
+        result.columns_.add(value);
+        return this;
+      }
+      public Builder addAllColumns(
+          java.lang.Iterable<? extends com.google.protobuf.ByteString> values) {
+        if (result.columns_.isEmpty()) {
+          result.columns_ = new java.util.ArrayList<com.google.protobuf.ByteString>();
+        }
+        super.addAll(values, result.columns_);
+        return this;
+      }
+      public Builder clearColumns() {
+        result.columns_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // optional int32 batch = 4;
+      public boolean hasBatch() {
+        return result.hasBatch();
+      }
+      public int getBatch() {
+        return result.getBatch();
+      }
+      public Builder setBatch(int value) {
+        result.hasBatch = true;
+        result.batch_ = value;
+        return this;
+      }
+      public Builder clearBatch() {
+        result.hasBatch = false;
+        result.batch_ = 0;
+        return this;
+      }
+      
+      // optional int64 startTime = 5;
+      public boolean hasStartTime() {
+        return result.hasStartTime();
+      }
+      public long getStartTime() {
+        return result.getStartTime();
+      }
+      public Builder setStartTime(long value) {
+        result.hasStartTime = true;
+        result.startTime_ = value;
+        return this;
+      }
+      public Builder clearStartTime() {
+        result.hasStartTime = false;
+        result.startTime_ = 0L;
+        return this;
+      }
+      
+      // optional int64 endTime = 6;
+      public boolean hasEndTime() {
+        return result.hasEndTime();
+      }
+      public long getEndTime() {
+        return result.getEndTime();
+      }
+      public Builder setEndTime(long value) {
+        result.hasEndTime = true;
+        result.endTime_ = value;
+        return this;
+      }
+      public Builder clearEndTime() {
+        result.hasEndTime = false;
+        result.endTime_ = 0L;
+        return this;
+      }
+      
+      // optional int32 maxVersions = 7;
+      public boolean hasMaxVersions() {
+        return result.hasMaxVersions();
+      }
+      public int getMaxVersions() {
+        return result.getMaxVersions();
+      }
+      public Builder setMaxVersions(int value) {
+        result.hasMaxVersions = true;
+        result.maxVersions_ = value;
+        return this;
+      }
+      public Builder clearMaxVersions() {
+        result.hasMaxVersions = false;
+        result.maxVersions_ = 0;
+        return this;
+      }
+      
+      // optional string filter = 8;
+      public boolean hasFilter() {
+        return result.hasFilter();
+      }
+      public java.lang.String getFilter() {
+        return result.getFilter();
+      }
+      public Builder setFilter(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasFilter = true;
+        result.filter_ = value;
+        return this;
+      }
+      public Builder clearFilter() {
+        result.hasFilter = false;
+        result.filter_ = getDefaultInstance().getFilter();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.Scanner)
+    }
+    
+    static {
+      defaultInstance = new Scanner(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.Scanner)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\024ScannerMessage.proto\022/org.apache.hadoo" +
+      "p.hbase.rest.protobuf.generated\"\224\001\n\007Scan" +
+      "ner\022\020\n\010startRow\030\001 \001(\014\022\016\n\006endRow\030\002 \001(\014\022\017\n" +
+      "\007columns\030\003 \003(\014\022\r\n\005batch\030\004 \001(\005\022\021\n\tstartTi" +
+      "me\030\005 \001(\003\022\017\n\007endTime\030\006 \001(\003\022\023\n\013maxVersions" +
+      "\030\007 \001(\005\022\016\n\006filter\030\010 \001(\t"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Scanner_descriptor,
+              new java.lang.String[] { "StartRow", "EndRow", "Columns", "Batch", "StartTime", "EndTime", "MaxVersions", "Filter", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
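Editor's note on the generated Scanner message above: the Builder exposes setters for each proto field (startRow, endRow, columns, batch, startTime, endTime, maxVersions, filter). The following is a minimal usage sketch, not part of the generated file or of this patch; it assumes protobuf-java 2.x on the classpath and relies on Scanner.parseFrom and toByteArray from the standard generated/base protobuf API, which fall outside the excerpt shown here.

    import com.google.protobuf.ByteString;
    import org.apache.hadoop.hbase.rest.protobuf.generated.ScannerMessage.Scanner;

    public class ScannerMessageSketch {
      public static void main(String[] args) throws Exception {
        // Assemble a Scanner message with the generated Builder shown above.
        Scanner scanner = Scanner.newBuilder()
            .setStartRow(ByteString.copyFromUtf8("row-000"))  // optional bytes startRow = 1
            .setEndRow(ByteString.copyFromUtf8("row-999"))    // optional bytes endRow = 2
            .addColumns(ByteString.copyFromUtf8("cf:qual"))   // repeated bytes columns = 3
            .setBatch(100)                                    // optional int32 batch = 4
            .setMaxVersions(1)                                // optional int32 maxVersions = 7
            .build();

        // Round-trip through the wire form; all fields are optional, so any
        // subset may be set.
        byte[] wire = scanner.toByteArray();
        Scanner copy = Scanner.parseFrom(wire);
        System.out.println(copy.getBatch() + " columns=" + copy.getColumnsCount());
      }
    }
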
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
new file mode 100644
index 0000000..ed3e07f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/StorageClusterStatusMessage.java
@@ -0,0 +1,1638 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: StorageClusterStatusMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class StorageClusterStatusMessage {
+  private StorageClusterStatusMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class StorageClusterStatus extends
+      com.google.protobuf.GeneratedMessage {
+    // Use StorageClusterStatus.newBuilder() to construct.
+    private StorageClusterStatus() {
+      initFields();
+    }
+    private StorageClusterStatus(boolean noInit) {}
+    
+    private static final StorageClusterStatus defaultInstance;
+    public static StorageClusterStatus getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public StorageClusterStatus getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_fieldAccessorTable;
+    }
+    
+    public static final class Region extends
+        com.google.protobuf.GeneratedMessage {
+      // Use Region.newBuilder() to construct.
+      private Region() {
+        initFields();
+      }
+      private Region(boolean noInit) {}
+      
+      private static final Region defaultInstance;
+      public static Region getDefaultInstance() {
+        return defaultInstance;
+      }
+      
+      public Region getDefaultInstanceForType() {
+        return defaultInstance;
+      }
+      
+      public static final com.google.protobuf.Descriptors.Descriptor
+          getDescriptor() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_descriptor;
+      }
+      
+      protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+          internalGetFieldAccessorTable() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_fieldAccessorTable;
+      }
+      
+      // required bytes name = 1;
+      public static final int NAME_FIELD_NUMBER = 1;
+      private boolean hasName;
+      private com.google.protobuf.ByteString name_ = com.google.protobuf.ByteString.EMPTY;
+      public boolean hasName() { return hasName; }
+      public com.google.protobuf.ByteString getName() { return name_; }
+      
+      // optional int32 stores = 2;
+      public static final int STORES_FIELD_NUMBER = 2;
+      private boolean hasStores;
+      private int stores_ = 0;
+      public boolean hasStores() { return hasStores; }
+      public int getStores() { return stores_; }
+      
+      // optional int32 storefiles = 3;
+      public static final int STOREFILES_FIELD_NUMBER = 3;
+      private boolean hasStorefiles;
+      private int storefiles_ = 0;
+      public boolean hasStorefiles() { return hasStorefiles; }
+      public int getStorefiles() { return storefiles_; }
+      
+      // optional int32 storefileSizeMB = 4;
+      public static final int STOREFILESIZEMB_FIELD_NUMBER = 4;
+      private boolean hasStorefileSizeMB;
+      private int storefileSizeMB_ = 0;
+      public boolean hasStorefileSizeMB() { return hasStorefileSizeMB; }
+      public int getStorefileSizeMB() { return storefileSizeMB_; }
+      
+      // optional int32 memstoreSizeMB = 5;
+      public static final int MEMSTORESIZEMB_FIELD_NUMBER = 5;
+      private boolean hasMemstoreSizeMB;
+      private int memstoreSizeMB_ = 0;
+      public boolean hasMemstoreSizeMB() { return hasMemstoreSizeMB; }
+      public int getMemstoreSizeMB() { return memstoreSizeMB_; }
+      
+      // optional int32 storefileIndexSizeMB = 6;
+      public static final int STOREFILEINDEXSIZEMB_FIELD_NUMBER = 6;
+      private boolean hasStorefileIndexSizeMB;
+      private int storefileIndexSizeMB_ = 0;
+      public boolean hasStorefileIndexSizeMB() { return hasStorefileIndexSizeMB; }
+      public int getStorefileIndexSizeMB() { return storefileIndexSizeMB_; }
+      
+      private void initFields() {
+      }
+      public final boolean isInitialized() {
+        if (!hasName) return false;
+        return true;
+      }
+      
+      public void writeTo(com.google.protobuf.CodedOutputStream output)
+                          throws java.io.IOException {
+        getSerializedSize();
+        if (hasName()) {
+          output.writeBytes(1, getName());
+        }
+        if (hasStores()) {
+          output.writeInt32(2, getStores());
+        }
+        if (hasStorefiles()) {
+          output.writeInt32(3, getStorefiles());
+        }
+        if (hasStorefileSizeMB()) {
+          output.writeInt32(4, getStorefileSizeMB());
+        }
+        if (hasMemstoreSizeMB()) {
+          output.writeInt32(5, getMemstoreSizeMB());
+        }
+        if (hasStorefileIndexSizeMB()) {
+          output.writeInt32(6, getStorefileIndexSizeMB());
+        }
+        getUnknownFields().writeTo(output);
+      }
+      
+      private int memoizedSerializedSize = -1;
+      public int getSerializedSize() {
+        int size = memoizedSerializedSize;
+        if (size != -1) return size;
+      
+        size = 0;
+        if (hasName()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeBytesSize(1, getName());
+        }
+        if (hasStores()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(2, getStores());
+        }
+        if (hasStorefiles()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(3, getStorefiles());
+        }
+        if (hasStorefileSizeMB()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(4, getStorefileSizeMB());
+        }
+        if (hasMemstoreSizeMB()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(5, getMemstoreSizeMB());
+        }
+        if (hasStorefileIndexSizeMB()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(6, getStorefileIndexSizeMB());
+        }
+        size += getUnknownFields().getSerializedSize();
+        memoizedSerializedSize = size;
+        return size;
+      }
+      
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(
+          com.google.protobuf.ByteString data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(
+          com.google.protobuf.ByteString data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(byte[] data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(
+          byte[] data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseDelimitedFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseDelimitedFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(
+          com.google.protobuf.CodedInputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region parseFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      
+      public static Builder newBuilder() { return Builder.create(); }
+      public Builder newBuilderForType() { return newBuilder(); }
+      public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region prototype) {
+        return newBuilder().mergeFrom(prototype);
+      }
+      public Builder toBuilder() { return newBuilder(this); }
+      
+      public static final class Builder extends
+          com.google.protobuf.GeneratedMessage.Builder<Builder> {
+        private org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region result;
+        
+        // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.newBuilder()
+        private Builder() {}
+        
+        private static Builder create() {
+          Builder builder = new Builder();
+          builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region();
+          return builder;
+        }
+        
+        protected org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region internalGetResult() {
+          return result;
+        }
+        
+        public Builder clear() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "Cannot call clear() after build().");
+          }
+          result = new org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region();
+          return this;
+        }
+        
+        public Builder clone() {
+          return create().mergeFrom(result);
+        }
+        
+        public com.google.protobuf.Descriptors.Descriptor
+            getDescriptorForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.getDescriptor();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region getDefaultInstanceForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.getDefaultInstance();
+        }
+        
+        public boolean isInitialized() {
+          return result.isInitialized();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region build() {
+          if (result != null && !isInitialized()) {
+            throw newUninitializedMessageException(result);
+          }
+          return buildPartial();
+        }
+        
+        private org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region buildParsed()
+            throws com.google.protobuf.InvalidProtocolBufferException {
+          if (!isInitialized()) {
+            throw newUninitializedMessageException(
+              result).asInvalidProtocolBufferException();
+          }
+          return buildPartial();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region buildPartial() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "build() has already been called on this Builder.");
+          }
+          org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region returnMe = result;
+          result = null;
+          return returnMe;
+        }
+        
+        public Builder mergeFrom(com.google.protobuf.Message other) {
+          if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region) {
+            return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region)other);
+          } else {
+            super.mergeFrom(other);
+            return this;
+          }
+        }
+        
+        public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region other) {
+          if (other == org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.getDefaultInstance()) return this;
+          if (other.hasName()) {
+            setName(other.getName());
+          }
+          if (other.hasStores()) {
+            setStores(other.getStores());
+          }
+          if (other.hasStorefiles()) {
+            setStorefiles(other.getStorefiles());
+          }
+          if (other.hasStorefileSizeMB()) {
+            setStorefileSizeMB(other.getStorefileSizeMB());
+          }
+          if (other.hasMemstoreSizeMB()) {
+            setMemstoreSizeMB(other.getMemstoreSizeMB());
+          }
+          if (other.hasStorefileIndexSizeMB()) {
+            setStorefileIndexSizeMB(other.getStorefileIndexSizeMB());
+          }
+          this.mergeUnknownFields(other.getUnknownFields());
+          return this;
+        }
+        
+        public Builder mergeFrom(
+            com.google.protobuf.CodedInputStream input,
+            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+            throws java.io.IOException {
+          com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+            com.google.protobuf.UnknownFieldSet.newBuilder(
+              this.getUnknownFields());
+          while (true) {
+            int tag = input.readTag();
+            switch (tag) {
+              case 0:
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              default: {
+                if (!parseUnknownField(input, unknownFields,
+                                       extensionRegistry, tag)) {
+                  this.setUnknownFields(unknownFields.build());
+                  return this;
+                }
+                break;
+              }
+              case 10: {
+                setName(input.readBytes());
+                break;
+              }
+              case 16: {
+                setStores(input.readInt32());
+                break;
+              }
+              case 24: {
+                setStorefiles(input.readInt32());
+                break;
+              }
+              case 32: {
+                setStorefileSizeMB(input.readInt32());
+                break;
+              }
+              case 40: {
+                setMemstoreSizeMB(input.readInt32());
+                break;
+              }
+              case 48: {
+                setStorefileIndexSizeMB(input.readInt32());
+                break;
+              }
+            }
+          }
+        }
+        
+        
+        // required bytes name = 1;
+        public boolean hasName() {
+          return result.hasName();
+        }
+        public com.google.protobuf.ByteString getName() {
+          return result.getName();
+        }
+        public Builder setName(com.google.protobuf.ByteString value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasName = true;
+          result.name_ = value;
+          return this;
+        }
+        public Builder clearName() {
+          result.hasName = false;
+          result.name_ = getDefaultInstance().getName();
+          return this;
+        }
+        
+        // optional int32 stores = 2;
+        public boolean hasStores() {
+          return result.hasStores();
+        }
+        public int getStores() {
+          return result.getStores();
+        }
+        public Builder setStores(int value) {
+          result.hasStores = true;
+          result.stores_ = value;
+          return this;
+        }
+        public Builder clearStores() {
+          result.hasStores = false;
+          result.stores_ = 0;
+          return this;
+        }
+        
+        // optional int32 storefiles = 3;
+        public boolean hasStorefiles() {
+          return result.hasStorefiles();
+        }
+        public int getStorefiles() {
+          return result.getStorefiles();
+        }
+        public Builder setStorefiles(int value) {
+          result.hasStorefiles = true;
+          result.storefiles_ = value;
+          return this;
+        }
+        public Builder clearStorefiles() {
+          result.hasStorefiles = false;
+          result.storefiles_ = 0;
+          return this;
+        }
+        
+        // optional int32 storefileSizeMB = 4;
+        public boolean hasStorefileSizeMB() {
+          return result.hasStorefileSizeMB();
+        }
+        public int getStorefileSizeMB() {
+          return result.getStorefileSizeMB();
+        }
+        public Builder setStorefileSizeMB(int value) {
+          result.hasStorefileSizeMB = true;
+          result.storefileSizeMB_ = value;
+          return this;
+        }
+        public Builder clearStorefileSizeMB() {
+          result.hasStorefileSizeMB = false;
+          result.storefileSizeMB_ = 0;
+          return this;
+        }
+        
+        // optional int32 memstoreSizeMB = 5;
+        public boolean hasMemstoreSizeMB() {
+          return result.hasMemstoreSizeMB();
+        }
+        public int getMemstoreSizeMB() {
+          return result.getMemstoreSizeMB();
+        }
+        public Builder setMemstoreSizeMB(int value) {
+          result.hasMemstoreSizeMB = true;
+          result.memstoreSizeMB_ = value;
+          return this;
+        }
+        public Builder clearMemstoreSizeMB() {
+          result.hasMemstoreSizeMB = false;
+          result.memstoreSizeMB_ = 0;
+          return this;
+        }
+        
+        // optional int32 storefileIndexSizeMB = 6;
+        public boolean hasStorefileIndexSizeMB() {
+          return result.hasStorefileIndexSizeMB();
+        }
+        public int getStorefileIndexSizeMB() {
+          return result.getStorefileIndexSizeMB();
+        }
+        public Builder setStorefileIndexSizeMB(int value) {
+          result.hasStorefileIndexSizeMB = true;
+          result.storefileIndexSizeMB_ = value;
+          return this;
+        }
+        public Builder clearStorefileIndexSizeMB() {
+          result.hasStorefileIndexSizeMB = false;
+          result.storefileIndexSizeMB_ = 0;
+          return this;
+        }
+        
+        // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Region)
+      }
+      
+      static {
+        defaultInstance = new Region(true);
+        org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internalForceInit();
+        defaultInstance.initFields();
+      }
+      
+      // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Region)
+    }
+    
+    public static final class Node extends
+        com.google.protobuf.GeneratedMessage {
+      // Use Node.newBuilder() to construct.
+      private Node() {
+        initFields();
+      }
+      private Node(boolean noInit) {}
+      
+      private static final Node defaultInstance;
+      public static Node getDefaultInstance() {
+        return defaultInstance;
+      }
+      
+      public Node getDefaultInstanceForType() {
+        return defaultInstance;
+      }
+      
+      public static final com.google.protobuf.Descriptors.Descriptor
+          getDescriptor() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_descriptor;
+      }
+      
+      protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+          internalGetFieldAccessorTable() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_fieldAccessorTable;
+      }
+      
+      // required string name = 1;
+      public static final int NAME_FIELD_NUMBER = 1;
+      private boolean hasName;
+      private java.lang.String name_ = "";
+      public boolean hasName() { return hasName; }
+      public java.lang.String getName() { return name_; }
+      
+      // optional int64 startCode = 2;
+      public static final int STARTCODE_FIELD_NUMBER = 2;
+      private boolean hasStartCode;
+      private long startCode_ = 0L;
+      public boolean hasStartCode() { return hasStartCode; }
+      public long getStartCode() { return startCode_; }
+      
+      // optional int32 requests = 3;
+      public static final int REQUESTS_FIELD_NUMBER = 3;
+      private boolean hasRequests;
+      private int requests_ = 0;
+      public boolean hasRequests() { return hasRequests; }
+      public int getRequests() { return requests_; }
+      
+      // optional int32 heapSizeMB = 4;
+      public static final int HEAPSIZEMB_FIELD_NUMBER = 4;
+      private boolean hasHeapSizeMB;
+      private int heapSizeMB_ = 0;
+      public boolean hasHeapSizeMB() { return hasHeapSizeMB; }
+      public int getHeapSizeMB() { return heapSizeMB_; }
+      
+      // optional int32 maxHeapSizeMB = 5;
+      public static final int MAXHEAPSIZEMB_FIELD_NUMBER = 5;
+      private boolean hasMaxHeapSizeMB;
+      private int maxHeapSizeMB_ = 0;
+      public boolean hasMaxHeapSizeMB() { return hasMaxHeapSizeMB; }
+      public int getMaxHeapSizeMB() { return maxHeapSizeMB_; }
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Region regions = 6;
+      public static final int REGIONS_FIELD_NUMBER = 6;
+      private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region> regions_ =
+        java.util.Collections.emptyList();
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region> getRegionsList() {
+        return regions_;
+      }
+      public int getRegionsCount() { return regions_.size(); }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region getRegions(int index) {
+        return regions_.get(index);
+      }
+      
+      private void initFields() {
+      }
+      public final boolean isInitialized() {
+        if (!hasName) return false;
+        for (org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region element : getRegionsList()) {
+          if (!element.isInitialized()) return false;
+        }
+        return true;
+      }
+      
+      public void writeTo(com.google.protobuf.CodedOutputStream output)
+                          throws java.io.IOException {
+        getSerializedSize();
+        if (hasName()) {
+          output.writeString(1, getName());
+        }
+        if (hasStartCode()) {
+          output.writeInt64(2, getStartCode());
+        }
+        if (hasRequests()) {
+          output.writeInt32(3, getRequests());
+        }
+        if (hasHeapSizeMB()) {
+          output.writeInt32(4, getHeapSizeMB());
+        }
+        if (hasMaxHeapSizeMB()) {
+          output.writeInt32(5, getMaxHeapSizeMB());
+        }
+        for (org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region element : getRegionsList()) {
+          output.writeMessage(6, element);
+        }
+        getUnknownFields().writeTo(output);
+      }
+      
+      private int memoizedSerializedSize = -1;
+      public int getSerializedSize() {
+        int size = memoizedSerializedSize;
+        if (size != -1) return size;
+      
+        size = 0;
+        if (hasName()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(1, getName());
+        }
+        if (hasStartCode()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt64Size(2, getStartCode());
+        }
+        if (hasRequests()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(3, getRequests());
+        }
+        if (hasHeapSizeMB()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(4, getHeapSizeMB());
+        }
+        if (hasMaxHeapSizeMB()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt32Size(5, getMaxHeapSizeMB());
+        }
+        for (org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region element : getRegionsList()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeMessageSize(6, element);
+        }
+        size += getUnknownFields().getSerializedSize();
+        memoizedSerializedSize = size;
+        return size;
+      }
+      
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(
+          com.google.protobuf.ByteString data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(
+          com.google.protobuf.ByteString data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(byte[] data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(
+          byte[] data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseDelimitedFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseDelimitedFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(
+          com.google.protobuf.CodedInputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node parseFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      
+      public static Builder newBuilder() { return Builder.create(); }
+      public Builder newBuilderForType() { return newBuilder(); }
+      public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node prototype) {
+        return newBuilder().mergeFrom(prototype);
+      }
+      public Builder toBuilder() { return newBuilder(this); }
+      
+      public static final class Builder extends
+          com.google.protobuf.GeneratedMessage.Builder<Builder> {
+        private org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node result;
+        
+        // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.newBuilder()
+        private Builder() {}
+        
+        private static Builder create() {
+          Builder builder = new Builder();
+          builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node();
+          return builder;
+        }
+        
+        protected org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node internalGetResult() {
+          return result;
+        }
+        
+        public Builder clear() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "Cannot call clear() after build().");
+          }
+          result = new org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node();
+          return this;
+        }
+        
+        public Builder clone() {
+          return create().mergeFrom(result);
+        }
+        
+        public com.google.protobuf.Descriptors.Descriptor
+            getDescriptorForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.getDescriptor();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node getDefaultInstanceForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.getDefaultInstance();
+        }
+        
+        public boolean isInitialized() {
+          return result.isInitialized();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node build() {
+          if (result != null && !isInitialized()) {
+            throw newUninitializedMessageException(result);
+          }
+          return buildPartial();
+        }
+        
+        private org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node buildParsed()
+            throws com.google.protobuf.InvalidProtocolBufferException {
+          if (!isInitialized()) {
+            throw newUninitializedMessageException(
+              result).asInvalidProtocolBufferException();
+          }
+          return buildPartial();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node buildPartial() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "build() has already been called on this Builder.");
+          }
+          if (result.regions_ != java.util.Collections.EMPTY_LIST) {
+            result.regions_ =
+              java.util.Collections.unmodifiableList(result.regions_);
+          }
+          org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node returnMe = result;
+          result = null;
+          return returnMe;
+        }
+        
+        public Builder mergeFrom(com.google.protobuf.Message other) {
+          if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node) {
+            return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node)other);
+          } else {
+            super.mergeFrom(other);
+            return this;
+          }
+        }
+        
+        public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node other) {
+          if (other == org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.getDefaultInstance()) return this;
+          if (other.hasName()) {
+            setName(other.getName());
+          }
+          if (other.hasStartCode()) {
+            setStartCode(other.getStartCode());
+          }
+          if (other.hasRequests()) {
+            setRequests(other.getRequests());
+          }
+          if (other.hasHeapSizeMB()) {
+            setHeapSizeMB(other.getHeapSizeMB());
+          }
+          if (other.hasMaxHeapSizeMB()) {
+            setMaxHeapSizeMB(other.getMaxHeapSizeMB());
+          }
+          if (!other.regions_.isEmpty()) {
+            if (result.regions_.isEmpty()) {
+              result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region>();
+            }
+            result.regions_.addAll(other.regions_);
+          }
+          this.mergeUnknownFields(other.getUnknownFields());
+          return this;
+        }
+        
+        public Builder mergeFrom(
+            com.google.protobuf.CodedInputStream input,
+            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+            throws java.io.IOException {
+          com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+            com.google.protobuf.UnknownFieldSet.newBuilder(
+              this.getUnknownFields());
+          while (true) {
+            int tag = input.readTag();
+            switch (tag) {
+              case 0:
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              default: {
+                if (!parseUnknownField(input, unknownFields,
+                                       extensionRegistry, tag)) {
+                  this.setUnknownFields(unknownFields.build());
+                  return this;
+                }
+                break;
+              }
+              case 10: {
+                setName(input.readString());
+                break;
+              }
+              case 16: {
+                setStartCode(input.readInt64());
+                break;
+              }
+              case 24: {
+                setRequests(input.readInt32());
+                break;
+              }
+              case 32: {
+                setHeapSizeMB(input.readInt32());
+                break;
+              }
+              case 40: {
+                setMaxHeapSizeMB(input.readInt32());
+                break;
+              }
+              case 50: {
+                org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.newBuilder();
+                input.readMessage(subBuilder, extensionRegistry);
+                addRegions(subBuilder.buildPartial());
+                break;
+              }
+            }
+          }
+        }
+        
+        
+        // required string name = 1;
+        public boolean hasName() {
+          return result.hasName();
+        }
+        public java.lang.String getName() {
+          return result.getName();
+        }
+        public Builder setName(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasName = true;
+          result.name_ = value;
+          return this;
+        }
+        public Builder clearName() {
+          result.hasName = false;
+          result.name_ = getDefaultInstance().getName();
+          return this;
+        }
+        
+        // optional int64 startCode = 2;
+        public boolean hasStartCode() {
+          return result.hasStartCode();
+        }
+        public long getStartCode() {
+          return result.getStartCode();
+        }
+        public Builder setStartCode(long value) {
+          result.hasStartCode = true;
+          result.startCode_ = value;
+          return this;
+        }
+        public Builder clearStartCode() {
+          result.hasStartCode = false;
+          result.startCode_ = 0L;
+          return this;
+        }
+        
+        // optional int32 requests = 3;
+        public boolean hasRequests() {
+          return result.hasRequests();
+        }
+        public int getRequests() {
+          return result.getRequests();
+        }
+        public Builder setRequests(int value) {
+          result.hasRequests = true;
+          result.requests_ = value;
+          return this;
+        }
+        public Builder clearRequests() {
+          result.hasRequests = false;
+          result.requests_ = 0;
+          return this;
+        }
+        
+        // optional int32 heapSizeMB = 4;
+        public boolean hasHeapSizeMB() {
+          return result.hasHeapSizeMB();
+        }
+        public int getHeapSizeMB() {
+          return result.getHeapSizeMB();
+        }
+        public Builder setHeapSizeMB(int value) {
+          result.hasHeapSizeMB = true;
+          result.heapSizeMB_ = value;
+          return this;
+        }
+        public Builder clearHeapSizeMB() {
+          result.hasHeapSizeMB = false;
+          result.heapSizeMB_ = 0;
+          return this;
+        }
+        
+        // optional int32 maxHeapSizeMB = 5;
+        public boolean hasMaxHeapSizeMB() {
+          return result.hasMaxHeapSizeMB();
+        }
+        public int getMaxHeapSizeMB() {
+          return result.getMaxHeapSizeMB();
+        }
+        public Builder setMaxHeapSizeMB(int value) {
+          result.hasMaxHeapSizeMB = true;
+          result.maxHeapSizeMB_ = value;
+          return this;
+        }
+        public Builder clearMaxHeapSizeMB() {
+          result.hasMaxHeapSizeMB = false;
+          result.maxHeapSizeMB_ = 0;
+          return this;
+        }
+        
+        // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Region regions = 6;
+        public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region> getRegionsList() {
+          return java.util.Collections.unmodifiableList(result.regions_);
+        }
+        public int getRegionsCount() {
+          return result.getRegionsCount();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region getRegions(int index) {
+          return result.getRegions(index);
+        }
+        public Builder setRegions(int index, org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.regions_.set(index, value);
+          return this;
+        }
+        public Builder setRegions(int index, org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.Builder builderForValue) {
+          result.regions_.set(index, builderForValue.build());
+          return this;
+        }
+        public Builder addRegions(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          if (result.regions_.isEmpty()) {
+            result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region>();
+          }
+          result.regions_.add(value);
+          return this;
+        }
+        public Builder addRegions(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.Builder builderForValue) {
+          if (result.regions_.isEmpty()) {
+            result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region>();
+          }
+          result.regions_.add(builderForValue.build());
+          return this;
+        }
+        public Builder addAllRegions(
+            java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region> values) {
+          if (result.regions_.isEmpty()) {
+            result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region>();
+          }
+          super.addAll(values, result.regions_);
+          return this;
+        }
+        public Builder clearRegions() {
+          result.regions_ = java.util.Collections.emptyList();
+          return this;
+        }
+        
+        // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Node)
+      }
+      
+      static {
+        defaultInstance = new Node(true);
+        org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internalForceInit();
+        defaultInstance.initFields();
+      }
+      
+      // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Node)
+    }
+    
+    // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Node liveNodes = 1;
+    public static final int LIVENODES_FIELD_NUMBER = 1;
+    private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node> liveNodes_ =
+      java.util.Collections.emptyList();
+    public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node> getLiveNodesList() {
+      return liveNodes_;
+    }
+    public int getLiveNodesCount() { return liveNodes_.size(); }
+    public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node getLiveNodes(int index) {
+      return liveNodes_.get(index);
+    }
+    
+    // repeated string deadNodes = 2;
+    public static final int DEADNODES_FIELD_NUMBER = 2;
+    private java.util.List<java.lang.String> deadNodes_ =
+      java.util.Collections.emptyList();
+    public java.util.List<java.lang.String> getDeadNodesList() {
+      return deadNodes_;
+    }
+    public int getDeadNodesCount() { return deadNodes_.size(); }
+    public java.lang.String getDeadNodes(int index) {
+      return deadNodes_.get(index);
+    }
+    
+    // optional int32 regions = 3;
+    public static final int REGIONS_FIELD_NUMBER = 3;
+    private boolean hasRegions;
+    private int regions_ = 0;
+    public boolean hasRegions() { return hasRegions; }
+    public int getRegions() { return regions_; }
+    
+    // optional int32 requests = 4;
+    public static final int REQUESTS_FIELD_NUMBER = 4;
+    private boolean hasRequests;
+    private int requests_ = 0;
+    public boolean hasRequests() { return hasRequests; }
+    public int getRequests() { return requests_; }
+    
+    // optional double averageLoad = 5;
+    public static final int AVERAGELOAD_FIELD_NUMBER = 5;
+    private boolean hasAverageLoad;
+    private double averageLoad_ = 0D;
+    public boolean hasAverageLoad() { return hasAverageLoad; }
+    public double getAverageLoad() { return averageLoad_; }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node element : getLiveNodesList()) {
+        if (!element.isInitialized()) return false;
+      }
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node element : getLiveNodesList()) {
+        output.writeMessage(1, element);
+      }
+      for (java.lang.String element : getDeadNodesList()) {
+        output.writeString(2, element);
+      }
+      if (hasRegions()) {
+        output.writeInt32(3, getRegions());
+      }
+      if (hasRequests()) {
+        output.writeInt32(4, getRequests());
+      }
+      if (hasAverageLoad()) {
+        output.writeDouble(5, getAverageLoad());
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node element : getLiveNodesList()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeMessageSize(1, element);
+      }
+      {
+        int dataSize = 0;
+        for (java.lang.String element : getDeadNodesList()) {
+          dataSize += com.google.protobuf.CodedOutputStream
+            .computeStringSizeNoTag(element);
+        }
+        size += dataSize;
+        size += 1 * getDeadNodesList().size();
+      }
+      if (hasRegions()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt32Size(3, getRegions());
+      }
+      if (hasRequests()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeInt32Size(4, getRequests());
+      }
+      if (hasAverageLoad()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeDoubleSize(5, getAverageLoad());
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.liveNodes_ != java.util.Collections.EMPTY_LIST) {
+          result.liveNodes_ =
+            java.util.Collections.unmodifiableList(result.liveNodes_);
+        }
+        if (result.deadNodes_ != java.util.Collections.EMPTY_LIST) {
+          result.deadNodes_ =
+            java.util.Collections.unmodifiableList(result.deadNodes_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.getDefaultInstance()) return this;
+        if (!other.liveNodes_.isEmpty()) {
+          if (result.liveNodes_.isEmpty()) {
+            result.liveNodes_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node>();
+          }
+          result.liveNodes_.addAll(other.liveNodes_);
+        }
+        if (!other.deadNodes_.isEmpty()) {
+          if (result.deadNodes_.isEmpty()) {
+            result.deadNodes_ = new java.util.ArrayList<java.lang.String>();
+          }
+          result.deadNodes_.addAll(other.deadNodes_);
+        }
+        if (other.hasRegions()) {
+          setRegions(other.getRegions());
+        }
+        if (other.hasRequests()) {
+          setRequests(other.getRequests());
+        }
+        if (other.hasAverageLoad()) {
+          setAverageLoad(other.getAverageLoad());
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.newBuilder();
+              input.readMessage(subBuilder, extensionRegistry);
+              addLiveNodes(subBuilder.buildPartial());
+              break;
+            }
+            case 18: {
+              addDeadNodes(input.readString());
+              break;
+            }
+            case 24: {
+              setRegions(input.readInt32());
+              break;
+            }
+            case 32: {
+              setRequests(input.readInt32());
+              break;
+            }
+            case 41: {
+              setAverageLoad(input.readDouble());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus.Node liveNodes = 1;
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node> getLiveNodesList() {
+        return java.util.Collections.unmodifiableList(result.liveNodes_);
+      }
+      public int getLiveNodesCount() {
+        return result.getLiveNodesCount();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node getLiveNodes(int index) {
+        return result.getLiveNodes(index);
+      }
+      public Builder setLiveNodes(int index, org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.liveNodes_.set(index, value);
+        return this;
+      }
+      public Builder setLiveNodes(int index, org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.Builder builderForValue) {
+        result.liveNodes_.set(index, builderForValue.build());
+        return this;
+      }
+      public Builder addLiveNodes(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.liveNodes_.isEmpty()) {
+          result.liveNodes_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node>();
+        }
+        result.liveNodes_.add(value);
+        return this;
+      }
+      public Builder addLiveNodes(org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.Builder builderForValue) {
+        if (result.liveNodes_.isEmpty()) {
+          result.liveNodes_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node>();
+        }
+        result.liveNodes_.add(builderForValue.build());
+        return this;
+      }
+      public Builder addAllLiveNodes(
+          java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node> values) {
+        if (result.liveNodes_.isEmpty()) {
+          result.liveNodes_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node>();
+        }
+        super.addAll(values, result.liveNodes_);
+        return this;
+      }
+      public Builder clearLiveNodes() {
+        result.liveNodes_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // repeated string deadNodes = 2;
+      public java.util.List<java.lang.String> getDeadNodesList() {
+        return java.util.Collections.unmodifiableList(result.deadNodes_);
+      }
+      public int getDeadNodesCount() {
+        return result.getDeadNodesCount();
+      }
+      public java.lang.String getDeadNodes(int index) {
+        return result.getDeadNodes(index);
+      }
+      public Builder setDeadNodes(int index, java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.deadNodes_.set(index, value);
+        return this;
+      }
+      public Builder addDeadNodes(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.deadNodes_.isEmpty()) {
+          result.deadNodes_ = new java.util.ArrayList<java.lang.String>();
+        }
+        result.deadNodes_.add(value);
+        return this;
+      }
+      public Builder addAllDeadNodes(
+          java.lang.Iterable<? extends java.lang.String> values) {
+        if (result.deadNodes_.isEmpty()) {
+          result.deadNodes_ = new java.util.ArrayList<java.lang.String>();
+        }
+        super.addAll(values, result.deadNodes_);
+        return this;
+      }
+      public Builder clearDeadNodes() {
+        result.deadNodes_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // optional int32 regions = 3;
+      public boolean hasRegions() {
+        return result.hasRegions();
+      }
+      public int getRegions() {
+        return result.getRegions();
+      }
+      public Builder setRegions(int value) {
+        result.hasRegions = true;
+        result.regions_ = value;
+        return this;
+      }
+      public Builder clearRegions() {
+        result.hasRegions = false;
+        result.regions_ = 0;
+        return this;
+      }
+      
+      // optional int32 requests = 4;
+      public boolean hasRequests() {
+        return result.hasRequests();
+      }
+      public int getRequests() {
+        return result.getRequests();
+      }
+      public Builder setRequests(int value) {
+        result.hasRequests = true;
+        result.requests_ = value;
+        return this;
+      }
+      public Builder clearRequests() {
+        result.hasRequests = false;
+        result.requests_ = 0;
+        return this;
+      }
+      
+      // optional double averageLoad = 5;
+      public boolean hasAverageLoad() {
+        return result.hasAverageLoad();
+      }
+      public double getAverageLoad() {
+        return result.getAverageLoad();
+      }
+      public Builder setAverageLoad(double value) {
+        result.hasAverageLoad = true;
+        result.averageLoad_ = value;
+        return this;
+      }
+      public Builder clearAverageLoad() {
+        result.hasAverageLoad = false;
+        result.averageLoad_ = 0D;
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus)
+    }
+    
+    static {
+      defaultInstance = new StorageClusterStatus(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatus)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_fieldAccessorTable;
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_fieldAccessorTable;
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n!StorageClusterStatusMessage.proto\022/org" +
+      ".apache.hadoop.hbase.rest.protobuf.gener" +
+      "ated\"\222\004\n\024StorageClusterStatus\022]\n\tliveNod" +
+      "es\030\001 \003(\0132J.org.apache.hadoop.hbase.rest." +
+      "protobuf.generated.StorageClusterStatus." +
+      "Node\022\021\n\tdeadNodes\030\002 \003(\t\022\017\n\007regions\030\003 \001(\005" +
+      "\022\020\n\010requests\030\004 \001(\005\022\023\n\013averageLoad\030\005 \001(\001\032" +
+      "\211\001\n\006Region\022\014\n\004name\030\001 \002(\014\022\016\n\006stores\030\002 \001(\005" +
+      "\022\022\n\nstorefiles\030\003 \001(\005\022\027\n\017storefileSizeMB\030" +
+      "\004 \001(\005\022\026\n\016memstoreSizeMB\030\005 \001(\005\022\034\n\024storefi",
+      "leIndexSizeMB\030\006 \001(\005\032\303\001\n\004Node\022\014\n\004name\030\001 \002" +
+      "(\t\022\021\n\tstartCode\030\002 \001(\003\022\020\n\010requests\030\003 \001(\005\022" +
+      "\022\n\nheapSizeMB\030\004 \001(\005\022\025\n\rmaxHeapSizeMB\030\005 \001" +
+      "(\005\022]\n\007regions\030\006 \003(\0132L.org.apache.hadoop." +
+      "hbase.rest.protobuf.generated.StorageClu" +
+      "sterStatus.Region"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_descriptor,
+              new java.lang.String[] { "LiveNodes", "DeadNodes", "Regions", "Requests", "AverageLoad", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Builder.class);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_descriptor =
+            internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_descriptor.getNestedTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Region_descriptor,
+              new java.lang.String[] { "Name", "Stores", "Storefiles", "StorefileSizeMB", "MemstoreSizeMB", "StorefileIndexSizeMB", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Region.Builder.class);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_descriptor =
+            internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_descriptor.getNestedTypes().get(1);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_StorageClusterStatus_Node_descriptor,
+              new java.lang.String[] { "Name", "StartCode", "Requests", "HeapSizeMB", "MaxHeapSizeMB", "Regions", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus.Node.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
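
The generated StorageClusterStatusMessage class above follows the standard protobuf 2.x builder pattern: callers obtain a Builder via newBuilder(), populate fields with the set*/add* methods, call build(), and then serialize through writeTo() or read back with one of the parseFrom() overloads. A minimal usage sketch follows; it is illustrative only and not part of the patch, and the host name and counts in it are made-up values.

// Illustrative sketch (assumed values): exercises the Builder API and the
// byte[] parseFrom() overload defined in the generated file above.
import org.apache.hadoop.hbase.rest.protobuf.generated.StorageClusterStatusMessage.StorageClusterStatus;

public class StorageClusterStatusExample {
  public static void main(String[] args) throws Exception {
    StorageClusterStatus status = StorageClusterStatus.newBuilder()
        .addDeadNodes("rs3.example.org:60020")   // hypothetical dead region server
        .setRegions(42)
        .setRequests(1000)
        .setAverageLoad(14.0)
        .build();

    // Serialize via writeTo(CodedOutputStream), then parse back with parseFrom(byte[]).
    java.io.ByteArrayOutputStream bytes = new java.io.ByteArrayOutputStream();
    com.google.protobuf.CodedOutputStream out =
        com.google.protobuf.CodedOutputStream.newInstance(bytes);
    status.writeTo(out);
    out.flush();

    StorageClusterStatus copy = StorageClusterStatus.parseFrom(bytes.toByteArray());
    System.out.println(copy.getRegions() + " regions, "
        + copy.getDeadNodesCount() + " dead nodes, avg load " + copy.getAverageLoad());
  }
}
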
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
new file mode 100644
index 0000000..c4bb3a9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableInfoMessage.java
@@ -0,0 +1,901 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: TableInfoMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class TableInfoMessage {
+  private TableInfoMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class TableInfo extends
+      com.google.protobuf.GeneratedMessage {
+    // Use TableInfo.newBuilder() to construct.
+    private TableInfo() {
+      initFields();
+    }
+    private TableInfo(boolean noInit) {}
+    
+    private static final TableInfo defaultInstance;
+    public static TableInfo getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public TableInfo getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_fieldAccessorTable;
+    }
+    
+    public static final class Region extends
+        com.google.protobuf.GeneratedMessage {
+      // Use Region.newBuilder() to construct.
+      private Region() {
+        initFields();
+      }
+      private Region(boolean noInit) {}
+      
+      private static final Region defaultInstance;
+      public static Region getDefaultInstance() {
+        return defaultInstance;
+      }
+      
+      public Region getDefaultInstanceForType() {
+        return defaultInstance;
+      }
+      
+      public static final com.google.protobuf.Descriptors.Descriptor
+          getDescriptor() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_descriptor;
+      }
+      
+      protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+          internalGetFieldAccessorTable() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_fieldAccessorTable;
+      }
+      
+      // required string name = 1;
+      public static final int NAME_FIELD_NUMBER = 1;
+      private boolean hasName;
+      private java.lang.String name_ = "";
+      public boolean hasName() { return hasName; }
+      public java.lang.String getName() { return name_; }
+      
+      // optional bytes startKey = 2;
+      public static final int STARTKEY_FIELD_NUMBER = 2;
+      private boolean hasStartKey;
+      private com.google.protobuf.ByteString startKey_ = com.google.protobuf.ByteString.EMPTY;
+      public boolean hasStartKey() { return hasStartKey; }
+      public com.google.protobuf.ByteString getStartKey() { return startKey_; }
+      
+      // optional bytes endKey = 3;
+      public static final int ENDKEY_FIELD_NUMBER = 3;
+      private boolean hasEndKey;
+      private com.google.protobuf.ByteString endKey_ = com.google.protobuf.ByteString.EMPTY;
+      public boolean hasEndKey() { return hasEndKey; }
+      public com.google.protobuf.ByteString getEndKey() { return endKey_; }
+      
+      // optional int64 id = 4;
+      public static final int ID_FIELD_NUMBER = 4;
+      private boolean hasId;
+      private long id_ = 0L;
+      public boolean hasId() { return hasId; }
+      public long getId() { return id_; }
+      
+      // optional string location = 5;
+      public static final int LOCATION_FIELD_NUMBER = 5;
+      private boolean hasLocation;
+      private java.lang.String location_ = "";
+      public boolean hasLocation() { return hasLocation; }
+      public java.lang.String getLocation() { return location_; }
+      
+      private void initFields() {
+      }
+      public final boolean isInitialized() {
+        if (!hasName) return false;
+        return true;
+      }
+      
+      public void writeTo(com.google.protobuf.CodedOutputStream output)
+                          throws java.io.IOException {
+        getSerializedSize();
+        if (hasName()) {
+          output.writeString(1, getName());
+        }
+        if (hasStartKey()) {
+          output.writeBytes(2, getStartKey());
+        }
+        if (hasEndKey()) {
+          output.writeBytes(3, getEndKey());
+        }
+        if (hasId()) {
+          output.writeInt64(4, getId());
+        }
+        if (hasLocation()) {
+          output.writeString(5, getLocation());
+        }
+        getUnknownFields().writeTo(output);
+      }
+      
+      private int memoizedSerializedSize = -1;
+      public int getSerializedSize() {
+        int size = memoizedSerializedSize;
+        if (size != -1) return size;
+      
+        size = 0;
+        if (hasName()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(1, getName());
+        }
+        if (hasStartKey()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeBytesSize(2, getStartKey());
+        }
+        if (hasEndKey()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeBytesSize(3, getEndKey());
+        }
+        if (hasId()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeInt64Size(4, getId());
+        }
+        if (hasLocation()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(5, getLocation());
+        }
+        size += getUnknownFields().getSerializedSize();
+        memoizedSerializedSize = size;
+        return size;
+      }
+      
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(
+          com.google.protobuf.ByteString data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(
+          com.google.protobuf.ByteString data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(byte[] data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(
+          byte[] data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseDelimitedFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseDelimitedFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(
+          com.google.protobuf.CodedInputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region parseFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      
+      public static Builder newBuilder() { return Builder.create(); }
+      public Builder newBuilderForType() { return newBuilder(); }
+      public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region prototype) {
+        return newBuilder().mergeFrom(prototype);
+      }
+      public Builder toBuilder() { return newBuilder(this); }
+      
+      public static final class Builder extends
+          com.google.protobuf.GeneratedMessage.Builder<Builder> {
+        private org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region result;
+        
+        // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.newBuilder()
+        private Builder() {}
+        
+        private static Builder create() {
+          Builder builder = new Builder();
+          builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region();
+          return builder;
+        }
+        
+        protected org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region internalGetResult() {
+          return result;
+        }
+        
+        public Builder clear() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "Cannot call clear() after build().");
+          }
+          result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region();
+          return this;
+        }
+        
+        public Builder clone() {
+          return create().mergeFrom(result);
+        }
+        
+        public com.google.protobuf.Descriptors.Descriptor
+            getDescriptorForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.getDescriptor();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region getDefaultInstanceForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.getDefaultInstance();
+        }
+        
+        public boolean isInitialized() {
+          return result.isInitialized();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region build() {
+          if (result != null && !isInitialized()) {
+            throw newUninitializedMessageException(result);
+          }
+          return buildPartial();
+        }
+        
+        private org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region buildParsed()
+            throws com.google.protobuf.InvalidProtocolBufferException {
+          if (!isInitialized()) {
+            throw newUninitializedMessageException(
+              result).asInvalidProtocolBufferException();
+          }
+          return buildPartial();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region buildPartial() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "build() has already been called on this Builder.");
+          }
+          org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region returnMe = result;
+          result = null;
+          return returnMe;
+        }
+        
+        public Builder mergeFrom(com.google.protobuf.Message other) {
+          if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region) {
+            return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region)other);
+          } else {
+            super.mergeFrom(other);
+            return this;
+          }
+        }
+        
+        public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region other) {
+          if (other == org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.getDefaultInstance()) return this;
+          if (other.hasName()) {
+            setName(other.getName());
+          }
+          if (other.hasStartKey()) {
+            setStartKey(other.getStartKey());
+          }
+          if (other.hasEndKey()) {
+            setEndKey(other.getEndKey());
+          }
+          if (other.hasId()) {
+            setId(other.getId());
+          }
+          if (other.hasLocation()) {
+            setLocation(other.getLocation());
+          }
+          this.mergeUnknownFields(other.getUnknownFields());
+          return this;
+        }
+        
+        public Builder mergeFrom(
+            com.google.protobuf.CodedInputStream input,
+            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+            throws java.io.IOException {
+          com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+            com.google.protobuf.UnknownFieldSet.newBuilder(
+              this.getUnknownFields());
+          while (true) {
+            int tag = input.readTag();
+            switch (tag) {
+              case 0:
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              default: {
+                if (!parseUnknownField(input, unknownFields,
+                                       extensionRegistry, tag)) {
+                  this.setUnknownFields(unknownFields.build());
+                  return this;
+                }
+                break;
+              }
+              case 10: {
+                setName(input.readString());
+                break;
+              }
+              case 18: {
+                setStartKey(input.readBytes());
+                break;
+              }
+              case 26: {
+                setEndKey(input.readBytes());
+                break;
+              }
+              case 32: {
+                setId(input.readInt64());
+                break;
+              }
+              case 42: {
+                setLocation(input.readString());
+                break;
+              }
+            }
+          }
+        }
+        
+        
+        // required string name = 1;
+        public boolean hasName() {
+          return result.hasName();
+        }
+        public java.lang.String getName() {
+          return result.getName();
+        }
+        public Builder setName(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasName = true;
+          result.name_ = value;
+          return this;
+        }
+        public Builder clearName() {
+          result.hasName = false;
+          result.name_ = getDefaultInstance().getName();
+          return this;
+        }
+        
+        // optional bytes startKey = 2;
+        public boolean hasStartKey() {
+          return result.hasStartKey();
+        }
+        public com.google.protobuf.ByteString getStartKey() {
+          return result.getStartKey();
+        }
+        public Builder setStartKey(com.google.protobuf.ByteString value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasStartKey = true;
+          result.startKey_ = value;
+          return this;
+        }
+        public Builder clearStartKey() {
+          result.hasStartKey = false;
+          result.startKey_ = getDefaultInstance().getStartKey();
+          return this;
+        }
+        
+        // optional bytes endKey = 3;
+        public boolean hasEndKey() {
+          return result.hasEndKey();
+        }
+        public com.google.protobuf.ByteString getEndKey() {
+          return result.getEndKey();
+        }
+        public Builder setEndKey(com.google.protobuf.ByteString value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasEndKey = true;
+          result.endKey_ = value;
+          return this;
+        }
+        public Builder clearEndKey() {
+          result.hasEndKey = false;
+          result.endKey_ = getDefaultInstance().getEndKey();
+          return this;
+        }
+        
+        // optional int64 id = 4;
+        public boolean hasId() {
+          return result.hasId();
+        }
+        public long getId() {
+          return result.getId();
+        }
+        public Builder setId(long value) {
+          result.hasId = true;
+          result.id_ = value;
+          return this;
+        }
+        public Builder clearId() {
+          result.hasId = false;
+          result.id_ = 0L;
+          return this;
+        }
+        
+        // optional string location = 5;
+        public boolean hasLocation() {
+          return result.hasLocation();
+        }
+        public java.lang.String getLocation() {
+          return result.getLocation();
+        }
+        public Builder setLocation(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasLocation = true;
+          result.location_ = value;
+          return this;
+        }
+        public Builder clearLocation() {
+          result.hasLocation = false;
+          result.location_ = getDefaultInstance().getLocation();
+          return this;
+        }
+        
+        // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableInfo.Region)
+      }
+      
+      static {
+        defaultInstance = new Region(true);
+        org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.internalForceInit();
+        defaultInstance.initFields();
+      }
+      
+      // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableInfo.Region)
+    }
+    
+    // required string name = 1;
+    public static final int NAME_FIELD_NUMBER = 1;
+    private boolean hasName;
+    private java.lang.String name_ = "";
+    public boolean hasName() { return hasName; }
+    public java.lang.String getName() { return name_; }
+    
+    // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.TableInfo.Region regions = 2;
+    public static final int REGIONS_FIELD_NUMBER = 2;
+    private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region> regions_ =
+      java.util.Collections.emptyList();
+    public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region> getRegionsList() {
+      return regions_;
+    }
+    public int getRegionsCount() { return regions_.size(); }
+    public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region getRegions(int index) {
+      return regions_.get(index);
+    }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      if (!hasName) return false;
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region element : getRegionsList()) {
+        if (!element.isInitialized()) return false;
+      }
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      if (hasName()) {
+        output.writeString(1, getName());
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region element : getRegionsList()) {
+        output.writeMessage(2, element);
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      if (hasName()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(1, getName());
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region element : getRegionsList()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeMessageSize(2, element);
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.regions_ != java.util.Collections.EMPTY_LIST) {
+          result.regions_ =
+            java.util.Collections.unmodifiableList(result.regions_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.getDefaultInstance()) return this;
+        if (other.hasName()) {
+          setName(other.getName());
+        }
+        if (!other.regions_.isEmpty()) {
+          if (result.regions_.isEmpty()) {
+            result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region>();
+          }
+          result.regions_.addAll(other.regions_);
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              setName(input.readString());
+              break;
+            }
+            case 18: {
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.newBuilder();
+              input.readMessage(subBuilder, extensionRegistry);
+              addRegions(subBuilder.buildPartial());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // required string name = 1;
+      public boolean hasName() {
+        return result.hasName();
+      }
+      public java.lang.String getName() {
+        return result.getName();
+      }
+      public Builder setName(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasName = true;
+        result.name_ = value;
+        return this;
+      }
+      public Builder clearName() {
+        result.hasName = false;
+        result.name_ = getDefaultInstance().getName();
+        return this;
+      }
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.TableInfo.Region regions = 2;
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region> getRegionsList() {
+        return java.util.Collections.unmodifiableList(result.regions_);
+      }
+      public int getRegionsCount() {
+        return result.getRegionsCount();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region getRegions(int index) {
+        return result.getRegions(index);
+      }
+      public Builder setRegions(int index, org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.regions_.set(index, value);
+        return this;
+      }
+      public Builder setRegions(int index, org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.Builder builderForValue) {
+        result.regions_.set(index, builderForValue.build());
+        return this;
+      }
+      public Builder addRegions(org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.regions_.isEmpty()) {
+          result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region>();
+        }
+        result.regions_.add(value);
+        return this;
+      }
+      public Builder addRegions(org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.Builder builderForValue) {
+        if (result.regions_.isEmpty()) {
+          result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region>();
+        }
+        result.regions_.add(builderForValue.build());
+        return this;
+      }
+      public Builder addAllRegions(
+          java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region> values) {
+        if (result.regions_.isEmpty()) {
+          result.regions_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region>();
+        }
+        super.addAll(values, result.regions_);
+        return this;
+      }
+      public Builder clearRegions() {
+        result.regions_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableInfo)
+    }
+    
+    static {
+      defaultInstance = new TableInfo(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableInfo)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_fieldAccessorTable;
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\026TableInfoMessage.proto\022/org.apache.had" +
+      "oop.hbase.rest.protobuf.generated\"\305\001\n\tTa" +
+      "bleInfo\022\014\n\004name\030\001 \002(\t\022R\n\007regions\030\002 \003(\0132A" +
+      ".org.apache.hadoop.hbase.rest.protobuf.g" +
+      "enerated.TableInfo.Region\032V\n\006Region\022\014\n\004n" +
+      "ame\030\001 \002(\t\022\020\n\010startKey\030\002 \001(\014\022\016\n\006endKey\030\003 " +
+      "\001(\014\022\n\n\002id\030\004 \001(\003\022\020\n\010location\030\005 \001(\t"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_descriptor,
+              new java.lang.String[] { "Name", "Regions", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Builder.class);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_descriptor =
+            internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_descriptor.getNestedTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableInfo_Region_descriptor,
+              new java.lang.String[] { "Name", "StartKey", "EndKey", "Id", "Location", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo.Region.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
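
Usage note (not part of the committed diff): the generated TableInfoMessage.TableInfo above follows the old protobuf 2.x builder pattern. A minimal sketch of how a caller could assemble and round-trip a TableInfo follows; the table and region names are made up, and it assumes the newBuilder()/parseFrom(byte[]) members and the Region.Builder setters that protoc emits in the earlier, unquoted part of this file, plus toByteArray() from the protobuf runtime.

import org.apache.hadoop.hbase.rest.protobuf.generated.TableInfoMessage.TableInfo;

public class TableInfoExample {
  public static void main(String[] args) throws Exception {
    // Build a TableInfo with one Region (field numbers match the descriptor above).
    TableInfo info = TableInfo.newBuilder()
        .setName("usertable")                              // required string name = 1
        .addRegions(TableInfo.Region.newBuilder()
            .setName("usertable,,1293840000000")           // required string name = 1
            .setLocation("regionserver1:60020")            // optional string location = 5
            .build())
        .build();

    // Serialize via the generated writeTo()/getSerializedSize() machinery, then parse back.
    byte[] wire = info.toByteArray();
    TableInfo roundTrip = TableInfo.parseFrom(wire);
    System.out.println(roundTrip.getName() + " has "
        + roundTrip.getRegionsCount() + " region(s)");
  }
}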
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableListMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableListMessage.java
new file mode 100644
index 0000000..f3d6d06
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableListMessage.java
@@ -0,0 +1,377 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: TableListMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class TableListMessage {
+  private TableListMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class TableList extends
+      com.google.protobuf.GeneratedMessage {
+    // Use TableList.newBuilder() to construct.
+    private TableList() {
+      initFields();
+    }
+    private TableList(boolean noInit) {}
+    
+    private static final TableList defaultInstance;
+    public static TableList getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public TableList getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_fieldAccessorTable;
+    }
+    
+    // repeated string name = 1;
+    public static final int NAME_FIELD_NUMBER = 1;
+    private java.util.List<java.lang.String> name_ =
+      java.util.Collections.emptyList();
+    public java.util.List<java.lang.String> getNameList() {
+      return name_;
+    }
+    public int getNameCount() { return name_.size(); }
+    public java.lang.String getName(int index) {
+      return name_.get(index);
+    }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      for (java.lang.String element : getNameList()) {
+        output.writeString(1, element);
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      {
+        int dataSize = 0;
+        for (java.lang.String element : getNameList()) {
+          dataSize += com.google.protobuf.CodedOutputStream
+            .computeStringSizeNoTag(element);
+        }
+        size += dataSize;
+        size += 1 * getNameList().size();
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.name_ != java.util.Collections.EMPTY_LIST) {
+          result.name_ =
+            java.util.Collections.unmodifiableList(result.name_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList.getDefaultInstance()) return this;
+        if (!other.name_.isEmpty()) {
+          if (result.name_.isEmpty()) {
+            result.name_ = new java.util.ArrayList<java.lang.String>();
+          }
+          result.name_.addAll(other.name_);
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              addName(input.readString());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // repeated string name = 1;
+      public java.util.List<java.lang.String> getNameList() {
+        return java.util.Collections.unmodifiableList(result.name_);
+      }
+      public int getNameCount() {
+        return result.getNameCount();
+      }
+      public java.lang.String getName(int index) {
+        return result.getName(index);
+      }
+      public Builder setName(int index, java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.name_.set(index, value);
+        return this;
+      }
+      public Builder addName(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.name_.isEmpty()) {
+          result.name_ = new java.util.ArrayList<java.lang.String>();
+        }
+        result.name_.add(value);
+        return this;
+      }
+      public Builder addAllName(
+          java.lang.Iterable<? extends java.lang.String> values) {
+        if (result.name_.isEmpty()) {
+          result.name_ = new java.util.ArrayList<java.lang.String>();
+        }
+        super.addAll(values, result.name_);
+        return this;
+      }
+      public Builder clearName() {
+        result.name_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableList)
+    }
+    
+    static {
+      defaultInstance = new TableList(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableList)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\026TableListMessage.proto\022/org.apache.had" +
+      "oop.hbase.rest.protobuf.generated\"\031\n\tTab" +
+      "leList\022\014\n\004name\030\001 \003(\t"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableList_descriptor,
+              new java.lang.String[] { "Name", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
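
Usage note (not part of the committed diff): TableListMessage.TableList is the simplest of the REST protobuf messages, a single repeated string field. A short round-trip sketch, using the members visible in the generated class above plus toByteArray() inherited from the protobuf runtime (the table names are made up):

import org.apache.hadoop.hbase.rest.protobuf.generated.TableListMessage.TableList;

public class TableListExample {
  public static void main(String[] args) throws Exception {
    // repeated string name = 1
    TableList list = TableList.newBuilder()
        .addName("usertable")
        .addName("eventlog")
        .build();

    // Serialize and parse back with the generated parseFrom(byte[]) overload.
    byte[] wire = list.toByteArray();
    TableList parsed = TableList.parseFrom(wire);
    for (String name : parsed.getNameList()) {
      System.out.println(name);
    }
  }
}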
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableSchemaMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableSchemaMessage.java
new file mode 100644
index 0000000..a649f00
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/TableSchemaMessage.java
@@ -0,0 +1,949 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: TableSchemaMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class TableSchemaMessage {
+  private TableSchemaMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class TableSchema extends
+      com.google.protobuf.GeneratedMessage {
+    // Use TableSchema.newBuilder() to construct.
+    private TableSchema() {
+      initFields();
+    }
+    private TableSchema(boolean noInit) {}
+    
+    private static final TableSchema defaultInstance;
+    public static TableSchema getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public TableSchema getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_fieldAccessorTable;
+    }
+    
+    public static final class Attribute extends
+        com.google.protobuf.GeneratedMessage {
+      // Use Attribute.newBuilder() to construct.
+      private Attribute() {
+        initFields();
+      }
+      private Attribute(boolean noInit) {}
+      
+      private static final Attribute defaultInstance;
+      public static Attribute getDefaultInstance() {
+        return defaultInstance;
+      }
+      
+      public Attribute getDefaultInstanceForType() {
+        return defaultInstance;
+      }
+      
+      public static final com.google.protobuf.Descriptors.Descriptor
+          getDescriptor() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_descriptor;
+      }
+      
+      protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+          internalGetFieldAccessorTable() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_fieldAccessorTable;
+      }
+      
+      // required string name = 1;
+      public static final int NAME_FIELD_NUMBER = 1;
+      private boolean hasName;
+      private java.lang.String name_ = "";
+      public boolean hasName() { return hasName; }
+      public java.lang.String getName() { return name_; }
+      
+      // required string value = 2;
+      public static final int VALUE_FIELD_NUMBER = 2;
+      private boolean hasValue;
+      private java.lang.String value_ = "";
+      public boolean hasValue() { return hasValue; }
+      public java.lang.String getValue() { return value_; }
+      
+      private void initFields() {
+      }
+      public final boolean isInitialized() {
+        if (!hasName) return false;
+        if (!hasValue) return false;
+        return true;
+      }
+      
+      public void writeTo(com.google.protobuf.CodedOutputStream output)
+                          throws java.io.IOException {
+        getSerializedSize();
+        if (hasName()) {
+          output.writeString(1, getName());
+        }
+        if (hasValue()) {
+          output.writeString(2, getValue());
+        }
+        getUnknownFields().writeTo(output);
+      }
+      
+      private int memoizedSerializedSize = -1;
+      public int getSerializedSize() {
+        int size = memoizedSerializedSize;
+        if (size != -1) return size;
+      
+        size = 0;
+        if (hasName()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(1, getName());
+        }
+        if (hasValue()) {
+          size += com.google.protobuf.CodedOutputStream
+            .computeStringSize(2, getValue());
+        }
+        size += getUnknownFields().getSerializedSize();
+        memoizedSerializedSize = size;
+        return size;
+      }
+      
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(
+          com.google.protobuf.ByteString data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(
+          com.google.protobuf.ByteString data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(byte[] data)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(
+          byte[] data,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        return newBuilder().mergeFrom(data, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseDelimitedFrom(java.io.InputStream input)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseDelimitedFrom(
+          java.io.InputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        Builder builder = newBuilder();
+        if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+          return builder.buildParsed();
+        } else {
+          return null;
+        }
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(
+          com.google.protobuf.CodedInputStream input)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input).buildParsed();
+      }
+      public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute parseFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        return newBuilder().mergeFrom(input, extensionRegistry)
+                 .buildParsed();
+      }
+      
+      public static Builder newBuilder() { return Builder.create(); }
+      public Builder newBuilderForType() { return newBuilder(); }
+      public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute prototype) {
+        return newBuilder().mergeFrom(prototype);
+      }
+      public Builder toBuilder() { return newBuilder(this); }
+      
+      public static final class Builder extends
+          com.google.protobuf.GeneratedMessage.Builder<Builder> {
+        private org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute result;
+        
+        // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.newBuilder()
+        private Builder() {}
+        
+        private static Builder create() {
+          Builder builder = new Builder();
+          builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute();
+          return builder;
+        }
+        
+        protected org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute internalGetResult() {
+          return result;
+        }
+        
+        public Builder clear() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "Cannot call clear() after build().");
+          }
+          result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute();
+          return this;
+        }
+        
+        public Builder clone() {
+          return create().mergeFrom(result);
+        }
+        
+        public com.google.protobuf.Descriptors.Descriptor
+            getDescriptorForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.getDescriptor();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute getDefaultInstanceForType() {
+          return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.getDefaultInstance();
+        }
+        
+        public boolean isInitialized() {
+          return result.isInitialized();
+        }
+        public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute build() {
+          if (result != null && !isInitialized()) {
+            throw newUninitializedMessageException(result);
+          }
+          return buildPartial();
+        }
+        
+        private org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute buildParsed()
+            throws com.google.protobuf.InvalidProtocolBufferException {
+          if (!isInitialized()) {
+            throw newUninitializedMessageException(
+              result).asInvalidProtocolBufferException();
+          }
+          return buildPartial();
+        }
+        
+        public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute buildPartial() {
+          if (result == null) {
+            throw new IllegalStateException(
+              "build() has already been called on this Builder.");
+          }
+          org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute returnMe = result;
+          result = null;
+          return returnMe;
+        }
+        
+        public Builder mergeFrom(com.google.protobuf.Message other) {
+          if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute) {
+            return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute)other);
+          } else {
+            super.mergeFrom(other);
+            return this;
+          }
+        }
+        
+        public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute other) {
+          if (other == org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.getDefaultInstance()) return this;
+          if (other.hasName()) {
+            setName(other.getName());
+          }
+          if (other.hasValue()) {
+            setValue(other.getValue());
+          }
+          this.mergeUnknownFields(other.getUnknownFields());
+          return this;
+        }
+        
+        public Builder mergeFrom(
+            com.google.protobuf.CodedInputStream input,
+            com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+            throws java.io.IOException {
+          com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+            com.google.protobuf.UnknownFieldSet.newBuilder(
+              this.getUnknownFields());
+          while (true) {
+            int tag = input.readTag();
+            switch (tag) {
+              case 0:
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              default: {
+                if (!parseUnknownField(input, unknownFields,
+                                       extensionRegistry, tag)) {
+                  this.setUnknownFields(unknownFields.build());
+                  return this;
+                }
+                break;
+              }
+              case 10: {
+                setName(input.readString());
+                break;
+              }
+              case 18: {
+                setValue(input.readString());
+                break;
+              }
+            }
+          }
+        }
+        
+        
+        // required string name = 1;
+        public boolean hasName() {
+          return result.hasName();
+        }
+        public java.lang.String getName() {
+          return result.getName();
+        }
+        public Builder setName(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasName = true;
+          result.name_ = value;
+          return this;
+        }
+        public Builder clearName() {
+          result.hasName = false;
+          result.name_ = getDefaultInstance().getName();
+          return this;
+        }
+        
+        // required string value = 2;
+        public boolean hasValue() {
+          return result.hasValue();
+        }
+        public java.lang.String getValue() {
+          return result.getValue();
+        }
+        public Builder setValue(java.lang.String value) {
+          if (value == null) {
+            throw new NullPointerException();
+          }
+          result.hasValue = true;
+          result.value_ = value;
+          return this;
+        }
+        public Builder clearValue() {
+          result.hasValue = false;
+          result.value_ = getDefaultInstance().getValue();
+          return this;
+        }
+        
+        // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableSchema.Attribute)
+      }
+      
+      static {
+        defaultInstance = new Attribute(true);
+        org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.internalForceInit();
+        defaultInstance.initFields();
+      }
+      
+      // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableSchema.Attribute)
+    }
+    
+    // optional string name = 1;
+    public static final int NAME_FIELD_NUMBER = 1;
+    private boolean hasName;
+    private java.lang.String name_ = "";
+    public boolean hasName() { return hasName; }
+    public java.lang.String getName() { return name_; }
+    
+    // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.TableSchema.Attribute attrs = 2;
+    public static final int ATTRS_FIELD_NUMBER = 2;
+    private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute> attrs_ =
+      java.util.Collections.emptyList();
+    public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute> getAttrsList() {
+      return attrs_;
+    }
+    public int getAttrsCount() { return attrs_.size(); }
+    public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute getAttrs(int index) {
+      return attrs_.get(index);
+    }
+    
+    // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema columns = 3;
+    public static final int COLUMNS_FIELD_NUMBER = 3;
+    private java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema> columns_ =
+      java.util.Collections.emptyList();
+    public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema> getColumnsList() {
+      return columns_;
+    }
+    public int getColumnsCount() { return columns_.size(); }
+    public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema getColumns(int index) {
+      return columns_.get(index);
+    }
+    
+    // optional bool inMemory = 4;
+    public static final int INMEMORY_FIELD_NUMBER = 4;
+    private boolean hasInMemory;
+    private boolean inMemory_ = false;
+    public boolean hasInMemory() { return hasInMemory; }
+    public boolean getInMemory() { return inMemory_; }
+    
+    // optional bool readOnly = 5;
+    public static final int READONLY_FIELD_NUMBER = 5;
+    private boolean hasReadOnly;
+    private boolean readOnly_ = false;
+    public boolean hasReadOnly() { return hasReadOnly; }
+    public boolean getReadOnly() { return readOnly_; }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute element : getAttrsList()) {
+        if (!element.isInitialized()) return false;
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema element : getColumnsList()) {
+        if (!element.isInitialized()) return false;
+      }
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      if (hasName()) {
+        output.writeString(1, getName());
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute element : getAttrsList()) {
+        output.writeMessage(2, element);
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema element : getColumnsList()) {
+        output.writeMessage(3, element);
+      }
+      if (hasInMemory()) {
+        output.writeBool(4, getInMemory());
+      }
+      if (hasReadOnly()) {
+        output.writeBool(5, getReadOnly());
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      if (hasName()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(1, getName());
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute element : getAttrsList()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeMessageSize(2, element);
+      }
+      for (org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema element : getColumnsList()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeMessageSize(3, element);
+      }
+      if (hasInMemory()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBoolSize(4, getInMemory());
+      }
+      if (hasReadOnly()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeBoolSize(5, getReadOnly());
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        if (result.attrs_ != java.util.Collections.EMPTY_LIST) {
+          result.attrs_ =
+            java.util.Collections.unmodifiableList(result.attrs_);
+        }
+        if (result.columns_ != java.util.Collections.EMPTY_LIST) {
+          result.columns_ =
+            java.util.Collections.unmodifiableList(result.columns_);
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.getDefaultInstance()) return this;
+        if (other.hasName()) {
+          setName(other.getName());
+        }
+        if (!other.attrs_.isEmpty()) {
+          if (result.attrs_.isEmpty()) {
+            result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute>();
+          }
+          result.attrs_.addAll(other.attrs_);
+        }
+        if (!other.columns_.isEmpty()) {
+          if (result.columns_.isEmpty()) {
+            result.columns_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema>();
+          }
+          result.columns_.addAll(other.columns_);
+        }
+        if (other.hasInMemory()) {
+          setInMemory(other.getInMemory());
+        }
+        if (other.hasReadOnly()) {
+          setReadOnly(other.getReadOnly());
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              setName(input.readString());
+              break;
+            }
+            case 18: {
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.newBuilder();
+              input.readMessage(subBuilder, extensionRegistry);
+              addAttrs(subBuilder.buildPartial());
+              break;
+            }
+            case 26: {
+              org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Builder subBuilder = org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.newBuilder();
+              input.readMessage(subBuilder, extensionRegistry);
+              addColumns(subBuilder.buildPartial());
+              break;
+            }
+            case 32: {
+              setInMemory(input.readBool());
+              break;
+            }
+            case 40: {
+              setReadOnly(input.readBool());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // optional string name = 1;
+      public boolean hasName() {
+        return result.hasName();
+      }
+      public java.lang.String getName() {
+        return result.getName();
+      }
+      public Builder setName(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasName = true;
+        result.name_ = value;
+        return this;
+      }
+      public Builder clearName() {
+        result.hasName = false;
+        result.name_ = getDefaultInstance().getName();
+        return this;
+      }
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.TableSchema.Attribute attrs = 2;
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute> getAttrsList() {
+        return java.util.Collections.unmodifiableList(result.attrs_);
+      }
+      public int getAttrsCount() {
+        return result.getAttrsCount();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute getAttrs(int index) {
+        return result.getAttrs(index);
+      }
+      public Builder setAttrs(int index, org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.attrs_.set(index, value);
+        return this;
+      }
+      public Builder setAttrs(int index, org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.Builder builderForValue) {
+        result.attrs_.set(index, builderForValue.build());
+        return this;
+      }
+      public Builder addAttrs(org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.attrs_.isEmpty()) {
+          result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute>();
+        }
+        result.attrs_.add(value);
+        return this;
+      }
+      public Builder addAttrs(org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.Builder builderForValue) {
+        if (result.attrs_.isEmpty()) {
+          result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute>();
+        }
+        result.attrs_.add(builderForValue.build());
+        return this;
+      }
+      public Builder addAllAttrs(
+          java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute> values) {
+        if (result.attrs_.isEmpty()) {
+          result.attrs_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute>();
+        }
+        super.addAll(values, result.attrs_);
+        return this;
+      }
+      public Builder clearAttrs() {
+        result.attrs_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // repeated .org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchema columns = 3;
+      public java.util.List<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema> getColumnsList() {
+        return java.util.Collections.unmodifiableList(result.columns_);
+      }
+      public int getColumnsCount() {
+        return result.getColumnsCount();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema getColumns(int index) {
+        return result.getColumns(index);
+      }
+      public Builder setColumns(int index, org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.columns_.set(index, value);
+        return this;
+      }
+      public Builder setColumns(int index, org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Builder builderForValue) {
+        result.columns_.set(index, builderForValue.build());
+        return this;
+      }
+      public Builder addColumns(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        if (result.columns_.isEmpty()) {
+          result.columns_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema>();
+        }
+        result.columns_.add(value);
+        return this;
+      }
+      public Builder addColumns(org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema.Builder builderForValue) {
+        if (result.columns_.isEmpty()) {
+          result.columns_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema>();
+        }
+        result.columns_.add(builderForValue.build());
+        return this;
+      }
+      public Builder addAllColumns(
+          java.lang.Iterable<? extends org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema> values) {
+        if (result.columns_.isEmpty()) {
+          result.columns_ = new java.util.ArrayList<org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema>();
+        }
+        super.addAll(values, result.columns_);
+        return this;
+      }
+      public Builder clearColumns() {
+        result.columns_ = java.util.Collections.emptyList();
+        return this;
+      }
+      
+      // optional bool inMemory = 4;
+      public boolean hasInMemory() {
+        return result.hasInMemory();
+      }
+      public boolean getInMemory() {
+        return result.getInMemory();
+      }
+      public Builder setInMemory(boolean value) {
+        result.hasInMemory = true;
+        result.inMemory_ = value;
+        return this;
+      }
+      public Builder clearInMemory() {
+        result.hasInMemory = false;
+        result.inMemory_ = false;
+        return this;
+      }
+      
+      // optional bool readOnly = 5;
+      public boolean hasReadOnly() {
+        return result.hasReadOnly();
+      }
+      public boolean getReadOnly() {
+        return result.getReadOnly();
+      }
+      public Builder setReadOnly(boolean value) {
+        result.hasReadOnly = true;
+        result.readOnly_ = value;
+        return this;
+      }
+      public Builder clearReadOnly() {
+        result.hasReadOnly = false;
+        result.readOnly_ = false;
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableSchema)
+    }
+    
+    static {
+      defaultInstance = new TableSchema(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.TableSchema)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_fieldAccessorTable;
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\030TableSchemaMessage.proto\022/org.apache.h" +
+      "adoop.hbase.rest.protobuf.generated\032\031Col" +
+      "umnSchemaMessage.proto\"\220\002\n\013TableSchema\022\014" +
+      "\n\004name\030\001 \001(\t\022U\n\005attrs\030\002 \003(\0132F.org.apache" +
+      ".hadoop.hbase.rest.protobuf.generated.Ta" +
+      "bleSchema.Attribute\022N\n\007columns\030\003 \003(\0132=.o" +
+      "rg.apache.hadoop.hbase.rest.protobuf.gen" +
+      "erated.ColumnSchema\022\020\n\010inMemory\030\004 \001(\010\022\020\n" +
+      "\010readOnly\030\005 \001(\010\032(\n\tAttribute\022\014\n\004name\030\001 \002" +
+      "(\t\022\r\n\005value\030\002 \002(\t"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_descriptor,
+              new java.lang.String[] { "Name", "Attrs", "Columns", "InMemory", "ReadOnly", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Builder.class);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_descriptor =
+            internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_descriptor.getNestedTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_TableSchema_Attribute_descriptor,
+              new java.lang.String[] { "Name", "Value", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema.Attribute.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+          org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.getDescriptor(),
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
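For orientation, a minimal sketch of driving the generated builder above from application code. The Attribute setters (setName/setValue) and toByteArray() are not visible in this hunk and are assumed from the standard protobuf-generated API; the attribute name and value are made-up placeholders.

import org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema;

public class TableSchemaBuilderSketch {
  public static void main(String[] args) throws Exception {
    TableSchema schema = TableSchema.newBuilder()
      .setName("mytable")                          // table name (placeholder)
      .addAttrs(TableSchema.Attribute.newBuilder()
        .setName("MAX_FILESIZE")                   // assumed generated setter for 'name'
        .setValue("1073741824")                    // assumed generated setter for 'value'
        .build())
      .setInMemory(false)
      .setReadOnly(false)
      .build();
    // Standard protobuf serialization; the REST resources hand bytes like
    // these to clients that asked for protobuf content.
    byte[] wire = schema.toByteArray();
    System.out.println("TableSchema serialized to " + wire.length + " bytes");
  }
}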
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/VersionMessage.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/VersionMessage.java
new file mode 100644
index 0000000..43e06bf
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/VersionMessage.java
@@ -0,0 +1,511 @@
+// Generated by the protocol buffer compiler.  DO NOT EDIT!
+// source: VersionMessage.proto
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+public final class VersionMessage {
+  private VersionMessage() {}
+  public static void registerAllExtensions(
+      com.google.protobuf.ExtensionRegistry registry) {
+  }
+  public static final class Version extends
+      com.google.protobuf.GeneratedMessage {
+    // Use Version.newBuilder() to construct.
+    private Version() {
+      initFields();
+    }
+    private Version(boolean noInit) {}
+    
+    private static final Version defaultInstance;
+    public static Version getDefaultInstance() {
+      return defaultInstance;
+    }
+    
+    public Version getDefaultInstanceForType() {
+      return defaultInstance;
+    }
+    
+    public static final com.google.protobuf.Descriptors.Descriptor
+        getDescriptor() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_descriptor;
+    }
+    
+    protected com.google.protobuf.GeneratedMessage.FieldAccessorTable
+        internalGetFieldAccessorTable() {
+      return org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_fieldAccessorTable;
+    }
+    
+    // optional string restVersion = 1;
+    public static final int RESTVERSION_FIELD_NUMBER = 1;
+    private boolean hasRestVersion;
+    private java.lang.String restVersion_ = "";
+    public boolean hasRestVersion() { return hasRestVersion; }
+    public java.lang.String getRestVersion() { return restVersion_; }
+    
+    // optional string jvmVersion = 2;
+    public static final int JVMVERSION_FIELD_NUMBER = 2;
+    private boolean hasJvmVersion;
+    private java.lang.String jvmVersion_ = "";
+    public boolean hasJvmVersion() { return hasJvmVersion; }
+    public java.lang.String getJvmVersion() { return jvmVersion_; }
+    
+    // optional string osVersion = 3;
+    public static final int OSVERSION_FIELD_NUMBER = 3;
+    private boolean hasOsVersion;
+    private java.lang.String osVersion_ = "";
+    public boolean hasOsVersion() { return hasOsVersion; }
+    public java.lang.String getOsVersion() { return osVersion_; }
+    
+    // optional string serverVersion = 4;
+    public static final int SERVERVERSION_FIELD_NUMBER = 4;
+    private boolean hasServerVersion;
+    private java.lang.String serverVersion_ = "";
+    public boolean hasServerVersion() { return hasServerVersion; }
+    public java.lang.String getServerVersion() { return serverVersion_; }
+    
+    // optional string jerseyVersion = 5;
+    public static final int JERSEYVERSION_FIELD_NUMBER = 5;
+    private boolean hasJerseyVersion;
+    private java.lang.String jerseyVersion_ = "";
+    public boolean hasJerseyVersion() { return hasJerseyVersion; }
+    public java.lang.String getJerseyVersion() { return jerseyVersion_; }
+    
+    private void initFields() {
+    }
+    public final boolean isInitialized() {
+      return true;
+    }
+    
+    public void writeTo(com.google.protobuf.CodedOutputStream output)
+                        throws java.io.IOException {
+      getSerializedSize();
+      if (hasRestVersion()) {
+        output.writeString(1, getRestVersion());
+      }
+      if (hasJvmVersion()) {
+        output.writeString(2, getJvmVersion());
+      }
+      if (hasOsVersion()) {
+        output.writeString(3, getOsVersion());
+      }
+      if (hasServerVersion()) {
+        output.writeString(4, getServerVersion());
+      }
+      if (hasJerseyVersion()) {
+        output.writeString(5, getJerseyVersion());
+      }
+      getUnknownFields().writeTo(output);
+    }
+    
+    private int memoizedSerializedSize = -1;
+    public int getSerializedSize() {
+      int size = memoizedSerializedSize;
+      if (size != -1) return size;
+    
+      size = 0;
+      if (hasRestVersion()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(1, getRestVersion());
+      }
+      if (hasJvmVersion()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(2, getJvmVersion());
+      }
+      if (hasOsVersion()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(3, getOsVersion());
+      }
+      if (hasServerVersion()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(4, getServerVersion());
+      }
+      if (hasJerseyVersion()) {
+        size += com.google.protobuf.CodedOutputStream
+          .computeStringSize(5, getJerseyVersion());
+      }
+      size += getUnknownFields().getSerializedSize();
+      memoizedSerializedSize = size;
+      return size;
+    }
+    
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(
+        com.google.protobuf.ByteString data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(
+        com.google.protobuf.ByteString data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(byte[] data)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(
+        byte[] data,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws com.google.protobuf.InvalidProtocolBufferException {
+      return newBuilder().mergeFrom(data, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseDelimitedFrom(java.io.InputStream input)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseDelimitedFrom(
+        java.io.InputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      Builder builder = newBuilder();
+      if (builder.mergeDelimitedFrom(input, extensionRegistry)) {
+        return builder.buildParsed();
+      } else {
+        return null;
+      }
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(
+        com.google.protobuf.CodedInputStream input)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input).buildParsed();
+    }
+    public static org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version parseFrom(
+        com.google.protobuf.CodedInputStream input,
+        com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+        throws java.io.IOException {
+      return newBuilder().mergeFrom(input, extensionRegistry)
+               .buildParsed();
+    }
+    
+    public static Builder newBuilder() { return Builder.create(); }
+    public Builder newBuilderForType() { return newBuilder(); }
+    public static Builder newBuilder(org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version prototype) {
+      return newBuilder().mergeFrom(prototype);
+    }
+    public Builder toBuilder() { return newBuilder(this); }
+    
+    public static final class Builder extends
+        com.google.protobuf.GeneratedMessage.Builder<Builder> {
+      private org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version result;
+      
+      // Construct using org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version.newBuilder()
+      private Builder() {}
+      
+      private static Builder create() {
+        Builder builder = new Builder();
+        builder.result = new org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version();
+        return builder;
+      }
+      
+      protected org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version internalGetResult() {
+        return result;
+      }
+      
+      public Builder clear() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "Cannot call clear() after build().");
+        }
+        result = new org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version();
+        return this;
+      }
+      
+      public Builder clone() {
+        return create().mergeFrom(result);
+      }
+      
+      public com.google.protobuf.Descriptors.Descriptor
+          getDescriptorForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version.getDescriptor();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version getDefaultInstanceForType() {
+        return org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version.getDefaultInstance();
+      }
+      
+      public boolean isInitialized() {
+        return result.isInitialized();
+      }
+      public org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version build() {
+        if (result != null && !isInitialized()) {
+          throw newUninitializedMessageException(result);
+        }
+        return buildPartial();
+      }
+      
+      private org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version buildParsed()
+          throws com.google.protobuf.InvalidProtocolBufferException {
+        if (!isInitialized()) {
+          throw newUninitializedMessageException(
+            result).asInvalidProtocolBufferException();
+        }
+        return buildPartial();
+      }
+      
+      public org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version buildPartial() {
+        if (result == null) {
+          throw new IllegalStateException(
+            "build() has already been called on this Builder.");
+        }
+        org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version returnMe = result;
+        result = null;
+        return returnMe;
+      }
+      
+      public Builder mergeFrom(com.google.protobuf.Message other) {
+        if (other instanceof org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version) {
+          return mergeFrom((org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version)other);
+        } else {
+          super.mergeFrom(other);
+          return this;
+        }
+      }
+      
+      public Builder mergeFrom(org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version other) {
+        if (other == org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version.getDefaultInstance()) return this;
+        if (other.hasRestVersion()) {
+          setRestVersion(other.getRestVersion());
+        }
+        if (other.hasJvmVersion()) {
+          setJvmVersion(other.getJvmVersion());
+        }
+        if (other.hasOsVersion()) {
+          setOsVersion(other.getOsVersion());
+        }
+        if (other.hasServerVersion()) {
+          setServerVersion(other.getServerVersion());
+        }
+        if (other.hasJerseyVersion()) {
+          setJerseyVersion(other.getJerseyVersion());
+        }
+        this.mergeUnknownFields(other.getUnknownFields());
+        return this;
+      }
+      
+      public Builder mergeFrom(
+          com.google.protobuf.CodedInputStream input,
+          com.google.protobuf.ExtensionRegistryLite extensionRegistry)
+          throws java.io.IOException {
+        com.google.protobuf.UnknownFieldSet.Builder unknownFields =
+          com.google.protobuf.UnknownFieldSet.newBuilder(
+            this.getUnknownFields());
+        while (true) {
+          int tag = input.readTag();
+          switch (tag) {
+            case 0:
+              this.setUnknownFields(unknownFields.build());
+              return this;
+            default: {
+              if (!parseUnknownField(input, unknownFields,
+                                     extensionRegistry, tag)) {
+                this.setUnknownFields(unknownFields.build());
+                return this;
+              }
+              break;
+            }
+            case 10: {
+              setRestVersion(input.readString());
+              break;
+            }
+            case 18: {
+              setJvmVersion(input.readString());
+              break;
+            }
+            case 26: {
+              setOsVersion(input.readString());
+              break;
+            }
+            case 34: {
+              setServerVersion(input.readString());
+              break;
+            }
+            case 42: {
+              setJerseyVersion(input.readString());
+              break;
+            }
+          }
+        }
+      }
+      
+      
+      // optional string restVersion = 1;
+      public boolean hasRestVersion() {
+        return result.hasRestVersion();
+      }
+      public java.lang.String getRestVersion() {
+        return result.getRestVersion();
+      }
+      public Builder setRestVersion(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasRestVersion = true;
+        result.restVersion_ = value;
+        return this;
+      }
+      public Builder clearRestVersion() {
+        result.hasRestVersion = false;
+        result.restVersion_ = getDefaultInstance().getRestVersion();
+        return this;
+      }
+      
+      // optional string jvmVersion = 2;
+      public boolean hasJvmVersion() {
+        return result.hasJvmVersion();
+      }
+      public java.lang.String getJvmVersion() {
+        return result.getJvmVersion();
+      }
+      public Builder setJvmVersion(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasJvmVersion = true;
+        result.jvmVersion_ = value;
+        return this;
+      }
+      public Builder clearJvmVersion() {
+        result.hasJvmVersion = false;
+        result.jvmVersion_ = getDefaultInstance().getJvmVersion();
+        return this;
+      }
+      
+      // optional string osVersion = 3;
+      public boolean hasOsVersion() {
+        return result.hasOsVersion();
+      }
+      public java.lang.String getOsVersion() {
+        return result.getOsVersion();
+      }
+      public Builder setOsVersion(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasOsVersion = true;
+        result.osVersion_ = value;
+        return this;
+      }
+      public Builder clearOsVersion() {
+        result.hasOsVersion = false;
+        result.osVersion_ = getDefaultInstance().getOsVersion();
+        return this;
+      }
+      
+      // optional string serverVersion = 4;
+      public boolean hasServerVersion() {
+        return result.hasServerVersion();
+      }
+      public java.lang.String getServerVersion() {
+        return result.getServerVersion();
+      }
+      public Builder setServerVersion(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasServerVersion = true;
+        result.serverVersion_ = value;
+        return this;
+      }
+      public Builder clearServerVersion() {
+        result.hasServerVersion = false;
+        result.serverVersion_ = getDefaultInstance().getServerVersion();
+        return this;
+      }
+      
+      // optional string jerseyVersion = 5;
+      public boolean hasJerseyVersion() {
+        return result.hasJerseyVersion();
+      }
+      public java.lang.String getJerseyVersion() {
+        return result.getJerseyVersion();
+      }
+      public Builder setJerseyVersion(java.lang.String value) {
+        if (value == null) {
+          throw new NullPointerException();
+        }
+        result.hasJerseyVersion = true;
+        result.jerseyVersion_ = value;
+        return this;
+      }
+      public Builder clearJerseyVersion() {
+        result.hasJerseyVersion = false;
+        result.jerseyVersion_ = getDefaultInstance().getJerseyVersion();
+        return this;
+      }
+      
+      // @@protoc_insertion_point(builder_scope:org.apache.hadoop.hbase.rest.protobuf.generated.Version)
+    }
+    
+    static {
+      defaultInstance = new Version(true);
+      org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.internalForceInit();
+      defaultInstance.initFields();
+    }
+    
+    // @@protoc_insertion_point(class_scope:org.apache.hadoop.hbase.rest.protobuf.generated.Version)
+  }
+  
+  private static com.google.protobuf.Descriptors.Descriptor
+    internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_descriptor;
+  private static
+    com.google.protobuf.GeneratedMessage.FieldAccessorTable
+      internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_fieldAccessorTable;
+  
+  public static com.google.protobuf.Descriptors.FileDescriptor
+      getDescriptor() {
+    return descriptor;
+  }
+  private static com.google.protobuf.Descriptors.FileDescriptor
+      descriptor;
+  static {
+    java.lang.String[] descriptorData = {
+      "\n\024VersionMessage.proto\022/org.apache.hadoo" +
+      "p.hbase.rest.protobuf.generated\"s\n\007Versi" +
+      "on\022\023\n\013restVersion\030\001 \001(\t\022\022\n\njvmVersion\030\002 " +
+      "\001(\t\022\021\n\tosVersion\030\003 \001(\t\022\025\n\rserverVersion\030" +
+      "\004 \001(\t\022\025\n\rjerseyVersion\030\005 \001(\t"
+    };
+    com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
+      new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
+        public com.google.protobuf.ExtensionRegistry assignDescriptors(
+            com.google.protobuf.Descriptors.FileDescriptor root) {
+          descriptor = root;
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_descriptor =
+            getDescriptor().getMessageTypes().get(0);
+          internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_fieldAccessorTable = new
+            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
+              internal_static_org_apache_hadoop_hbase_rest_protobuf_generated_Version_descriptor,
+              new java.lang.String[] { "RestVersion", "JvmVersion", "OsVersion", "ServerVersion", "JerseyVersion", },
+              org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version.class,
+              org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version.Builder.class);
+          return null;
+        }
+      };
+    com.google.protobuf.Descriptors.FileDescriptor
+      .internalBuildGeneratedFileFrom(descriptorData,
+        new com.google.protobuf.Descriptors.FileDescriptor[] {
+        }, assigner);
+  }
+  
+  public static void internalForceInit() {}
+  
+  // @@protoc_insertion_point(outer_class_scope)
+}
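A small illustrative round trip through the generated Version message above. Only accessors visible in this file are used, except toByteArray(), which is assumed from the standard generated-message base class; the REST version string is a placeholder.

import org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version;

public class VersionRoundTripSketch {
  public static void main(String[] args) throws Exception {
    Version version = Version.newBuilder()
      .setRestVersion("0.0.2")                           // placeholder value
      .setJvmVersion(System.getProperty("java.version"))
      .setOsVersion(System.getProperty("os.name"))
      .build();
    byte[] wire = version.toByteArray();                 // serialize
    Version parsed = Version.parseFrom(wire);            // parse back
    System.out.println(parsed.getRestVersion() + " on " + parsed.getJvmVersion());
  }
}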
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/JAXBContextResolver.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/JAXBContextResolver.java
new file mode 100644
index 0000000..0c2ab3d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/JAXBContextResolver.java
@@ -0,0 +1,88 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.provider;
+
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Set;
+
+import javax.ws.rs.ext.ContextResolver;
+import javax.ws.rs.ext.Provider;
+import javax.xml.bind.JAXBContext;
+
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.ColumnSchemaModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+import org.apache.hadoop.hbase.rest.model.StorageClusterStatusModel;
+import org.apache.hadoop.hbase.rest.model.StorageClusterVersionModel;
+import org.apache.hadoop.hbase.rest.model.TableInfoModel;
+import org.apache.hadoop.hbase.rest.model.TableListModel;
+import org.apache.hadoop.hbase.rest.model.TableModel;
+import org.apache.hadoop.hbase.rest.model.TableRegionModel;
+import org.apache.hadoop.hbase.rest.model.TableSchemaModel;
+import org.apache.hadoop.hbase.rest.model.VersionModel;
+
+import com.sun.jersey.api.json.JSONConfiguration;
+import com.sun.jersey.api.json.JSONJAXBContext;
+
+/**
+ * Plumbing for hooking up Jersey's JSON entity body encoding and decoding
+ * support to JAXB. Modify how the context is created (by using e.g. a 
+ * different configuration builder) to control how JSON is processed and
+ * created.
+ */
+@Provider
+public class JAXBContextResolver implements ContextResolver<JAXBContext> {
+
+  private final JAXBContext context;
+
+  private final Set<Class<?>> types;
+
+  private final Class<?>[] cTypes = {
+    CellModel.class,
+    CellSetModel.class,
+    ColumnSchemaModel.class,
+    RowModel.class,
+    ScannerModel.class,
+    StorageClusterStatusModel.class,
+    StorageClusterVersionModel.class,
+    TableInfoModel.class,
+    TableListModel.class,
+    TableModel.class,
+    TableRegionModel.class,
+    TableSchemaModel.class,
+    VersionModel.class
+  };
+
+  @SuppressWarnings("unchecked")
+  public JAXBContextResolver() throws Exception {
+    this.types = new HashSet(Arrays.asList(cTypes));
+    this.context = new JSONJAXBContext(JSONConfiguration.natural().build(),
+      cTypes);
+  }
+
+  @Override
+  public JAXBContext getContext(Class<?> objectType) {
+    return (types.contains(objectType)) ? context : null;
+  }
+}
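A short sketch of the resolver's lookup behavior described in the class comment; in the running REST server Jersey discovers the class automatically through @Provider, so the direct calls here are for illustration only.

import javax.xml.bind.JAXBContext;
import org.apache.hadoop.hbase.rest.model.TableListModel;
import org.apache.hadoop.hbase.rest.provider.JAXBContextResolver;

public class JAXBContextResolverSketch {
  public static void main(String[] args) throws Exception {
    JAXBContextResolver resolver = new JAXBContextResolver();
    // Registered model classes all share the single JSON-configured context.
    JAXBContext ctx = resolver.getContext(TableListModel.class);
    System.out.println(ctx != null);                        // true
    // Types outside the registered set fall back to Jersey's defaults.
    System.out.println(resolver.getContext(String.class));  // null
  }
}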
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/consumer/ProtobufMessageBodyConsumer.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/consumer/ProtobufMessageBodyConsumer.java
new file mode 100644
index 0000000..6fe2dd0
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/consumer/ProtobufMessageBodyConsumer.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.provider.consumer;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.lang.annotation.Annotation;
+import java.lang.reflect.Type;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.MultivaluedMap;
+import javax.ws.rs.ext.MessageBodyReader;
+import javax.ws.rs.ext.Provider;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.rest.Constants;
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+
+/**
+ * Adapter that hooks Jersey's content processing dispatch up to handlers
+ * implementing the ProtobufMessageHandler interface, for decoding protobuf input.
+ */
+@Provider
+@Consumes(Constants.MIMETYPE_PROTOBUF)
+public class ProtobufMessageBodyConsumer 
+    implements MessageBodyReader<ProtobufMessageHandler> {
+  private static final Log LOG =
+    LogFactory.getLog(ProtobufMessageBodyConsumer.class);
+
+  @Override
+  public boolean isReadable(Class<?> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType) {
+    return ProtobufMessageHandler.class.isAssignableFrom(type);
+  }
+
+  @Override
+  public ProtobufMessageHandler readFrom(Class<ProtobufMessageHandler> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType,
+      MultivaluedMap<String, String> httpHeaders, InputStream inputStream)
+      throws IOException, WebApplicationException {
+    ProtobufMessageHandler obj = null;
+    try {
+      obj = type.newInstance();
+      ByteArrayOutputStream baos = new ByteArrayOutputStream();
+      byte[] buffer = new byte[4096];
+      int read;
+      do {
+        read = inputStream.read(buffer, 0, buffer.length);
+        if (read > 0) {
+          baos.write(buffer, 0, read);
+        }
+      } while (read > 0);
+      if (LOG.isDebugEnabled()) {
+        LOG.debug(getClass() + ": read " + baos.size() + " bytes from " +
+          inputStream);
+      }
+      obj = obj.getObjectFromMessage(baos.toByteArray());
+    } catch (InstantiationException e) {
+      throw new WebApplicationException(e);
+    } catch (IllegalAccessException e) {
+      throw new WebApplicationException(e);
+    }
+    return obj;
+  }
+}
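Hypothetical direct use of the consumer outside the Jersey runtime, which normally invokes readFrom itself. It assumes VersionModel implements ProtobufMessageHandler, as the REST model classes do; the unused reader parameters are passed as null because this implementation only looks at the type and the input stream.

import java.io.ByteArrayInputStream;
import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
import org.apache.hadoop.hbase.rest.model.VersionModel;
import org.apache.hadoop.hbase.rest.provider.consumer.ProtobufMessageBodyConsumer;

public class ProtobufConsumerSketch {
  @SuppressWarnings("unchecked")
  public static ProtobufMessageHandler decode(byte[] wire) throws Exception {
    ProtobufMessageBodyConsumer consumer = new ProtobufMessageBodyConsumer();
    return consumer.readFrom(
        (Class) VersionModel.class,      // any ProtobufMessageHandler implementor
        null, null, null, null,          // genericType/annotations/mediaType/headers are unused
        new ByteArrayInputStream(wire)); // body bytes as received on the wire
  }
}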
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java
new file mode 100644
index 0000000..092c695
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.provider.producer;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.annotation.Annotation;
+import java.lang.reflect.Type;
+
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.MultivaluedMap;
+import javax.ws.rs.ext.MessageBodyWriter;
+import javax.ws.rs.ext.Provider;
+
+import org.apache.hadoop.hbase.rest.Constants;
+
+/**
+ * An adapter between Jersey and Object.toString(). Hooks up plain text output
+ * to the Jersey content handling framework. 
+ * Jersey will first call getSize() to learn the number of bytes that will be
+ * sent, then writeTo to perform the actual I/O.
+ */
+@Provider
+@Produces(Constants.MIMETYPE_TEXT)
+public class PlainTextMessageBodyProducer 
+  implements MessageBodyWriter<Object> {
+
+  private ThreadLocal<byte[]> buffer = new ThreadLocal<byte[]>();
+
+  @Override
+  public boolean isWriteable(Class<?> arg0, Type arg1, Annotation[] arg2,
+      MediaType arg3) {
+    return true;
+  }
+
+  @Override
+  public long getSize(Object object, Class<?> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType) {
+    byte[] bytes = object.toString().getBytes();
+    buffer.set(bytes);
+    return bytes.length;
+  }
+
+  @Override
+  public void writeTo(Object object, Class<?> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType,
+      MultivaluedMap<String, Object> httpHeaders, OutputStream outStream)
+      throws IOException, WebApplicationException {
+    byte[] bytes = buffer.get();
+    outStream.write(bytes);
+    buffer.remove();
+  }
+}
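A sketch of the two-step contract the comment above describes: getSize() runs first and buffers the rendered bytes in a ThreadLocal, then writeTo() streams that same buffer, so both calls must come from the same thread (as they do inside Jersey). The nulls stand in for parameters this implementation ignores.

import java.io.ByteArrayOutputStream;
import org.apache.hadoop.hbase.rest.provider.producer.PlainTextMessageBodyProducer;

public class PlainTextProducerSketch {
  public static void main(String[] args) throws Exception {
    PlainTextMessageBodyProducer producer = new PlainTextMessageBodyProducer();
    Object entity = "hello from the REST gateway";  // any Object; toString() is used
    long size = producer.getSize(entity, String.class, null, null, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    producer.writeTo(entity, String.class, null, null, null, null, out);
    System.out.println(size == out.size());         // true: same thread, same buffer
  }
}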
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java
new file mode 100644
index 0000000..a1b4b70
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java
@@ -0,0 +1,80 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.provider.producer;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.annotation.Annotation;
+import java.lang.reflect.Type;
+
+import javax.ws.rs.Produces;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.MultivaluedMap;
+import javax.ws.rs.ext.MessageBodyWriter;
+import javax.ws.rs.ext.Provider;
+
+import org.apache.hadoop.hbase.rest.Constants;
+import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
+
+/**
+ * An adapter between Jersey and ProtobufMessageHandler implementors. Hooks up
+ * protobuf output producing methods to the Jersey content handling framework.
+ * Jersey will first call getSize() to learn the number of bytes that will be
+ * sent, then writeTo to perform the actual I/O.
+ */
+@Provider
+@Produces(Constants.MIMETYPE_PROTOBUF)
+public class ProtobufMessageBodyProducer
+  implements MessageBodyWriter<ProtobufMessageHandler> {
+
+  private ThreadLocal<byte[]> buffer = new ThreadLocal<byte[]>();
+
+  @Override
+  public boolean isWriteable(Class<?> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType) {
+    return ProtobufMessageHandler.class.isAssignableFrom(type);
+  }
+
+  @Override
+  public long getSize(ProtobufMessageHandler m, Class<?> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType) {
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    try {
+      baos.write(m.createProtobufOutput());
+    } catch (IOException e) {
+      return -1;
+    }
+    byte[] bytes = baos.toByteArray();
+    buffer.set(bytes);
+    return bytes.length;
+  }
+
+  public void writeTo(ProtobufMessageHandler m, Class<?> type, Type genericType,
+      Annotation[] annotations, MediaType mediaType,
+      MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream)
+      throws IOException, WebApplicationException {
+    byte[] bytes = buffer.get();
+    entityStream.write(bytes);
+    buffer.remove();
+  }
+}
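To show what the producer/consumer pair above plugs into, here is a toy ProtobufMessageHandler. The interface itself is not part of this hunk, so the two method signatures (createProtobufOutput and getObjectFromMessage) are inferred from how the providers call them; treat this as a sketch rather than the real model-class implementations.

import java.io.IOException;
import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
import org.apache.hadoop.hbase.rest.protobuf.generated.VersionMessage.Version;

public class EchoVersionHandler implements ProtobufMessageHandler {
  private Version version = Version.getDefaultInstance();

  // What ProtobufMessageBodyProducer.getSize() buffers before writeTo() runs.
  public byte[] createProtobufOutput() {
    return version.toByteArray();
  }

  // What ProtobufMessageBodyConsumer.readFrom() calls on the freshly
  // instantiated handler, passing the request body bytes.
  public ProtobufMessageHandler getObjectFromMessage(byte[] message)
      throws IOException {
    version = Version.parseFrom(message);
    return this;
  }
}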
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/Base64.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/Base64.java
new file mode 100644
index 0000000..f991121
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/Base64.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.transform;
+
+public class Base64 implements Transform {
+  @Override
+  public byte[] transform(byte[] data, Direction direction) {
+    switch (direction) {
+    case IN:
+      return com.sun.jersey.core.util.Base64.encode(data);
+    case OUT:
+      return com.sun.jersey.core.util.Base64.decode(data);
+    default:
+      throw new RuntimeException("illegal direction");
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/NullTransform.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/NullTransform.java
new file mode 100644
index 0000000..8492cc6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/NullTransform.java
@@ -0,0 +1,28 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.transform;
+
+public class NullTransform implements Transform {
+  @Override
+  public byte[] transform(byte[] data, Direction direction) {
+    return data;
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/Transform.java b/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/Transform.java
new file mode 100644
index 0000000..9f33bab
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/rest/transform/Transform.java
@@ -0,0 +1,44 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.transform;
+
+/**
+ * Data transformation module
+ */
+public interface Transform {
+
+  /** Transfer direction */
+  static enum Direction {
+    /** From client to server */
+    IN,
+    /** From server to client */
+    OUT
+  };
+
+  /**
+   * Transform data from one representation to another according to
+   * transfer direction.
+   * @param data input data
+   * @param direction IN or OUT
+   * @return the transformed data
+   */
+  byte[] transform(byte[] data, Direction direction);
+}
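As a sketch of the contract above, a custom Transform that is its own inverse, so both transfer directions share one code path; the implementations shipped in this patch are NullTransform and Base64.

import org.apache.hadoop.hbase.rest.transform.Transform;

public class ReverseTransform implements Transform {
  @Override
  public byte[] transform(byte[] data, Direction direction) {
    // Byte-reversal is symmetric, so IN and OUT are handled identically.
    byte[] out = new byte[data.length];
    for (int i = 0; i < data.length; i++) {
      out[i] = data[data.length - 1 - i];
    }
    return out;
  }
}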
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/security/User.java b/0.90/src/main/java/org/apache/hadoop/hbase/security/User.java
new file mode 100644
index 0000000..4b5e9e8
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/security/User.java
@@ -0,0 +1,279 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.security;
+
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.Method;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.security.PrivilegedAction;
+import java.security.PrivilegedExceptionAction;
+import org.apache.commons.logging.Log;
+
+/**
+ * Wrapper to abstract out usage of user and group information in HBase.
+ *
+ * <p>
+ * This class provides a common interface for interacting with user and group
+ * information across changing APIs in different versions of Hadoop.  It only
+ * provides access to the common set of functionality in
+ * {@link org.apache.hadoop.security.UserGroupInformation} currently needed by
+ * HBase, but can be extended as needs change.
+ * </p>
+ *
+ * <p>
+ * Note: this class does not attempt to support any of the Kerberos
+ * authentication methods exposed in security-enabled Hadoop (for the moment
+ * at least), as they're not yet needed.  Properly supporting
+ * authentication is left up to implementation in secure HBase.
+ * </p>
+ */
+public abstract class User {
+  private static boolean IS_SECURE_HADOOP = true;
+  static {
+    try {
+      UserGroupInformation.class.getMethod("isSecurityEnabled");
+    } catch (NoSuchMethodException nsme) {
+      IS_SECURE_HADOOP = false;
+    }
+  }
+  private static Log LOG = LogFactory.getLog(User.class);
+  protected UserGroupInformation ugi;
+
+  /**
+   * Returns the full user name.  For Kerberos principals this will include
+   * the host and realm portions of the principal name.
+   * @return User full name.
+   */
+  public String getName() {
+    return ugi.getUserName();
+  }
+
+  /**
+   * Returns the shortened version of the user name -- the portion that maps
+   * to an operating system user name.
+   * @return Short name
+   */
+  public abstract String getShortName();
+
+  /**
+   * Executes the given action within the context of this user.
+   */
+  public abstract <T> T runAs(PrivilegedAction<T> action);
+
+  /**
+   * Executes the given action within the context of this user.
+   */
+  public abstract <T> T runAs(PrivilegedExceptionAction<T> action)
+      throws IOException, InterruptedException;
+
+  public String toString() {
+    return ugi.toString();
+  }
+
+  /**
+   * Returns the {@code User} instance within current execution context.
+   */
+  public static User getCurrent() {
+    if (IS_SECURE_HADOOP) {
+      return new SecureHadoopUser();
+    } else {
+      return new HadoopUser();
+    }
+  }
+
+  /**
+   * Generates a new {@code User} instance specifically for use in test code.
+   * @param name the full username
+   * @param groups the group names to which the test user will belong
+   * @return a new <code>User</code> instance
+   */
+  public static User createUserForTesting(Configuration conf,
+      String name, String[] groups) {
+    if (IS_SECURE_HADOOP) {
+      return SecureHadoopUser.createUserForTesting(conf, name, groups);
+    }
+    return HadoopUser.createUserForTesting(conf, name, groups);
+  }
+
+  /* Concrete implementations */
+
+  /**
+   * Bridges {@link User} calls to invocations of the appropriate methods
+   * in {@link org.apache.hadoop.security.UserGroupInformation} in regular
+   * Hadoop 0.20 (ASF Hadoop and other versions without the backported security
+   * features).
+   */
+  private static class HadoopUser extends User {
+
+    private HadoopUser() {
+      ugi = (UserGroupInformation) callStatic("getCurrentUGI");
+    }
+
+    private HadoopUser(UserGroupInformation ugi) {
+      this.ugi = ugi;
+    }
+
+    @Override
+    public String getShortName() {
+      return ugi.getUserName();
+    }
+
+    @Override
+    public <T> T runAs(PrivilegedAction<T> action) {
+      UserGroupInformation previous =
+          (UserGroupInformation) callStatic("getCurrentUGI");
+      if (ugi != null) {
+        callStatic("setCurrentUser", new Class[]{UserGroupInformation.class},
+            new Object[]{ugi});
+      }
+      T result = action.run();
+      callStatic("setCurrentUser", new Class[]{UserGroupInformation.class},
+          new Object[]{previous});
+      return result;
+    }
+
+    @Override
+    public <T> T runAs(PrivilegedExceptionAction<T> action)
+        throws IOException, InterruptedException {
+      UserGroupInformation previous =
+          (UserGroupInformation) callStatic("getCurrentUGI");
+      if (ugi != null) {
+        callStatic("setCurrentUGI", new Class[]{UserGroupInformation.class},
+            new Object[]{ugi});
+      }
+      T result = null;
+      try {
+        result = action.run();
+      } catch (Exception e) {
+        if (e instanceof IOException) {
+          throw (IOException)e;
+        } else if (e instanceof InterruptedException) {
+          throw (InterruptedException)e;
+        } else if (e instanceof RuntimeException) {
+          throw (RuntimeException)e;
+        } else {
+          throw new UndeclaredThrowableException(e, "Unknown exception in runAs()");
+        }
+      } finally {
+        callStatic("setCurrentUGI", new Class[]{UserGroupInformation.class},
+            new Object[]{previous});
+      }
+      return result;
+    }
+
+    public static User createUserForTesting(Configuration conf,
+        String name, String[] groups) {
+      try {
+        Class c = Class.forName("org.apache.hadoop.security.UnixUserGroupInformation");
+        Constructor constructor = c.getConstructor(String.class, String[].class);
+        if (constructor == null) {
+          throw new NullPointerException();
+        }
+        UserGroupInformation newUser =
+            (UserGroupInformation)constructor.newInstance(name, groups);
+        // set user in configuration -- hack for regular hadoop
+        conf.set("hadoop.job.ugi", newUser.toString());
+        return new HadoopUser(newUser);
+      } catch (ClassNotFoundException cnfe) {
+        LOG.error("UnixUserGroupInformation not found, is this secure Hadoop?", cnfe);
+      } catch (NoSuchMethodException nsme) {
+        LOG.error("No valid constructor found for UnixUserGroupInformation!", nsme);
+      } catch (Exception e) {
+        LOG.error("Error instantiating new UnixUserGroupInformation", e);
+      }
+
+      return null;
+    }
+  }
+
+  /**
+   * Bridges {@code User} invocations to underlying calls to
+   * {@link org.apache.hadoop.security.UserGroupInformation} for secure Hadoop
+   * 0.20 and versions 0.21 and above.
+   */
+  private static class SecureHadoopUser extends User {
+    private SecureHadoopUser() {
+      ugi = (UserGroupInformation) callStatic("getCurrentUser");
+    }
+
+    private SecureHadoopUser(UserGroupInformation ugi) {
+      this.ugi = ugi;
+    }
+
+    @Override
+    public String getShortName() {
+      return (String)call(ugi, "getShortUserName", null, null);
+    }
+
+    @Override
+    public <T> T runAs(PrivilegedAction<T> action) {
+      return (T) call(ugi, "doAs", new Class[]{PrivilegedAction.class},
+          new Object[]{action});
+    }
+
+    @Override
+    public <T> T runAs(PrivilegedExceptionAction<T> action)
+        throws IOException, InterruptedException {
+      return (T) call(ugi, "doAs",
+          new Class[]{PrivilegedExceptionAction.class},
+          new Object[]{action});
+    }
+
+    public static User createUserForTesting(Configuration conf,
+        String name, String[] groups) {
+      return new SecureHadoopUser(
+          (UserGroupInformation)callStatic("createUserForTesting",
+              new Class[]{String.class, String[].class},
+              new Object[]{name, groups})
+      );
+    }
+  }
+
+  /* Reflection helper methods */
+  private static Object callStatic(String methodName) {
+    return call(null, methodName, null, null);
+  }
+
+  private static Object callStatic(String methodName, Class[] types,
+      Object[] args) {
+    return call(null, methodName, types, args);
+  }
+
+  private static Object call(UserGroupInformation instance, String methodName,
+      Class[] types, Object[] args) {
+    try {
+      Method m = UserGroupInformation.class.getMethod(methodName, types);
+      return m.invoke(instance, args);
+    } catch (NoSuchMethodException nsme) {
+      LOG.fatal("Can't find method "+methodName+" in UserGroupInformation!",
+          nsme);
+    } catch (Exception e) {
+      LOG.fatal("Error calling method "+methodName, e);
+    }
+    return null;
+  }
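+
+  /*
+   * For example, callStatic("getCurrentUser") above is the reflective
+   * equivalent of UserGroupInformation.getCurrentUser(), and
+   * call(ugi, "doAs", new Class[]{PrivilegedAction.class},
+   *     new Object[]{action}) stands in for ugi.doAs(action). Reflection is
+   * used so this one class loads against Hadoop versions where some of these
+   * methods do not exist.
+   */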
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
new file mode 100644
index 0000000..78f38d6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java
@@ -0,0 +1,883 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.thrift;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.net.InetSocketAddress;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionGroup;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.PosixParser;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.thrift.generated.AlreadyExists;
+import org.apache.hadoop.hbase.thrift.generated.BatchMutation;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.Hbase;
+import org.apache.hadoop.hbase.thrift.generated.IOError;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.Mutation;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRegionInfo;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.thrift.TException;
+import org.apache.thrift.protocol.TBinaryProtocol;
+import org.apache.thrift.protocol.TCompactProtocol;
+import org.apache.thrift.protocol.TProtocolFactory;
+import org.apache.thrift.server.THsHaServer;
+import org.apache.thrift.server.TNonblockingServer;
+import org.apache.thrift.server.TServer;
+import org.apache.thrift.server.TThreadPoolServer;
+import org.apache.thrift.transport.TFramedTransport;
+import org.apache.thrift.transport.TNonblockingServerSocket;
+import org.apache.thrift.transport.TNonblockingServerTransport;
+import org.apache.thrift.transport.TServerSocket;
+import org.apache.thrift.transport.TServerTransport;
+import org.apache.thrift.transport.TTransportFactory;
+
+/**
+ * ThriftServer - this class starts up a Thrift server which implements the
+ * Hbase API specified in the Hbase.thrift IDL file.
+ */
+public class ThriftServer {
+
+  /**
+   * The HBaseHandler is a glue object that connects Thrift RPC calls to the
+   * HBase client API primarily defined in the HBaseAdmin and HTable objects.
+   */
+  public static class HBaseHandler implements Hbase.Iface {
+    protected Configuration conf;
+    protected HBaseAdmin admin = null;
+    protected final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+    // nextScannerId and scannerMap are used to manage scanner state
+    protected int nextScannerId = 0;
+    protected HashMap<Integer, ResultScanner> scannerMap = null;
+
+    private static ThreadLocal<Map<String, HTable>> threadLocalTables = new ThreadLocal<Map<String, HTable>>() {
+      @Override
+      protected Map<String, HTable> initialValue() {
+        return new TreeMap<String, HTable>();
+      }
+    };
+
+    /**
+     * Returns all the column families of the given htable.
+     *
+     * @param table the open table to inspect
+     * @return an array of column family names, each with the trailing
+     *         family delimiter appended
+     * @throws IOException
+     */
+    byte[][] getAllColumns(HTable table) throws IOException {
+      HColumnDescriptor[] cds = table.getTableDescriptor().getColumnFamilies();
+      byte[][] columns = new byte[cds.length][];
+      for (int i = 0; i < cds.length; i++) {
+        columns[i] = Bytes.add(cds[i].getName(),
+            KeyValue.COLUMN_FAMILY_DELIM_ARRAY);
+      }
+      return columns;
+    }
+
+    /**
+     * Returns an HTable instance for the given table name, creating one and
+     * caching it per thread on first use.
+     *
+     * @param tableName
+     *          name of table
+     * @return HTable object
+     * @throws IOException
+     * @throws IOError
+     */
+    protected HTable getTable(final byte[] tableName) throws IOError,
+        IOException {
+      String table = new String(tableName);
+      Map<String, HTable> tables = threadLocalTables.get();
+      if (!tables.containsKey(table)) {
+        tables.put(table, new HTable(conf, tableName));
+      }
+      return tables.get(table);
+    }
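+
+    // Note: tables are cached per thread (see threadLocalTables above) so
+    // each handler thread reuses one HTable per table name; HTable instances
+    // are not intended to be shared across threads.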
+
+    /**
+     * Assigns a unique ID to the scanner and adds the ID-to-scanner mapping
+     * to the internal hash-map.
+     *
+     * @param scanner the ResultScanner to register
+     * @return integer scanner id
+     */
+    protected synchronized int addScanner(ResultScanner scanner) {
+      int id = nextScannerId++;
+      scannerMap.put(id, scanner);
+      return id;
+    }
+
+    /**
+     * Returns the scanner associated with the specified ID.
+     *
+     * @param id integer scanner id
+     * @return the ResultScanner, or null if the ID was invalid
+     */
+    protected synchronized ResultScanner getScanner(int id) {
+      return scannerMap.get(id);
+    }
+
+    /**
+     * Removes the scanner associated with the specified ID from the internal
+     * id->scanner hash-map.
+     *
+     * @param id integer scanner id
+     * @return the removed ResultScanner, or null if the ID was invalid
+     */
+     */
+    protected synchronized ResultScanner removeScanner(int id) {
+      return scannerMap.remove(id);
+    }
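+
+    /*
+     * Illustrative sketch of the id-based scanner protocol these helpers
+     * support ("handler" stands for this HBaseHandler seen through the
+     * generated Hbase.Iface; the batch size of 100 is arbitrary):
+     *
+     *   int id = handler.scannerOpen(table, startRow, columns);
+     *   List<TRowResult> rows;
+     *   while (!(rows = handler.scannerGetList(id, 100)).isEmpty()) {
+     *     // consume rows
+     *   }
+     *   handler.scannerClose(id);   // releases the underlying ResultScanner
+     */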
+
+    /**
+     * Constructs an HBaseHandler object.
+     * @throws IOException 
+     */
+    HBaseHandler()
+    throws IOException {
+      this(HBaseConfiguration.create());
+    }
+
+    HBaseHandler(final Configuration c)
+    throws IOException {
+      this.conf = c;
+      admin = new HBaseAdmin(conf);
+      scannerMap = new HashMap<Integer, ResultScanner>();
+    }
+
+    public void enableTable(final byte[] tableName) throws IOError {
+      try{
+        admin.enableTable(tableName);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void disableTable(final byte[] tableName) throws IOError{
+      try{
+        admin.disableTable(tableName);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public boolean isTableEnabled(final byte[] tableName) throws IOError {
+      try {
+        return HTable.isTableEnabled(this.conf, tableName);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void compact(byte[] tableNameOrRegionName) throws IOError {
+      try{
+        admin.compact(tableNameOrRegionName);
+      } catch (InterruptedException e) {
+        throw new IOError(e.getMessage());
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void majorCompact(byte[] tableNameOrRegionName) throws IOError {
+      try{
+        admin.majorCompact(tableNameOrRegionName);
+      } catch (InterruptedException e) {
+        throw new IOError(e.getMessage());
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public List<byte[]> getTableNames() throws IOError {
+      try {
+        HTableDescriptor[] tables = this.admin.listTables();
+        ArrayList<byte[]> list = new ArrayList<byte[]>(tables.length);
+        for (int i = 0; i < tables.length; i++) {
+          list.add(tables[i].getName());
+        }
+        return list;
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public List<TRegionInfo> getTableRegions(byte[] tableName)
+    throws IOError {
+      try{
+        HTable table = getTable(tableName);
+        Map<HRegionInfo, HServerAddress> regionsInfo = table.getRegionsInfo();
+        List<TRegionInfo> regions = new ArrayList<TRegionInfo>();
+
+        for (HRegionInfo regionInfo : regionsInfo.keySet()){
+          TRegionInfo region = new TRegionInfo();
+          region.startKey = regionInfo.getStartKey();
+          region.endKey = regionInfo.getEndKey();
+          region.id = regionInfo.getRegionId();
+          region.name = regionInfo.getRegionName();
+          region.version = regionInfo.getVersion();
+          regions.add(region);
+        }
+        return regions;
+      } catch (IOException e){
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    @Deprecated
+    public List<TCell> get(byte[] tableName, byte[] row, byte[] column)
+        throws IOError {
+      byte [][] famAndQf = KeyValue.parseColumn(column);
+      if(famAndQf.length == 1) {
+        return get(tableName, row, famAndQf[0], new byte[0]);
+      }
+      return get(tableName, row, famAndQf[0], famAndQf[1]);
+    }
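+
+    /*
+     * The deprecated single-"column" forms accept the old "family:qualifier"
+     * byte[] syntax; KeyValue.parseColumn() splits it, so for example
+     * get(table, row, Bytes.toBytes("info:name")) is routed to
+     * get(table, row, Bytes.toBytes("info"), Bytes.toBytes("name")), while a
+     * column with no qualifier addresses the whole family.
+     */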
+
+    public List<TCell> get(byte [] tableName, byte [] row, byte [] family,
+        byte [] qualifier) throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        Get get = new Get(row);
+        if (qualifier == null || qualifier.length == 0) {
+          get.addFamily(family);
+        } else {
+          get.addColumn(family, qualifier);
+        }
+        Result result = table.get(get);
+        return ThriftUtilities.cellFromHBase(result.sorted());
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    @Deprecated
+    public List<TCell> getVer(byte[] tableName, byte[] row,
+        byte[] column, int numVersions) throws IOError {
+      byte [][] famAndQf = KeyValue.parseColumn(column);
+      if(famAndQf.length == 1) {
+        return getVer(tableName, row, famAndQf[0], new byte[0], numVersions);
+      }
+      return getVer(tableName, row, famAndQf[0], famAndQf[1], numVersions);
+    }
+
+    public List<TCell> getVer(byte [] tableName, byte [] row, byte [] family,
+        byte [] qualifier, int numVersions) throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        Get get = new Get(row);
+        get.addColumn(family, qualifier);
+        get.setMaxVersions(numVersions);
+        Result result = table.get(get);
+        return ThriftUtilities.cellFromHBase(result.sorted());
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    @Deprecated
+    public List<TCell> getVerTs(byte[] tableName, byte[] row,
+        byte[] column, long timestamp, int numVersions) throws IOError {
+      byte [][] famAndQf = KeyValue.parseColumn(column);
+      if(famAndQf.length == 1) {
+        return getVerTs(tableName, row, famAndQf[0], new byte[0], timestamp,
+            numVersions);
+      }
+      return getVerTs(tableName, row, famAndQf[0], famAndQf[1], timestamp,
+          numVersions);
+    }
+
+    public List<TCell> getVerTs(byte [] tableName, byte [] row, byte [] family,
+        byte [] qualifier, long timestamp, int numVersions) throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        Get get = new Get(row);
+        get.addColumn(family, qualifier);
+        get.setTimeRange(Long.MIN_VALUE, timestamp);
+        get.setMaxVersions(numVersions);
+        Result result = table.get(get);
+        return ThriftUtilities.cellFromHBase(result.sorted());
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public List<TRowResult> getRow(byte[] tableName, byte[] row)
+        throws IOError {
+      return getRowWithColumnsTs(tableName, row, null,
+                                 HConstants.LATEST_TIMESTAMP);
+    }
+
+    public List<TRowResult> getRowWithColumns(byte[] tableName, byte[] row,
+        List<byte[]> columns) throws IOError {
+      return getRowWithColumnsTs(tableName, row, columns,
+                                 HConstants.LATEST_TIMESTAMP);
+    }
+
+    public List<TRowResult> getRowTs(byte[] tableName, byte[] row,
+        long timestamp) throws IOError {
+      return getRowWithColumnsTs(tableName, row, null,
+                                 timestamp);
+    }
+
+    public List<TRowResult> getRowWithColumnsTs(byte[] tableName, byte[] row,
+        List<byte[]> columns, long timestamp) throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        if (columns == null) {
+          Get get = new Get(row);
+          get.setTimeRange(Long.MIN_VALUE, timestamp);
+          Result result = table.get(get);
+          return ThriftUtilities.rowResultFromHBase(result);
+        }
+        byte[][] columnArr = columns.toArray(new byte[columns.size()][]);
+        Get get = new Get(row);
+        for(byte [] column : columnArr) {
+          byte [][] famAndQf = KeyValue.parseColumn(column);
+          if (famAndQf.length == 1) {
+              get.addFamily(famAndQf[0]);
+          } else {
+              get.addColumn(famAndQf[0], famAndQf[1]);
+          }
+        }
+        get.setTimeRange(Long.MIN_VALUE, timestamp);
+        Result result = table.get(get);
+        return ThriftUtilities.rowResultFromHBase(result);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void deleteAll(byte[] tableName, byte[] row, byte[] column)
+        throws IOError {
+      deleteAllTs(tableName, row, column, HConstants.LATEST_TIMESTAMP);
+    }
+
+    public void deleteAllTs(byte[] tableName, byte[] row, byte[] column,
+        long timestamp) throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        Delete delete  = new Delete(row);
+        byte [][] famAndQf = KeyValue.parseColumn(column);
+        if (famAndQf.length == 1) {
+          delete.deleteFamily(famAndQf[0], timestamp);
+        } else {
+          delete.deleteColumns(famAndQf[0], famAndQf[1], timestamp);
+        }
+        table.delete(delete);
+
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void deleteAllRow(byte[] tableName, byte[] row) throws IOError {
+      deleteAllRowTs(tableName, row, HConstants.LATEST_TIMESTAMP);
+    }
+
+    public void deleteAllRowTs(byte[] tableName, byte[] row, long timestamp)
+        throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        Delete delete  = new Delete(row, timestamp, null);
+        table.delete(delete);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void createTable(byte[] tableName,
+        List<ColumnDescriptor> columnFamilies) throws IOError,
+        IllegalArgument, AlreadyExists {
+      try {
+        if (admin.tableExists(tableName)) {
+          throw new AlreadyExists("table name already in use");
+        }
+        HTableDescriptor desc = new HTableDescriptor(tableName);
+        for (ColumnDescriptor col : columnFamilies) {
+          HColumnDescriptor colDesc = ThriftUtilities.colDescFromThrift(col);
+          desc.addFamily(colDesc);
+        }
+        admin.createTable(desc);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      } catch (IllegalArgumentException e) {
+        throw new IllegalArgument(e.getMessage());
+      }
+    }
+
+    public void deleteTable(byte[] tableName) throws IOError {
+      if (LOG.isDebugEnabled()) {
+        LOG.debug("deleteTable: table=" + new String(tableName));
+      }
+      try {
+        if (!admin.tableExists(tableName)) {
+          throw new IOError("table does not exist");
+        }
+        admin.deleteTable(tableName);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void mutateRow(byte[] tableName, byte[] row,
+        List<Mutation> mutations) throws IOError, IllegalArgument {
+      mutateRowTs(tableName, row, mutations, HConstants.LATEST_TIMESTAMP);
+    }
+
+    public void mutateRowTs(byte[] tableName, byte[] row,
+        List<Mutation> mutations, long timestamp) throws IOError, IllegalArgument {
+      HTable table = null;
+      try {
+        table = getTable(tableName);
+        Put put = new Put(row, timestamp, null);
+
+        Delete delete = new Delete(row);
+
+        // Sort the incoming mutations into a single Put and a single Delete
+        // for this row, according to each mutation's isDelete flag.
+        for (Mutation m : mutations) {
+          byte[][] famAndQf = KeyValue.parseColumn(m.column);
+          if (m.isDelete) {
+            if (famAndQf.length == 1) {
+              delete.deleteFamily(famAndQf[0], timestamp);
+            } else {
+              delete.deleteColumns(famAndQf[0], famAndQf[1], timestamp);
+            }
+          } else {
+            if(famAndQf.length == 1) {
+              put.add(famAndQf[0], new byte[0], m.value);
+            } else {
+              put.add(famAndQf[0], famAndQf[1], m.value);
+            }
+          }
+        }
+        if (!delete.isEmpty())
+          table.delete(delete);
+        if (!put.isEmpty())
+          table.put(put);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      } catch (IllegalArgumentException e) {
+        throw new IllegalArgument(e.getMessage());
+      }
+    }
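+
+    /*
+     * Illustrative sketch of the mutation list a client would pass to
+     * mutateRow()/mutateRowTs() above. Mutation is the generated Thrift
+     * struct with public column/value/isDelete fields; its no-arg
+     * constructor is assumed here for brevity.
+     *
+     *   List<Mutation> ms = new ArrayList<Mutation>();
+     *   Mutation put = new Mutation();
+     *   put.column = Bytes.toBytes("info:name");
+     *   put.value = Bytes.toBytes("alice");
+     *   ms.add(put);
+     *   Mutation del = new Mutation();
+     *   del.column = Bytes.toBytes("info:stale");
+     *   del.isDelete = true;
+     *   ms.add(del);
+     *   handler.mutateRow(tableName, row, ms);  // becomes one Put + one Delete
+     */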
+
+    public void mutateRows(byte[] tableName, List<BatchMutation> rowBatches)
+        throws IOError, IllegalArgument, TException {
+      mutateRowsTs(tableName, rowBatches, HConstants.LATEST_TIMESTAMP);
+    }
+
+    public void mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp)
+        throws IOError, IllegalArgument, TException {
+      List<Put> puts = new ArrayList<Put>();
+      List<Delete> deletes = new ArrayList<Delete>();
+
+      for (BatchMutation batch : rowBatches) {
+        byte[] row = batch.row;
+        List<Mutation> mutations = batch.mutations;
+        Delete delete = new Delete(row);
+        Put put = new Put(row, timestamp, null);
+        for (Mutation m : mutations) {
+          byte[][] famAndQf = KeyValue.parseColumn(m.column);
+          if (m.isDelete) {
+            // no qualifier, family only.
+            if (famAndQf.length == 1) {
+              delete.deleteFamily(famAndQf[0], timestamp);
+            } else {
+              delete.deleteColumns(famAndQf[0], famAndQf[1], timestamp);
+            }
+          } else {
+            if(famAndQf.length == 1) {
+              put.add(famAndQf[0], new byte[0], m.value);
+            } else {
+              put.add(famAndQf[0], famAndQf[1], m.value);
+            }
+          }
+        }
+        if (!delete.isEmpty())
+          deletes.add(delete);
+        if (!put.isEmpty())
+          puts.add(put);
+      }
+
+      HTable table = null;
+      try {
+        table = getTable(tableName);
+        if (!puts.isEmpty())
+          table.put(puts);
+        for (Delete del : deletes) {
+          table.delete(del);
+        }
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      } catch (IllegalArgumentException e) {
+        throw new IllegalArgument(e.getMessage());
+      }
+    }
+
+    @Deprecated
+    public long atomicIncrement(byte[] tableName, byte[] row, byte[] column,
+        long amount) throws IOError, IllegalArgument, TException {
+      byte [][] famAndQf = KeyValue.parseColumn(column);
+      if(famAndQf.length == 1) {
+        return atomicIncrement(tableName, row, famAndQf[0], new byte[0],
+            amount);
+      }
+      return atomicIncrement(tableName, row, famAndQf[0], famAndQf[1], amount);
+    }
+
+    public long atomicIncrement(byte [] tableName, byte [] row, byte [] family,
+        byte [] qualifier, long amount)
+    throws IOError, IllegalArgument, TException {
+      HTable table;
+      try {
+        table = getTable(tableName);
+        return table.incrementColumnValue(row, family, qualifier, amount);
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public void scannerClose(int id) throws IOError, IllegalArgument {
+      LOG.debug("scannerClose: id=" + id);
+      ResultScanner scanner = getScanner(id);
+      if (scanner == null) {
+        throw new IllegalArgument("scanner ID is invalid");
+      }
+      scanner.close();
+      removeScanner(id);
+    }
+
+    public List<TRowResult> scannerGetList(int id, int nbRows)
+        throws IllegalArgument, IOError {
+      LOG.debug("scannerGetList: id=" + id);
+      ResultScanner scanner = getScanner(id);
+      if (null == scanner) {
+        throw new IllegalArgument("scanner ID is invalid");
+      }
+
+      Result [] results = null;
+      try {
+        results = scanner.next(nbRows);
+        if (null == results) {
+          return new ArrayList<TRowResult>();
+        }
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+      return ThriftUtilities.rowResultFromHBase(results);
+    }
+
+    public List<TRowResult> scannerGet(int id) throws IllegalArgument, IOError {
+      return scannerGetList(id, 1);
+    }
+
+    public int scannerOpen(byte[] tableName, byte[] startRow,
+        List<byte[]> columns) throws IOError {
+      try {
+        HTable table = getTable(tableName);
+        Scan scan = new Scan(startRow);
+        if(columns != null && columns.size() != 0) {
+          for(byte [] column : columns) {
+            byte [][] famQf = KeyValue.parseColumn(column);
+            if(famQf.length == 1) {
+              scan.addFamily(famQf[0]);
+            } else {
+              scan.addColumn(famQf[0], famQf[1]);
+            }
+          }
+        }
+        return addScanner(table.getScanner(scan));
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public int scannerOpenWithStop(byte[] tableName, byte[] startRow,
+        byte[] stopRow, List<byte[]> columns) throws IOError, TException {
+      try {
+        HTable table = getTable(tableName);
+        Scan scan = new Scan(startRow, stopRow);
+        if(columns != null && columns.size() != 0) {
+          for(byte [] column : columns) {
+            byte [][] famQf = KeyValue.parseColumn(column);
+            if(famQf.length == 1) {
+              scan.addFamily(famQf[0]);
+            } else {
+              scan.addColumn(famQf[0], famQf[1]);
+            }
+          }
+        }
+        return addScanner(table.getScanner(scan));
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    @Override
+    public int scannerOpenWithPrefix(byte[] tableName, byte[] startAndPrefix, List<byte[]> columns) throws IOError, TException {
+      try {
+        HTable table = getTable(tableName);
+        Scan scan = new Scan(startAndPrefix);
+        Filter f = new WhileMatchFilter(
+            new PrefixFilter(startAndPrefix));
+        scan.setFilter(f);
+        if(columns != null && columns.size() != 0) {
+          for(byte [] column : columns) {
+            byte [][] famQf = KeyValue.parseColumn(column);
+            if(famQf.length == 1) {
+              scan.addFamily(famQf[0]);
+            } else {
+              scan.addColumn(famQf[0], famQf[1]);
+            }
+          }
+        }
+        return addScanner(table.getScanner(scan));
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public int scannerOpenTs(byte[] tableName, byte[] startRow,
+        List<byte[]> columns, long timestamp) throws IOError, TException {
+      try {
+        HTable table = getTable(tableName);
+        Scan scan = new Scan(startRow);
+        scan.setTimeRange(Long.MIN_VALUE, timestamp);
+        if(columns != null && columns.size() != 0) {
+          for(byte [] column : columns) {
+            byte [][] famQf = KeyValue.parseColumn(column);
+            if(famQf.length == 1) {
+              scan.addFamily(famQf[0]);
+            } else {
+              scan.addColumn(famQf[0], famQf[1]);
+            }
+          }
+        }
+        return addScanner(table.getScanner(scan));
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public int scannerOpenWithStopTs(byte[] tableName, byte[] startRow,
+        byte[] stopRow, List<byte[]> columns, long timestamp)
+        throws IOError, TException {
+      try {
+        HTable table = getTable(tableName);
+        Scan scan = new Scan(startRow, stopRow);
+        scan.setTimeRange(Long.MIN_VALUE, timestamp);
+        if(columns != null && columns.size() != 0) {
+          for(byte [] column : columns) {
+            byte [][] famQf = KeyValue.parseColumn(column);
+            if(famQf.length == 1) {
+              scan.addFamily(famQf[0]);
+            } else {
+              scan.addColumn(famQf[0], famQf[1]);
+            }
+          }
+        }
+        return addScanner(table.getScanner(scan));
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+
+    public Map<byte[], ColumnDescriptor> getColumnDescriptors(
+        byte[] tableName) throws IOError, TException {
+      try {
+        TreeMap<byte[], ColumnDescriptor> columns =
+          new TreeMap<byte[], ColumnDescriptor>(Bytes.BYTES_COMPARATOR);
+
+        HTable table = getTable(tableName);
+        HTableDescriptor desc = table.getTableDescriptor();
+
+        for (HColumnDescriptor e : desc.getFamilies()) {
+          ColumnDescriptor col = ThriftUtilities.colDescFromHbase(e);
+          columns.put(col.name, col);
+        }
+        return columns;
+      } catch (IOException e) {
+        throw new IOError(e.getMessage());
+      }
+    }
+  }
+
+  //
+  // Main program and support routines
+  //
+
+  private static void printUsageAndExit(Options options, int exitCode) {
+    HelpFormatter formatter = new HelpFormatter();
+    formatter.printHelp("Thrift", null, options,
+            "To start the Thrift server run 'bin/hbase-daemon.sh start thrift'\n" +
+            "To shutdown the thrift server run 'bin/hbase-daemon.sh stop thrift' or" +
+            " send a kill signal to the thrift server pid",
+            true);
+    System.exit(exitCode);
+  }
+
+  private static final String DEFAULT_LISTEN_PORT = "9090";
+
+  /*
+   * Start up the Thrift server.
+   * @param args
+   */
+  static private void doMain(final String[] args) throws Exception {
+    Log LOG = LogFactory.getLog("ThriftServer");
+
+    Options options = new Options();
+    options.addOption("b", "bind", true, "Address to bind the Thrift server to. Not supported by the Nonblocking and HsHa server [default: 0.0.0.0]");
+    options.addOption("p", "port", true, "Port to bind to [default: 9090]");
+    options.addOption("f", "framed", false, "Use framed transport");
+    options.addOption("c", "compact", false, "Use the compact protocol");
+    options.addOption("h", "help", false, "Print help information");
+
+    OptionGroup servers = new OptionGroup();
+    servers.addOption(new Option("nonblocking", false, "Use the TNonblockingServer. This implies the framed transport."));
+    servers.addOption(new Option("hsha", false, "Use the THsHaServer. This implies the framed transport."));
+    servers.addOption(new Option("threadpool", false, "Use the TThreadPoolServer. This is the default."));
+    options.addOptionGroup(servers);
+
+    CommandLineParser parser = new PosixParser();
+    CommandLine cmd = parser.parse(options, args);
+
+    /*
+     * This is complicated in order to please both bin/hbase and
+     * bin/hbase-daemon: hbase-daemon passes "start" and "stop" arguments,
+     * while bin/hbase should print the help if no argument is provided.
+     */
+    List<String> commandLine = Arrays.asList(args);
+    boolean stop = commandLine.contains("stop");
+    boolean start = commandLine.contains("start");
+    if (cmd.hasOption("help") || !start || stop) {
+      printUsageAndExit(options, 1);
+    }
+
+    // Get port to bind to
+    int listenPort = 0;
+    try {
+      listenPort = Integer.parseInt(cmd.getOptionValue("port", DEFAULT_LISTEN_PORT));
+    } catch (NumberFormatException e) {
+      LOG.error("Could not parse the value provided for the port option", e);
+      printUsageAndExit(options, -1);
+    }
+
+    // Construct correct ProtocolFactory
+    TProtocolFactory protocolFactory;
+    if (cmd.hasOption("compact")) {
+      LOG.debug("Using compact protocol");
+      protocolFactory = new TCompactProtocol.Factory();
+    } else {
+      LOG.debug("Using binary protocol");
+      protocolFactory = new TBinaryProtocol.Factory();
+    }
+
+    HBaseHandler handler = new HBaseHandler();
+    Hbase.Processor processor = new Hbase.Processor(handler);
+
+    TServer server;
+    if (cmd.hasOption("nonblocking") || cmd.hasOption("hsha")) {
+      if (cmd.hasOption("bind")) {
+        LOG.error("The Nonblocking and HsHa servers don't support IP address binding at the moment." +
+                " See https://issues.apache.org/jira/browse/HBASE-2155 for details.");
+        printUsageAndExit(options, -1);
+      }
+
+      TNonblockingServerTransport serverTransport = new TNonblockingServerSocket(listenPort);
+      TFramedTransport.Factory transportFactory = new TFramedTransport.Factory();
+
+      if (cmd.hasOption("nonblocking")) {
+        LOG.info("starting HBase Nonblocking Thrift server on " + Integer.toString(listenPort));
+        server = new TNonblockingServer(processor, serverTransport, transportFactory, protocolFactory);
+      } else {
+        LOG.info("starting HBase HsHA Thrift server on " + Integer.toString(listenPort));
+        server = new THsHaServer(processor, serverTransport, transportFactory, protocolFactory);
+      }
+    } else {
+      // Get IP address to bind to
+      InetAddress listenAddress = null;
+      if (cmd.hasOption("bind")) {
+        try {
+          listenAddress = InetAddress.getByName(cmd.getOptionValue("bind"));
+        } catch (UnknownHostException e) {
+          LOG.error("Could not bind to provided ip address", e);
+          printUsageAndExit(options, -1);
+        }
+      } else {
+        listenAddress = InetAddress.getLocalHost();
+      }
+      TServerTransport serverTransport = new TServerSocket(new InetSocketAddress(listenAddress, listenPort));
+
+      // Construct correct TransportFactory
+      TTransportFactory transportFactory;
+      if (cmd.hasOption("framed")) {
+        transportFactory = new TFramedTransport.Factory();
+        LOG.debug("Using framed transport");
+      } else {
+        transportFactory = new TTransportFactory();
+      }
+
+      LOG.info("starting HBase ThreadPool Thrift server on " + listenAddress + ":" + Integer.toString(listenPort));
+      server = new TThreadPoolServer(processor, serverTransport, transportFactory, protocolFactory);
+    }
+
+    server.serve();
+  }
+
+  /**
+   * @param args
+   * @throws Exception
+   */
+  public static void main(String [] args) throws Exception {
+    doMain(args);
+  }
+}
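+
+/*
+ * Example invocations (illustrative; the wrapper scripts come from the HBase
+ * distribution and the flags map to the Options defined in doMain() above):
+ *
+ *   bin/hbase-daemon.sh start thrift             # threadpool server on :9090
+ *   bin/hbase thrift start -p 9091 --framed      # custom port, framed transport
+ *   bin/hbase thrift start -nonblocking -c       # nonblocking server, compact protocol
+ *   bin/hbase-daemon.sh stop thrift
+ */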
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java
new file mode 100644
index 0000000..f319751
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java
@@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.thrift;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.TreeMap;
+
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.regionserver.StoreFile;
+import org.apache.hadoop.hbase.regionserver.StoreFile.BloomType;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class ThriftUtilities {
+
+  /**
+   * This utility method creates a new Hbase HColumnDescriptor object based on a
+   * Thrift ColumnDescriptor "struct".
+   *
+   * @param in
+   *          Thrift ColumnDescriptor object
+   * @return HColumnDescriptor
+   * @throws IllegalArgument
+   */
+  static public HColumnDescriptor colDescFromThrift(ColumnDescriptor in)
+      throws IllegalArgument {
+    Compression.Algorithm comp =
+      Compression.getCompressionAlgorithmByName(in.compression.toLowerCase());
+    StoreFile.BloomType bt =
+      BloomType.valueOf(in.bloomFilterType);
+
+    if (in.name == null || in.name.length <= 0) {
+      throw new IllegalArgument("column name is empty");
+    }
+    byte [] parsedName = KeyValue.parseColumn(in.name)[0];
+    HColumnDescriptor col = new HColumnDescriptor(parsedName,
+        in.maxVersions, comp.getName(), in.inMemory, in.blockCacheEnabled,
+        in.timeToLive, bt.toString());
+    return col;
+  }
+
+  /**
+   * This utility method creates a new Thrift ColumnDescriptor "struct" based on
+   * an Hbase HColumnDescriptor object.
+   *
+   * @param in
+   *          Hbase HColumnDescriptor object
+   * @return Thrift ColumnDescriptor
+   */
+  static public ColumnDescriptor colDescFromHbase(HColumnDescriptor in) {
+    ColumnDescriptor col = new ColumnDescriptor();
+    col.name = Bytes.add(in.getName(), KeyValue.COLUMN_FAMILY_DELIM_ARRAY);
+    col.maxVersions = in.getMaxVersions();
+    col.compression = in.getCompression().toString();
+    col.inMemory = in.isInMemory();
+    col.blockCacheEnabled = in.isBlockCacheEnabled();
+    col.bloomFilterType = in.getBloomFilterType().toString();
+    return col;
+  }
+
+  /**
+   * This utility method creates a list of Thrift TCell "struct" based on
+   * an Hbase Cell object. The empty list is returned if the input is null.
+   *
+   * @param in
+   *          Hbase Cell object
+   * @return Thrift TCell array
+   */
+  static public List<TCell> cellFromHBase(KeyValue in) {
+    List<TCell> list = new ArrayList<TCell>(1);
+    if (in != null) {
+      list.add(new TCell(in.getValue(), in.getTimestamp()));
+    }
+    return list;
+  }
+
+  /**
+   * This utility method creates a list of Thrift TCell "struct" based on
+   * an Hbase Cell array. The empty list is returned if the input is null.
+   * @param in Hbase Cell array
+   * @return Thrift TCell array
+   */
+  static public List<TCell> cellFromHBase(KeyValue[] in) {
+    List<TCell> list = null;
+    if (in != null) {
+      list = new ArrayList<TCell>(in.length);
+      for (int i = 0; i < in.length; i++) {
+        list.add(new TCell(in[i].getValue(), in[i].getTimestamp()));
+      }
+    } else {
+      list = new ArrayList<TCell>(0);
+    }
+    return list;
+  }
+
+  /**
+   * This utility method creates a list of Thrift TRowResult "struct" based on
+   * an array of Hbase Result objects. Null or empty Results in the input
+   * array are skipped.
+   *
+   * @param in
+   *          array of Hbase Result objects
+   * @return Thrift TRowResult array
+   */
+  static public List<TRowResult> rowResultFromHBase(Result[] in) {
+    List<TRowResult> results = new ArrayList<TRowResult>();
+    for (Result result_ : in) {
+      if (result_ == null || result_.isEmpty()) {
+        continue;
+      }
+      TRowResult result = new TRowResult();
+      result.row = result_.getRow();
+      result.columns = new TreeMap<byte[], TCell>(Bytes.BYTES_COMPARATOR);
+      for (KeyValue kv : result_.sorted()) {
+        result.columns.put(KeyValue.makeColumn(kv.getFamily(),
+            kv.getQualifier()), new TCell(kv.getValue(), kv.getTimestamp()));
+      }
+      results.add(result);
+    }
+    return results;
+  }
+
+  static public List<TRowResult> rowResultFromHBase(Result in) {
+    Result [] result = { in };
+    return rowResultFromHBase(result);
+  }
+}
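+
+/*
+ * Illustrative round trip (assumes an open HTable named "table" and a row
+ * key "row"): the helpers above convert between HBase client objects and the
+ * generated Thrift structs.
+ *
+ *   Result r = table.get(new Get(row));
+ *   List<TRowResult> thriftRows = ThriftUtilities.rowResultFromHBase(r);
+ *
+ *   HColumnDescriptor hcd = new HColumnDescriptor("info");
+ *   ColumnDescriptor cd = ThriftUtilities.colDescFromHbase(hcd);
+ *   HColumnDescriptor back = ThriftUtilities.colDescFromThrift(cd);
+ */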
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
new file mode 100644
index 0000000..2bd4f77
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/AlreadyExists.java
@@ -0,0 +1,321 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.FieldMetaData;
+import org.apache.thrift.meta_data.FieldValueMetaData;
+import org.apache.thrift.protocol.*;
+
+import java.util.*;
+
+/**
+ * An AlreadyExists exception signals that a table with the specified
+ * name already exists
+ */
+public class AlreadyExists extends Exception implements TBase<AlreadyExists._Fields>, java.io.Serializable, Cloneable, Comparable<AlreadyExists> {
+  private static final TStruct STRUCT_DESC = new TStruct("AlreadyExists");
+
+  private static final TField MESSAGE_FIELD_DESC = new TField("message", TType.STRING, (short)1);
+
+  public String message;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    MESSAGE((short)1, "message");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.MESSAGE, new FieldMetaData("message", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(AlreadyExists.class, metaDataMap);
+  }
+
+  public AlreadyExists() {
+  }
+
+  public AlreadyExists(
+    String message)
+  {
+    this();
+    this.message = message;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public AlreadyExists(AlreadyExists other) {
+    if (other.isSetMessage()) {
+      this.message = other.message;
+    }
+  }
+
+  public AlreadyExists deepCopy() {
+    return new AlreadyExists(this);
+  }
+
+  @Deprecated
+  public AlreadyExists clone() {
+    return new AlreadyExists(this);
+  }
+
+  public String getMessage() {
+    return this.message;
+  }
+
+  public AlreadyExists setMessage(String message) {
+    this.message = message;
+    return this;
+  }
+
+  public void unsetMessage() {
+    this.message = null;
+  }
+
+  /** Returns true if field message is set (has been assigned a value) and false otherwise */
+  public boolean isSetMessage() {
+    return this.message != null;
+  }
+
+  public void setMessageIsSet(boolean value) {
+    if (!value) {
+      this.message = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case MESSAGE:
+      if (value == null) {
+        unsetMessage();
+      } else {
+        setMessage((String)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case MESSAGE:
+      return getMessage();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case MESSAGE:
+      return isSetMessage();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof AlreadyExists)
+      return this.equals((AlreadyExists)that);
+    return false;
+  }
+
+  public boolean equals(AlreadyExists that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_message = true && this.isSetMessage();
+    boolean that_present_message = true && that.isSetMessage();
+    if (this_present_message || that_present_message) {
+      if (!(this_present_message && that_present_message))
+        return false;
+      if (!this.message.equals(that.message))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_message = true && (isSetMessage());
+    builder.append(present_message);
+    if (present_message)
+      builder.append(message);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(AlreadyExists other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    AlreadyExists typedOther = (AlreadyExists)other;
+
+    lastComparison = Boolean.valueOf(isSetMessage()).compareTo(typedOther.isSetMessage());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(message, typedOther.message);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case MESSAGE:
+            if (field.type == TType.STRING) {
+              this.message = iprot.readString();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.message != null) {
+      oprot.writeFieldBegin(MESSAGE_FIELD_DESC);
+      oprot.writeString(this.message);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("AlreadyExists(");
+    boolean first = true;
+
+    sb.append("message:");
+    if (this.message == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.message);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
new file mode 100644
index 0000000..5368fa3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/BatchMutation.java
@@ -0,0 +1,458 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * A BatchMutation object is used to apply a number of Mutations to a single row.
+ */
+public class BatchMutation implements TBase<BatchMutation._Fields>, java.io.Serializable, Cloneable, Comparable<BatchMutation> {
+  private static final TStruct STRUCT_DESC = new TStruct("BatchMutation");
+
+  private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)1);
+  private static final TField MUTATIONS_FIELD_DESC = new TField("mutations", TType.LIST, (short)2);
+
+  public byte[] row;
+  public List<Mutation> mutations;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    ROW((short)1, "row"),
+    MUTATIONS((short)2, "mutations");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.MUTATIONS, new FieldMetaData("mutations", TFieldRequirementType.DEFAULT,
+        new ListMetaData(TType.LIST,
+            new StructMetaData(TType.STRUCT, Mutation.class))));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(BatchMutation.class, metaDataMap);
+  }
+
+  public BatchMutation() {
+  }
+
+  public BatchMutation(
+    byte[] row,
+    List<Mutation> mutations)
+  {
+    this();
+    this.row = row;
+    this.mutations = mutations;
+  }
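+
+  /*
+   * Illustrative usage (Mutation's no-arg constructor and public fields are
+   * assumed, matching the other generated structs): one BatchMutation
+   * bundles all Mutations destined for a single row, and a
+   * List<BatchMutation> is what Hbase.Iface.mutateRows() consumes.
+   *
+   *   Mutation m = new Mutation();
+   *   m.column = "info:name".getBytes();
+   *   m.value = "alice".getBytes();
+   *   BatchMutation bm = new BatchMutation(rowKey,
+   *       Collections.singletonList(m));
+   */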
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public BatchMutation(BatchMutation other) {
+    if (other.isSetRow()) {
+      this.row = other.row;
+    }
+    if (other.isSetMutations()) {
+      List<Mutation> __this__mutations = new ArrayList<Mutation>();
+      for (Mutation other_element : other.mutations) {
+        __this__mutations.add(new Mutation(other_element));
+      }
+      this.mutations = __this__mutations;
+    }
+  }
+
+  public BatchMutation deepCopy() {
+    return new BatchMutation(this);
+  }
+
+  @Deprecated
+  public BatchMutation clone() {
+    return new BatchMutation(this);
+  }
+
+  public byte[] getRow() {
+    return this.row;
+  }
+
+  public BatchMutation setRow(byte[] row) {
+    this.row = row;
+    return this;
+  }
+
+  public void unsetRow() {
+    this.row = null;
+  }
+
+  /** Returns true if field row is set (has been assigned a value) and false otherwise */
+  public boolean isSetRow() {
+    return this.row != null;
+  }
+
+  public void setRowIsSet(boolean value) {
+    if (!value) {
+      this.row = null;
+    }
+  }
+
+  public int getMutationsSize() {
+    return (this.mutations == null) ? 0 : this.mutations.size();
+  }
+
+  public java.util.Iterator<Mutation> getMutationsIterator() {
+    return (this.mutations == null) ? null : this.mutations.iterator();
+  }
+
+  public void addToMutations(Mutation elem) {
+    if (this.mutations == null) {
+      this.mutations = new ArrayList<Mutation>();
+    }
+    this.mutations.add(elem);
+  }
+
+  public List<Mutation> getMutations() {
+    return this.mutations;
+  }
+
+  public BatchMutation setMutations(List<Mutation> mutations) {
+    this.mutations = mutations;
+    return this;
+  }
+
+  public void unsetMutations() {
+    this.mutations = null;
+  }
+
+  /** Returns true if field mutations is set (has been assigned a value) and false otherwise */
+  public boolean isSetMutations() {
+    return this.mutations != null;
+  }
+
+  public void setMutationsIsSet(boolean value) {
+    if (!value) {
+      this.mutations = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case ROW:
+      if (value == null) {
+        unsetRow();
+      } else {
+        setRow((byte[])value);
+      }
+      break;
+
+    case MUTATIONS:
+      if (value == null) {
+        unsetMutations();
+      } else {
+        setMutations((List<Mutation>)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case ROW:
+      return getRow();
+
+    case MUTATIONS:
+      return getMutations();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case ROW:
+      return isSetRow();
+    case MUTATIONS:
+      return isSetMutations();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof BatchMutation)
+      return this.equals((BatchMutation)that);
+    return false;
+  }
+
+  public boolean equals(BatchMutation that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_row = true && this.isSetRow();
+    boolean that_present_row = true && that.isSetRow();
+    if (this_present_row || that_present_row) {
+      if (!(this_present_row && that_present_row))
+        return false;
+      if (!java.util.Arrays.equals(this.row, that.row))
+        return false;
+    }
+
+    boolean this_present_mutations = true && this.isSetMutations();
+    boolean that_present_mutations = true && that.isSetMutations();
+    if (this_present_mutations || that_present_mutations) {
+      if (!(this_present_mutations && that_present_mutations))
+        return false;
+      if (!this.mutations.equals(that.mutations))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_row = true && (isSetRow());
+    builder.append(present_row);
+    if (present_row)
+      builder.append(row);
+
+    boolean present_mutations = true && (isSetMutations());
+    builder.append(present_mutations);
+    if (present_mutations)
+      builder.append(mutations);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(BatchMutation other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    BatchMutation typedOther = (BatchMutation)other;
+
+    lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetMutations()).compareTo(typedOther.isSetMutations());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(mutations, typedOther.mutations);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case ROW:
+            if (field.type == TType.STRING) {
+              this.row = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case MUTATIONS:
+            if (field.type == TType.LIST) {
+              {
+                TList _list0 = iprot.readListBegin();
+                this.mutations = new ArrayList<Mutation>(_list0.size);
+                for (int _i1 = 0; _i1 < _list0.size; ++_i1)
+                {
+                  Mutation _elem2;
+                  _elem2 = new Mutation();
+                  _elem2.read(iprot);
+                  this.mutations.add(_elem2);
+                }
+                iprot.readListEnd();
+              }
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.row != null) {
+      oprot.writeFieldBegin(ROW_FIELD_DESC);
+      oprot.writeBinary(this.row);
+      oprot.writeFieldEnd();
+    }
+    if (this.mutations != null) {
+      oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
+      {
+        oprot.writeListBegin(new TList(TType.STRUCT, this.mutations.size()));
+        for (Mutation _iter3 : this.mutations)
+        {
+          _iter3.write(oprot);
+        }
+        oprot.writeListEnd();
+      }
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("BatchMutation(");
+    boolean first = true;
+
+    sb.append("row:");
+    if (this.row == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.row);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("mutations:");
+    if (this.mutations == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.mutations);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
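+
+/*
+ * Usage sketch (editor's note, not part of the generated output): a
+ * BatchMutation pairs a row key with the list of mutations to apply to that
+ * row, and a list of BatchMutations is what Hbase.Iface#mutateRows expects.
+ * The Mutation setters used below (setColumn, setValue) live in the generated
+ * Mutation class and are assumed here rather than shown in this file.
+ *
+ *   BatchMutation batch = new BatchMutation()
+ *       .setRow("row1".getBytes());
+ *   batch.addToMutations(new Mutation()
+ *       .setColumn("info:name".getBytes())
+ *       .setValue("value".getBytes()));
+ *   // client.mutateRows(tableName, Collections.singletonList(batch));
+ */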
+
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
new file mode 100644
index 0000000..a883a59
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/ColumnDescriptor.java
@@ -0,0 +1,1028 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An HColumnDescriptor contains information about a column family
+ * such as the number of versions, compression settings, etc. It is
+ * used as input when creating a table or adding a column.
+ */
+public class ColumnDescriptor implements TBase<ColumnDescriptor._Fields>, java.io.Serializable, Cloneable, Comparable<ColumnDescriptor> {
+  private static final TStruct STRUCT_DESC = new TStruct("ColumnDescriptor");
+
+  private static final TField NAME_FIELD_DESC = new TField("name", TType.STRING, (short)1);
+  private static final TField MAX_VERSIONS_FIELD_DESC = new TField("maxVersions", TType.I32, (short)2);
+  private static final TField COMPRESSION_FIELD_DESC = new TField("compression", TType.STRING, (short)3);
+  private static final TField IN_MEMORY_FIELD_DESC = new TField("inMemory", TType.BOOL, (short)4);
+  private static final TField BLOOM_FILTER_TYPE_FIELD_DESC = new TField("bloomFilterType", TType.STRING, (short)5);
+  private static final TField BLOOM_FILTER_VECTOR_SIZE_FIELD_DESC = new TField("bloomFilterVectorSize", TType.I32, (short)6);
+  private static final TField BLOOM_FILTER_NB_HASHES_FIELD_DESC = new TField("bloomFilterNbHashes", TType.I32, (short)7);
+  private static final TField BLOCK_CACHE_ENABLED_FIELD_DESC = new TField("blockCacheEnabled", TType.BOOL, (short)8);
+  private static final TField TIME_TO_LIVE_FIELD_DESC = new TField("timeToLive", TType.I32, (short)9);
+
+  public byte[] name;
+  public int maxVersions;
+  public String compression;
+  public boolean inMemory;
+  public String bloomFilterType;
+  public int bloomFilterVectorSize;
+  public int bloomFilterNbHashes;
+  public boolean blockCacheEnabled;
+  public int timeToLive;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    NAME((short)1, "name"),
+    MAX_VERSIONS((short)2, "maxVersions"),
+    COMPRESSION((short)3, "compression"),
+    IN_MEMORY((short)4, "inMemory"),
+    BLOOM_FILTER_TYPE((short)5, "bloomFilterType"),
+    BLOOM_FILTER_VECTOR_SIZE((short)6, "bloomFilterVectorSize"),
+    BLOOM_FILTER_NB_HASHES((short)7, "bloomFilterNbHashes"),
+    BLOCK_CACHE_ENABLED((short)8, "blockCacheEnabled"),
+    TIME_TO_LIVE((short)9, "timeToLive");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  private static final int __MAXVERSIONS_ISSET_ID = 0;
+  private static final int __INMEMORY_ISSET_ID = 1;
+  private static final int __BLOOMFILTERVECTORSIZE_ISSET_ID = 2;
+  private static final int __BLOOMFILTERNBHASHES_ISSET_ID = 3;
+  private static final int __BLOCKCACHEENABLED_ISSET_ID = 4;
+  private static final int __TIMETOLIVE_ISSET_ID = 5;
+  private BitSet __isset_bit_vector = new BitSet(6);
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.NAME, new FieldMetaData("name", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.MAX_VERSIONS, new FieldMetaData("maxVersions", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.I32)));
+    put(_Fields.COMPRESSION, new FieldMetaData("compression", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.IN_MEMORY, new FieldMetaData("inMemory", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.BOOL)));
+    put(_Fields.BLOOM_FILTER_TYPE, new FieldMetaData("bloomFilterType", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.BLOOM_FILTER_VECTOR_SIZE, new FieldMetaData("bloomFilterVectorSize", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.I32)));
+    put(_Fields.BLOOM_FILTER_NB_HASHES, new FieldMetaData("bloomFilterNbHashes", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.I32)));
+    put(_Fields.BLOCK_CACHE_ENABLED, new FieldMetaData("blockCacheEnabled", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.BOOL)));
+    put(_Fields.TIME_TO_LIVE, new FieldMetaData("timeToLive", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.I32)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(ColumnDescriptor.class, metaDataMap);
+  }
+
+  public ColumnDescriptor() {
+    this.maxVersions = 3;
+
+    this.compression = "NONE";
+
+    this.inMemory = false;
+
+    this.bloomFilterType = "NONE";
+
+    this.bloomFilterVectorSize = 0;
+
+    this.bloomFilterNbHashes = 0;
+
+    this.blockCacheEnabled = false;
+
+    this.timeToLive = -1;
+
+  }
+
+  public ColumnDescriptor(
+    byte[] name,
+    int maxVersions,
+    String compression,
+    boolean inMemory,
+    String bloomFilterType,
+    int bloomFilterVectorSize,
+    int bloomFilterNbHashes,
+    boolean blockCacheEnabled,
+    int timeToLive)
+  {
+    this();
+    this.name = name;
+    this.maxVersions = maxVersions;
+    setMaxVersionsIsSet(true);
+    this.compression = compression;
+    this.inMemory = inMemory;
+    setInMemoryIsSet(true);
+    this.bloomFilterType = bloomFilterType;
+    this.bloomFilterVectorSize = bloomFilterVectorSize;
+    setBloomFilterVectorSizeIsSet(true);
+    this.bloomFilterNbHashes = bloomFilterNbHashes;
+    setBloomFilterNbHashesIsSet(true);
+    this.blockCacheEnabled = blockCacheEnabled;
+    setBlockCacheEnabledIsSet(true);
+    this.timeToLive = timeToLive;
+    setTimeToLiveIsSet(true);
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public ColumnDescriptor(ColumnDescriptor other) {
+    __isset_bit_vector.clear();
+    __isset_bit_vector.or(other.__isset_bit_vector);
+    if (other.isSetName()) {
+      this.name = other.name;
+    }
+    this.maxVersions = other.maxVersions;
+    if (other.isSetCompression()) {
+      this.compression = other.compression;
+    }
+    this.inMemory = other.inMemory;
+    if (other.isSetBloomFilterType()) {
+      this.bloomFilterType = other.bloomFilterType;
+    }
+    this.bloomFilterVectorSize = other.bloomFilterVectorSize;
+    this.bloomFilterNbHashes = other.bloomFilterNbHashes;
+    this.blockCacheEnabled = other.blockCacheEnabled;
+    this.timeToLive = other.timeToLive;
+  }
+
+  public ColumnDescriptor deepCopy() {
+    return new ColumnDescriptor(this);
+  }
+
+  @Deprecated
+  public ColumnDescriptor clone() {
+    return new ColumnDescriptor(this);
+  }
+
+  public byte[] getName() {
+    return this.name;
+  }
+
+  public ColumnDescriptor setName(byte[] name) {
+    this.name = name;
+    return this;
+  }
+
+  public void unsetName() {
+    this.name = null;
+  }
+
+  /** Returns true if field name is set (has been assigned a value) and false otherwise */
+  public boolean isSetName() {
+    return this.name != null;
+  }
+
+  public void setNameIsSet(boolean value) {
+    if (!value) {
+      this.name = null;
+    }
+  }
+
+  public int getMaxVersions() {
+    return this.maxVersions;
+  }
+
+  public ColumnDescriptor setMaxVersions(int maxVersions) {
+    this.maxVersions = maxVersions;
+    setMaxVersionsIsSet(true);
+    return this;
+  }
+
+  public void unsetMaxVersions() {
+    __isset_bit_vector.clear(__MAXVERSIONS_ISSET_ID);
+  }
+
+  /** Returns true if field maxVersions is set (has been assigned a value) and false otherwise */
+  public boolean isSetMaxVersions() {
+    return __isset_bit_vector.get(__MAXVERSIONS_ISSET_ID);
+  }
+
+  public void setMaxVersionsIsSet(boolean value) {
+    __isset_bit_vector.set(__MAXVERSIONS_ISSET_ID, value);
+  }
+
+  public String getCompression() {
+    return this.compression;
+  }
+
+  public ColumnDescriptor setCompression(String compression) {
+    this.compression = compression;
+    return this;
+  }
+
+  public void unsetCompression() {
+    this.compression = null;
+  }
+
+  /** Returns true if field compression is set (has been assigned a value) and false otherwise */
+  public boolean isSetCompression() {
+    return this.compression != null;
+  }
+
+  public void setCompressionIsSet(boolean value) {
+    if (!value) {
+      this.compression = null;
+    }
+  }
+
+  public boolean isInMemory() {
+    return this.inMemory;
+  }
+
+  public ColumnDescriptor setInMemory(boolean inMemory) {
+    this.inMemory = inMemory;
+    setInMemoryIsSet(true);
+    return this;
+  }
+
+  public void unsetInMemory() {
+    __isset_bit_vector.clear(__INMEMORY_ISSET_ID);
+  }
+
+  /** Returns true if field inMemory is set (has been assigned a value) and false otherwise */
+  public boolean isSetInMemory() {
+    return __isset_bit_vector.get(__INMEMORY_ISSET_ID);
+  }
+
+  public void setInMemoryIsSet(boolean value) {
+    __isset_bit_vector.set(__INMEMORY_ISSET_ID, value);
+  }
+
+  public String getBloomFilterType() {
+    return this.bloomFilterType;
+  }
+
+  public ColumnDescriptor setBloomFilterType(String bloomFilterType) {
+    this.bloomFilterType = bloomFilterType;
+    return this;
+  }
+
+  public void unsetBloomFilterType() {
+    this.bloomFilterType = null;
+  }
+
+  /** Returns true if field bloomFilterType is set (has been assigned a value) and false otherwise */
+  public boolean isSetBloomFilterType() {
+    return this.bloomFilterType != null;
+  }
+
+  public void setBloomFilterTypeIsSet(boolean value) {
+    if (!value) {
+      this.bloomFilterType = null;
+    }
+  }
+
+  public int getBloomFilterVectorSize() {
+    return this.bloomFilterVectorSize;
+  }
+
+  public ColumnDescriptor setBloomFilterVectorSize(int bloomFilterVectorSize) {
+    this.bloomFilterVectorSize = bloomFilterVectorSize;
+    setBloomFilterVectorSizeIsSet(true);
+    return this;
+  }
+
+  public void unsetBloomFilterVectorSize() {
+    __isset_bit_vector.clear(__BLOOMFILTERVECTORSIZE_ISSET_ID);
+  }
+
+  /** Returns true if field bloomFilterVectorSize is set (has been assigned a value) and false otherwise */
+  public boolean isSetBloomFilterVectorSize() {
+    return __isset_bit_vector.get(__BLOOMFILTERVECTORSIZE_ISSET_ID);
+  }
+
+  public void setBloomFilterVectorSizeIsSet(boolean value) {
+    __isset_bit_vector.set(__BLOOMFILTERVECTORSIZE_ISSET_ID, value);
+  }
+
+  public int getBloomFilterNbHashes() {
+    return this.bloomFilterNbHashes;
+  }
+
+  public ColumnDescriptor setBloomFilterNbHashes(int bloomFilterNbHashes) {
+    this.bloomFilterNbHashes = bloomFilterNbHashes;
+    setBloomFilterNbHashesIsSet(true);
+    return this;
+  }
+
+  public void unsetBloomFilterNbHashes() {
+    __isset_bit_vector.clear(__BLOOMFILTERNBHASHES_ISSET_ID);
+  }
+
+  /** Returns true if field bloomFilterNbHashes is set (has been assigned a value) and false otherwise */
+  public boolean isSetBloomFilterNbHashes() {
+    return __isset_bit_vector.get(__BLOOMFILTERNBHASHES_ISSET_ID);
+  }
+
+  public void setBloomFilterNbHashesIsSet(boolean value) {
+    __isset_bit_vector.set(__BLOOMFILTERNBHASHES_ISSET_ID, value);
+  }
+
+  public boolean isBlockCacheEnabled() {
+    return this.blockCacheEnabled;
+  }
+
+  public ColumnDescriptor setBlockCacheEnabled(boolean blockCacheEnabled) {
+    this.blockCacheEnabled = blockCacheEnabled;
+    setBlockCacheEnabledIsSet(true);
+    return this;
+  }
+
+  public void unsetBlockCacheEnabled() {
+    __isset_bit_vector.clear(__BLOCKCACHEENABLED_ISSET_ID);
+  }
+
+  /** Returns true if field blockCacheEnabled is set (has been assigned a value) and false otherwise */
+  public boolean isSetBlockCacheEnabled() {
+    return __isset_bit_vector.get(__BLOCKCACHEENABLED_ISSET_ID);
+  }
+
+  public void setBlockCacheEnabledIsSet(boolean value) {
+    __isset_bit_vector.set(__BLOCKCACHEENABLED_ISSET_ID, value);
+  }
+
+  public int getTimeToLive() {
+    return this.timeToLive;
+  }
+
+  public ColumnDescriptor setTimeToLive(int timeToLive) {
+    this.timeToLive = timeToLive;
+    setTimeToLiveIsSet(true);
+    return this;
+  }
+
+  public void unsetTimeToLive() {
+    __isset_bit_vector.clear(__TIMETOLIVE_ISSET_ID);
+  }
+
+  /** Returns true if field timeToLive is set (has been assigned a value) and false otherwise */
+  public boolean isSetTimeToLive() {
+    return __isset_bit_vector.get(__TIMETOLIVE_ISSET_ID);
+  }
+
+  public void setTimeToLiveIsSet(boolean value) {
+    __isset_bit_vector.set(__TIMETOLIVE_ISSET_ID, value);
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case NAME:
+      if (value == null) {
+        unsetName();
+      } else {
+        setName((byte[])value);
+      }
+      break;
+
+    case MAX_VERSIONS:
+      if (value == null) {
+        unsetMaxVersions();
+      } else {
+        setMaxVersions((Integer)value);
+      }
+      break;
+
+    case COMPRESSION:
+      if (value == null) {
+        unsetCompression();
+      } else {
+        setCompression((String)value);
+      }
+      break;
+
+    case IN_MEMORY:
+      if (value == null) {
+        unsetInMemory();
+      } else {
+        setInMemory((Boolean)value);
+      }
+      break;
+
+    case BLOOM_FILTER_TYPE:
+      if (value == null) {
+        unsetBloomFilterType();
+      } else {
+        setBloomFilterType((String)value);
+      }
+      break;
+
+    case BLOOM_FILTER_VECTOR_SIZE:
+      if (value == null) {
+        unsetBloomFilterVectorSize();
+      } else {
+        setBloomFilterVectorSize((Integer)value);
+      }
+      break;
+
+    case BLOOM_FILTER_NB_HASHES:
+      if (value == null) {
+        unsetBloomFilterNbHashes();
+      } else {
+        setBloomFilterNbHashes((Integer)value);
+      }
+      break;
+
+    case BLOCK_CACHE_ENABLED:
+      if (value == null) {
+        unsetBlockCacheEnabled();
+      } else {
+        setBlockCacheEnabled((Boolean)value);
+      }
+      break;
+
+    case TIME_TO_LIVE:
+      if (value == null) {
+        unsetTimeToLive();
+      } else {
+        setTimeToLive((Integer)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case NAME:
+      return getName();
+
+    case MAX_VERSIONS:
+      return new Integer(getMaxVersions());
+
+    case COMPRESSION:
+      return getCompression();
+
+    case IN_MEMORY:
+      return new Boolean(isInMemory());
+
+    case BLOOM_FILTER_TYPE:
+      return getBloomFilterType();
+
+    case BLOOM_FILTER_VECTOR_SIZE:
+      return new Integer(getBloomFilterVectorSize());
+
+    case BLOOM_FILTER_NB_HASHES:
+      return new Integer(getBloomFilterNbHashes());
+
+    case BLOCK_CACHE_ENABLED:
+      return new Boolean(isBlockCacheEnabled());
+
+    case TIME_TO_LIVE:
+      return new Integer(getTimeToLive());
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case NAME:
+      return isSetName();
+    case MAX_VERSIONS:
+      return isSetMaxVersions();
+    case COMPRESSION:
+      return isSetCompression();
+    case IN_MEMORY:
+      return isSetInMemory();
+    case BLOOM_FILTER_TYPE:
+      return isSetBloomFilterType();
+    case BLOOM_FILTER_VECTOR_SIZE:
+      return isSetBloomFilterVectorSize();
+    case BLOOM_FILTER_NB_HASHES:
+      return isSetBloomFilterNbHashes();
+    case BLOCK_CACHE_ENABLED:
+      return isSetBlockCacheEnabled();
+    case TIME_TO_LIVE:
+      return isSetTimeToLive();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof ColumnDescriptor)
+      return this.equals((ColumnDescriptor)that);
+    return false;
+  }
+
+  public boolean equals(ColumnDescriptor that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_name = true && this.isSetName();
+    boolean that_present_name = true && that.isSetName();
+    if (this_present_name || that_present_name) {
+      if (!(this_present_name && that_present_name))
+        return false;
+      if (!java.util.Arrays.equals(this.name, that.name))
+        return false;
+    }
+
+    boolean this_present_maxVersions = true;
+    boolean that_present_maxVersions = true;
+    if (this_present_maxVersions || that_present_maxVersions) {
+      if (!(this_present_maxVersions && that_present_maxVersions))
+        return false;
+      if (this.maxVersions != that.maxVersions)
+        return false;
+    }
+
+    boolean this_present_compression = true && this.isSetCompression();
+    boolean that_present_compression = true && that.isSetCompression();
+    if (this_present_compression || that_present_compression) {
+      if (!(this_present_compression && that_present_compression))
+        return false;
+      if (!this.compression.equals(that.compression))
+        return false;
+    }
+
+    boolean this_present_inMemory = true;
+    boolean that_present_inMemory = true;
+    if (this_present_inMemory || that_present_inMemory) {
+      if (!(this_present_inMemory && that_present_inMemory))
+        return false;
+      if (this.inMemory != that.inMemory)
+        return false;
+    }
+
+    boolean this_present_bloomFilterType = true && this.isSetBloomFilterType();
+    boolean that_present_bloomFilterType = true && that.isSetBloomFilterType();
+    if (this_present_bloomFilterType || that_present_bloomFilterType) {
+      if (!(this_present_bloomFilterType && that_present_bloomFilterType))
+        return false;
+      if (!this.bloomFilterType.equals(that.bloomFilterType))
+        return false;
+    }
+
+    boolean this_present_bloomFilterVectorSize = true;
+    boolean that_present_bloomFilterVectorSize = true;
+    if (this_present_bloomFilterVectorSize || that_present_bloomFilterVectorSize) {
+      if (!(this_present_bloomFilterVectorSize && that_present_bloomFilterVectorSize))
+        return false;
+      if (this.bloomFilterVectorSize != that.bloomFilterVectorSize)
+        return false;
+    }
+
+    boolean this_present_bloomFilterNbHashes = true;
+    boolean that_present_bloomFilterNbHashes = true;
+    if (this_present_bloomFilterNbHashes || that_present_bloomFilterNbHashes) {
+      if (!(this_present_bloomFilterNbHashes && that_present_bloomFilterNbHashes))
+        return false;
+      if (this.bloomFilterNbHashes != that.bloomFilterNbHashes)
+        return false;
+    }
+
+    boolean this_present_blockCacheEnabled = true;
+    boolean that_present_blockCacheEnabled = true;
+    if (this_present_blockCacheEnabled || that_present_blockCacheEnabled) {
+      if (!(this_present_blockCacheEnabled && that_present_blockCacheEnabled))
+        return false;
+      if (this.blockCacheEnabled != that.blockCacheEnabled)
+        return false;
+    }
+
+    boolean this_present_timeToLive = true;
+    boolean that_present_timeToLive = true;
+    if (this_present_timeToLive || that_present_timeToLive) {
+      if (!(this_present_timeToLive && that_present_timeToLive))
+        return false;
+      if (this.timeToLive != that.timeToLive)
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_name = true && (isSetName());
+    builder.append(present_name);
+    if (present_name)
+      builder.append(name);
+
+    boolean present_maxVersions = true;
+    builder.append(present_maxVersions);
+    if (present_maxVersions)
+      builder.append(maxVersions);
+
+    boolean present_compression = true && (isSetCompression());
+    builder.append(present_compression);
+    if (present_compression)
+      builder.append(compression);
+
+    boolean present_inMemory = true;
+    builder.append(present_inMemory);
+    if (present_inMemory)
+      builder.append(inMemory);
+
+    boolean present_bloomFilterType = true && (isSetBloomFilterType());
+    builder.append(present_bloomFilterType);
+    if (present_bloomFilterType)
+      builder.append(bloomFilterType);
+
+    boolean present_bloomFilterVectorSize = true;
+    builder.append(present_bloomFilterVectorSize);
+    if (present_bloomFilterVectorSize)
+      builder.append(bloomFilterVectorSize);
+
+    boolean present_bloomFilterNbHashes = true;
+    builder.append(present_bloomFilterNbHashes);
+    if (present_bloomFilterNbHashes)
+      builder.append(bloomFilterNbHashes);
+
+    boolean present_blockCacheEnabled = true;
+    builder.append(present_blockCacheEnabled);
+    if (present_blockCacheEnabled)
+      builder.append(blockCacheEnabled);
+
+    boolean present_timeToLive = true;
+    builder.append(present_timeToLive);
+    if (present_timeToLive)
+      builder.append(timeToLive);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(ColumnDescriptor other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    ColumnDescriptor typedOther = (ColumnDescriptor)other;
+
+    lastComparison = Boolean.valueOf(isSetName()).compareTo(typedOther.isSetName());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(name, typedOther.name);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetMaxVersions()).compareTo(typedOther.isSetMaxVersions());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(maxVersions, typedOther.maxVersions);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetCompression()).compareTo(typedOther.isSetCompression());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(compression, typedOther.compression);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetInMemory()).compareTo(typedOther.isSetInMemory());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(inMemory, typedOther.inMemory);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetBloomFilterType()).compareTo(typedOther.isSetBloomFilterType());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(bloomFilterType, typedOther.bloomFilterType);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetBloomFilterVectorSize()).compareTo(typedOther.isSetBloomFilterVectorSize());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(bloomFilterVectorSize, typedOther.bloomFilterVectorSize);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetBloomFilterNbHashes()).compareTo(typedOther.isSetBloomFilterNbHashes());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(bloomFilterNbHashes, typedOther.bloomFilterNbHashes);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetBlockCacheEnabled()).compareTo(typedOther.isSetBlockCacheEnabled());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(blockCacheEnabled, typedOther.blockCacheEnabled);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetTimeToLive()).compareTo(typedOther.isSetTimeToLive());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(timeToLive, typedOther.timeToLive);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case NAME:
+            if (field.type == TType.STRING) {
+              this.name = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case MAX_VERSIONS:
+            if (field.type == TType.I32) {
+              this.maxVersions = iprot.readI32();
+              setMaxVersionsIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case COMPRESSION:
+            if (field.type == TType.STRING) {
+              this.compression = iprot.readString();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case IN_MEMORY:
+            if (field.type == TType.BOOL) {
+              this.inMemory = iprot.readBool();
+              setInMemoryIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case BLOOM_FILTER_TYPE:
+            if (field.type == TType.STRING) {
+              this.bloomFilterType = iprot.readString();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case BLOOM_FILTER_VECTOR_SIZE:
+            if (field.type == TType.I32) {
+              this.bloomFilterVectorSize = iprot.readI32();
+              setBloomFilterVectorSizeIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case BLOOM_FILTER_NB_HASHES:
+            if (field.type == TType.I32) {
+              this.bloomFilterNbHashes = iprot.readI32();
+              setBloomFilterNbHashesIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case BLOCK_CACHE_ENABLED:
+            if (field.type == TType.BOOL) {
+              this.blockCacheEnabled = iprot.readBool();
+              setBlockCacheEnabledIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case TIME_TO_LIVE:
+            if (field.type == TType.I32) {
+              this.timeToLive = iprot.readI32();
+              setTimeToLiveIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.name != null) {
+      oprot.writeFieldBegin(NAME_FIELD_DESC);
+      oprot.writeBinary(this.name);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldBegin(MAX_VERSIONS_FIELD_DESC);
+    oprot.writeI32(this.maxVersions);
+    oprot.writeFieldEnd();
+    if (this.compression != null) {
+      oprot.writeFieldBegin(COMPRESSION_FIELD_DESC);
+      oprot.writeString(this.compression);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldBegin(IN_MEMORY_FIELD_DESC);
+    oprot.writeBool(this.inMemory);
+    oprot.writeFieldEnd();
+    if (this.bloomFilterType != null) {
+      oprot.writeFieldBegin(BLOOM_FILTER_TYPE_FIELD_DESC);
+      oprot.writeString(this.bloomFilterType);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldBegin(BLOOM_FILTER_VECTOR_SIZE_FIELD_DESC);
+    oprot.writeI32(this.bloomFilterVectorSize);
+    oprot.writeFieldEnd();
+    oprot.writeFieldBegin(BLOOM_FILTER_NB_HASHES_FIELD_DESC);
+    oprot.writeI32(this.bloomFilterNbHashes);
+    oprot.writeFieldEnd();
+    oprot.writeFieldBegin(BLOCK_CACHE_ENABLED_FIELD_DESC);
+    oprot.writeBool(this.blockCacheEnabled);
+    oprot.writeFieldEnd();
+    oprot.writeFieldBegin(TIME_TO_LIVE_FIELD_DESC);
+    oprot.writeI32(this.timeToLive);
+    oprot.writeFieldEnd();
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("ColumnDescriptor(");
+    boolean first = true;
+
+    sb.append("name:");
+    if (this.name == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.name);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("maxVersions:");
+    sb.append(this.maxVersions);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("compression:");
+    if (this.compression == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.compression);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("inMemory:");
+    sb.append(this.inMemory);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("bloomFilterType:");
+    if (this.bloomFilterType == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.bloomFilterType);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("bloomFilterVectorSize:");
+    sb.append(this.bloomFilterVectorSize);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("bloomFilterNbHashes:");
+    sb.append(this.bloomFilterNbHashes);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("blockCacheEnabled:");
+    sb.append(this.blockCacheEnabled);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("timeToLive:");
+    sb.append(this.timeToLive);
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
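+
+/*
+ * Usage sketch (editor's note, not part of the generated output): the
+ * no-argument constructor above fills in the defaults (3 versions, "NONE"
+ * compression, no bloom filter, TTL of -1), so callers normally override only
+ * what they need via the fluent setters and pass the result to
+ * Hbase.Iface#createTable, where the family name must end in a colon.
+ *
+ *   ColumnDescriptor family = new ColumnDescriptor()
+ *       .setName("info:".getBytes())
+ *       .setMaxVersions(1)
+ *       .setInMemory(false)
+ *       .setTimeToLive(86400);
+ *   // client.createTable("myTable".getBytes(), Arrays.asList(family));
+ */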
+
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
new file mode 100644
index 0000000..64cbfde
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/Hbase.java
@@ -0,0 +1,31309 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+public class Hbase {
+
+  public interface Iface {
+
+    /**
+     * Brings a table on-line (enables it)
+     *
+     * @param tableName name of the table
+     */
+    public void enableTable(byte[] tableName) throws IOError, TException;
+
+    /**
+     * Disables a table (takes it off-line). If it is being served, the master
+     * will tell the servers to stop serving it.
+     *
+     * @param tableName name of the table
+     */
+    public void disableTable(byte[] tableName) throws IOError, TException;
+
+    /**
+     * @return true if table is on-line
+     *
+     * @param tableName name of the table to check
+     */
+    public boolean isTableEnabled(byte[] tableName) throws IOError, TException;
+
+    public void compact(byte[] tableNameOrRegionName) throws IOError, TException;
+
+    public void majorCompact(byte[] tableNameOrRegionName) throws IOError, TException;
+
+    /**
+     * List all the userspace tables.
+     * @return a list of table names
+     */
+    public List<byte[]> getTableNames() throws IOError, TException;
+
+    /**
+     * List all the column families associated with a table.
+     * @return list of column family descriptors
+     *
+     * @param tableName table name
+     */
+    public Map<byte[],ColumnDescriptor> getColumnDescriptors(byte[] tableName) throws IOError, TException;
+
+    /**
+     * List the regions associated with a table.
+     * @return list of region descriptors
+     *
+     * @param tableName table name
+     */
+    public List<TRegionInfo> getTableRegions(byte[] tableName) throws IOError, TException;
+
+    /**
+     * Create a table with the specified column families.  The name
+     * field for each ColumnDescriptor must be set and must end in a
+     * colon (:). All other fields are optional and will get default
+     * values if not explicitly specified.
+     *
+     * @throws IllegalArgument if an input parameter is invalid
+     * @throws AlreadyExists if the table name already exists
+     *
+     * @param tableName name of table to create
+     *
+     * @param columnFamilies list of column family descriptors
+     */
+    public void createTable(byte[] tableName, List<ColumnDescriptor> columnFamilies) throws IOError, IllegalArgument, AlreadyExists, TException;
+
+    /**
+     * Deletes a table
+     *
+     * @throws IOError if table doesn't exist on server or there was some other
+     * problem
+     *
+     * @param tableName name of table to delete
+     */
+    public void deleteTable(byte[] tableName) throws IOError, TException;
+
+    /**
+     * Get a single TCell for the specified table, row, and column at the
+     * latest timestamp. Returns an empty list if no such value exists.
+     *
+     * @return value for specified row/column
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param column column name
+     */
+    public List<TCell> get(byte[] tableName, byte[] row, byte[] column) throws IOError, TException;
+
+    /**
+     * Get the specified number of versions for the specified table,
+     * row, and column.
+     *
+     * @return list of cells for specified row/column
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param column column name
+     *
+     * @param numVersions number of versions to retrieve
+     */
+    public List<TCell> getVer(byte[] tableName, byte[] row, byte[] column, int numVersions) throws IOError, TException;
+
+    /**
+     * Get the specified number of versions for the specified table,
+     * row, and column.  Only versions less than or equal to the specified
+     * timestamp will be returned.
+     *
+     * @return list of cells for specified row/column
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param column column name
+     *
+     * @param timestamp timestamp
+     *
+     * @param numVersions number of versions to retrieve
+     */
+    public List<TCell> getVerTs(byte[] tableName, byte[] row, byte[] column, long timestamp, int numVersions) throws IOError, TException;
+
+    /**
+     * Get all the data for the specified table and row at the latest
+     * timestamp. Returns an empty list if the row does not exist.
+     *
+     * @return TRowResult containing the row and map of columns to TCells
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     */
+    public List<TRowResult> getRow(byte[] tableName, byte[] row) throws IOError, TException;
+
+    /**
+     * Get the specified columns for the specified table and row at the latest
+     * timestamp. Returns an empty list if the row does not exist.
+     *
+     * @return TRowResult containing the row and map of columns to TCells
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param columns List of columns to return, null for all columns
+     */
+    public List<TRowResult> getRowWithColumns(byte[] tableName, byte[] row, List<byte[]> columns) throws IOError, TException;
+
+    /**
+     * Get all the data for the specified table and row at the specified
+     * timestamp. Returns an empty list if the row does not exist.
+     *
+     * @return TRowResult containing the row and map of columns to TCells
+     *
+     * @param tableName name of the table
+     *
+     * @param row row key
+     *
+     * @param timestamp timestamp
+     */
+    public List<TRowResult> getRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException;
+
+    /**
+     * Get the specified columns for the specified table and row at the specified
+     * timestamp. Returns an empty list if the row does not exist.
+     *
+     * @return TRowResult containing the row and map of columns to TCells
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param columns List of columns to return, null for all columns
+     *
+     * @param timestamp
+     */
+    public List<TRowResult> getRowWithColumnsTs(byte[] tableName, byte[] row, List<byte[]> columns, long timestamp) throws IOError, TException;
+
+    /**
+     * Apply a series of mutations (updates/deletes) to a row in a
+     * single transaction.  If an exception is thrown, then the
+     * transaction is aborted.  Default current timestamp is used, and
+     * all entries will have an identical timestamp.
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param mutations list of mutation commands
+     */
+    public void mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Apply a series of mutations (updates/deletes) to a row in a
+     * single transaction.  If an exception is thrown, then the
+     * transaction is aborted.  The specified timestamp is used, and
+     * all entries will have an identical timestamp.
+     *
+     * @param tableName name of table
+     *
+     * @param row row key
+     *
+     * @param mutations list of mutation commands
+     *
+     * @param timestamp timestamp
+     */
+    public void mutateRowTs(byte[] tableName, byte[] row, List<Mutation> mutations, long timestamp) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Apply a series of batches (each a series of mutations on a single row)
+     * in a single transaction.  If an exception is thrown, then the
+     * transaction is aborted.  Default current timestamp is used, and
+     * all entries will have an identical timestamp.
+     *
+     * @param tableName name of table
+     *
+     * @param rowBatches list of row batches
+     */
+    public void mutateRows(byte[] tableName, List<BatchMutation> rowBatches) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Apply a series of batches (each a series of mutations on a single row)
+     * in a single transaction.  If an exception is thrown, then the
+     * transaction is aborted.  The specified timestamp is used, and
+     * all entries will have an identical timestamp.
+     *
+     * @param tableName name of table
+     *
+     * @param rowBatches list of row batches
+     *
+     * @param timestamp timestamp
+     */
+    public void mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Atomically increment the column value specified.  Returns the next value post increment.
+     *
+     * @param tableName name of table
+     *
+     * @param row row to increment
+     *
+     * @param column name of column
+     *
+     * @param value amount to increment by
+     */
+    public long atomicIncrement(byte[] tableName, byte[] row, byte[] column, long value) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Delete all cells that match the passed row and column.
+     *
+     * @param tableName name of table
+     *
+     * @param row Row to update
+     *
+     * @param column name of column whose value is to be deleted
+     */
+    public void deleteAll(byte[] tableName, byte[] row, byte[] column) throws IOError, TException;
+
+    /**
+     * Delete all cells that match the passed row and column and whose
+     * timestamp is equal-to or older than the passed timestamp.
+     *
+     * @param tableName name of table
+     *
+     * @param row Row to update
+     *
+     * @param column name of column whose value is to be deleted
+     *
+     * @param timestamp timestamp
+     */
+    public void deleteAllTs(byte[] tableName, byte[] row, byte[] column, long timestamp) throws IOError, TException;
+
+    /**
+     * Completely delete the row's cells.
+     *
+     * @param tableName name of table
+     *
+     * @param row key of the row to be completely deleted.
+     */
+    public void deleteAllRow(byte[] tableName, byte[] row) throws IOError, TException;
+
+    /**
+     * Completely delete the row's cells marked with a timestamp
+     * equal-to or older than the passed timestamp.
+     *
+     * @param tableName name of table
+     *
+     * @param row key of the row to be completely deleted.
+     *
+     * @param timestamp timestamp
+     */
+    public void deleteAllRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException;
+
+    /**
+     * Get a scanner on the current table starting at the specified row and
+     * ending at the last row in the table.  Return the specified columns.
+     *
+     * @return scanner id to be used with other scanner procedures
+     *
+     * @param tableName name of table
+     *
+     * @param startRow Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     *
+     * @param columns columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public int scannerOpen(byte[] tableName, byte[] startRow, List<byte[]> columns) throws IOError, TException;
+
+    /**
+     * Get a scanner on the current table starting and stopping at the
+     * specified rows.  Return the specified columns.
+     *
+     * @return scanner id to be used with other scanner procedures
+     *
+     * @param tableName name of table
+     *
+     * @param startRow Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     *
+     * @param stopRow row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     *
+     * @param columns columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public int scannerOpenWithStop(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns) throws IOError, TException;
+
+    /**
+     * Open a scanner for a given prefix.  That is, all rows will have the specified
+     * prefix. No other rows will be returned.
+     *
+     * @return scanner id to use with other scanner calls
+     *
+     * @param tableName name of table
+     *
+     * @param startAndPrefix the prefix (and thus start row) of the keys you want
+     *
+     * @param columns the columns you want returned
+     */
+    public int scannerOpenWithPrefix(byte[] tableName, byte[] startAndPrefix, List<byte[]> columns) throws IOError, TException;
+
+    /**
+     * Get a scanner on the current table starting at the specified row and
+     * ending at the last row in the table.  Return the specified columns.
+     * Only values with the specified timestamp are returned.
+     *
+     * @return scanner id to be used with other scanner procedures
+     *
+     * @param tableName name of table
+     *
+     * @param startRow Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     *
+     * @param columns columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     *
+     * @param timestamp timestamp
+     */
+    public int scannerOpenTs(byte[] tableName, byte[] startRow, List<byte[]> columns, long timestamp) throws IOError, TException;
+
+    /**
+     * Get a scanner on the current table starting and stopping at the
+     * specified rows.  Return the specified columns.  Only values with the
+     * specified timestamp are returned.
+     *
+     * @return scanner id to be used with other scanner procedures
+     *
+     * @param tableName name of table
+     *
+     * @param startRow Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     *
+     * @param stopRow row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     *
+     * @param columns columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     *
+     * @param timestamp timestamp
+     */
+    public int scannerOpenWithStopTs(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns, long timestamp) throws IOError, TException;
+
+    /**
+     * Returns the scanner's current row value and advances to the next
+     * row in the table.  When there are no more rows in the table, or a key
+     * greater-than-or-equal-to the scanner's specified stopRow is reached,
+     * an empty list is returned.
+     *
+     * @return a TRowResult containing the current row and a map of the columns to TCells.
+     * @throws IllegalArgument if ScannerID is invalid
+     * @throws NotFound when the scanner reaches the end
+     *
+     * @param id id of a scanner returned by scannerOpen
+     */
+    public List<TRowResult> scannerGet(int id) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Returns, starting at the scanner's current row value, nbRows worth of
+     * rows and advances to the next row in the table.  When there are no more
+     * rows in the table, or a key greater-than-or-equal-to the scanner's
+     * specified stopRow is reached,  an empty list is returned.
+     *
+     * @return a TRowResult containing the current row and a map of the columns to TCells.
+     * @throws IllegalArgument if ScannerID is invalid
+     * @throws NotFound when the scanner reaches the end
+     *
+     * @param id id of a scanner returned by scannerOpen
+     *
+     * @param nbRows number of results to return
+     */
+    public List<TRowResult> scannerGetList(int id, int nbRows) throws IOError, IllegalArgument, TException;
+
+    /**
+     * Closes the server-state associated with an open scanner.
+     *
+     * @throws IllegalArgument if ScannerID is invalid
+     *
+     * @param id id of a scanner returned by scannerOpen
+     */
+    public void scannerClose(int id) throws IOError, IllegalArgument, TException;
+
+  }
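+
+  /*
+   * Illustrative sketch (not part of the generated code): a typical scanner
+   * lifecycle against this Iface.  The table name, column family, and batch
+   * size below are made-up placeholders.
+   *
+   *   Hbase.Iface client = ...;  // e.g. an Hbase.Client bound to an open Thrift transport
+   *   byte[] table = "mytable".getBytes();
+   *   List<byte[]> cols = Collections.singletonList("cf:".getBytes());
+   *   int scannerId = client.scannerOpen(table, "".getBytes(), cols);
+   *   try {
+   *     List<TRowResult> batch;
+   *     // scannerGetList returns an empty list once the scanner is exhausted
+   *     while (!(batch = client.scannerGetList(scannerId, 100)).isEmpty()) {
+   *       for (TRowResult row : batch) {
+   *         // row.row is the row key; row.columns maps column names to TCells
+   *       }
+   *     }
+   *   } finally {
+   *     client.scannerClose(scannerId);
+   *   }
+   */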
+
+  public static class Client implements Iface {
+    public Client(TProtocol prot)
+    {
+      this(prot, prot);
+    }
+
+    public Client(TProtocol iprot, TProtocol oprot)
+    {
+      iprot_ = iprot;
+      oprot_ = oprot;
+    }
+
+    protected TProtocol iprot_;
+    protected TProtocol oprot_;
+
+    protected int seqid_;
+
+    public TProtocol getInputProtocol()
+    {
+      return this.iprot_;
+    }
+
+    public TProtocol getOutputProtocol()
+    {
+      return this.oprot_;
+    }
+
+    public void enableTable(byte[] tableName) throws IOError, TException
+    {
+      send_enableTable(tableName);
+      recv_enableTable();
+    }
+
+    public void send_enableTable(byte[] tableName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("enableTable", TMessageType.CALL, seqid_));
+      enableTable_args args = new enableTable_args();
+      args.tableName = tableName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_enableTable() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      enableTable_result result = new enableTable_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public void disableTable(byte[] tableName) throws IOError, TException
+    {
+      send_disableTable(tableName);
+      recv_disableTable();
+    }
+
+    public void send_disableTable(byte[] tableName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("disableTable", TMessageType.CALL, seqid_));
+      disableTable_args args = new disableTable_args();
+      args.tableName = tableName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_disableTable() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      disableTable_result result = new disableTable_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public boolean isTableEnabled(byte[] tableName) throws IOError, TException
+    {
+      send_isTableEnabled(tableName);
+      return recv_isTableEnabled();
+    }
+
+    public void send_isTableEnabled(byte[] tableName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("isTableEnabled", TMessageType.CALL, seqid_));
+      isTableEnabled_args args = new isTableEnabled_args();
+      args.tableName = tableName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public boolean recv_isTableEnabled() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      isTableEnabled_result result = new isTableEnabled_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "isTableEnabled failed: unknown result");
+    }
+
+    public void compact(byte[] tableNameOrRegionName) throws IOError, TException
+    {
+      send_compact(tableNameOrRegionName);
+      recv_compact();
+    }
+
+    public void send_compact(byte[] tableNameOrRegionName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("compact", TMessageType.CALL, seqid_));
+      compact_args args = new compact_args();
+      args.tableNameOrRegionName = tableNameOrRegionName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_compact() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      compact_result result = new compact_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public void majorCompact(byte[] tableNameOrRegionName) throws IOError, TException
+    {
+      send_majorCompact(tableNameOrRegionName);
+      recv_majorCompact();
+    }
+
+    public void send_majorCompact(byte[] tableNameOrRegionName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("majorCompact", TMessageType.CALL, seqid_));
+      majorCompact_args args = new majorCompact_args();
+      args.tableNameOrRegionName = tableNameOrRegionName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_majorCompact() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      majorCompact_result result = new majorCompact_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public List<byte[]> getTableNames() throws IOError, TException
+    {
+      send_getTableNames();
+      return recv_getTableNames();
+    }
+
+    public void send_getTableNames() throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getTableNames", TMessageType.CALL, seqid_));
+      getTableNames_args args = new getTableNames_args();
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<byte[]> recv_getTableNames() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getTableNames_result result = new getTableNames_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getTableNames failed: unknown result");
+    }
+
+    public Map<byte[],ColumnDescriptor> getColumnDescriptors(byte[] tableName) throws IOError, TException
+    {
+      send_getColumnDescriptors(tableName);
+      return recv_getColumnDescriptors();
+    }
+
+    public void send_getColumnDescriptors(byte[] tableName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getColumnDescriptors", TMessageType.CALL, seqid_));
+      getColumnDescriptors_args args = new getColumnDescriptors_args();
+      args.tableName = tableName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public Map<byte[],ColumnDescriptor> recv_getColumnDescriptors() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getColumnDescriptors_result result = new getColumnDescriptors_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getColumnDescriptors failed: unknown result");
+    }
+
+    public List<TRegionInfo> getTableRegions(byte[] tableName) throws IOError, TException
+    {
+      send_getTableRegions(tableName);
+      return recv_getTableRegions();
+    }
+
+    public void send_getTableRegions(byte[] tableName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getTableRegions", TMessageType.CALL, seqid_));
+      getTableRegions_args args = new getTableRegions_args();
+      args.tableName = tableName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRegionInfo> recv_getTableRegions() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getTableRegions_result result = new getTableRegions_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getTableRegions failed: unknown result");
+    }
+
+    public void createTable(byte[] tableName, List<ColumnDescriptor> columnFamilies) throws IOError, IllegalArgument, AlreadyExists, TException
+    {
+      send_createTable(tableName, columnFamilies);
+      recv_createTable();
+    }
+
+    public void send_createTable(byte[] tableName, List<ColumnDescriptor> columnFamilies) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("createTable", TMessageType.CALL, seqid_));
+      createTable_args args = new createTable_args();
+      args.tableName = tableName;
+      args.columnFamilies = columnFamilies;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_createTable() throws IOError, IllegalArgument, AlreadyExists, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      createTable_result result = new createTable_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      if (result.exist != null) {
+        throw result.exist;
+      }
+      return;
+    }
+
+    public void deleteTable(byte[] tableName) throws IOError, TException
+    {
+      send_deleteTable(tableName);
+      recv_deleteTable();
+    }
+
+    public void send_deleteTable(byte[] tableName) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("deleteTable", TMessageType.CALL, seqid_));
+      deleteTable_args args = new deleteTable_args();
+      args.tableName = tableName;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_deleteTable() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      deleteTable_result result = new deleteTable_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public List<TCell> get(byte[] tableName, byte[] row, byte[] column) throws IOError, TException
+    {
+      send_get(tableName, row, column);
+      return recv_get();
+    }
+
+    public void send_get(byte[] tableName, byte[] row, byte[] column) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("get", TMessageType.CALL, seqid_));
+      get_args args = new get_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.column = column;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TCell> recv_get() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      get_result result = new get_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "get failed: unknown result");
+    }
+
+    public List<TCell> getVer(byte[] tableName, byte[] row, byte[] column, int numVersions) throws IOError, TException
+    {
+      send_getVer(tableName, row, column, numVersions);
+      return recv_getVer();
+    }
+
+    public void send_getVer(byte[] tableName, byte[] row, byte[] column, int numVersions) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getVer", TMessageType.CALL, seqid_));
+      getVer_args args = new getVer_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.column = column;
+      args.numVersions = numVersions;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TCell> recv_getVer() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getVer_result result = new getVer_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getVer failed: unknown result");
+    }
+
+    public List<TCell> getVerTs(byte[] tableName, byte[] row, byte[] column, long timestamp, int numVersions) throws IOError, TException
+    {
+      send_getVerTs(tableName, row, column, timestamp, numVersions);
+      return recv_getVerTs();
+    }
+
+    public void send_getVerTs(byte[] tableName, byte[] row, byte[] column, long timestamp, int numVersions) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getVerTs", TMessageType.CALL, seqid_));
+      getVerTs_args args = new getVerTs_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.column = column;
+      args.timestamp = timestamp;
+      args.numVersions = numVersions;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TCell> recv_getVerTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getVerTs_result result = new getVerTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getVerTs failed: unknown result");
+    }
+
+    public List<TRowResult> getRow(byte[] tableName, byte[] row) throws IOError, TException
+    {
+      send_getRow(tableName, row);
+      return recv_getRow();
+    }
+
+    public void send_getRow(byte[] tableName, byte[] row) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getRow", TMessageType.CALL, seqid_));
+      getRow_args args = new getRow_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRowResult> recv_getRow() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getRow_result result = new getRow_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRow failed: unknown result");
+    }
+
+    public List<TRowResult> getRowWithColumns(byte[] tableName, byte[] row, List<byte[]> columns) throws IOError, TException
+    {
+      send_getRowWithColumns(tableName, row, columns);
+      return recv_getRowWithColumns();
+    }
+
+    public void send_getRowWithColumns(byte[] tableName, byte[] row, List<byte[]> columns) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getRowWithColumns", TMessageType.CALL, seqid_));
+      getRowWithColumns_args args = new getRowWithColumns_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.columns = columns;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRowResult> recv_getRowWithColumns() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getRowWithColumns_result result = new getRowWithColumns_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRowWithColumns failed: unknown result");
+    }
+
+    public List<TRowResult> getRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException
+    {
+      send_getRowTs(tableName, row, timestamp);
+      return recv_getRowTs();
+    }
+
+    public void send_getRowTs(byte[] tableName, byte[] row, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getRowTs", TMessageType.CALL, seqid_));
+      getRowTs_args args = new getRowTs_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRowResult> recv_getRowTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getRowTs_result result = new getRowTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRowTs failed: unknown result");
+    }
+
+    public List<TRowResult> getRowWithColumnsTs(byte[] tableName, byte[] row, List<byte[]> columns, long timestamp) throws IOError, TException
+    {
+      send_getRowWithColumnsTs(tableName, row, columns, timestamp);
+      return recv_getRowWithColumnsTs();
+    }
+
+    public void send_getRowWithColumnsTs(byte[] tableName, byte[] row, List<byte[]> columns, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("getRowWithColumnsTs", TMessageType.CALL, seqid_));
+      getRowWithColumnsTs_args args = new getRowWithColumnsTs_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.columns = columns;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRowResult> recv_getRowWithColumnsTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      getRowWithColumnsTs_result result = new getRowWithColumnsTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "getRowWithColumnsTs failed: unknown result");
+    }
+
+    public void mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) throws IOError, IllegalArgument, TException
+    {
+      send_mutateRow(tableName, row, mutations);
+      recv_mutateRow();
+    }
+
+    public void send_mutateRow(byte[] tableName, byte[] row, List<Mutation> mutations) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("mutateRow", TMessageType.CALL, seqid_));
+      mutateRow_args args = new mutateRow_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.mutations = mutations;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_mutateRow() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      mutateRow_result result = new mutateRow_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      return;
+    }
+
+    public void mutateRowTs(byte[] tableName, byte[] row, List<Mutation> mutations, long timestamp) throws IOError, IllegalArgument, TException
+    {
+      send_mutateRowTs(tableName, row, mutations, timestamp);
+      recv_mutateRowTs();
+    }
+
+    public void send_mutateRowTs(byte[] tableName, byte[] row, List<Mutation> mutations, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("mutateRowTs", TMessageType.CALL, seqid_));
+      mutateRowTs_args args = new mutateRowTs_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.mutations = mutations;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_mutateRowTs() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      mutateRowTs_result result = new mutateRowTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      return;
+    }
+
+    public void mutateRows(byte[] tableName, List<BatchMutation> rowBatches) throws IOError, IllegalArgument, TException
+    {
+      send_mutateRows(tableName, rowBatches);
+      recv_mutateRows();
+    }
+
+    public void send_mutateRows(byte[] tableName, List<BatchMutation> rowBatches) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("mutateRows", TMessageType.CALL, seqid_));
+      mutateRows_args args = new mutateRows_args();
+      args.tableName = tableName;
+      args.rowBatches = rowBatches;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_mutateRows() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      mutateRows_result result = new mutateRows_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      return;
+    }
+
+    public void mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp) throws IOError, IllegalArgument, TException
+    {
+      send_mutateRowsTs(tableName, rowBatches, timestamp);
+      recv_mutateRowsTs();
+    }
+
+    public void send_mutateRowsTs(byte[] tableName, List<BatchMutation> rowBatches, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("mutateRowsTs", TMessageType.CALL, seqid_));
+      mutateRowsTs_args args = new mutateRowsTs_args();
+      args.tableName = tableName;
+      args.rowBatches = rowBatches;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_mutateRowsTs() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      mutateRowsTs_result result = new mutateRowsTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      return;
+    }
+
+    public long atomicIncrement(byte[] tableName, byte[] row, byte[] column, long value) throws IOError, IllegalArgument, TException
+    {
+      send_atomicIncrement(tableName, row, column, value);
+      return recv_atomicIncrement();
+    }
+
+    public void send_atomicIncrement(byte[] tableName, byte[] row, byte[] column, long value) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("atomicIncrement", TMessageType.CALL, seqid_));
+      atomicIncrement_args args = new atomicIncrement_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.column = column;
+      args.value = value;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public long recv_atomicIncrement() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      atomicIncrement_result result = new atomicIncrement_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "atomicIncrement failed: unknown result");
+    }
+
+    public void deleteAll(byte[] tableName, byte[] row, byte[] column) throws IOError, TException
+    {
+      send_deleteAll(tableName, row, column);
+      recv_deleteAll();
+    }
+
+    public void send_deleteAll(byte[] tableName, byte[] row, byte[] column) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("deleteAll", TMessageType.CALL, seqid_));
+      deleteAll_args args = new deleteAll_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.column = column;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_deleteAll() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      deleteAll_result result = new deleteAll_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public void deleteAllTs(byte[] tableName, byte[] row, byte[] column, long timestamp) throws IOError, TException
+    {
+      send_deleteAllTs(tableName, row, column, timestamp);
+      recv_deleteAllTs();
+    }
+
+    public void send_deleteAllTs(byte[] tableName, byte[] row, byte[] column, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("deleteAllTs", TMessageType.CALL, seqid_));
+      deleteAllTs_args args = new deleteAllTs_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.column = column;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_deleteAllTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      deleteAllTs_result result = new deleteAllTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public void deleteAllRow(byte[] tableName, byte[] row) throws IOError, TException
+    {
+      send_deleteAllRow(tableName, row);
+      recv_deleteAllRow();
+    }
+
+    public void send_deleteAllRow(byte[] tableName, byte[] row) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("deleteAllRow", TMessageType.CALL, seqid_));
+      deleteAllRow_args args = new deleteAllRow_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_deleteAllRow() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      deleteAllRow_result result = new deleteAllRow_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public void deleteAllRowTs(byte[] tableName, byte[] row, long timestamp) throws IOError, TException
+    {
+      send_deleteAllRowTs(tableName, row, timestamp);
+      recv_deleteAllRowTs();
+    }
+
+    public void send_deleteAllRowTs(byte[] tableName, byte[] row, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("deleteAllRowTs", TMessageType.CALL, seqid_));
+      deleteAllRowTs_args args = new deleteAllRowTs_args();
+      args.tableName = tableName;
+      args.row = row;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_deleteAllRowTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      deleteAllRowTs_result result = new deleteAllRowTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      return;
+    }
+
+    public int scannerOpen(byte[] tableName, byte[] startRow, List<byte[]> columns) throws IOError, TException
+    {
+      send_scannerOpen(tableName, startRow, columns);
+      return recv_scannerOpen();
+    }
+
+    public void send_scannerOpen(byte[] tableName, byte[] startRow, List<byte[]> columns) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerOpen", TMessageType.CALL, seqid_));
+      scannerOpen_args args = new scannerOpen_args();
+      args.tableName = tableName;
+      args.startRow = startRow;
+      args.columns = columns;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public int recv_scannerOpen() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerOpen_result result = new scannerOpen_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpen failed: unknown result");
+    }
+
+    public int scannerOpenWithStop(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns) throws IOError, TException
+    {
+      send_scannerOpenWithStop(tableName, startRow, stopRow, columns);
+      return recv_scannerOpenWithStop();
+    }
+
+    public void send_scannerOpenWithStop(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerOpenWithStop", TMessageType.CALL, seqid_));
+      scannerOpenWithStop_args args = new scannerOpenWithStop_args();
+      args.tableName = tableName;
+      args.startRow = startRow;
+      args.stopRow = stopRow;
+      args.columns = columns;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public int recv_scannerOpenWithStop() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerOpenWithStop_result result = new scannerOpenWithStop_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithStop failed: unknown result");
+    }
+
+    public int scannerOpenWithPrefix(byte[] tableName, byte[] startAndPrefix, List<byte[]> columns) throws IOError, TException
+    {
+      send_scannerOpenWithPrefix(tableName, startAndPrefix, columns);
+      return recv_scannerOpenWithPrefix();
+    }
+
+    public void send_scannerOpenWithPrefix(byte[] tableName, byte[] startAndPrefix, List<byte[]> columns) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerOpenWithPrefix", TMessageType.CALL, seqid_));
+      scannerOpenWithPrefix_args args = new scannerOpenWithPrefix_args();
+      args.tableName = tableName;
+      args.startAndPrefix = startAndPrefix;
+      args.columns = columns;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public int recv_scannerOpenWithPrefix() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerOpenWithPrefix_result result = new scannerOpenWithPrefix_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithPrefix failed: unknown result");
+    }
+
+    public int scannerOpenTs(byte[] tableName, byte[] startRow, List<byte[]> columns, long timestamp) throws IOError, TException
+    {
+      send_scannerOpenTs(tableName, startRow, columns, timestamp);
+      return recv_scannerOpenTs();
+    }
+
+    public void send_scannerOpenTs(byte[] tableName, byte[] startRow, List<byte[]> columns, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerOpenTs", TMessageType.CALL, seqid_));
+      scannerOpenTs_args args = new scannerOpenTs_args();
+      args.tableName = tableName;
+      args.startRow = startRow;
+      args.columns = columns;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public int recv_scannerOpenTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerOpenTs_result result = new scannerOpenTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenTs failed: unknown result");
+    }
+
+    public int scannerOpenWithStopTs(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns, long timestamp) throws IOError, TException
+    {
+      send_scannerOpenWithStopTs(tableName, startRow, stopRow, columns, timestamp);
+      return recv_scannerOpenWithStopTs();
+    }
+
+    public void send_scannerOpenWithStopTs(byte[] tableName, byte[] startRow, byte[] stopRow, List<byte[]> columns, long timestamp) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerOpenWithStopTs", TMessageType.CALL, seqid_));
+      scannerOpenWithStopTs_args args = new scannerOpenWithStopTs_args();
+      args.tableName = tableName;
+      args.startRow = startRow;
+      args.stopRow = stopRow;
+      args.columns = columns;
+      args.timestamp = timestamp;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public int recv_scannerOpenWithStopTs() throws IOError, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerOpenWithStopTs_result result = new scannerOpenWithStopTs_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerOpenWithStopTs failed: unknown result");
+    }
+
+    public List<TRowResult> scannerGet(int id) throws IOError, IllegalArgument, TException
+    {
+      send_scannerGet(id);
+      return recv_scannerGet();
+    }
+
+    public void send_scannerGet(int id) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerGet", TMessageType.CALL, seqid_));
+      scannerGet_args args = new scannerGet_args();
+      args.id = id;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRowResult> recv_scannerGet() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerGet_result result = new scannerGet_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerGet failed: unknown result");
+    }
+
+    public List<TRowResult> scannerGetList(int id, int nbRows) throws IOError, IllegalArgument, TException
+    {
+      send_scannerGetList(id, nbRows);
+      return recv_scannerGetList();
+    }
+
+    public void send_scannerGetList(int id, int nbRows) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerGetList", TMessageType.CALL, seqid_));
+      scannerGetList_args args = new scannerGetList_args();
+      args.id = id;
+      args.nbRows = nbRows;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public List<TRowResult> recv_scannerGetList() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerGetList_result result = new scannerGetList_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.isSetSuccess()) {
+        return result.success;
+      }
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      throw new TApplicationException(TApplicationException.MISSING_RESULT, "scannerGetList failed: unknown result");
+    }
+
+    public void scannerClose(int id) throws IOError, IllegalArgument, TException
+    {
+      send_scannerClose(id);
+      recv_scannerClose();
+    }
+
+    public void send_scannerClose(int id) throws TException
+    {
+      oprot_.writeMessageBegin(new TMessage("scannerClose", TMessageType.CALL, seqid_));
+      scannerClose_args args = new scannerClose_args();
+      args.id = id;
+      args.write(oprot_);
+      oprot_.writeMessageEnd();
+      oprot_.getTransport().flush();
+    }
+
+    public void recv_scannerClose() throws IOError, IllegalArgument, TException
+    {
+      TMessage msg = iprot_.readMessageBegin();
+      if (msg.type == TMessageType.EXCEPTION) {
+        TApplicationException x = TApplicationException.read(iprot_);
+        iprot_.readMessageEnd();
+        throw x;
+      }
+      scannerClose_result result = new scannerClose_result();
+      result.read(iprot_);
+      iprot_.readMessageEnd();
+      if (result.io != null) {
+        throw result.io;
+      }
+      if (result.ia != null) {
+        throw result.ia;
+      }
+      return;
+    }
+
+  }
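+
+  /*
+   * Illustrative sketch (not part of the generated code): constructing a
+   * Client over a blocking socket transport.  The host and port are
+   * placeholders, and transport/protocol class names may differ between
+   * Thrift releases; TSocket and TBinaryProtocol are the usual choices.
+   *
+   *   TTransport transport = new TSocket("localhost", 9090);
+   *   TProtocol protocol = new TBinaryProtocol(transport);
+   *   Hbase.Client client = new Hbase.Client(protocol);
+   *   transport.open();
+   *   try {
+   *     for (byte[] name : client.getTableNames()) {
+   *       // each entry is a table name as raw bytes
+   *     }
+   *   } finally {
+   *     transport.close();
+   *   }
+   */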
+  public static class Processor implements TProcessor {
+    private static final Logger LOGGER = LoggerFactory.getLogger(Processor.class.getName());
+    public Processor(Iface iface)
+    {
+      iface_ = iface;
+      processMap_.put("enableTable", new enableTable());
+      processMap_.put("disableTable", new disableTable());
+      processMap_.put("isTableEnabled", new isTableEnabled());
+      processMap_.put("compact", new compact());
+      processMap_.put("majorCompact", new majorCompact());
+      processMap_.put("getTableNames", new getTableNames());
+      processMap_.put("getColumnDescriptors", new getColumnDescriptors());
+      processMap_.put("getTableRegions", new getTableRegions());
+      processMap_.put("createTable", new createTable());
+      processMap_.put("deleteTable", new deleteTable());
+      processMap_.put("get", new get());
+      processMap_.put("getVer", new getVer());
+      processMap_.put("getVerTs", new getVerTs());
+      processMap_.put("getRow", new getRow());
+      processMap_.put("getRowWithColumns", new getRowWithColumns());
+      processMap_.put("getRowTs", new getRowTs());
+      processMap_.put("getRowWithColumnsTs", new getRowWithColumnsTs());
+      processMap_.put("mutateRow", new mutateRow());
+      processMap_.put("mutateRowTs", new mutateRowTs());
+      processMap_.put("mutateRows", new mutateRows());
+      processMap_.put("mutateRowsTs", new mutateRowsTs());
+      processMap_.put("atomicIncrement", new atomicIncrement());
+      processMap_.put("deleteAll", new deleteAll());
+      processMap_.put("deleteAllTs", new deleteAllTs());
+      processMap_.put("deleteAllRow", new deleteAllRow());
+      processMap_.put("deleteAllRowTs", new deleteAllRowTs());
+      processMap_.put("scannerOpen", new scannerOpen());
+      processMap_.put("scannerOpenWithStop", new scannerOpenWithStop());
+      processMap_.put("scannerOpenWithPrefix", new scannerOpenWithPrefix());
+      processMap_.put("scannerOpenTs", new scannerOpenTs());
+      processMap_.put("scannerOpenWithStopTs", new scannerOpenWithStopTs());
+      processMap_.put("scannerGet", new scannerGet());
+      processMap_.put("scannerGetList", new scannerGetList());
+      processMap_.put("scannerClose", new scannerClose());
+    }
+
+    protected static interface ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException;
+    }
+
+    private Iface iface_;
+    protected final HashMap<String,ProcessFunction> processMap_ = new HashMap<String,ProcessFunction>();
+
+    public boolean process(TProtocol iprot, TProtocol oprot) throws TException
+    {
+      TMessage msg = iprot.readMessageBegin();
+      ProcessFunction fn = processMap_.get(msg.name);
+      if (fn == null) {
+        TProtocolUtil.skip(iprot, TType.STRUCT);
+        iprot.readMessageEnd();
+        TApplicationException x = new TApplicationException(TApplicationException.UNKNOWN_METHOD, "Invalid method name: '"+msg.name+"'");
+        oprot.writeMessageBegin(new TMessage(msg.name, TMessageType.EXCEPTION, msg.seqid));
+        x.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+        return true;
+      }
+      fn.process(msg.seqid, iprot, oprot);
+      return true;
+    }
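+
+    /*
+     * Illustrative sketch (not part of the generated code): process() is
+     * normally driven by a Thrift server that owns the transports.  A minimal
+     * wiring might look like the following; the port is a placeholder and
+     * server constructor signatures vary across Thrift releases.
+     *
+     *   Hbase.Iface handler = ...;  // e.g. the Thrift gateway's handler implementation
+     *   Hbase.Processor processor = new Hbase.Processor(handler);
+     *   TServerTransport serverTransport = new TServerSocket(9090);
+     *   TServer server = new TThreadPoolServer(processor, serverTransport);
+     *   server.serve();  // blocks, dispatching each incoming call through process()
+     */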
+
+    private class enableTable implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        enableTable_args args = new enableTable_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        enableTable_result result = new enableTable_result();
+        try {
+          iface_.enableTable(args.tableName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing enableTable", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing enableTable");
+          oprot.writeMessageBegin(new TMessage("enableTable", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("enableTable", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class disableTable implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        disableTable_args args = new disableTable_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        disableTable_result result = new disableTable_result();
+        try {
+          iface_.disableTable(args.tableName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing disableTable", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing disableTable");
+          oprot.writeMessageBegin(new TMessage("disableTable", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("disableTable", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class isTableEnabled implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        isTableEnabled_args args = new isTableEnabled_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        isTableEnabled_result result = new isTableEnabled_result();
+        try {
+          result.success = iface_.isTableEnabled(args.tableName);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing isTableEnabled", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing isTableEnabled");
+          oprot.writeMessageBegin(new TMessage("isTableEnabled", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("isTableEnabled", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class compact implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        compact_args args = new compact_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        compact_result result = new compact_result();
+        try {
+          iface_.compact(args.tableNameOrRegionName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing compact", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing compact");
+          oprot.writeMessageBegin(new TMessage("compact", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("compact", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class majorCompact implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        majorCompact_args args = new majorCompact_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        majorCompact_result result = new majorCompact_result();
+        try {
+          iface_.majorCompact(args.tableNameOrRegionName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing majorCompact", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing majorCompact");
+          oprot.writeMessageBegin(new TMessage("majorCompact", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("majorCompact", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getTableNames implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getTableNames_args args = new getTableNames_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getTableNames_result result = new getTableNames_result();
+        try {
+          result.success = iface_.getTableNames();
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getTableNames", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getTableNames");
+          oprot.writeMessageBegin(new TMessage("getTableNames", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getTableNames", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getColumnDescriptors implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getColumnDescriptors_args args = new getColumnDescriptors_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getColumnDescriptors_result result = new getColumnDescriptors_result();
+        try {
+          result.success = iface_.getColumnDescriptors(args.tableName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getColumnDescriptors", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getColumnDescriptors");
+          oprot.writeMessageBegin(new TMessage("getColumnDescriptors", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getColumnDescriptors", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getTableRegions implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getTableRegions_args args = new getTableRegions_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getTableRegions_result result = new getTableRegions_result();
+        try {
+          result.success = iface_.getTableRegions(args.tableName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getTableRegions", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getTableRegions");
+          oprot.writeMessageBegin(new TMessage("getTableRegions", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getTableRegions", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class createTable implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        createTable_args args = new createTable_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        createTable_result result = new createTable_result();
+        try {
+          iface_.createTable(args.tableName, args.columnFamilies);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (AlreadyExists exist) {
+          result.exist = exist;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing createTable", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing createTable");
+          oprot.writeMessageBegin(new TMessage("createTable", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("createTable", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class deleteTable implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        deleteTable_args args = new deleteTable_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        deleteTable_result result = new deleteTable_result();
+        try {
+          iface_.deleteTable(args.tableName);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing deleteTable", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing deleteTable");
+          oprot.writeMessageBegin(new TMessage("deleteTable", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("deleteTable", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class get implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        get_args args = new get_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        get_result result = new get_result();
+        try {
+          result.success = iface_.get(args.tableName, args.row, args.column);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing get", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing get");
+          oprot.writeMessageBegin(new TMessage("get", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("get", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getVer implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getVer_args args = new getVer_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getVer_result result = new getVer_result();
+        try {
+          result.success = iface_.getVer(args.tableName, args.row, args.column, args.numVersions);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getVer", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getVer");
+          oprot.writeMessageBegin(new TMessage("getVer", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getVer", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getVerTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getVerTs_args args = new getVerTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getVerTs_result result = new getVerTs_result();
+        try {
+          result.success = iface_.getVerTs(args.tableName, args.row, args.column, args.timestamp, args.numVersions);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getVerTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getVerTs");
+          oprot.writeMessageBegin(new TMessage("getVerTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getVerTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getRow implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getRow_args args = new getRow_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getRow_result result = new getRow_result();
+        try {
+          result.success = iface_.getRow(args.tableName, args.row);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getRow", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getRow");
+          oprot.writeMessageBegin(new TMessage("getRow", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getRow", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getRowWithColumns implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getRowWithColumns_args args = new getRowWithColumns_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getRowWithColumns_result result = new getRowWithColumns_result();
+        try {
+          result.success = iface_.getRowWithColumns(args.tableName, args.row, args.columns);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getRowWithColumns", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getRowWithColumns");
+          oprot.writeMessageBegin(new TMessage("getRowWithColumns", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getRowWithColumns", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getRowTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getRowTs_args args = new getRowTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getRowTs_result result = new getRowTs_result();
+        try {
+          result.success = iface_.getRowTs(args.tableName, args.row, args.timestamp);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getRowTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getRowTs");
+          oprot.writeMessageBegin(new TMessage("getRowTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getRowTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class getRowWithColumnsTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        getRowWithColumnsTs_args args = new getRowWithColumnsTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        getRowWithColumnsTs_result result = new getRowWithColumnsTs_result();
+        try {
+          result.success = iface_.getRowWithColumnsTs(args.tableName, args.row, args.columns, args.timestamp);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing getRowWithColumnsTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing getRowWithColumnsTs");
+          oprot.writeMessageBegin(new TMessage("getRowWithColumnsTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("getRowWithColumnsTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class mutateRow implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        mutateRow_args args = new mutateRow_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        mutateRow_result result = new mutateRow_result();
+        try {
+          iface_.mutateRow(args.tableName, args.row, args.mutations);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing mutateRow", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing mutateRow");
+          oprot.writeMessageBegin(new TMessage("mutateRow", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("mutateRow", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class mutateRowTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        mutateRowTs_args args = new mutateRowTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        mutateRowTs_result result = new mutateRowTs_result();
+        try {
+          iface_.mutateRowTs(args.tableName, args.row, args.mutations, args.timestamp);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing mutateRowTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing mutateRowTs");
+          oprot.writeMessageBegin(new TMessage("mutateRowTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("mutateRowTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class mutateRows implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        mutateRows_args args = new mutateRows_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        mutateRows_result result = new mutateRows_result();
+        try {
+          iface_.mutateRows(args.tableName, args.rowBatches);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing mutateRows", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing mutateRows");
+          oprot.writeMessageBegin(new TMessage("mutateRows", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("mutateRows", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class mutateRowsTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        mutateRowsTs_args args = new mutateRowsTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        mutateRowsTs_result result = new mutateRowsTs_result();
+        try {
+          iface_.mutateRowsTs(args.tableName, args.rowBatches, args.timestamp);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing mutateRowsTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing mutateRowsTs");
+          oprot.writeMessageBegin(new TMessage("mutateRowsTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("mutateRowsTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class atomicIncrement implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        atomicIncrement_args args = new atomicIncrement_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        atomicIncrement_result result = new atomicIncrement_result();
+        try {
+          result.success = iface_.atomicIncrement(args.tableName, args.row, args.column, args.value);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing atomicIncrement", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing atomicIncrement");
+          oprot.writeMessageBegin(new TMessage("atomicIncrement", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("atomicIncrement", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class deleteAll implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        deleteAll_args args = new deleteAll_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        deleteAll_result result = new deleteAll_result();
+        try {
+          iface_.deleteAll(args.tableName, args.row, args.column);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing deleteAll", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing deleteAll");
+          oprot.writeMessageBegin(new TMessage("deleteAll", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("deleteAll", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class deleteAllTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        deleteAllTs_args args = new deleteAllTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        deleteAllTs_result result = new deleteAllTs_result();
+        try {
+          iface_.deleteAllTs(args.tableName, args.row, args.column, args.timestamp);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing deleteAllTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing deleteAllTs");
+          oprot.writeMessageBegin(new TMessage("deleteAllTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("deleteAllTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class deleteAllRow implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        deleteAllRow_args args = new deleteAllRow_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        deleteAllRow_result result = new deleteAllRow_result();
+        try {
+          iface_.deleteAllRow(args.tableName, args.row);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing deleteAllRow", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing deleteAllRow");
+          oprot.writeMessageBegin(new TMessage("deleteAllRow", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("deleteAllRow", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class deleteAllRowTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        deleteAllRowTs_args args = new deleteAllRowTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        deleteAllRowTs_result result = new deleteAllRowTs_result();
+        try {
+          iface_.deleteAllRowTs(args.tableName, args.row, args.timestamp);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing deleteAllRowTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing deleteAllRowTs");
+          oprot.writeMessageBegin(new TMessage("deleteAllRowTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("deleteAllRowTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerOpen implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerOpen_args args = new scannerOpen_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerOpen_result result = new scannerOpen_result();
+        try {
+          result.success = iface_.scannerOpen(args.tableName, args.startRow, args.columns);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerOpen", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerOpen");
+          oprot.writeMessageBegin(new TMessage("scannerOpen", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerOpen", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerOpenWithStop implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerOpenWithStop_args args = new scannerOpenWithStop_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerOpenWithStop_result result = new scannerOpenWithStop_result();
+        try {
+          result.success = iface_.scannerOpenWithStop(args.tableName, args.startRow, args.stopRow, args.columns);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerOpenWithStop", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerOpenWithStop");
+          oprot.writeMessageBegin(new TMessage("scannerOpenWithStop", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerOpenWithStop", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerOpenWithPrefix implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerOpenWithPrefix_args args = new scannerOpenWithPrefix_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerOpenWithPrefix_result result = new scannerOpenWithPrefix_result();
+        try {
+          result.success = iface_.scannerOpenWithPrefix(args.tableName, args.startAndPrefix, args.columns);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerOpenWithPrefix", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerOpenWithPrefix");
+          oprot.writeMessageBegin(new TMessage("scannerOpenWithPrefix", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerOpenWithPrefix", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerOpenTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerOpenTs_args args = new scannerOpenTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerOpenTs_result result = new scannerOpenTs_result();
+        try {
+          result.success = iface_.scannerOpenTs(args.tableName, args.startRow, args.columns, args.timestamp);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerOpenTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerOpenTs");
+          oprot.writeMessageBegin(new TMessage("scannerOpenTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerOpenTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerOpenWithStopTs implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerOpenWithStopTs_args args = new scannerOpenWithStopTs_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerOpenWithStopTs_result result = new scannerOpenWithStopTs_result();
+        try {
+          result.success = iface_.scannerOpenWithStopTs(args.tableName, args.startRow, args.stopRow, args.columns, args.timestamp);
+          result.setSuccessIsSet(true);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerOpenWithStopTs", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerOpenWithStopTs");
+          oprot.writeMessageBegin(new TMessage("scannerOpenWithStopTs", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerOpenWithStopTs", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerGet implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerGet_args args = new scannerGet_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerGet_result result = new scannerGet_result();
+        try {
+          result.success = iface_.scannerGet(args.id);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerGet", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerGet");
+          oprot.writeMessageBegin(new TMessage("scannerGet", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerGet", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerGetList implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerGetList_args args = new scannerGetList_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerGetList_result result = new scannerGetList_result();
+        try {
+          result.success = iface_.scannerGetList(args.id, args.nbRows);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerGetList", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerGetList");
+          oprot.writeMessageBegin(new TMessage("scannerGetList", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerGetList", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+    private class scannerClose implements ProcessFunction {
+      public void process(int seqid, TProtocol iprot, TProtocol oprot) throws TException
+      {
+        scannerClose_args args = new scannerClose_args();
+        args.read(iprot);
+        iprot.readMessageEnd();
+        scannerClose_result result = new scannerClose_result();
+        try {
+          iface_.scannerClose(args.id);
+        } catch (IOError io) {
+          result.io = io;
+        } catch (IllegalArgument ia) {
+          result.ia = ia;
+        } catch (Throwable th) {
+          LOGGER.error("Internal error processing scannerClose", th);
+          TApplicationException x = new TApplicationException(TApplicationException.INTERNAL_ERROR, "Internal error processing scannerClose");
+          oprot.writeMessageBegin(new TMessage("scannerClose", TMessageType.EXCEPTION, seqid));
+          x.write(oprot);
+          oprot.writeMessageEnd();
+          oprot.getTransport().flush();
+          return;
+        }
+        oprot.writeMessageBegin(new TMessage("scannerClose", TMessageType.REPLY, seqid));
+        result.write(oprot);
+        oprot.writeMessageEnd();
+        oprot.getTransport().flush();
+      }
+
+    }
+
+  }
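+  /*
+   * The Processor above wraps a user-supplied Hbase.Iface handler (held in iface_) and is handed
+   * to a Thrift server. A minimal wiring sketch, assuming the 0.2-era libthrift constructors
+   * (port, server choice, and the MyHandler class are illustrative only):
+   *
+   *   TServerTransport transport = new TServerSocket(9090);
+   *   Hbase.Processor processor = new Hbase.Processor(new MyHandler()); // MyHandler implements Hbase.Iface
+   *   TServer server = new TThreadPoolServer(processor, transport);
+   *   server.serve();
+   */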
+
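+  /*
+   * The nested classes that follow are the per-method wire wrappers: each <method>_args struct
+   * carries the call parameters, and each <method>_result struct carries a success field for
+   * non-void methods plus one optional field per declared service exception (io, ia, exist).
+   * Every ProcessFunction above follows the same template: read the _args struct, invoke the
+   * handler through iface_, copy declared exceptions into the _result struct, convert any other
+   * Throwable into a TApplicationException, then write a REPLY (or EXCEPTION) message and flush.
+   */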
+  public static class enableTable_args implements TBase<enableTable_args._Fields>, java.io.Serializable, Cloneable, Comparable<enableTable_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("enableTable_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+    /**
+     * name of the table
+     */
+    public byte[] tableName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of the table
+       */
+      TABLE_NAME((short)1, "tableName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
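+    // Registers this struct's field metadata with libthrift's static registry so the field
+    // layout can be inspected at runtime.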
+    static {
+      FieldMetaData.addStructMetaDataMap(enableTable_args.class, metaDataMap);
+    }
+
+    public enableTable_args() {
+    }
+
+    public enableTable_args(
+      byte[] tableName)
+    {
+      this();
+      this.tableName = tableName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public enableTable_args(enableTable_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+    }
+
+    public enableTable_args deepCopy() {
+      return new enableTable_args(this);
+    }
+
+    @Deprecated
+    public enableTable_args clone() {
+      return new enableTable_args(this);
+    }
+
+    /**
+     * name of the table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of the table
+     */
+    public enableTable_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof enableTable_args)
+        return this.equals((enableTable_args)that);
+      return false;
+    }
+
+    public boolean equals(enableTable_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(enableTable_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      enableTable_args typedOther = (enableTable_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
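+        // Field ids this struct does not know about are skipped, so the decoder tolerates
+        // fields added by newer schema versions.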
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("enableTable_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
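+  /*
+   * enableTable has no return value (its result struct defines no success field); the only thing
+   * it can carry back to the caller is the declared IOError exception.
+   */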
+  public static class enableTable_result implements TBase<enableTable_result._Fields>, java.io.Serializable, Cloneable, Comparable<enableTable_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("enableTable_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(enableTable_result.class, metaDataMap);
+    }
+
+    public enableTable_result() {
+    }
+
+    public enableTable_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public enableTable_result(enableTable_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public enableTable_result deepCopy() {
+      return new enableTable_result(this);
+    }
+
+    @Deprecated
+    public enableTable_result clone() {
+      return new enableTable_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public enableTable_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof enableTable_result)
+        return this.equals((enableTable_result)that);
+      return false;
+    }
+
+    public boolean equals(enableTable_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(enableTable_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      enableTable_result typedOther = (enableTable_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("enableTable_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class disableTable_args implements TBase<disableTable_args._Fields>, java.io.Serializable, Cloneable, Comparable<disableTable_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("disableTable_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+    /**
+     * name of the table
+     */
+    public byte[] tableName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of the table
+       */
+      TABLE_NAME((short)1, "tableName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(disableTable_args.class, metaDataMap);
+    }
+
+    public disableTable_args() {
+    }
+
+    public disableTable_args(
+      byte[] tableName)
+    {
+      this();
+      this.tableName = tableName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public disableTable_args(disableTable_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+    }
+
+    public disableTable_args deepCopy() {
+      return new disableTable_args(this);
+    }
+
+    @Deprecated
+    public disableTable_args clone() {
+      return new disableTable_args(this);
+    }
+
+    /**
+     * name of the table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of the table
+     */
+    public disableTable_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof disableTable_args)
+        return this.equals((disableTable_args)that);
+      return false;
+    }
+
+    public boolean equals(disableTable_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(disableTable_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      disableTable_args typedOther = (disableTable_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("disableTable_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class disableTable_result implements TBase<disableTable_result._Fields>, java.io.Serializable, Cloneable, Comparable<disableTable_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("disableTable_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(disableTable_result.class, metaDataMap);
+    }
+
+    public disableTable_result() {
+    }
+
+    public disableTable_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public disableTable_result(disableTable_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public disableTable_result deepCopy() {
+      return new disableTable_result(this);
+    }
+
+    @Deprecated
+    public disableTable_result clone() {
+      return new disableTable_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public disableTable_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof disableTable_result)
+        return this.equals((disableTable_result)that);
+      return false;
+    }
+
+    public boolean equals(disableTable_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(disableTable_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      disableTable_result typedOther = (disableTable_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("disableTable_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
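+  /*
+   * Illustrative sketch, not part of the generated code: each Thrift method on
+   * the Hbase service is marshalled through an <method>_args/<method>_result
+   * pair like the two structs above. Assuming a Thrift gateway on the default
+   * port 9090 and the standard socket/binary-protocol stack, a caller of
+   * disableTable would look roughly like this:
+   *
+   *   TTransport transport = new TSocket("localhost", 9090);
+   *   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+   *   transport.open();
+   *   client.disableTable("myTable".getBytes());  // sends disableTable_args,
+   *                                               // reads disableTable_result
+   *   transport.close();
+   *
+   * A server-side failure travels back in the result struct's "io" field and is
+   * rethrown to the caller as an IOError.
+   */
+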
+  public static class isTableEnabled_args implements TBase<isTableEnabled_args._Fields>, java.io.Serializable, Cloneable, Comparable<isTableEnabled_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("isTableEnabled_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+    /**
+     * name of the table to check
+     */
+    public byte[] tableName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of the table to check
+       */
+      TABLE_NAME((short)1, "tableName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(isTableEnabled_args.class, metaDataMap);
+    }
+
+    public isTableEnabled_args() {
+    }
+
+    public isTableEnabled_args(
+      byte[] tableName)
+    {
+      this();
+      this.tableName = tableName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public isTableEnabled_args(isTableEnabled_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+    }
+
+    public isTableEnabled_args deepCopy() {
+      return new isTableEnabled_args(this);
+    }
+
+    @Deprecated
+    public isTableEnabled_args clone() {
+      return new isTableEnabled_args(this);
+    }
+
+    /**
+     * name of the table to check
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of the table to check
+     */
+    public isTableEnabled_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof isTableEnabled_args)
+        return this.equals((isTableEnabled_args)that);
+      return false;
+    }
+
+    public boolean equals(isTableEnabled_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(isTableEnabled_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      isTableEnabled_args typedOther = (isTableEnabled_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("isTableEnabled_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class isTableEnabled_result implements TBase<isTableEnabled_result._Fields>, java.io.Serializable, Cloneable, Comparable<isTableEnabled_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("isTableEnabled_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.BOOL, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public boolean success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.BOOL)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(isTableEnabled_result.class, metaDataMap);
+    }
+
+    public isTableEnabled_result() {
+    }
+
+    public isTableEnabled_result(
+      boolean success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public isTableEnabled_result(isTableEnabled_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public isTableEnabled_result deepCopy() {
+      return new isTableEnabled_result(this);
+    }
+
+    @Deprecated
+    public isTableEnabled_result clone() {
+      return new isTableEnabled_result(this);
+    }
+
+    public boolean isSuccess() {
+      return this.success;
+    }
+
+    public isTableEnabled_result setSuccess(boolean success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public isTableEnabled_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Boolean)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return Boolean.valueOf(isSuccess());
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof isTableEnabled_result)
+        return this.equals((isTableEnabled_result)that);
+      return false;
+    }
+
+    public boolean equals(isTableEnabled_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(isTableEnabled_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      isTableEnabled_result typedOther = (isTableEnabled_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.BOOL) {
+                this.success = iprot.readBool();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeBool(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("isTableEnabled_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
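+  /*
+   * Illustrative sketch, not part of the generated code: unlike the void calls
+   * above, isTableEnabled returns a value, so its result struct carries a
+   * "success" field (id 0) next to the "io" error field, and write() above
+   * emits at most one of the two. With an already-connected Hbase.Client, a
+   * caller might do:
+   *
+   *   byte[] table = "myTable".getBytes();
+   *   if (!client.isTableEnabled(table)) {
+   *     // success came back false, so the table is currently disabled
+   *     client.enableTable(table);
+   *   }
+   */
+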
+  public static class compact_args implements TBase<compact_args._Fields>, java.io.Serializable, Cloneable, Comparable<compact_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("compact_args");
+
+    private static final TField TABLE_NAME_OR_REGION_NAME_FIELD_DESC = new TField("tableNameOrRegionName", TType.STRING, (short)1);
+
+    public byte[] tableNameOrRegionName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      TABLE_NAME_OR_REGION_NAME((short)1, "tableNameOrRegionName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME_OR_REGION_NAME, new FieldMetaData("tableNameOrRegionName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(compact_args.class, metaDataMap);
+    }
+
+    public compact_args() {
+    }
+
+    public compact_args(
+      byte[] tableNameOrRegionName)
+    {
+      this();
+      this.tableNameOrRegionName = tableNameOrRegionName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public compact_args(compact_args other) {
+      if (other.isSetTableNameOrRegionName()) {
+        this.tableNameOrRegionName = other.tableNameOrRegionName;
+      }
+    }
+
+    public compact_args deepCopy() {
+      return new compact_args(this);
+    }
+
+    @Deprecated
+    public compact_args clone() {
+      return new compact_args(this);
+    }
+
+    public byte[] getTableNameOrRegionName() {
+      return this.tableNameOrRegionName;
+    }
+
+    public compact_args setTableNameOrRegionName(byte[] tableNameOrRegionName) {
+      this.tableNameOrRegionName = tableNameOrRegionName;
+      return this;
+    }
+
+    public void unsetTableNameOrRegionName() {
+      this.tableNameOrRegionName = null;
+    }
+
+    /** Returns true if field tableNameOrRegionName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableNameOrRegionName() {
+      return this.tableNameOrRegionName != null;
+    }
+
+    public void setTableNameOrRegionNameIsSet(boolean value) {
+      if (!value) {
+        this.tableNameOrRegionName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME_OR_REGION_NAME:
+        if (value == null) {
+          unsetTableNameOrRegionName();
+        } else {
+          setTableNameOrRegionName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME_OR_REGION_NAME:
+        return getTableNameOrRegionName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME_OR_REGION_NAME:
+        return isSetTableNameOrRegionName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof compact_args)
+        return this.equals((compact_args)that);
+      return false;
+    }
+
+    public boolean equals(compact_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableNameOrRegionName = true && this.isSetTableNameOrRegionName();
+      boolean that_present_tableNameOrRegionName = true && that.isSetTableNameOrRegionName();
+      if (this_present_tableNameOrRegionName || that_present_tableNameOrRegionName) {
+        if (!(this_present_tableNameOrRegionName && that_present_tableNameOrRegionName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableNameOrRegionName, that.tableNameOrRegionName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableNameOrRegionName = true && (isSetTableNameOrRegionName());
+      builder.append(present_tableNameOrRegionName);
+      if (present_tableNameOrRegionName)
+        builder.append(tableNameOrRegionName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(compact_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      compact_args typedOther = (compact_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableNameOrRegionName()).compareTo(typedOther.isSetTableNameOrRegionName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableNameOrRegionName, typedOther.tableNameOrRegionName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME_OR_REGION_NAME:
+              if (field.type == TType.STRING) {
+                this.tableNameOrRegionName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableNameOrRegionName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_OR_REGION_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableNameOrRegionName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("compact_args(");
+      boolean first = true;
+
+      sb.append("tableNameOrRegionName:");
+      if (this.tableNameOrRegionName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableNameOrRegionName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class compact_result implements TBase<compact_result._Fields>, java.io.Serializable, Cloneable, Comparable<compact_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("compact_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(compact_result.class, metaDataMap);
+    }
+
+    public compact_result() {
+    }
+
+    public compact_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public compact_result(compact_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public compact_result deepCopy() {
+      return new compact_result(this);
+    }
+
+    @Deprecated
+    public compact_result clone() {
+      return new compact_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public compact_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof compact_result)
+        return this.equals((compact_result)that);
+      return false;
+    }
+
+    public boolean equals(compact_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(compact_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      compact_result typedOther = (compact_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("compact_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
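+  /*
+   * Illustrative sketch, not part of the generated code: compact (and
+   * majorCompact below) take a single tableNameOrRegionName argument, so the
+   * same call works on a whole table or on one encoded region name. For
+   * example, with a connected Hbase.Client:
+   *
+   *   client.compact("myTable".getBytes());       // queue a compaction
+   *   client.majorCompact("myTable".getBytes());  // queue a major compaction
+   *
+   * Both methods return void; a server-side failure comes back through the
+   * result struct's "io" field.
+   */
+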
+  public static class majorCompact_args implements TBase<majorCompact_args._Fields>, java.io.Serializable, Cloneable, Comparable<majorCompact_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("majorCompact_args");
+
+    private static final TField TABLE_NAME_OR_REGION_NAME_FIELD_DESC = new TField("tableNameOrRegionName", TType.STRING, (short)1);
+
+    public byte[] tableNameOrRegionName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      TABLE_NAME_OR_REGION_NAME((short)1, "tableNameOrRegionName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME_OR_REGION_NAME, new FieldMetaData("tableNameOrRegionName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(majorCompact_args.class, metaDataMap);
+    }
+
+    public majorCompact_args() {
+    }
+
+    public majorCompact_args(
+      byte[] tableNameOrRegionName)
+    {
+      this();
+      this.tableNameOrRegionName = tableNameOrRegionName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public majorCompact_args(majorCompact_args other) {
+      if (other.isSetTableNameOrRegionName()) {
+        this.tableNameOrRegionName = other.tableNameOrRegionName;
+      }
+    }
+
+    public majorCompact_args deepCopy() {
+      return new majorCompact_args(this);
+    }
+
+    @Deprecated
+    public majorCompact_args clone() {
+      return new majorCompact_args(this);
+    }
+
+    public byte[] getTableNameOrRegionName() {
+      return this.tableNameOrRegionName;
+    }
+
+    public majorCompact_args setTableNameOrRegionName(byte[] tableNameOrRegionName) {
+      this.tableNameOrRegionName = tableNameOrRegionName;
+      return this;
+    }
+
+    public void unsetTableNameOrRegionName() {
+      this.tableNameOrRegionName = null;
+    }
+
+    /** Returns true if field tableNameOrRegionName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableNameOrRegionName() {
+      return this.tableNameOrRegionName != null;
+    }
+
+    public void setTableNameOrRegionNameIsSet(boolean value) {
+      if (!value) {
+        this.tableNameOrRegionName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME_OR_REGION_NAME:
+        if (value == null) {
+          unsetTableNameOrRegionName();
+        } else {
+          setTableNameOrRegionName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME_OR_REGION_NAME:
+        return getTableNameOrRegionName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME_OR_REGION_NAME:
+        return isSetTableNameOrRegionName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof majorCompact_args)
+        return this.equals((majorCompact_args)that);
+      return false;
+    }
+
+    public boolean equals(majorCompact_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableNameOrRegionName = true && this.isSetTableNameOrRegionName();
+      boolean that_present_tableNameOrRegionName = true && that.isSetTableNameOrRegionName();
+      if (this_present_tableNameOrRegionName || that_present_tableNameOrRegionName) {
+        if (!(this_present_tableNameOrRegionName && that_present_tableNameOrRegionName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableNameOrRegionName, that.tableNameOrRegionName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableNameOrRegionName = true && (isSetTableNameOrRegionName());
+      builder.append(present_tableNameOrRegionName);
+      if (present_tableNameOrRegionName)
+        builder.append(tableNameOrRegionName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(majorCompact_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      majorCompact_args typedOther = (majorCompact_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableNameOrRegionName()).compareTo(typedOther.isSetTableNameOrRegionName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableNameOrRegionName, typedOther.tableNameOrRegionName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME_OR_REGION_NAME:
+              if (field.type == TType.STRING) {
+                this.tableNameOrRegionName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableNameOrRegionName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_OR_REGION_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableNameOrRegionName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("majorCompact_args(");
+      boolean first = true;
+
+      sb.append("tableNameOrRegionName:");
+      if (this.tableNameOrRegionName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableNameOrRegionName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class majorCompact_result implements TBase<majorCompact_result._Fields>, java.io.Serializable, Cloneable, Comparable<majorCompact_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("majorCompact_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(majorCompact_result.class, metaDataMap);
+    }
+
+    public majorCompact_result() {
+    }
+
+    public majorCompact_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public majorCompact_result(majorCompact_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public majorCompact_result deepCopy() {
+      return new majorCompact_result(this);
+    }
+
+    @Deprecated
+    public majorCompact_result clone() {
+      return new majorCompact_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public majorCompact_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof majorCompact_result)
+        return this.equals((majorCompact_result)that);
+      return false;
+    }
+
+    public boolean equals(majorCompact_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(majorCompact_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      majorCompact_result typedOther = (majorCompact_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("majorCompact_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
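+
+  // Usage sketch (illustrative only, not part of the generated bindings).
+  // Assumes a Thrift server on localhost:9090 and the libthrift
+  // TSocket/TBinaryProtocol transport classes; Hbase.Client is the generated
+  // client defined earlier in this file.
+  //
+  //   TTransport transport = new TSocket("localhost", 9090);
+  //   transport.open();
+  //   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+  //   // Trigger a major compaction of a table (or of a single region).
+  //   client.majorCompact("mytable".getBytes());
+  //   transport.close();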
+
+  public static class getTableNames_args implements TBase<getTableNames_args._Fields>, java.io.Serializable, Cloneable, Comparable<getTableNames_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getTableNames_args");
+
+
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      ;
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getTableNames_args.class, metaDataMap);
+    }
+
+    public getTableNames_args() {
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getTableNames_args(getTableNames_args other) {
+    }
+
+    public getTableNames_args deepCopy() {
+      return new getTableNames_args(this);
+    }
+
+    @Deprecated
+    public getTableNames_args clone() {
+      return new getTableNames_args(this);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getTableNames_args)
+        return this.equals((getTableNames_args)that);
+      return false;
+    }
+
+    public boolean equals(getTableNames_args that) {
+      if (that == null)
+        return false;
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getTableNames_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getTableNames_args typedOther = (getTableNames_args)other;
+
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getTableNames_args(");
+      boolean first = true;
+
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
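+
+  // Note (editorial): getTableNames takes no arguments, yet the Thrift compiler
+  // still emits this empty getTableNames_args struct (its _Fields enum has no
+  // constants, hence the bare ";"). The empty struct keeps the RPC framing
+  // uniform: every call serializes an _args struct and reads back a _result.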
+
+  public static class getTableNames_result implements TBase<getTableNames_result._Fields>, java.io.Serializable, Cloneable, Comparable<getTableNames_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getTableNames_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<byte[]> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getTableNames_result.class, metaDataMap);
+    }
+
+    public getTableNames_result() {
+    }
+
+    public getTableNames_result(
+      List<byte[]> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getTableNames_result(getTableNames_result other) {
+      if (other.isSetSuccess()) {
+        List<byte[]> __this__success = new ArrayList<byte[]>();
+        for (byte[] other_element : other.success) {
+          __this__success.add(other_element);
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getTableNames_result deepCopy() {
+      return new getTableNames_result(this);
+    }
+
+    @Deprecated
+    public getTableNames_result clone() {
+      return new getTableNames_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<byte[]> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(byte[] elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<byte[]>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<byte[]> getSuccess() {
+      return this.success;
+    }
+
+    public getTableNames_result setSuccess(List<byte[]> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getTableNames_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<byte[]>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getTableNames_result)
+        return this.equals((getTableNames_result)that);
+      return false;
+    }
+
+    public boolean equals(getTableNames_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getTableNames_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getTableNames_result typedOther = (getTableNames_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list9 = iprot.readListBegin();
+                  this.success = new ArrayList<byte[]>(_list9.size);
+                  for (int _i10 = 0; _i10 < _list9.size; ++_i10)
+                  {
+                    byte[] _elem11;
+                    _elem11 = iprot.readBinary();
+                    this.success.add(_elem11);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.success.size()));
+          for (byte[] _iter12 : this.success)
+          {
+            oprot.writeBinary(_iter12);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
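+
+    // Note (editorial): a _result struct carries either the return value or one
+    // of the declared exceptions, never both, so write() emits the fields
+    // through an if / else-if chain keyed on isSetSuccess() / isSetIo().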
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getTableNames_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
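+
+  // Usage sketch (illustrative only): getTableNames() hands back the table
+  // names as raw byte[] values, so callers decode them explicitly. "client" is
+  // an Hbase.Client wired to an open transport as sketched above.
+  //
+  //   List<byte[]> tables = client.getTableNames();
+  //   for (byte[] name : tables) {
+  //     System.out.println(new String(name));
+  //   }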
+
+  public static class getColumnDescriptors_args implements TBase<getColumnDescriptors_args._Fields>, java.io.Serializable, Cloneable, Comparable<getColumnDescriptors_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getColumnDescriptors_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+    /**
+     * table name
+     */
+    public byte[] tableName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * table name
+       */
+      TABLE_NAME((short)1, "tableName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getColumnDescriptors_args.class, metaDataMap);
+    }
+
+    public getColumnDescriptors_args() {
+    }
+
+    public getColumnDescriptors_args(
+      byte[] tableName)
+    {
+      this();
+      this.tableName = tableName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getColumnDescriptors_args(getColumnDescriptors_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+    }
+
+    public getColumnDescriptors_args deepCopy() {
+      return new getColumnDescriptors_args(this);
+    }
+
+    @Deprecated
+    public getColumnDescriptors_args clone() {
+      return new getColumnDescriptors_args(this);
+    }
+
+    /**
+     * table name
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * table name
+     */
+    public getColumnDescriptors_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getColumnDescriptors_args)
+        return this.equals((getColumnDescriptors_args)that);
+      return false;
+    }
+
+    public boolean equals(getColumnDescriptors_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getColumnDescriptors_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getColumnDescriptors_args typedOther = (getColumnDescriptors_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getColumnDescriptors_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getColumnDescriptors_result implements TBase<getColumnDescriptors_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("getColumnDescriptors_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.MAP, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public Map<byte[],ColumnDescriptor> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new MapMetaData(TType.MAP,
+              new FieldValueMetaData(TType.STRING),
+              new StructMetaData(TType.STRUCT, ColumnDescriptor.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getColumnDescriptors_result.class, metaDataMap);
+    }
+
+    public getColumnDescriptors_result() {
+    }
+
+    public getColumnDescriptors_result(
+      Map<byte[],ColumnDescriptor> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getColumnDescriptors_result(getColumnDescriptors_result other) {
+      if (other.isSetSuccess()) {
+        Map<byte[],ColumnDescriptor> __this__success = new HashMap<byte[],ColumnDescriptor>();
+        for (Map.Entry<byte[], ColumnDescriptor> other_element : other.success.entrySet()) {
+
+          byte[] other_element_key = other_element.getKey();
+          ColumnDescriptor other_element_value = other_element.getValue();
+
+          byte[] __this__success_copy_key = other_element_key;
+
+          ColumnDescriptor __this__success_copy_value = new ColumnDescriptor(other_element_value);
+
+          __this__success.put(__this__success_copy_key, __this__success_copy_value);
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getColumnDescriptors_result deepCopy() {
+      return new getColumnDescriptors_result(this);
+    }
+
+    @Deprecated
+    public getColumnDescriptors_result clone() {
+      return new getColumnDescriptors_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public void putToSuccess(byte[] key, ColumnDescriptor val) {
+      if (this.success == null) {
+        this.success = new HashMap<byte[],ColumnDescriptor>();
+      }
+      this.success.put(key, val);
+    }
+
+    public Map<byte[],ColumnDescriptor> getSuccess() {
+      return this.success;
+    }
+
+    public getColumnDescriptors_result setSuccess(Map<byte[],ColumnDescriptor> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getColumnDescriptors_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Map<byte[],ColumnDescriptor>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getColumnDescriptors_result)
+        return this.equals((getColumnDescriptors_result)that);
+      return false;
+    }
+
+    public boolean equals(getColumnDescriptors_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.MAP) {
+                {
+                  TMap _map13 = iprot.readMapBegin();
+                  this.success = new HashMap<byte[],ColumnDescriptor>(2*_map13.size);
+                  for (int _i14 = 0; _i14 < _map13.size; ++_i14)
+                  {
+                    byte[] _key15;
+                    ColumnDescriptor _val16;
+                    _key15 = iprot.readBinary();
+                    _val16 = new ColumnDescriptor();
+                    _val16.read(iprot);
+                    this.success.put(_key15, _val16);
+                  }
+                  iprot.readMapEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeMapBegin(new TMap(TType.STRING, TType.STRUCT, this.success.size()));
+          for (Map.Entry<byte[], ColumnDescriptor> _iter17 : this.success.entrySet())
+          {
+            oprot.writeBinary(_iter17.getKey());
+            _iter17.getValue().write(oprot);
+          }
+          oprot.writeMapEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getColumnDescriptors_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
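+
+  // Note (editorial): getColumnDescriptors() returns a plain HashMap keyed by
+  // byte[], and Java arrays hash by identity, so a lookup with a freshly built
+  // key will miss. Iterate the entry set and compare keys with
+  // java.util.Arrays.equals instead, e.g.:
+  //
+  //   Map<byte[], ColumnDescriptor> families =
+  //       client.getColumnDescriptors("mytable".getBytes());
+  //   for (Map.Entry<byte[], ColumnDescriptor> e : families.entrySet()) {
+  //     System.out.println(new String(e.getKey()) + " => " + e.getValue());
+  //   }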
+
+  public static class getTableRegions_args implements TBase<getTableRegions_args._Fields>, java.io.Serializable, Cloneable, Comparable<getTableRegions_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getTableRegions_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+    /**
+     * table name
+     */
+    public byte[] tableName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * table name
+       */
+      TABLE_NAME((short)1, "tableName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getTableRegions_args.class, metaDataMap);
+    }
+
+    public getTableRegions_args() {
+    }
+
+    public getTableRegions_args(
+      byte[] tableName)
+    {
+      this();
+      this.tableName = tableName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getTableRegions_args(getTableRegions_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+    }
+
+    public getTableRegions_args deepCopy() {
+      return new getTableRegions_args(this);
+    }
+
+    @Deprecated
+    public getTableRegions_args clone() {
+      return new getTableRegions_args(this);
+    }
+
+    /**
+     * table name
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * table name
+     */
+    public getTableRegions_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getTableRegions_args)
+        return this.equals((getTableRegions_args)that);
+      return false;
+    }
+
+    public boolean equals(getTableRegions_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getTableRegions_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getTableRegions_args typedOther = (getTableRegions_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getTableRegions_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getTableRegions_result implements TBase<getTableRegions_result._Fields>, java.io.Serializable, Cloneable, Comparable<getTableRegions_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getTableRegions_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TRegionInfo> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRegionInfo.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getTableRegions_result.class, metaDataMap);
+    }
+
+    public getTableRegions_result() {
+    }
+
+    public getTableRegions_result(
+      List<TRegionInfo> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getTableRegions_result(getTableRegions_result other) {
+      if (other.isSetSuccess()) {
+        List<TRegionInfo> __this__success = new ArrayList<TRegionInfo>();
+        for (TRegionInfo other_element : other.success) {
+          __this__success.add(new TRegionInfo(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getTableRegions_result deepCopy() {
+      return new getTableRegions_result(this);
+    }
+
+    @Deprecated
+    public getTableRegions_result clone() {
+      return new getTableRegions_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRegionInfo> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRegionInfo elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRegionInfo>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRegionInfo> getSuccess() {
+      return this.success;
+    }
+
+    public getTableRegions_result setSuccess(List<TRegionInfo> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getTableRegions_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRegionInfo>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getTableRegions_result)
+        return this.equals((getTableRegions_result)that);
+      return false;
+    }
+
+    public boolean equals(getTableRegions_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getTableRegions_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getTableRegions_result typedOther = (getTableRegions_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list18 = iprot.readListBegin();
+                  this.success = new ArrayList<TRegionInfo>(_list18.size);
+                  for (int _i19 = 0; _i19 < _list18.size; ++_i19)
+                  {
+                    TRegionInfo _elem20;
+                    _elem20 = new TRegionInfo();
+                    _elem20.read(iprot);
+                    this.success.add(_elem20);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRegionInfo _iter21 : this.success)
+          {
+            _iter21.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getTableRegions_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
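+  /*
+   * Illustrative sketch, not part of the generated Thrift code: the
+   * getTableRegions_result wrapper above carries the return value of the
+   * getTableRegions call shown below. Host, port, and table name are
+   * assumptions for the example only.
+   *
+   *   TTransport transport = new TSocket("localhost", 9090);
+   *   transport.open();
+   *   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+   *   List<TRegionInfo> regions = client.getTableRegions("example_table".getBytes());
+   *   for (TRegionInfo region : regions) {
+   *     System.out.println(new String(region.startKey) + " - " + new String(region.endKey));
+   *   }
+   *   transport.close();
+   */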
+
+  public static class createTable_args implements TBase<createTable_args._Fields>, java.io.Serializable, Cloneable, Comparable<createTable_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("createTable_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField COLUMN_FAMILIES_FIELD_DESC = new TField("columnFamilies", TType.LIST, (short)2);
+
+    /**
+     * name of table to create
+     */
+    public byte[] tableName;
+    /**
+     * list of column family descriptors
+     */
+    public List<ColumnDescriptor> columnFamilies;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table to create
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * list of column family descriptors
+       */
+      COLUMN_FAMILIES((short)2, "columnFamilies");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN_FAMILIES, new FieldMetaData("columnFamilies", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, ColumnDescriptor.class))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(createTable_args.class, metaDataMap);
+    }
+
+    public createTable_args() {
+    }
+
+    public createTable_args(
+      byte[] tableName,
+      List<ColumnDescriptor> columnFamilies)
+    {
+      this();
+      this.tableName = tableName;
+      this.columnFamilies = columnFamilies;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public createTable_args(createTable_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetColumnFamilies()) {
+        List<ColumnDescriptor> __this__columnFamilies = new ArrayList<ColumnDescriptor>();
+        for (ColumnDescriptor other_element : other.columnFamilies) {
+          __this__columnFamilies.add(new ColumnDescriptor(other_element));
+        }
+        this.columnFamilies = __this__columnFamilies;
+      }
+    }
+
+    public createTable_args deepCopy() {
+      return new createTable_args(this);
+    }
+
+    @Deprecated
+    public createTable_args clone() {
+      return new createTable_args(this);
+    }
+
+    /**
+     * name of table to create
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table to create
+     */
+    public createTable_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public int getColumnFamiliesSize() {
+      return (this.columnFamilies == null) ? 0 : this.columnFamilies.size();
+    }
+
+    public java.util.Iterator<ColumnDescriptor> getColumnFamiliesIterator() {
+      return (this.columnFamilies == null) ? null : this.columnFamilies.iterator();
+    }
+
+    public void addToColumnFamilies(ColumnDescriptor elem) {
+      if (this.columnFamilies == null) {
+        this.columnFamilies = new ArrayList<ColumnDescriptor>();
+      }
+      this.columnFamilies.add(elem);
+    }
+
+    /**
+     * list of column family descriptors
+     */
+    public List<ColumnDescriptor> getColumnFamilies() {
+      return this.columnFamilies;
+    }
+
+    /**
+     * list of column family descriptors
+     */
+    public createTable_args setColumnFamilies(List<ColumnDescriptor> columnFamilies) {
+      this.columnFamilies = columnFamilies;
+      return this;
+    }
+
+    public void unsetColumnFamilies() {
+      this.columnFamilies = null;
+    }
+
+    /** Returns true if field columnFamilies is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumnFamilies() {
+      return this.columnFamilies != null;
+    }
+
+    public void setColumnFamiliesIsSet(boolean value) {
+      if (!value) {
+        this.columnFamilies = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case COLUMN_FAMILIES:
+        if (value == null) {
+          unsetColumnFamilies();
+        } else {
+          setColumnFamilies((List<ColumnDescriptor>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case COLUMN_FAMILIES:
+        return getColumnFamilies();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case COLUMN_FAMILIES:
+        return isSetColumnFamilies();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof createTable_args)
+        return this.equals((createTable_args)that);
+      return false;
+    }
+
+    public boolean equals(createTable_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_columnFamilies = true && this.isSetColumnFamilies();
+      boolean that_present_columnFamilies = true && that.isSetColumnFamilies();
+      if (this_present_columnFamilies || that_present_columnFamilies) {
+        if (!(this_present_columnFamilies && that_present_columnFamilies))
+          return false;
+        if (!this.columnFamilies.equals(that.columnFamilies))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_columnFamilies = true && (isSetColumnFamilies());
+      builder.append(present_columnFamilies);
+      if (present_columnFamilies)
+        builder.append(columnFamilies);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(createTable_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      createTable_args typedOther = (createTable_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumnFamilies()).compareTo(typedOther.isSetColumnFamilies());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columnFamilies, typedOther.columnFamilies);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN_FAMILIES:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list22 = iprot.readListBegin();
+                  this.columnFamilies = new ArrayList<ColumnDescriptor>(_list22.size);
+                  for (int _i23 = 0; _i23 < _list22.size; ++_i23)
+                  {
+                    ColumnDescriptor _elem24;
+                    _elem24 = new ColumnDescriptor();
+                    _elem24.read(iprot);
+                    this.columnFamilies.add(_elem24);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.columnFamilies != null) {
+        oprot.writeFieldBegin(COLUMN_FAMILIES_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.columnFamilies.size()));
+          for (ColumnDescriptor _iter25 : this.columnFamilies)
+          {
+            _iter25.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("createTable_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columnFamilies:");
+      if (this.columnFamilies == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columnFamilies);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class createTable_result implements TBase<createTable_result._Fields>, java.io.Serializable, Cloneable, Comparable<createTable_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("createTable_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+    private static final TField EXIST_FIELD_DESC = new TField("exist", TType.STRUCT, (short)3);
+
+    public IOError io;
+    public IllegalArgument ia;
+    public AlreadyExists exist;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io"),
+      IA((short)2, "ia"),
+      EXIST((short)3, "exist");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.EXIST, new FieldMetaData("exist", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(createTable_result.class, metaDataMap);
+    }
+
+    public createTable_result() {
+    }
+
+    public createTable_result(
+      IOError io,
+      IllegalArgument ia,
+      AlreadyExists exist)
+    {
+      this();
+      this.io = io;
+      this.ia = ia;
+      this.exist = exist;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public createTable_result(createTable_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+      if (other.isSetExist()) {
+        this.exist = new AlreadyExists(other.exist);
+      }
+    }
+
+    public createTable_result deepCopy() {
+      return new createTable_result(this);
+    }
+
+    @Deprecated
+    public createTable_result clone() {
+      return new createTable_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public createTable_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public createTable_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public AlreadyExists getExist() {
+      return this.exist;
+    }
+
+    public createTable_result setExist(AlreadyExists exist) {
+      this.exist = exist;
+      return this;
+    }
+
+    public void unsetExist() {
+      this.exist = null;
+    }
+
+    /** Returns true if field exist is set (has been assigned a value) and false otherwise */
+    public boolean isSetExist() {
+      return this.exist != null;
+    }
+
+    public void setExistIsSet(boolean value) {
+      if (!value) {
+        this.exist = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      case EXIST:
+        if (value == null) {
+          unsetExist();
+        } else {
+          setExist((AlreadyExists)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      case EXIST:
+        return getExist();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      case EXIST:
+        return isSetExist();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof createTable_result)
+        return this.equals((createTable_result)that);
+      return false;
+    }
+
+    public boolean equals(createTable_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      boolean this_present_exist = true && this.isSetExist();
+      boolean that_present_exist = true && that.isSetExist();
+      if (this_present_exist || that_present_exist) {
+        if (!(this_present_exist && that_present_exist))
+          return false;
+        if (!this.exist.equals(that.exist))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      boolean present_exist = true && (isSetExist());
+      builder.append(present_exist);
+      if (present_exist)
+        builder.append(exist);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(createTable_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      createTable_result typedOther = (createTable_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetExist()).compareTo(typedOther.isSetExist());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(exist, typedOther.exist);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case EXIST:
+              if (field.type == TType.STRUCT) {
+                this.exist = new AlreadyExists();
+                this.exist.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetExist()) {
+        oprot.writeFieldBegin(EXIST_FIELD_DESC);
+        this.exist.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("createTable_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("exist:");
+      if (this.exist == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.exist);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
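+  /*
+   * Illustrative sketch, not part of the generated Thrift code: how the
+   * createTable_args/createTable_result pair above is exercised through the
+   * generated Hbase.Client (assumed to be connected as in the earlier sketch).
+   * The table and column family names are assumptions for the example only.
+   *
+   *   ColumnDescriptor family = new ColumnDescriptor();
+   *   family.name = "f:".getBytes();
+   *   client.createTable("example_table".getBytes(),
+   *       java.util.Collections.singletonList(family));
+   *   // AlreadyExists is raised if the table exists; IllegalArgument or IOError on other failures
+   */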
+
+  public static class deleteTable_args implements TBase<deleteTable_args._Fields>, java.io.Serializable, Cloneable, Comparable<deleteTable_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteTable_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+
+    /**
+     * name of table to delete
+     */
+    public byte[] tableName;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table to delete
+       */
+      TABLE_NAME((short)1, "tableName");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteTable_args.class, metaDataMap);
+    }
+
+    public deleteTable_args() {
+    }
+
+    public deleteTable_args(
+      byte[] tableName)
+    {
+      this();
+      this.tableName = tableName;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteTable_args(deleteTable_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+    }
+
+    public deleteTable_args deepCopy() {
+      return new deleteTable_args(this);
+    }
+
+    @Deprecated
+    public deleteTable_args clone() {
+      return new deleteTable_args(this);
+    }
+
+    /**
+     * name of table to delete
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table to delete
+     */
+    public deleteTable_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteTable_args)
+        return this.equals((deleteTable_args)that);
+      return false;
+    }
+
+    public boolean equals(deleteTable_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteTable_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteTable_args typedOther = (deleteTable_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteTable_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class deleteTable_result implements TBase<deleteTable_result._Fields>, java.io.Serializable, Cloneable, Comparable<deleteTable_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteTable_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteTable_result.class, metaDataMap);
+    }
+
+    public deleteTable_result() {
+    }
+
+    public deleteTable_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteTable_result(deleteTable_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public deleteTable_result deepCopy() {
+      return new deleteTable_result(this);
+    }
+
+    @Deprecated
+    public deleteTable_result clone() {
+      return new deleteTable_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public deleteTable_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteTable_result)
+        return this.equals((deleteTable_result)that);
+      return false;
+    }
+
+    public boolean equals(deleteTable_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteTable_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteTable_result typedOther = (deleteTable_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteTable_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
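+  /*
+   * Illustrative sketch, not part of the generated Thrift code: the
+   * deleteTable_args/deleteTable_result pair above backs the call below
+   * (client is the connected Hbase.Client from the earlier sketch; the table
+   * name is an assumption for the example only).
+   *
+   *   client.disableTable("example_table".getBytes());   // table must be disabled before deletion
+   *   client.deleteTable("example_table".getBytes());    // throws IOError on failure
+   */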
+
+  public static class get_args implements TBase<get_args._Fields>, java.io.Serializable, Cloneable, Comparable<get_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("get_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * column name
+     */
+    public byte[] column;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * column name
+       */
+      COLUMN((short)3, "column");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(get_args.class, metaDataMap);
+    }
+
+    public get_args() {
+    }
+
+    public get_args(
+      byte[] tableName,
+      byte[] row,
+      byte[] column)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.column = column;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public get_args(get_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumn()) {
+        this.column = other.column;
+      }
+    }
+
+    public get_args deepCopy() {
+      return new get_args(this);
+    }
+
+    @Deprecated
+    public get_args clone() {
+      return new get_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public get_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public get_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * column name
+     */
+    public byte[] getColumn() {
+      return this.column;
+    }
+
+    /**
+     * column name
+     */
+    public get_args setColumn(byte[] column) {
+      this.column = column;
+      return this;
+    }
+
+    public void unsetColumn() {
+      this.column = null;
+    }
+
+    /** Returns true if field column is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumn() {
+      return this.column != null;
+    }
+
+    public void setColumnIsSet(boolean value) {
+      if (!value) {
+        this.column = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMN:
+        if (value == null) {
+          unsetColumn();
+        } else {
+          setColumn((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMN:
+        return getColumn();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMN:
+        return isSetColumn();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof get_args)
+        return this.equals((get_args)that);
+      return false;
+    }
+
+    public boolean equals(get_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_column = true && this.isSetColumn();
+      boolean that_present_column = true && that.isSetColumn();
+      if (this_present_column || that_present_column) {
+        if (!(this_present_column && that_present_column))
+          return false;
+        if (!java.util.Arrays.equals(this.column, that.column))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_column = true && (isSetColumn());
+      builder.append(present_column);
+      if (present_column)
+        builder.append(column);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(get_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      get_args typedOther = (get_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
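+        // fields with unrecognized ids are skipped, so this code tolerates newer schema versions on the wire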
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN:
+              if (field.type == TType.STRING) {
+                this.column = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.column != null) {
+        oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+        oprot.writeBinary(this.column);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("get_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("column:");
+      if (this.column == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.column);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class get_result implements TBase<get_result._Fields>, java.io.Serializable, Cloneable, Comparable<get_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("get_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TCell> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TCell.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(get_result.class, metaDataMap);
+    }
+
+    public get_result() {
+    }
+
+    public get_result(
+      List<TCell> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public get_result(get_result other) {
+      if (other.isSetSuccess()) {
+        List<TCell> __this__success = new ArrayList<TCell>();
+        for (TCell other_element : other.success) {
+          __this__success.add(new TCell(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public get_result deepCopy() {
+      return new get_result(this);
+    }
+
+    @Deprecated
+    public get_result clone() {
+      return new get_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TCell> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TCell elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TCell>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TCell> getSuccess() {
+      return this.success;
+    }
+
+    public get_result setSuccess(List<TCell> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public get_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TCell>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof get_result)
+        return this.equals((get_result)that);
+      return false;
+    }
+
+    public boolean equals(get_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(get_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      get_result typedOther = (get_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list26 = iprot.readListBegin();
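+                  // the list header carries the element count; read that many TCell structs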
+                  this.success = new ArrayList<TCell>(_list26.size);
+                  for (int _i27 = 0; _i27 < _list26.size; ++_i27)
+                  {
+                    TCell _elem28;
+                    _elem28 = new TCell();
+                    _elem28.read(iprot);
+                    this.success.add(_elem28);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
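+      // a result struct carries either the return value or a declared exception; only the field that is set is written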
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TCell _iter29 : this.success)
+          {
+            _iter29.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("get_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getVer_args implements TBase<getVer_args._Fields>, java.io.Serializable, Cloneable, Comparable<getVer_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getVer_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+    private static final TField NUM_VERSIONS_FIELD_DESC = new TField("numVersions", TType.I32, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * column name
+     */
+    public byte[] column;
+    /**
+     * number of versions to retrieve
+     */
+    public int numVersions;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * column name
+       */
+      COLUMN((short)3, "column"),
+      /**
+       * number of versions to retrieve
+       */
+      NUM_VERSIONS((short)4, "numVersions");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __NUMVERSIONS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
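+    // numVersions is a primitive int and cannot be null, so its "set" state is tracked in the bit vector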
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.NUM_VERSIONS, new FieldMetaData("numVersions", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getVer_args.class, metaDataMap);
+    }
+
+    public getVer_args() {
+    }
+
+    public getVer_args(
+      byte[] tableName,
+      byte[] row,
+      byte[] column,
+      int numVersions)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.column = column;
+      this.numVersions = numVersions;
+      setNumVersionsIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getVer_args(getVer_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumn()) {
+        this.column = other.column;
+      }
+      this.numVersions = other.numVersions;
+    }
+
+    public getVer_args deepCopy() {
+      return new getVer_args(this);
+    }
+
+    @Deprecated
+    public getVer_args clone() {
+      return new getVer_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public getVer_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public getVer_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * column name
+     */
+    public byte[] getColumn() {
+      return this.column;
+    }
+
+    /**
+     * column name
+     */
+    public getVer_args setColumn(byte[] column) {
+      this.column = column;
+      return this;
+    }
+
+    public void unsetColumn() {
+      this.column = null;
+    }
+
+    /** Returns true if field column is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumn() {
+      return this.column != null;
+    }
+
+    public void setColumnIsSet(boolean value) {
+      if (!value) {
+        this.column = null;
+      }
+    }
+
+    /**
+     * number of versions to retrieve
+     */
+    public int getNumVersions() {
+      return this.numVersions;
+    }
+
+    /**
+     * number of versions to retrieve
+     */
+    public getVer_args setNumVersions(int numVersions) {
+      this.numVersions = numVersions;
+      setNumVersionsIsSet(true);
+      return this;
+    }
+
+    public void unsetNumVersions() {
+      __isset_bit_vector.clear(__NUMVERSIONS_ISSET_ID);
+    }
+
+    /** Returns true if field numVersions is set (has been assigned a value) and false otherwise */
+    public boolean isSetNumVersions() {
+      return __isset_bit_vector.get(__NUMVERSIONS_ISSET_ID);
+    }
+
+    public void setNumVersionsIsSet(boolean value) {
+      __isset_bit_vector.set(__NUMVERSIONS_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMN:
+        if (value == null) {
+          unsetColumn();
+        } else {
+          setColumn((byte[])value);
+        }
+        break;
+
+      case NUM_VERSIONS:
+        if (value == null) {
+          unsetNumVersions();
+        } else {
+          setNumVersions((Integer)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMN:
+        return getColumn();
+
+      case NUM_VERSIONS:
+        return Integer.valueOf(getNumVersions());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMN:
+        return isSetColumn();
+      case NUM_VERSIONS:
+        return isSetNumVersions();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getVer_args)
+        return this.equals((getVer_args)that);
+      return false;
+    }
+
+    public boolean equals(getVer_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_column = true && this.isSetColumn();
+      boolean that_present_column = true && that.isSetColumn();
+      if (this_present_column || that_present_column) {
+        if (!(this_present_column && that_present_column))
+          return false;
+        if (!java.util.Arrays.equals(this.column, that.column))
+          return false;
+      }
+
+      boolean this_present_numVersions = true;
+      boolean that_present_numVersions = true;
+      if (this_present_numVersions || that_present_numVersions) {
+        if (!(this_present_numVersions && that_present_numVersions))
+          return false;
+        if (this.numVersions != that.numVersions)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_column = true && (isSetColumn());
+      builder.append(present_column);
+      if (present_column)
+        builder.append(column);
+
+      boolean present_numVersions = true;
+      builder.append(present_numVersions);
+      if (present_numVersions)
+        builder.append(numVersions);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getVer_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getVer_args typedOther = (getVer_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetNumVersions()).compareTo(typedOther.isSetNumVersions());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(numVersions, typedOther.numVersions);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN:
+              if (field.type == TType.STRING) {
+                this.column = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case NUM_VERSIONS:
+              if (field.type == TType.I32) {
+                this.numVersions = iprot.readI32();
+                setNumVersionsIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.column != null) {
+        oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+        oprot.writeBinary(this.column);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(NUM_VERSIONS_FIELD_DESC);
+      oprot.writeI32(this.numVersions);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getVer_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("column:");
+      if (this.column == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.column);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("numVersions:");
+      sb.append(this.numVersions);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getVer_result implements TBase<getVer_result._Fields>, java.io.Serializable, Cloneable, Comparable<getVer_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getVer_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TCell> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TCell.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getVer_result.class, metaDataMap);
+    }
+
+    public getVer_result() {
+    }
+
+    public getVer_result(
+      List<TCell> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getVer_result(getVer_result other) {
+      if (other.isSetSuccess()) {
+        List<TCell> __this__success = new ArrayList<TCell>();
+        for (TCell other_element : other.success) {
+          __this__success.add(new TCell(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getVer_result deepCopy() {
+      return new getVer_result(this);
+    }
+
+    @Deprecated
+    public getVer_result clone() {
+      return new getVer_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TCell> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TCell elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TCell>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TCell> getSuccess() {
+      return this.success;
+    }
+
+    public getVer_result setSuccess(List<TCell> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getVer_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TCell>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getVer_result)
+        return this.equals((getVer_result)that);
+      return false;
+    }
+
+    public boolean equals(getVer_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getVer_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getVer_result typedOther = (getVer_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list30 = iprot.readListBegin();
+                  this.success = new ArrayList<TCell>(_list30.size);
+                  for (int _i31 = 0; _i31 < _list30.size; ++_i31)
+                  {
+                    TCell _elem32;
+                    _elem32 = new TCell();
+                    _elem32.read(iprot);
+                    this.success.add(_elem32);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TCell _iter33 : this.success)
+          {
+            _iter33.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getVer_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getVerTs_args implements TBase<getVerTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<getVerTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getVerTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+    private static final TField NUM_VERSIONS_FIELD_DESC = new TField("numVersions", TType.I32, (short)5);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * column name
+     */
+    public byte[] column;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+    /**
+     * number of versions to retrieve
+     */
+    public int numVersions;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * column name
+       */
+      COLUMN((short)3, "column"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)4, "timestamp"),
+      /**
+       * number of versions to retrieve
+       */
+      NUM_VERSIONS((short)5, "numVersions");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private static final int __NUMVERSIONS_ISSET_ID = 1;
+    private BitSet __isset_bit_vector = new BitSet(2);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+      put(_Fields.NUM_VERSIONS, new FieldMetaData("numVersions", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getVerTs_args.class, metaDataMap);
+    }
+
+    public getVerTs_args() {
+    }
+
+    public getVerTs_args(
+      byte[] tableName,
+      byte[] row,
+      byte[] column,
+      long timestamp,
+      int numVersions)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.column = column;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      this.numVersions = numVersions;
+      setNumVersionsIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getVerTs_args(getVerTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumn()) {
+        this.column = other.column;
+      }
+      this.timestamp = other.timestamp;
+      this.numVersions = other.numVersions;
+    }
+
+    public getVerTs_args deepCopy() {
+      return new getVerTs_args(this);
+    }
+
+    @Deprecated
+    public getVerTs_args clone() {
+      return new getVerTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public getVerTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public getVerTs_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * column name
+     */
+    public byte[] getColumn() {
+      return this.column;
+    }
+
+    /**
+     * column name
+     */
+    public getVerTs_args setColumn(byte[] column) {
+      this.column = column;
+      return this;
+    }
+
+    public void unsetColumn() {
+      this.column = null;
+    }
+
+    /** Returns true if field column is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumn() {
+      return this.column != null;
+    }
+
+    public void setColumnIsSet(boolean value) {
+      if (!value) {
+        this.column = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public getVerTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    /**
+     * number of versions to retrieve
+     */
+    public int getNumVersions() {
+      return this.numVersions;
+    }
+
+    /**
+     * number of versions to retrieve
+     */
+    public getVerTs_args setNumVersions(int numVersions) {
+      this.numVersions = numVersions;
+      setNumVersionsIsSet(true);
+      return this;
+    }
+
+    public void unsetNumVersions() {
+      __isset_bit_vector.clear(__NUMVERSIONS_ISSET_ID);
+    }
+
+    /** Returns true if field numVersions is set (has been assigned a value) and false otherwise */
+    public boolean isSetNumVersions() {
+      return __isset_bit_vector.get(__NUMVERSIONS_ISSET_ID);
+    }
+
+    public void setNumVersionsIsSet(boolean value) {
+      __isset_bit_vector.set(__NUMVERSIONS_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMN:
+        if (value == null) {
+          unsetColumn();
+        } else {
+          setColumn((byte[])value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      case NUM_VERSIONS:
+        if (value == null) {
+          unsetNumVersions();
+        } else {
+          setNumVersions((Integer)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMN:
+        return getColumn();
+
+      case TIMESTAMP:
+        return Long.valueOf(getTimestamp());
+
+      case NUM_VERSIONS:
+        return Integer.valueOf(getNumVersions());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMN:
+        return isSetColumn();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      case NUM_VERSIONS:
+        return isSetNumVersions();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getVerTs_args)
+        return this.equals((getVerTs_args)that);
+      return false;
+    }
+
+    public boolean equals(getVerTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_column = true && this.isSetColumn();
+      boolean that_present_column = true && that.isSetColumn();
+      if (this_present_column || that_present_column) {
+        if (!(this_present_column && that_present_column))
+          return false;
+        if (!java.util.Arrays.equals(this.column, that.column))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      boolean this_present_numVersions = true;
+      boolean that_present_numVersions = true;
+      if (this_present_numVersions || that_present_numVersions) {
+        if (!(this_present_numVersions && that_present_numVersions))
+          return false;
+        if (this.numVersions != that.numVersions)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_column = true && (isSetColumn());
+      builder.append(present_column);
+      if (present_column)
+        builder.append(column);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      boolean present_numVersions = true;
+      builder.append(present_numVersions);
+      if (present_numVersions)
+        builder.append(numVersions);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getVerTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getVerTs_args typedOther = (getVerTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetNumVersions()).compareTo(typedOther.isSetNumVersions());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(numVersions, typedOther.numVersions);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN:
+              if (field.type == TType.STRING) {
+                this.column = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case NUM_VERSIONS:
+              if (field.type == TType.I32) {
+                this.numVersions = iprot.readI32();
+                setNumVersionsIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.column != null) {
+        oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+        oprot.writeBinary(this.column);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldBegin(NUM_VERSIONS_FIELD_DESC);
+      oprot.writeI32(this.numVersions);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getVerTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("column:");
+      if (this.column == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.column);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("numVersions:");
+      sb.append(this.numVersions);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
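+  /*
+   * Illustrative sketch only -- not part of the generated code. It shows one
+   * way a caller might populate getVerTs_args before issuing the call; the
+   * "t1", "row1" and "cf:qual" literals are made-up placeholders.
+   *
+   *   getVerTs_args args = new getVerTs_args()
+   *       .setTableName("t1".getBytes())
+   *       .setRow("row1".getBytes())
+   *       .setColumn("cf:qual".getBytes())
+   *       .setTimestamp(System.currentTimeMillis())
+   *       .setNumVersions(3);
+   *
+   * Each setter returns the struct itself, so calls can be chained; the
+   * primitive setters additionally flip their bits in __isset_bit_vector so
+   * that isSetTimestamp() and isSetNumVersions() report true.
+   */
+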
+  public static class getVerTs_result implements TBase<getVerTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<getVerTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getVerTs_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TCell> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TCell.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getVerTs_result.class, metaDataMap);
+    }
+
+    public getVerTs_result() {
+    }
+
+    public getVerTs_result(
+      List<TCell> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getVerTs_result(getVerTs_result other) {
+      if (other.isSetSuccess()) {
+        List<TCell> __this__success = new ArrayList<TCell>();
+        for (TCell other_element : other.success) {
+          __this__success.add(new TCell(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getVerTs_result deepCopy() {
+      return new getVerTs_result(this);
+    }
+
+    @Deprecated
+    public getVerTs_result clone() {
+      return new getVerTs_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TCell> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TCell elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TCell>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TCell> getSuccess() {
+      return this.success;
+    }
+
+    public getVerTs_result setSuccess(List<TCell> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getVerTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TCell>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getVerTs_result)
+        return this.equals((getVerTs_result)that);
+      return false;
+    }
+
+    public boolean equals(getVerTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getVerTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getVerTs_result typedOther = (getVerTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list34 = iprot.readListBegin();
+                  this.success = new ArrayList<TCell>(_list34.size);
+                  for (int _i35 = 0; _i35 < _list34.size; ++_i35)
+                  {
+                    TCell _elem36;
+                    _elem36 = new TCell();
+                    _elem36.read(iprot);
+                    this.success.add(_elem36);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TCell _iter37 : this.success)
+          {
+            _iter37.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getVerTs_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
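+  /*
+   * Illustrative sketch only -- not generated. A minimal synchronous caller
+   * built on the Client class defined earlier in this file; the host, port
+   * and byte[] variables are placeholders, exception handling is omitted,
+   * and TCell is assumed to expose its value and timestamp fields through
+   * the usual generated getters.
+   *
+   *   TTransport transport = new TSocket("localhost", 9090);
+   *   transport.open();
+   *   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+   *   List<TCell> cells = client.getVerTs(tableName, row, column,
+   *       System.currentTimeMillis(), 3);
+   *   for (TCell cell : cells) {
+   *     // cell.getValue() is the stored byte[], cell.getTimestamp() its version
+   *   }
+   *   transport.close();
+   *
+   * On the wire this populates a getVerTs_args struct and unpacks the
+   * matching getVerTs_result: success when the call returns normally, io
+   * when the server raised an IOError.
+   */
+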
+  public static class getRow_args implements TBase<getRow_args._Fields>, java.io.Serializable, Cloneable, Comparable<getRow_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRow_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRow_args.class, metaDataMap);
+    }
+
+    public getRow_args() {
+    }
+
+    public getRow_args(
+      byte[] tableName,
+      byte[] row)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRow_args(getRow_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+    }
+
+    public getRow_args deepCopy() {
+      return new getRow_args(this);
+    }
+
+    @Deprecated
+    public getRow_args clone() {
+      return new getRow_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public getRow_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public getRow_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRow_args)
+        return this.equals((getRow_args)that);
+      return false;
+    }
+
+    public boolean equals(getRow_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getRow_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getRow_args typedOther = (getRow_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRow_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
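+  /*
+   * Illustrative sketch only -- not generated. A round trip through the
+   * write()/read() pair above, assuming libthrift's in-memory TMemoryBuffer
+   * transport and TBinaryProtocol are available on the classpath; exception
+   * handling is omitted and the byte[] literals are placeholders.
+   *
+   *   TMemoryBuffer buffer = new TMemoryBuffer(64);
+   *   TBinaryProtocol proto = new TBinaryProtocol(buffer);
+   *   new getRow_args("t1".getBytes(), "row1".getBytes()).write(proto);
+   *   getRow_args decoded = new getRow_args();
+   *   decoded.read(proto);
+   *
+   * Field ids that read() does not recognize are skipped via
+   * TProtocolUtil.skip, which is what lets older and newer clients of this
+   * service interoperate.
+   */
+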
+  public static class getRow_result implements TBase<getRow_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRow_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TRowResult> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRowResult.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRow_result.class, metaDataMap);
+    }
+
+    public getRow_result() {
+    }
+
+    public getRow_result(
+      List<TRowResult> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRow_result(getRow_result other) {
+      if (other.isSetSuccess()) {
+        List<TRowResult> __this__success = new ArrayList<TRowResult>();
+        for (TRowResult other_element : other.success) {
+          __this__success.add(new TRowResult(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getRow_result deepCopy() {
+      return new getRow_result(this);
+    }
+
+    @Deprecated
+    public getRow_result clone() {
+      return new getRow_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRowResult> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRowResult elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRowResult>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRowResult> getSuccess() {
+      return this.success;
+    }
+
+    public getRow_result setSuccess(List<TRowResult> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getRow_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRowResult>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRow_result)
+        return this.equals((getRow_result)that);
+      return false;
+    }
+
+    public boolean equals(getRow_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list38 = iprot.readListBegin();
+                  this.success = new ArrayList<TRowResult>(_list38.size);
+                  for (int _i39 = 0; _i39 < _list38.size; ++_i39)
+                  {
+                    TRowResult _elem40;
+                    _elem40 = new TRowResult();
+                    _elem40.read(iprot);
+                    this.success.add(_elem40);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRowResult _iter41 : this.success)
+          {
+            _iter41.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRow_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
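+  /*
+   * Illustrative sketch only -- not generated. Consuming a getRow result,
+   * assuming TRowResult carries the row key plus a columns map of TCell
+   * values as declared in the Thrift IDL; client, tableName and row are the
+   * same placeholders as in the earlier sketches.
+   *
+   *   for (TRowResult result : client.getRow(tableName, row)) {
+   *     for (Map.Entry<byte[], TCell> entry : result.columns.entrySet()) {
+   *       // entry.getKey() is the "family:qualifier" column name,
+   *       // entry.getValue() the most recent cell stored under it
+   *     }
+   *   }
+   *
+   * Note that the generated write() above emits at most one of the result
+   * fields: success when the call succeeded, io when the server raised an
+   * IOError.
+   */
+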
+  public static class getRowWithColumns_args implements TBase<getRowWithColumns_args._Fields>, java.io.Serializable, Cloneable, Comparable<getRowWithColumns_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumns_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * List of columns to return, null for all columns
+     */
+    public List<byte[]> columns;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * List of columns to return, null for all columns
+       */
+      COLUMNS((short)3, "columns");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRowWithColumns_args.class, metaDataMap);
+    }
+
+    public getRowWithColumns_args() {
+    }
+
+    public getRowWithColumns_args(
+      byte[] tableName,
+      byte[] row,
+      List<byte[]> columns)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.columns = columns;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRowWithColumns_args(getRowWithColumns_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+    }
+
+    public getRowWithColumns_args deepCopy() {
+      return new getRowWithColumns_args(this);
+    }
+
+    @Deprecated
+    public getRowWithColumns_args clone() {
+      return new getRowWithColumns_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public getRowWithColumns_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public getRowWithColumns_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * List of columns to return, null for all columns
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * List of columns to return, null for all columns
+     */
+    public getRowWithColumns_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMNS:
+        return getColumns();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMNS:
+        return isSetColumns();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRowWithColumns_args)
+        return this.equals((getRowWithColumns_args)that);
+      return false;
+    }
+
+    public boolean equals(getRowWithColumns_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getRowWithColumns_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getRowWithColumns_args typedOther = (getRowWithColumns_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list42 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list42.size);
+                  for (int _i43 = 0; _i43 < _list42.size; ++_i43)
+                  {
+                    byte[] _elem44;
+                    _elem44 = iprot.readBinary();
+                    this.columns.add(_elem44);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter45 : this.columns)
+          {
+            oprot.writeBinary(_iter45);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRowWithColumns_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
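+  // A minimal, hand-written usage sketch (not emitted by the Thrift compiler;
+  // the table/row/column literals and the 512-byte buffer size are illustrative
+  // assumptions). The args struct above is normally built by Hbase.Client, but
+  // it can also be round-tripped through any TProtocol directly, assuming the
+  // usual org.apache.thrift imports:
+  //
+  //   getRowWithColumns_args args = new getRowWithColumns_args(
+  //       "tableA".getBytes(), "row1".getBytes(),
+  //       java.util.Collections.singletonList("cf:qual".getBytes()));
+  //   TMemoryBuffer buf = new TMemoryBuffer(512);
+  //   args.write(new TBinaryProtocol(buf));   // serialize via write(TProtocol)
+  //   getRowWithColumns_args copy = new getRowWithColumns_args();
+  //   copy.read(new TBinaryProtocol(buf));    // deserialize via read(TProtocol)
+  //   assert java.util.Arrays.equals(copy.row, args.row);
+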
+  public static class getRowWithColumns_result implements TBase<getRowWithColumns_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumns_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TRowResult> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRowResult.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRowWithColumns_result.class, metaDataMap);
+    }
+
+    public getRowWithColumns_result() {
+    }
+
+    public getRowWithColumns_result(
+      List<TRowResult> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRowWithColumns_result(getRowWithColumns_result other) {
+      if (other.isSetSuccess()) {
+        List<TRowResult> __this__success = new ArrayList<TRowResult>();
+        for (TRowResult other_element : other.success) {
+          __this__success.add(new TRowResult(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getRowWithColumns_result deepCopy() {
+      return new getRowWithColumns_result(this);
+    }
+
+    @Deprecated
+    public getRowWithColumns_result clone() {
+      return new getRowWithColumns_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRowResult> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRowResult elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRowResult>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRowResult> getSuccess() {
+      return this.success;
+    }
+
+    public getRowWithColumns_result setSuccess(List<TRowResult> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getRowWithColumns_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRowResult>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRowWithColumns_result)
+        return this.equals((getRowWithColumns_result)that);
+      return false;
+    }
+
+    public boolean equals(getRowWithColumns_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list46 = iprot.readListBegin();
+                  this.success = new ArrayList<TRowResult>(_list46.size);
+                  for (int _i47 = 0; _i47 < _list46.size; ++_i47)
+                  {
+                    TRowResult _elem48;
+                    _elem48 = new TRowResult();
+                    _elem48.read(iprot);
+                    this.success.add(_elem48);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRowResult _iter49 : this.success)
+          {
+            _iter49.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRowWithColumns_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
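+  // Note on the result struct above: write() serializes at most one field --
+  // the success list when isSetSuccess() is true, otherwise io if set -- so on
+  // the wire the response behaves like a tagged union. A server-side sketch of
+  // how it is typically filled (iface, args and oprot are assumed to be in
+  // scope, much as in the generated Processor):
+  //
+  //   getRowWithColumns_result result = new getRowWithColumns_result();
+  //   try {
+  //     result.success = iface.getRowWithColumns(args.tableName, args.row, args.columns);
+  //   } catch (IOError io) {
+  //     result.io = io;
+  //   }
+  //   result.write(oprot);
+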
+  public static class getRowTs_args implements TBase<getRowTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<getRowTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRowTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)3);
+
+    /**
+     * name of the table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of the table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)3, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRowTs_args.class, metaDataMap);
+    }
+
+    public getRowTs_args() {
+    }
+
+    public getRowTs_args(
+      byte[] tableName,
+      byte[] row,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRowTs_args(getRowTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public getRowTs_args deepCopy() {
+      return new getRowTs_args(this);
+    }
+
+    @Deprecated
+    public getRowTs_args clone() {
+      return new getRowTs_args(this);
+    }
+
+    /**
+     * name of the table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of the table
+     */
+    public getRowTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public getRowTs_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public getRowTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case TIMESTAMP:
+        return Long.valueOf(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRowTs_args)
+        return this.equals((getRowTs_args)that);
+      return false;
+    }
+
+    public boolean equals(getRowTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getRowTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getRowTs_args typedOther = (getRowTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRowTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
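+  // Because timestamp above is a primitive long, its "set" state cannot be
+  // modelled with null; it is tracked in __isset_bit_vector instead. A small
+  // illustrative sketch of the resulting behaviour:
+  //
+  //   getRowTs_args a = new getRowTs_args();
+  //   a.isSetTimestamp();       // false -- the isset bit starts cleared
+  //   a.setTimestamp(1234L);    // stores the value and sets the bit
+  //   a.isSetTimestamp();       // true
+  //   a.unsetTimestamp();       // clears the bit only; the long keeps 1234L
+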
+  public static class getRowTs_result implements TBase<getRowTs_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRowTs_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TRowResult> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRowResult.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRowTs_result.class, metaDataMap);
+    }
+
+    public getRowTs_result() {
+    }
+
+    public getRowTs_result(
+      List<TRowResult> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRowTs_result(getRowTs_result other) {
+      if (other.isSetSuccess()) {
+        List<TRowResult> __this__success = new ArrayList<TRowResult>();
+        for (TRowResult other_element : other.success) {
+          __this__success.add(new TRowResult(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getRowTs_result deepCopy() {
+      return new getRowTs_result(this);
+    }
+
+    @Deprecated
+    public getRowTs_result clone() {
+      return new getRowTs_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRowResult> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRowResult elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRowResult>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRowResult> getSuccess() {
+      return this.success;
+    }
+
+    public getRowTs_result setSuccess(List<TRowResult> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getRowTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRowResult>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRowTs_result)
+        return this.equals((getRowTs_result)that);
+      return false;
+    }
+
+    public boolean equals(getRowTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list50 = iprot.readListBegin();
+                  this.success = new ArrayList<TRowResult>(_list50.size);
+                  for (int _i51 = 0; _i51 < _list50.size; ++_i51)
+                  {
+                    TRowResult _elem52;
+                    _elem52 = new TRowResult();
+                    _elem52.read(iprot);
+                    this.success.add(_elem52);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRowResult _iter53 : this.success)
+          {
+            _iter53.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRowTs_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getRowWithColumnsTs_args implements TBase<getRowWithColumnsTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<getRowWithColumnsTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumnsTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * List of columns to return, null for all columns
+     */
+    public List<byte[]> columns;
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * List of columns to return, null for all columns
+       */
+      COLUMNS((short)3, "columns"),
+      TIMESTAMP((short)4, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRowWithColumnsTs_args.class, metaDataMap);
+    }
+
+    public getRowWithColumnsTs_args() {
+    }
+
+    public getRowWithColumnsTs_args(
+      byte[] tableName,
+      byte[] row,
+      List<byte[]> columns,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.columns = columns;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRowWithColumnsTs_args(getRowWithColumnsTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public getRowWithColumnsTs_args deepCopy() {
+      return new getRowWithColumnsTs_args(this);
+    }
+
+    @Deprecated
+    public getRowWithColumnsTs_args clone() {
+      return new getRowWithColumnsTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public getRowWithColumnsTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public getRowWithColumnsTs_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * List of columns to return, null for all columns
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * List of columns to return, null for all columns
+     */
+    public getRowWithColumnsTs_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    public getRowWithColumnsTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMNS:
+        return getColumns();
+
+      case TIMESTAMP:
+        return Long.valueOf(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMNS:
+        return isSetColumns();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRowWithColumnsTs_args)
+        return this.equals((getRowWithColumnsTs_args)that);
+      return false;
+    }
+
+    public boolean equals(getRowWithColumnsTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(getRowWithColumnsTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      getRowWithColumnsTs_args typedOther = (getRowWithColumnsTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list54 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list54.size);
+                  for (int _i55 = 0; _i55 < _list54.size; ++_i55)
+                  {
+                    byte[] _elem56;
+                    _elem56 = iprot.readBinary();
+                    this.columns.add(_elem56);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter57 : this.columns)
+          {
+            oprot.writeBinary(_iter57);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRowWithColumnsTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class getRowWithColumnsTs_result implements TBase<getRowWithColumnsTs_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("getRowWithColumnsTs_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public List<TRowResult> success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRowResult.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(getRowWithColumnsTs_result.class, metaDataMap);
+    }
+
+    public getRowWithColumnsTs_result() {
+    }
+
+    public getRowWithColumnsTs_result(
+      List<TRowResult> success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public getRowWithColumnsTs_result(getRowWithColumnsTs_result other) {
+      if (other.isSetSuccess()) {
+        List<TRowResult> __this__success = new ArrayList<TRowResult>();
+        for (TRowResult other_element : other.success) {
+          __this__success.add(new TRowResult(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public getRowWithColumnsTs_result deepCopy() {
+      return new getRowWithColumnsTs_result(this);
+    }
+
+    @Deprecated
+    public getRowWithColumnsTs_result clone() {
+      return new getRowWithColumnsTs_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRowResult> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRowResult elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRowResult>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRowResult> getSuccess() {
+      return this.success;
+    }
+
+    public getRowWithColumnsTs_result setSuccess(List<TRowResult> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public getRowWithColumnsTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRowResult>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof getRowWithColumnsTs_result)
+        return this.equals((getRowWithColumnsTs_result)that);
+      return false;
+    }
+
+    public boolean equals(getRowWithColumnsTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list58 = iprot.readListBegin();
+                  this.success = new ArrayList<TRowResult>(_list58.size);
+                  for (int _i59 = 0; _i59 < _list58.size; ++_i59)
+                  {
+                    TRowResult _elem60;
+                    _elem60 = new TRowResult();
+                    _elem60.read(iprot);
+                    this.success.add(_elem60);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRowResult _iter61 : this.success)
+          {
+            _iter61.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("getRowWithColumnsTs_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
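+  // Hand-added illustrative note (not Thrift-generated): a minimal sketch of how a client
+  // might drive the getRowWithColumnsTs_args/_result pair above through the generated
+  // Hbase.Client. Host, port and column names are assumptions for illustration only.
+  //
+  //   TTransport transport = new TSocket("localhost", 9090);
+  //   transport.open();
+  //   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+  //   List<byte[]> columns = Collections.singletonList("cf:qual".getBytes());
+  //   long timestamp = System.currentTimeMillis();
+  //   List<TRowResult> rows = client.getRowWithColumnsTs(
+  //       "tableName".getBytes(), "rowKey".getBytes(), columns, timestamp);
+  //   transport.close();
+  //
+  // A server-side failure does not appear in the returned list; the io field of the
+  // result struct above is rethrown at the caller as an IOError exception.
+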
+  public static class mutateRow_args implements TBase<mutateRow_args._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRow_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRow_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField MUTATIONS_FIELD_DESC = new TField("mutations", TType.LIST, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * list of mutation commands
+     */
+    public List<Mutation> mutations;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * list of mutation commands
+       */
+      MUTATIONS((short)3, "mutations");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.MUTATIONS, new FieldMetaData("mutations", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, Mutation.class))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRow_args.class, metaDataMap);
+    }
+
+    public mutateRow_args() {
+    }
+
+    public mutateRow_args(
+      byte[] tableName,
+      byte[] row,
+      List<Mutation> mutations)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.mutations = mutations;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRow_args(mutateRow_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetMutations()) {
+        List<Mutation> __this__mutations = new ArrayList<Mutation>();
+        for (Mutation other_element : other.mutations) {
+          __this__mutations.add(new Mutation(other_element));
+        }
+        this.mutations = __this__mutations;
+      }
+    }
+
+    public mutateRow_args deepCopy() {
+      return new mutateRow_args(this);
+    }
+
+    @Deprecated
+    public mutateRow_args clone() {
+      return new mutateRow_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public mutateRow_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public mutateRow_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    public int getMutationsSize() {
+      return (this.mutations == null) ? 0 : this.mutations.size();
+    }
+
+    public java.util.Iterator<Mutation> getMutationsIterator() {
+      return (this.mutations == null) ? null : this.mutations.iterator();
+    }
+
+    public void addToMutations(Mutation elem) {
+      if (this.mutations == null) {
+        this.mutations = new ArrayList<Mutation>();
+      }
+      this.mutations.add(elem);
+    }
+
+    /**
+     * list of mutation commands
+     */
+    public List<Mutation> getMutations() {
+      return this.mutations;
+    }
+
+    /**
+     * list of mutation commands
+     */
+    public mutateRow_args setMutations(List<Mutation> mutations) {
+      this.mutations = mutations;
+      return this;
+    }
+
+    public void unsetMutations() {
+      this.mutations = null;
+    }
+
+    /** Returns true if field mutations is set (has been assigned a value) and false otherwise */
+    public boolean isSetMutations() {
+      return this.mutations != null;
+    }
+
+    public void setMutationsIsSet(boolean value) {
+      if (!value) {
+        this.mutations = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case MUTATIONS:
+        if (value == null) {
+          unsetMutations();
+        } else {
+          setMutations((List<Mutation>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case MUTATIONS:
+        return getMutations();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case MUTATIONS:
+        return isSetMutations();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRow_args)
+        return this.equals((mutateRow_args)that);
+      return false;
+    }
+
+    public boolean equals(mutateRow_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_mutations = true && this.isSetMutations();
+      boolean that_present_mutations = true && that.isSetMutations();
+      if (this_present_mutations || that_present_mutations) {
+        if (!(this_present_mutations && that_present_mutations))
+          return false;
+        if (!this.mutations.equals(that.mutations))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_mutations = true && (isSetMutations());
+      builder.append(present_mutations);
+      if (present_mutations)
+        builder.append(mutations);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRow_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRow_args typedOther = (mutateRow_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetMutations()).compareTo(typedOther.isSetMutations());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(mutations, typedOther.mutations);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case MUTATIONS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list62 = iprot.readListBegin();
+                  this.mutations = new ArrayList<Mutation>(_list62.size);
+                  for (int _i63 = 0; _i63 < _list62.size; ++_i63)
+                  {
+                    Mutation _elem64;
+                    _elem64 = new Mutation();
+                    _elem64.read(iprot);
+                    this.mutations.add(_elem64);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.mutations != null) {
+        oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.mutations.size()));
+          for (Mutation _iter65 : this.mutations)
+          {
+            _iter65.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRow_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("mutations:");
+      if (this.mutations == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.mutations);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class mutateRow_result implements TBase<mutateRow_result._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRow_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRow_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRow_result.class, metaDataMap);
+    }
+
+    public mutateRow_result() {
+    }
+
+    public mutateRow_result(
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRow_result(mutateRow_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public mutateRow_result deepCopy() {
+      return new mutateRow_result(this);
+    }
+
+    @Deprecated
+    public mutateRow_result clone() {
+      return new mutateRow_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public mutateRow_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public mutateRow_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRow_result)
+        return this.equals((mutateRow_result)that);
+      return false;
+    }
+
+    public boolean equals(mutateRow_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRow_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRow_result typedOther = (mutateRow_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRow_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
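+  // Hand-added illustrative note (not Thrift-generated): a minimal sketch, under the same
+  // assumptions as the note above, of issuing a mutateRow call. The Mutation arguments
+  // (isDelete, column, value) mirror the fields of the generated Mutation struct.
+  //
+  //   List<Mutation> mutations = new ArrayList<Mutation>();
+  //   mutations.add(new Mutation(false, "cf:qual".getBytes(), "value".getBytes()));
+  //   client.mutateRow("tableName".getBytes(), "rowKey".getBytes(), mutations);
+  //
+  // Failures surface as thrown IOError or IllegalArgument exceptions, matching the
+  // io/ia fields of mutateRow_result above.
+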
+  public static class mutateRowTs_args implements TBase<mutateRowTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRowTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRowTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField MUTATIONS_FIELD_DESC = new TField("mutations", TType.LIST, (short)3);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row key
+     */
+    public byte[] row;
+    /**
+     * list of mutation commands
+     */
+    public List<Mutation> mutations;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row key
+       */
+      ROW((short)2, "row"),
+      /**
+       * list of mutation commands
+       */
+      MUTATIONS((short)3, "mutations"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)4, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.MUTATIONS, new FieldMetaData("mutations", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, Mutation.class))));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRowTs_args.class, metaDataMap);
+    }
+
+    public mutateRowTs_args() {
+    }
+
+    public mutateRowTs_args(
+      byte[] tableName,
+      byte[] row,
+      List<Mutation> mutations,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.mutations = mutations;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRowTs_args(mutateRowTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetMutations()) {
+        List<Mutation> __this__mutations = new ArrayList<Mutation>();
+        for (Mutation other_element : other.mutations) {
+          __this__mutations.add(new Mutation(other_element));
+        }
+        this.mutations = __this__mutations;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public mutateRowTs_args deepCopy() {
+      return new mutateRowTs_args(this);
+    }
+
+    @Deprecated
+    public mutateRowTs_args clone() {
+      return new mutateRowTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public mutateRowTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row key
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row key
+     */
+    public mutateRowTs_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    public int getMutationsSize() {
+      return (this.mutations == null) ? 0 : this.mutations.size();
+    }
+
+    public java.util.Iterator<Mutation> getMutationsIterator() {
+      return (this.mutations == null) ? null : this.mutations.iterator();
+    }
+
+    public void addToMutations(Mutation elem) {
+      if (this.mutations == null) {
+        this.mutations = new ArrayList<Mutation>();
+      }
+      this.mutations.add(elem);
+    }
+
+    /**
+     * list of mutation commands
+     */
+    public List<Mutation> getMutations() {
+      return this.mutations;
+    }
+
+    /**
+     * list of mutation commands
+     */
+    public mutateRowTs_args setMutations(List<Mutation> mutations) {
+      this.mutations = mutations;
+      return this;
+    }
+
+    public void unsetMutations() {
+      this.mutations = null;
+    }
+
+    /** Returns true if field mutations is set (has been assigned a value) and false otherwise */
+    public boolean isSetMutations() {
+      return this.mutations != null;
+    }
+
+    public void setMutationsIsSet(boolean value) {
+      if (!value) {
+        this.mutations = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public mutateRowTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case MUTATIONS:
+        if (value == null) {
+          unsetMutations();
+        } else {
+          setMutations((List<Mutation>)value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case MUTATIONS:
+        return getMutations();
+
+      case TIMESTAMP:
+        return new Long(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case MUTATIONS:
+        return isSetMutations();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRowTs_args)
+        return this.equals((mutateRowTs_args)that);
+      return false;
+    }
+
+    public boolean equals(mutateRowTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_mutations = true && this.isSetMutations();
+      boolean that_present_mutations = true && that.isSetMutations();
+      if (this_present_mutations || that_present_mutations) {
+        if (!(this_present_mutations && that_present_mutations))
+          return false;
+        if (!this.mutations.equals(that.mutations))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_mutations = true && (isSetMutations());
+      builder.append(present_mutations);
+      if (present_mutations)
+        builder.append(mutations);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRowTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRowTs_args typedOther = (mutateRowTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetMutations()).compareTo(typedOther.isSetMutations());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(mutations, typedOther.mutations);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case MUTATIONS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list66 = iprot.readListBegin();
+                  this.mutations = new ArrayList<Mutation>(_list66.size);
+                  for (int _i67 = 0; _i67 < _list66.size; ++_i67)
+                  {
+                    Mutation _elem68;
+                    _elem68 = new Mutation();
+                    _elem68.read(iprot);
+                    this.mutations.add(_elem68);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.mutations != null) {
+        oprot.writeFieldBegin(MUTATIONS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.mutations.size()));
+          for (Mutation _iter69 : this.mutations)
+          {
+            _iter69.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
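+      // timestamp is a primitive field, so it is written unconditionally (no null check applies).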
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRowTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("mutations:");
+      if (this.mutations == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.mutations);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
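+  // Hedged usage sketch (illustrative comment only, not generated code): assuming the
+  // generated Hbase.Client in this file exposes mutateRowTs(tableName, row, mutations,
+  // timestamp) and that Mutation offers the same fluent setters as the structs above,
+  // a caller would exercise mutateRowTs_args indirectly like this:
+  //
+  //   List<Mutation> mutations = new ArrayList<Mutation>();
+  //   mutations.add(new Mutation().setColumn(Bytes.toBytes("cf:qual"))
+  //                               .setValue(Bytes.toBytes("val")));
+  //   client.mutateRowTs(Bytes.toBytes("t"), Bytes.toBytes("row"), mutations, ts);
+  //
+  // The client serializes those arguments through mutateRowTs_args.write(TProtocol) above.
+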
+  public static class mutateRowTs_result implements TBase<mutateRowTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRowTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRowTs_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRowTs_result.class, metaDataMap);
+    }
+
+    public mutateRowTs_result() {
+    }
+
+    public mutateRowTs_result(
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRowTs_result(mutateRowTs_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public mutateRowTs_result deepCopy() {
+      return new mutateRowTs_result(this);
+    }
+
+    @Deprecated
+    public mutateRowTs_result clone() {
+      return new mutateRowTs_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public mutateRowTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public mutateRowTs_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRowTs_result)
+        return this.equals((mutateRowTs_result)that);
+      return false;
+    }
+
+    public boolean equals(mutateRowTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRowTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRowTs_result typedOther = (mutateRowTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
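+      // At most one field is serialized for a result: io if set, otherwise ia; when neither
+      // exception is set, only the field-stop marker is written for this void call.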
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRowTs_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class mutateRows_args implements TBase<mutateRows_args._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRows_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRows_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_BATCHES_FIELD_DESC = new TField("rowBatches", TType.LIST, (short)2);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * list of row batches
+     */
+    public List<BatchMutation> rowBatches;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * list of row batches
+       */
+      ROW_BATCHES((short)2, "rowBatches");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW_BATCHES, new FieldMetaData("rowBatches", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, BatchMutation.class))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRows_args.class, metaDataMap);
+    }
+
+    public mutateRows_args() {
+    }
+
+    public mutateRows_args(
+      byte[] tableName,
+      List<BatchMutation> rowBatches)
+    {
+      this();
+      this.tableName = tableName;
+      this.rowBatches = rowBatches;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRows_args(mutateRows_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRowBatches()) {
+        List<BatchMutation> __this__rowBatches = new ArrayList<BatchMutation>();
+        for (BatchMutation other_element : other.rowBatches) {
+          __this__rowBatches.add(new BatchMutation(other_element));
+        }
+        this.rowBatches = __this__rowBatches;
+      }
+    }
+
+    public mutateRows_args deepCopy() {
+      return new mutateRows_args(this);
+    }
+
+    @Deprecated
+    public mutateRows_args clone() {
+      return new mutateRows_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public mutateRows_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public int getRowBatchesSize() {
+      return (this.rowBatches == null) ? 0 : this.rowBatches.size();
+    }
+
+    public java.util.Iterator<BatchMutation> getRowBatchesIterator() {
+      return (this.rowBatches == null) ? null : this.rowBatches.iterator();
+    }
+
+    public void addToRowBatches(BatchMutation elem) {
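+      // Lazily create the backing list so batches can be added without pre-initializing rowBatches.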
+      if (this.rowBatches == null) {
+        this.rowBatches = new ArrayList<BatchMutation>();
+      }
+      this.rowBatches.add(elem);
+    }
+
+    /**
+     * list of row batches
+     */
+    public List<BatchMutation> getRowBatches() {
+      return this.rowBatches;
+    }
+
+    /**
+     * list of row batches
+     */
+    public mutateRows_args setRowBatches(List<BatchMutation> rowBatches) {
+      this.rowBatches = rowBatches;
+      return this;
+    }
+
+    public void unsetRowBatches() {
+      this.rowBatches = null;
+    }
+
+    /** Returns true if field rowBatches is set (has been assigned a value) and false otherwise */
+    public boolean isSetRowBatches() {
+      return this.rowBatches != null;
+    }
+
+    public void setRowBatchesIsSet(boolean value) {
+      if (!value) {
+        this.rowBatches = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW_BATCHES:
+        if (value == null) {
+          unsetRowBatches();
+        } else {
+          setRowBatches((List<BatchMutation>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW_BATCHES:
+        return getRowBatches();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW_BATCHES:
+        return isSetRowBatches();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRows_args)
+        return this.equals((mutateRows_args)that);
+      return false;
+    }
+
+    public boolean equals(mutateRows_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_rowBatches = true && this.isSetRowBatches();
+      boolean that_present_rowBatches = true && that.isSetRowBatches();
+      if (this_present_rowBatches || that_present_rowBatches) {
+        if (!(this_present_rowBatches && that_present_rowBatches))
+          return false;
+        if (!this.rowBatches.equals(that.rowBatches))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_rowBatches = true && (isSetRowBatches());
+      builder.append(present_rowBatches);
+      if (present_rowBatches)
+        builder.append(rowBatches);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRows_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRows_args typedOther = (mutateRows_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRowBatches()).compareTo(typedOther.isSetRowBatches());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(rowBatches, typedOther.rowBatches);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW_BATCHES:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list70 = iprot.readListBegin();
+                  this.rowBatches = new ArrayList<BatchMutation>(_list70.size);
+                  for (int _i71 = 0; _i71 < _list70.size; ++_i71)
+                  {
+                    BatchMutation _elem72;
+                    _elem72 = new BatchMutation();
+                    _elem72.read(iprot);
+                    this.rowBatches.add(_elem72);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.rowBatches != null) {
+        oprot.writeFieldBegin(ROW_BATCHES_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.rowBatches.size()));
+          for (BatchMutation _iter73 : this.rowBatches)
+          {
+            _iter73.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRows_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("rowBatches:");
+      if (this.rowBatches == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.rowBatches);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class mutateRows_result implements TBase<mutateRows_result._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRows_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRows_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRows_result.class, metaDataMap);
+    }
+
+    public mutateRows_result() {
+    }
+
+    public mutateRows_result(
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRows_result(mutateRows_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public mutateRows_result deepCopy() {
+      return new mutateRows_result(this);
+    }
+
+    @Deprecated
+    public mutateRows_result clone() {
+      return new mutateRows_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public mutateRows_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public mutateRows_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRows_result)
+        return this.equals((mutateRows_result)that);
+      return false;
+    }
+
+    public boolean equals(mutateRows_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRows_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRows_result typedOther = (mutateRows_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
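+      // As in mutateRowTs_result.write, only the first set exception field (io, then ia) is serialized.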
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRows_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class mutateRowsTs_args implements TBase<mutateRowsTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRowsTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRowsTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_BATCHES_FIELD_DESC = new TField("rowBatches", TType.LIST, (short)2);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * list of row batches
+     */
+    public List<BatchMutation> rowBatches;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * list of row batches
+       */
+      ROW_BATCHES((short)2, "rowBatches"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)3, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
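+    // timestamp is a primitive, so this BitSet records whether it has been explicitly assigned
+    // instead of relying on a null check.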
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW_BATCHES, new FieldMetaData("rowBatches", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, BatchMutation.class))));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRowsTs_args.class, metaDataMap);
+    }
+
+    public mutateRowsTs_args() {
+    }
+
+    public mutateRowsTs_args(
+      byte[] tableName,
+      List<BatchMutation> rowBatches,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.rowBatches = rowBatches;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRowsTs_args(mutateRowsTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRowBatches()) {
+        List<BatchMutation> __this__rowBatches = new ArrayList<BatchMutation>();
+        for (BatchMutation other_element : other.rowBatches) {
+          __this__rowBatches.add(new BatchMutation(other_element));
+        }
+        this.rowBatches = __this__rowBatches;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public mutateRowsTs_args deepCopy() {
+      return new mutateRowsTs_args(this);
+    }
+
+    @Deprecated
+    public mutateRowsTs_args clone() {
+      return new mutateRowsTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public mutateRowsTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    public int getRowBatchesSize() {
+      return (this.rowBatches == null) ? 0 : this.rowBatches.size();
+    }
+
+    public java.util.Iterator<BatchMutation> getRowBatchesIterator() {
+      return (this.rowBatches == null) ? null : this.rowBatches.iterator();
+    }
+
+    public void addToRowBatches(BatchMutation elem) {
+      if (this.rowBatches == null) {
+        this.rowBatches = new ArrayList<BatchMutation>();
+      }
+      this.rowBatches.add(elem);
+    }
+
+    /**
+     * list of row batches
+     */
+    public List<BatchMutation> getRowBatches() {
+      return this.rowBatches;
+    }
+
+    /**
+     * list of row batches
+     */
+    public mutateRowsTs_args setRowBatches(List<BatchMutation> rowBatches) {
+      this.rowBatches = rowBatches;
+      return this;
+    }
+
+    public void unsetRowBatches() {
+      this.rowBatches = null;
+    }
+
+    /** Returns true if field rowBatches is set (has been assigned a value) and false otherwise */
+    public boolean isSetRowBatches() {
+      return this.rowBatches != null;
+    }
+
+    public void setRowBatchesIsSet(boolean value) {
+      if (!value) {
+        this.rowBatches = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public mutateRowsTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW_BATCHES:
+        if (value == null) {
+          unsetRowBatches();
+        } else {
+          setRowBatches((List<BatchMutation>)value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW_BATCHES:
+        return getRowBatches();
+
+      case TIMESTAMP:
+        return Long.valueOf(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW_BATCHES:
+        return isSetRowBatches();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRowsTs_args)
+        return this.equals((mutateRowsTs_args)that);
+      return false;
+    }
+
+    public boolean equals(mutateRowsTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_rowBatches = true && this.isSetRowBatches();
+      boolean that_present_rowBatches = true && that.isSetRowBatches();
+      if (this_present_rowBatches || that_present_rowBatches) {
+        if (!(this_present_rowBatches && that_present_rowBatches))
+          return false;
+        if (!this.rowBatches.equals(that.rowBatches))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_rowBatches = true && (isSetRowBatches());
+      builder.append(present_rowBatches);
+      if (present_rowBatches)
+        builder.append(rowBatches);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRowsTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRowsTs_args typedOther = (mutateRowsTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRowBatches()).compareTo(typedOther.isSetRowBatches());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(rowBatches, typedOther.rowBatches);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW_BATCHES:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list74 = iprot.readListBegin();
+                  this.rowBatches = new ArrayList<BatchMutation>(_list74.size);
+                  for (int _i75 = 0; _i75 < _list74.size; ++_i75)
+                  {
+                    BatchMutation _elem76;
+                    _elem76 = new BatchMutation();
+                    _elem76.read(iprot);
+                    this.rowBatches.add(_elem76);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.rowBatches != null) {
+        oprot.writeFieldBegin(ROW_BATCHES_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.rowBatches.size()));
+          for (BatchMutation _iter77 : this.rowBatches)
+          {
+            _iter77.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRowsTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("rowBatches:");
+      if (this.rowBatches == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.rowBatches);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class mutateRowsTs_result implements TBase<mutateRowsTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<mutateRowsTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("mutateRowsTs_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(mutateRowsTs_result.class, metaDataMap);
+    }
+
+    public mutateRowsTs_result() {
+    }
+
+    public mutateRowsTs_result(
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public mutateRowsTs_result(mutateRowsTs_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public mutateRowsTs_result deepCopy() {
+      return new mutateRowsTs_result(this);
+    }
+
+    @Deprecated
+    public mutateRowsTs_result clone() {
+      return new mutateRowsTs_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public mutateRowsTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public mutateRowsTs_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof mutateRowsTs_result)
+        return this.equals((mutateRowsTs_result)that);
+      return false;
+    }
+
+    public boolean equals(mutateRowsTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(mutateRowsTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      mutateRowsTs_result typedOther = (mutateRowsTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("mutateRowsTs_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
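+
+  // Illustrative sketch, not part of the generated interface: mutateRowsTs_args and
+  // mutateRowsTs_result back the Thrift mutateRowsTs call, which applies a batch of
+  // row mutations at a caller-supplied timestamp. Assuming a connected Hbase.Client
+  // named "client" and an already-built List<BatchMutation> named "batch", the call
+  // would look roughly like:
+  //
+  //   client.mutateRowsTs(tableName, batch, timestamp);
+  //
+  // The result struct carries no return value; its write() serializes at most one of
+  // the io or ia fields, reporting an IOError or IllegalArgument raised server-side.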
+
+  public static class atomicIncrement_args implements TBase<atomicIncrement_args._Fields>, java.io.Serializable, Cloneable, Comparable<atomicIncrement_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("atomicIncrement_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+    private static final TField VALUE_FIELD_DESC = new TField("value", TType.I64, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * row to increment
+     */
+    public byte[] row;
+    /**
+     * name of column
+     */
+    public byte[] column;
+    /**
+     * amount to increment by
+     */
+    public long value;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * row to increment
+       */
+      ROW((short)2, "row"),
+      /**
+       * name of column
+       */
+      COLUMN((short)3, "column"),
+      /**
+       * amount to increment by
+       */
+      VALUE((short)4, "value");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __VALUE_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.VALUE, new FieldMetaData("value", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(atomicIncrement_args.class, metaDataMap);
+    }
+
+    public atomicIncrement_args() {
+    }
+
+    public atomicIncrement_args(
+      byte[] tableName,
+      byte[] row,
+      byte[] column,
+      long value)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.column = column;
+      this.value = value;
+      setValueIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public atomicIncrement_args(atomicIncrement_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumn()) {
+        this.column = other.column;
+      }
+      this.value = other.value;
+    }
+
+    public atomicIncrement_args deepCopy() {
+      return new atomicIncrement_args(this);
+    }
+
+    @Deprecated
+    public atomicIncrement_args clone() {
+      return new atomicIncrement_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public atomicIncrement_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * row to increment
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * row to increment
+     */
+    public atomicIncrement_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * name of column
+     */
+    public byte[] getColumn() {
+      return this.column;
+    }
+
+    /**
+     * name of column
+     */
+    public atomicIncrement_args setColumn(byte[] column) {
+      this.column = column;
+      return this;
+    }
+
+    public void unsetColumn() {
+      this.column = null;
+    }
+
+    /** Returns true if field column is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumn() {
+      return this.column != null;
+    }
+
+    public void setColumnIsSet(boolean value) {
+      if (!value) {
+        this.column = null;
+      }
+    }
+
+    /**
+     * amount to increment by
+     */
+    public long getValue() {
+      return this.value;
+    }
+
+    /**
+     * amount to increment by
+     */
+    public atomicIncrement_args setValue(long value) {
+      this.value = value;
+      setValueIsSet(true);
+      return this;
+    }
+
+    public void unsetValue() {
+      __isset_bit_vector.clear(__VALUE_ISSET_ID);
+    }
+
+    /** Returns true if field value is set (has been assigned a value) and false otherwise */
+    public boolean isSetValue() {
+      return __isset_bit_vector.get(__VALUE_ISSET_ID);
+    }
+
+    public void setValueIsSet(boolean value) {
+      __isset_bit_vector.set(__VALUE_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMN:
+        if (value == null) {
+          unsetColumn();
+        } else {
+          setColumn((byte[])value);
+        }
+        break;
+
+      case VALUE:
+        if (value == null) {
+          unsetValue();
+        } else {
+          setValue((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMN:
+        return getColumn();
+
+      case VALUE:
+        return Long.valueOf(getValue());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMN:
+        return isSetColumn();
+      case VALUE:
+        return isSetValue();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof atomicIncrement_args)
+        return this.equals((atomicIncrement_args)that);
+      return false;
+    }
+
+    public boolean equals(atomicIncrement_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_column = true && this.isSetColumn();
+      boolean that_present_column = true && that.isSetColumn();
+      if (this_present_column || that_present_column) {
+        if (!(this_present_column && that_present_column))
+          return false;
+        if (!java.util.Arrays.equals(this.column, that.column))
+          return false;
+      }
+
+      boolean this_present_value = true;
+      boolean that_present_value = true;
+      if (this_present_value || that_present_value) {
+        if (!(this_present_value && that_present_value))
+          return false;
+        if (this.value != that.value)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_column = true && (isSetColumn());
+      builder.append(present_column);
+      if (present_column)
+        builder.append(column);
+
+      boolean present_value = true;
+      builder.append(present_value);
+      if (present_value)
+        builder.append(value);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(atomicIncrement_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      atomicIncrement_args typedOther = (atomicIncrement_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetValue()).compareTo(typedOther.isSetValue());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(value, typedOther.value);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN:
+              if (field.type == TType.STRING) {
+                this.column = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case VALUE:
+              if (field.type == TType.I64) {
+                this.value = iprot.readI64();
+                setValueIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.column != null) {
+        oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+        oprot.writeBinary(this.column);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(VALUE_FIELD_DESC);
+      oprot.writeI64(this.value);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("atomicIncrement_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("column:");
+      if (this.column == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.column);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("value:");
+      sb.append(this.value);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
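+
+  // Illustrative note, not generated code: unlike the byte[] fields above, the
+  // primitive "value" field cannot use null to mean "unset", so its presence is
+  // tracked in __isset_bit_vector. A minimal sketch of the resulting behaviour:
+  //
+  //   atomicIncrement_args args = new atomicIncrement_args();
+  //   args.isSetValue();   // false: the bit at __VALUE_ISSET_ID is still clear
+  //   args.setValue(1L);   // stores 1 and sets the bit via setValueIsSet(true)
+  //   args.isSetValue();   // true
+  //   args.unsetValue();   // clears the bit only; the stored long is left as-is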
+
+  public static class atomicIncrement_result implements TBase<atomicIncrement_result._Fields>, java.io.Serializable, Cloneable, Comparable<atomicIncrement_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("atomicIncrement_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I64, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public long success;
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(atomicIncrement_result.class, metaDataMap);
+    }
+
+    public atomicIncrement_result() {
+    }
+
+    public atomicIncrement_result(
+      long success,
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public atomicIncrement_result(atomicIncrement_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public atomicIncrement_result deepCopy() {
+      return new atomicIncrement_result(this);
+    }
+
+    @Deprecated
+    public atomicIncrement_result clone() {
+      return new atomicIncrement_result(this);
+    }
+
+    public long getSuccess() {
+      return this.success;
+    }
+
+    public atomicIncrement_result setSuccess(long success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public atomicIncrement_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public atomicIncrement_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Long)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return Long.valueOf(getSuccess());
+
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof atomicIncrement_result)
+        return this.equals((atomicIncrement_result)that);
+      return false;
+    }
+
+    public boolean equals(atomicIncrement_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(atomicIncrement_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      atomicIncrement_result typedOther = (atomicIncrement_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.I64) {
+                this.success = iprot.readI64();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeI64(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("atomicIncrement_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
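+
+  // Illustrative sketch, not part of the generated interface: atomicIncrement_args
+  // and atomicIncrement_result back the Thrift atomicIncrement call. Assuming a
+  // connected Hbase.Client named "client" (the table, row and column names below
+  // are made up), bumping a counter column by one would look roughly like:
+  //
+  //   long newValue = client.atomicIncrement(
+  //       Bytes.toBytes("t1"),          // table name
+  //       Bytes.toBytes("row1"),        // row key
+  //       Bytes.toBytes("f1:counter"),  // family:qualifier column
+  //       1L);                          // amount to add
+  //
+  // The result struct's write() serializes at most one of success, io or ia,
+  // mirroring the "return value or declared exception" contract of the call.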
+
+  public static class deleteAll_args implements TBase<deleteAll_args._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAll_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAll_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * Row to update
+     */
+    public byte[] row;
+    /**
+     * name of column whose value is to be deleted
+     */
+    public byte[] column;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * Row to update
+       */
+      ROW((short)2, "row"),
+      /**
+       * name of column whose value is to be deleted
+       */
+      COLUMN((short)3, "column");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAll_args.class, metaDataMap);
+    }
+
+    public deleteAll_args() {
+    }
+
+    public deleteAll_args(
+      byte[] tableName,
+      byte[] row,
+      byte[] column)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.column = column;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAll_args(deleteAll_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumn()) {
+        this.column = other.column;
+      }
+    }
+
+    public deleteAll_args deepCopy() {
+      return new deleteAll_args(this);
+    }
+
+    @Deprecated
+    public deleteAll_args clone() {
+      return new deleteAll_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public deleteAll_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * Row to update
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * Row to update
+     */
+    public deleteAll_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * name of column whose value is to be deleted
+     */
+    public byte[] getColumn() {
+      return this.column;
+    }
+
+    /**
+     * name of column whose value is to be deleted
+     */
+    public deleteAll_args setColumn(byte[] column) {
+      this.column = column;
+      return this;
+    }
+
+    public void unsetColumn() {
+      this.column = null;
+    }
+
+    /** Returns true if field column is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumn() {
+      return this.column != null;
+    }
+
+    public void setColumnIsSet(boolean value) {
+      if (!value) {
+        this.column = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMN:
+        if (value == null) {
+          unsetColumn();
+        } else {
+          setColumn((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMN:
+        return getColumn();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMN:
+        return isSetColumn();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAll_args)
+        return this.equals((deleteAll_args)that);
+      return false;
+    }
+
+    public boolean equals(deleteAll_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_column = true && this.isSetColumn();
+      boolean that_present_column = true && that.isSetColumn();
+      if (this_present_column || that_present_column) {
+        if (!(this_present_column && that_present_column))
+          return false;
+        if (!java.util.Arrays.equals(this.column, that.column))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_column = true && (isSetColumn());
+      builder.append(present_column);
+      if (present_column)
+        builder.append(column);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAll_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAll_args typedOther = (deleteAll_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN:
+              if (field.type == TType.STRING) {
+                this.column = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.column != null) {
+        oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+        oprot.writeBinary(this.column);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAll_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("column:");
+      if (this.column == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.column);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class deleteAll_result implements TBase<deleteAll_result._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAll_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAll_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAll_result.class, metaDataMap);
+    }
+
+    public deleteAll_result() {
+    }
+
+    public deleteAll_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAll_result(deleteAll_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public deleteAll_result deepCopy() {
+      return new deleteAll_result(this);
+    }
+
+    @Deprecated
+    public deleteAll_result clone() {
+      return new deleteAll_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public deleteAll_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAll_result)
+        return this.equals((deleteAll_result)that);
+      return false;
+    }
+
+    public boolean equals(deleteAll_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAll_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAll_result typedOther = (deleteAll_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAll_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
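+  /*
+   * A minimal lookup sketch for the _Fields pattern used by these generated structs
+   * (assumed caller-side code, not part of the generated API):
+   *
+   *   deleteAll_result r = new deleteAll_result(new IOError());
+   *   deleteAll_result._Fields f = deleteAll_result._Fields.findByThriftId(1); // -> IO
+   *   Object v = r.getFieldValue(f);   // the IOError instance
+   *   r.isSet(f);                      // true, since io != null
+   *   // findByThriftId(99) returns null; findByThriftIdOrThrow(99) throws
+   *   // IllegalArgumentException.
+   */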
+  public static class deleteAllTs_args implements TBase<deleteAllTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAllTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAllTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)3);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * Row to update
+     */
+    public byte[] row;
+    /**
+     * name of column whose value is to be deleted
+     */
+    public byte[] column;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * Row to update
+       */
+      ROW((short)2, "row"),
+      /**
+       * name of column whose value is to be deleted
+       */
+      COLUMN((short)3, "column"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)4, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAllTs_args.class, metaDataMap);
+    }
+
+    public deleteAllTs_args() {
+    }
+
+    public deleteAllTs_args(
+      byte[] tableName,
+      byte[] row,
+      byte[] column,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.column = column;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAllTs_args(deleteAllTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      if (other.isSetColumn()) {
+        this.column = other.column;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public deleteAllTs_args deepCopy() {
+      return new deleteAllTs_args(this);
+    }
+
+    @Deprecated
+    public deleteAllTs_args clone() {
+      return new deleteAllTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public deleteAllTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * Row to update
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * Row to update
+     */
+    public deleteAllTs_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * name of column whose value is to be deleted
+     */
+    public byte[] getColumn() {
+      return this.column;
+    }
+
+    /**
+     * name of column whose value is to be deleted
+     */
+    public deleteAllTs_args setColumn(byte[] column) {
+      this.column = column;
+      return this;
+    }
+
+    public void unsetColumn() {
+      this.column = null;
+    }
+
+    /** Returns true if field column is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumn() {
+      return this.column != null;
+    }
+
+    public void setColumnIsSet(boolean value) {
+      if (!value) {
+        this.column = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public deleteAllTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case COLUMN:
+        if (value == null) {
+          unsetColumn();
+        } else {
+          setColumn((byte[])value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case COLUMN:
+        return getColumn();
+
+      case TIMESTAMP:
+        return Long.valueOf(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case COLUMN:
+        return isSetColumn();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAllTs_args)
+        return this.equals((deleteAllTs_args)that);
+      return false;
+    }
+
+    public boolean equals(deleteAllTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_column = true && this.isSetColumn();
+      boolean that_present_column = true && that.isSetColumn();
+      if (this_present_column || that_present_column) {
+        if (!(this_present_column && that_present_column))
+          return false;
+        if (!java.util.Arrays.equals(this.column, that.column))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_column = true && (isSetColumn());
+      builder.append(present_column);
+      if (present_column)
+        builder.append(column);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAllTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAllTs_args typedOther = (deleteAllTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMN:
+              if (field.type == TType.STRING) {
+                this.column = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      if (this.column != null) {
+        oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+        oprot.writeBinary(this.column);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAllTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("column:");
+      if (this.column == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.column);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
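+  /*
+   * A construction sketch for deleteAllTs_args (assumed caller code; Bytes.toBytes is
+   * the HBase utility class, shown here only for illustration):
+   *
+   *   deleteAllTs_args args = new deleteAllTs_args(
+   *       Bytes.toBytes("mytable"),    // tableName
+   *       Bytes.toBytes("row1"),       // row
+   *       Bytes.toBytes("cf:qual"),    // column
+   *       1234567890L);                // timestamp
+   *   args.isSetTimestamp();           // true: primitive fields track "set" in __isset_bit_vector
+   *   args.unsetTimestamp();           // clears only the bit; the long value itself is unchanged
+   *   args.isSetTimestamp();           // now false
+   */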
+  public static class deleteAllTs_result implements TBase<deleteAllTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAllTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAllTs_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAllTs_result.class, metaDataMap);
+    }
+
+    public deleteAllTs_result() {
+    }
+
+    public deleteAllTs_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAllTs_result(deleteAllTs_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public deleteAllTs_result deepCopy() {
+      return new deleteAllTs_result(this);
+    }
+
+    @Deprecated
+    public deleteAllTs_result clone() {
+      return new deleteAllTs_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public deleteAllTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAllTs_result)
+        return this.equals((deleteAllTs_result)that);
+      return false;
+    }
+
+    public boolean equals(deleteAllTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAllTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAllTs_result typedOther = (deleteAllTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAllTs_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
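+  /*
+   * A serialization round-trip sketch for the read/write methods above (assumed
+   * test-style code; TMemoryBuffer and TBinaryProtocol are from the Thrift runtime):
+   *
+   *   TMemoryBuffer buf = new TMemoryBuffer(64);
+   *   TBinaryProtocol proto = new TBinaryProtocol(buf);
+   *   deleteAllTs_result out = new deleteAllTs_result(new IOError());
+   *   out.write(proto);                // io is written only because isSetIo() is true
+   *   deleteAllTs_result in = new deleteAllTs_result();
+   *   in.read(proto);                  // unknown field ids on the wire are skipped
+   *   boolean same = out.equals(in);   // field-by-field comparison
+   */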
+  public static class deleteAllRow_args implements TBase<deleteAllRow_args._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAllRow_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAllRow_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * key of the row to be completely deleted.
+     */
+    public byte[] row;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * key of the row to be completely deleted.
+       */
+      ROW((short)2, "row");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAllRow_args.class, metaDataMap);
+    }
+
+    public deleteAllRow_args() {
+    }
+
+    public deleteAllRow_args(
+      byte[] tableName,
+      byte[] row)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAllRow_args(deleteAllRow_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+    }
+
+    public deleteAllRow_args deepCopy() {
+      return new deleteAllRow_args(this);
+    }
+
+    @Deprecated
+    public deleteAllRow_args clone() {
+      return new deleteAllRow_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public deleteAllRow_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * key of the row to be completely deleted.
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * key of the row to be completely deleted.
+     */
+    public deleteAllRow_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAllRow_args)
+        return this.equals((deleteAllRow_args)that);
+      return false;
+    }
+
+    public boolean equals(deleteAllRow_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAllRow_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAllRow_args typedOther = (deleteAllRow_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAllRow_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
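+  /*
+   * An equality sketch: equals() compares the byte[] fields by content via
+   * java.util.Arrays.equals, so two independently built instances match
+   * (assumed caller code):
+   *
+   *   deleteAllRow_args a = new deleteAllRow_args("tbl".getBytes(), "r1".getBytes());
+   *   deleteAllRow_args b = new deleteAllRow_args("tbl".getBytes(), "r1".getBytes());
+   *   a.equals(b);                     // true: content comparison, not reference identity
+   *   a.hashCode() == b.hashCode();    // should also hold, since HashCodeBuilder hashes array contents
+   */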
+  public static class deleteAllRow_result implements TBase<deleteAllRow_result._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAllRow_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAllRow_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAllRow_result.class, metaDataMap);
+    }
+
+    public deleteAllRow_result() {
+    }
+
+    public deleteAllRow_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAllRow_result(deleteAllRow_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public deleteAllRow_result deepCopy() {
+      return new deleteAllRow_result(this);
+    }
+
+    @Deprecated
+    public deleteAllRow_result clone() {
+      return new deleteAllRow_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public deleteAllRow_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAllRow_result)
+        return this.equals((deleteAllRow_result)that);
+      return false;
+    }
+
+    public boolean equals(deleteAllRow_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAllRow_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAllRow_result typedOther = (deleteAllRow_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAllRow_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
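+  /*
+   * These *_args / *_result structs carry the wire payload of the generated client
+   * methods. A minimal caller sketch (assumed setup; host and port are placeholders):
+   *
+   *   TSocket sock = new TSocket("localhost", 9090);
+   *   sock.open();
+   *   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(sock));
+   *   client.deleteAllRow("tbl".getBytes(), "row1".getBytes());
+   *   // internally sends a deleteAllRow_args struct and reads back a deleteAllRow_result,
+   *   // rethrowing its io field as an IOError if the server set it
+   *   sock.close();
+   */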
+  public static class deleteAllRowTs_args implements TBase<deleteAllRowTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAllRowTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAllRowTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)2);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * key of the row to be completely deleted.
+     */
+    public byte[] row;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * key of the row to be completely deleted.
+       */
+      ROW((short)2, "row"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)3, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAllRowTs_args.class, metaDataMap);
+    }
+
+    public deleteAllRowTs_args() {
+    }
+
+    public deleteAllRowTs_args(
+      byte[] tableName,
+      byte[] row,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.row = row;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAllRowTs_args(deleteAllRowTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetRow()) {
+        this.row = other.row;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public deleteAllRowTs_args deepCopy() {
+      return new deleteAllRowTs_args(this);
+    }
+
+    @Deprecated
+    public deleteAllRowTs_args clone() {
+      return new deleteAllRowTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public deleteAllRowTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * key of the row to be completely deleted.
+     */
+    public byte[] getRow() {
+      return this.row;
+    }
+
+    /**
+     * key of the row to be completely deleted.
+     */
+    public deleteAllRowTs_args setRow(byte[] row) {
+      this.row = row;
+      return this;
+    }
+
+    public void unsetRow() {
+      this.row = null;
+    }
+
+    /** Returns true if field row is set (has been assigned a value) and false otherwise */
+    public boolean isSetRow() {
+      return this.row != null;
+    }
+
+    public void setRowIsSet(boolean value) {
+      if (!value) {
+        this.row = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public deleteAllRowTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case ROW:
+        if (value == null) {
+          unsetRow();
+        } else {
+          setRow((byte[])value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case ROW:
+        return getRow();
+
+      case TIMESTAMP:
+        return Long.valueOf(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case ROW:
+        return isSetRow();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAllRowTs_args)
+        return this.equals((deleteAllRowTs_args)that);
+      return false;
+    }
+
+    public boolean equals(deleteAllRowTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_row = true && this.isSetRow();
+      boolean that_present_row = true && that.isSetRow();
+      if (this_present_row || that_present_row) {
+        if (!(this_present_row && that_present_row))
+          return false;
+        if (!java.util.Arrays.equals(this.row, that.row))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_row = true && (isSetRow());
+      builder.append(present_row);
+      if (present_row)
+        builder.append(row);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAllRowTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAllRowTs_args typedOther = (deleteAllRowTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetRow()).compareTo(typedOther.isSetRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(row, typedOther.row);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case ROW:
+              if (field.type == TType.STRING) {
+                this.row = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.row != null) {
+        oprot.writeFieldBegin(ROW_FIELD_DESC);
+        oprot.writeBinary(this.row);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAllRowTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("row:");
+      if (this.row == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.row);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
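+
+  // Illustrative usage sketch (comment only, not part of the generated output): invoking
+  // deleteAllRowTs through the Thrift client using the argument struct above. The host,
+  // port, table, row, and timestamp values are assumptions for the example; the call
+  // removes the row's cells whose timestamps are at or older than the one passed.
+  //
+  //   TTransport transport = new TSocket("localhost", 9090);
+  //   transport.open();
+  //   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+  //   client.deleteAllRowTs("t1".getBytes(), "row1".getBytes(), 1290000000000L);
+  //   transport.close();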
+
+  public static class deleteAllRowTs_result implements TBase<deleteAllRowTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<deleteAllRowTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("deleteAllRowTs_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(deleteAllRowTs_result.class, metaDataMap);
+    }
+
+    public deleteAllRowTs_result() {
+    }
+
+    public deleteAllRowTs_result(
+      IOError io)
+    {
+      this();
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public deleteAllRowTs_result(deleteAllRowTs_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public deleteAllRowTs_result deepCopy() {
+      return new deleteAllRowTs_result(this);
+    }
+
+    @Deprecated
+    public deleteAllRowTs_result clone() {
+      return new deleteAllRowTs_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public deleteAllRowTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof deleteAllRowTs_result)
+        return this.equals((deleteAllRowTs_result)that);
+      return false;
+    }
+
+    public boolean equals(deleteAllRowTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(deleteAllRowTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      deleteAllRowTs_result typedOther = (deleteAllRowTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("deleteAllRowTs_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class scannerOpen_args implements TBase<scannerOpen_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpen_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpen_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] startRow;
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> columns;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * Starting row in table to scan.
+       * Send "" (empty string) to start at the first row.
+       */
+      START_ROW((short)2, "startRow"),
+      /**
+       * columns to scan. If column name is a column family, all
+       * columns of the specified column family are returned. It's also possible
+       * to pass a regex in the column qualifier.
+       */
+      COLUMNS((short)3, "columns");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.START_ROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpen_args.class, metaDataMap);
+    }
+
+    public scannerOpen_args() {
+    }
+
+    public scannerOpen_args(
+      byte[] tableName,
+      byte[] startRow,
+      List<byte[]> columns)
+    {
+      this();
+      this.tableName = tableName;
+      this.startRow = startRow;
+      this.columns = columns;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpen_args(scannerOpen_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetStartRow()) {
+        this.startRow = other.startRow;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+    }
+
+    public scannerOpen_args deepCopy() {
+      return new scannerOpen_args(this);
+    }
+
+    @Deprecated
+    public scannerOpen_args clone() {
+      return new scannerOpen_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public scannerOpen_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] getStartRow() {
+      return this.startRow;
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public scannerOpen_args setStartRow(byte[] startRow) {
+      this.startRow = startRow;
+      return this;
+    }
+
+    public void unsetStartRow() {
+      this.startRow = null;
+    }
+
+    /** Returns true if field startRow is set (has been assigned a value) and false otherwise */
+    public boolean isSetStartRow() {
+      return this.startRow != null;
+    }
+
+    public void setStartRowIsSet(boolean value) {
+      if (!value) {
+        this.startRow = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public scannerOpen_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case START_ROW:
+        if (value == null) {
+          unsetStartRow();
+        } else {
+          setStartRow((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case START_ROW:
+        return getStartRow();
+
+      case COLUMNS:
+        return getColumns();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case START_ROW:
+        return isSetStartRow();
+      case COLUMNS:
+        return isSetColumns();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpen_args)
+        return this.equals((scannerOpen_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpen_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_startRow = true && this.isSetStartRow();
+      boolean that_present_startRow = true && that.isSetStartRow();
+      if (this_present_startRow || that_present_startRow) {
+        if (!(this_present_startRow && that_present_startRow))
+          return false;
+        if (!java.util.Arrays.equals(this.startRow, that.startRow))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      builder.append(present_startRow);
+      if (present_startRow)
+        builder.append(startRow);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpen_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpen_args typedOther = (scannerOpen_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStartRow()).compareTo(typedOther.isSetStartRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(startRow, typedOther.startRow);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case START_ROW:
+              if (field.type == TType.STRING) {
+                this.startRow = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list78 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list78.size);
+                  for (int _i79 = 0; _i79 < _list78.size; ++_i79)
+                  {
+                    byte[] _elem80;
+                    _elem80 = iprot.readBinary();
+                    this.columns.add(_elem80);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.startRow != null) {
+        oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+        oprot.writeBinary(this.startRow);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter81 : this.columns)
+          {
+            oprot.writeBinary(_iter81);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpen_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("startRow:");
+      if (this.startRow == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.startRow);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
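+
+  // Illustrative usage sketch (comment only, not part of the generated output): as documented
+  // on the columns field above, a bare family name selects every column in that family, while
+  // a family:qualifier entry may carry a regex in the qualifier part. The connected client,
+  // table name, and column names are assumptions for the example.
+  //
+  //   List<byte[]> columns = new ArrayList<byte[]>();
+  //   columns.add("info".getBytes());            // every column in family "info"
+  //   columns.add("stats:daily.*".getBytes());   // qualifier regex within family "stats"
+  //   int scannerId = client.scannerOpen("t1".getBytes(), "".getBytes(), columns);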
+
+  public static class scannerOpen_result implements TBase<scannerOpen_result._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpen_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpen_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public int success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpen_result.class, metaDataMap);
+    }
+
+    public scannerOpen_result() {
+    }
+
+    public scannerOpen_result(
+      int success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpen_result(scannerOpen_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public scannerOpen_result deepCopy() {
+      return new scannerOpen_result(this);
+    }
+
+    @Deprecated
+    public scannerOpen_result clone() {
+      return new scannerOpen_result(this);
+    }
+
+    public int getSuccess() {
+      return this.success;
+    }
+
+    public scannerOpen_result setSuccess(int success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerOpen_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Integer)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return Integer.valueOf(getSuccess());
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpen_result)
+        return this.equals((scannerOpen_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpen_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpen_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpen_result typedOther = (scannerOpen_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.I32) {
+                this.success = iprot.readI32();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeI32(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpen_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
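+
+  // Illustrative usage sketch (comment only, not part of the generated output): the success
+  // value above is the scanner id handed back by scannerOpen; it is then passed to the
+  // scannerGet/scannerClose calls defined elsewhere in this service. The loop below is a
+  // sketch that assumes a connected client and the columns list from the previous sketch.
+  //
+  //   int scannerId = client.scannerOpen("t1".getBytes(), "".getBytes(), columns);
+  //   try {
+  //     List<TRowResult> rows = client.scannerGet(scannerId);
+  //     while (!rows.isEmpty()) {
+  //       // process rows, then fetch the next batch
+  //       rows = client.scannerGet(scannerId);
+  //     }
+  //   } finally {
+  //     client.scannerClose(scannerId);
+  //   }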
+
+  public static class scannerOpenWithStop_args implements TBase<scannerOpenWithStop_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenWithStop_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStop_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+    private static final TField STOP_ROW_FIELD_DESC = new TField("stopRow", TType.STRING, (short)3);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] startRow;
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    public byte[] stopRow;
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> columns;
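+
+    // Illustrative usage sketch (comment only, not part of the generated output): stopRow is
+    // exclusive, so the call below scans ["row-a", "row-z") and never returns "row-z" itself.
+    // Assumes a connected client, a columns list as in the earlier sketches, and that the
+    // result carries a scanner id in the same way scannerOpen does.
+    //
+    //   int scannerId = client.scannerOpenWithStop(
+    //       "t1".getBytes(), "row-a".getBytes(), "row-z".getBytes(), columns);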
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * Starting row in table to scan.
+       * Send "" (empty string) to start at the first row.
+       */
+      START_ROW((short)2, "startRow"),
+      /**
+       * row to stop scanning on. This row is *not* included in the
+       * scanner's results
+       */
+      STOP_ROW((short)3, "stopRow"),
+      /**
+       * columns to scan. If column name is a column family, all
+       * columns of the specified column family are returned. It's also possible
+       * to pass a regex in the column qualifier.
+       */
+      COLUMNS((short)4, "columns");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.START_ROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.STOP_ROW, new FieldMetaData("stopRow", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenWithStop_args.class, metaDataMap);
+    }
+
+    public scannerOpenWithStop_args() {
+    }
+
+    public scannerOpenWithStop_args(
+      byte[] tableName,
+      byte[] startRow,
+      byte[] stopRow,
+      List<byte[]> columns)
+    {
+      this();
+      this.tableName = tableName;
+      this.startRow = startRow;
+      this.stopRow = stopRow;
+      this.columns = columns;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenWithStop_args(scannerOpenWithStop_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetStartRow()) {
+        this.startRow = other.startRow;
+      }
+      if (other.isSetStopRow()) {
+        this.stopRow = other.stopRow;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+    }
+
+    public scannerOpenWithStop_args deepCopy() {
+      return new scannerOpenWithStop_args(this);
+    }
+
+    @Deprecated
+    public scannerOpenWithStop_args clone() {
+      return new scannerOpenWithStop_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public scannerOpenWithStop_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] getStartRow() {
+      return this.startRow;
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public scannerOpenWithStop_args setStartRow(byte[] startRow) {
+      this.startRow = startRow;
+      return this;
+    }
+
+    public void unsetStartRow() {
+      this.startRow = null;
+    }
+
+    /** Returns true if field startRow is set (has been assigned a value) and false otherwise */
+    public boolean isSetStartRow() {
+      return this.startRow != null;
+    }
+
+    public void setStartRowIsSet(boolean value) {
+      if (!value) {
+        this.startRow = null;
+      }
+    }
+
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    public byte[] getStopRow() {
+      return this.stopRow;
+    }
+
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    public scannerOpenWithStop_args setStopRow(byte[] stopRow) {
+      this.stopRow = stopRow;
+      return this;
+    }
+
+    public void unsetStopRow() {
+      this.stopRow = null;
+    }
+
+    /** Returns true if field stopRow is set (has been assigned a value) and false otherwise */
+    public boolean isSetStopRow() {
+      return this.stopRow != null;
+    }
+
+    public void setStopRowIsSet(boolean value) {
+      if (!value) {
+        this.stopRow = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public scannerOpenWithStop_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case START_ROW:
+        if (value == null) {
+          unsetStartRow();
+        } else {
+          setStartRow((byte[])value);
+        }
+        break;
+
+      case STOP_ROW:
+        if (value == null) {
+          unsetStopRow();
+        } else {
+          setStopRow((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case START_ROW:
+        return getStartRow();
+
+      case STOP_ROW:
+        return getStopRow();
+
+      case COLUMNS:
+        return getColumns();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case START_ROW:
+        return isSetStartRow();
+      case STOP_ROW:
+        return isSetStopRow();
+      case COLUMNS:
+        return isSetColumns();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenWithStop_args)
+        return this.equals((scannerOpenWithStop_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenWithStop_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_startRow = true && this.isSetStartRow();
+      boolean that_present_startRow = true && that.isSetStartRow();
+      if (this_present_startRow || that_present_startRow) {
+        if (!(this_present_startRow && that_present_startRow))
+          return false;
+        if (!java.util.Arrays.equals(this.startRow, that.startRow))
+          return false;
+      }
+
+      boolean this_present_stopRow = true && this.isSetStopRow();
+      boolean that_present_stopRow = true && that.isSetStopRow();
+      if (this_present_stopRow || that_present_stopRow) {
+        if (!(this_present_stopRow && that_present_stopRow))
+          return false;
+        if (!java.util.Arrays.equals(this.stopRow, that.stopRow))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      builder.append(present_startRow);
+      if (present_startRow)
+        builder.append(startRow);
+
+      boolean present_stopRow = true && (isSetStopRow());
+      builder.append(present_stopRow);
+      if (present_stopRow)
+        builder.append(stopRow);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenWithStop_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenWithStop_args typedOther = (scannerOpenWithStop_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStartRow()).compareTo(typedOther.isSetStartRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(startRow, typedOther.startRow);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStopRow()).compareTo(typedOther.isSetStopRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(stopRow, typedOther.stopRow);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case START_ROW:
+              if (field.type == TType.STRING) {
+                this.startRow = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case STOP_ROW:
+              if (field.type == TType.STRING) {
+                this.stopRow = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list82 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list82.size);
+                  for (int _i83 = 0; _i83 < _list82.size; ++_i83)
+                  {
+                    byte[] _elem84;
+                    _elem84 = iprot.readBinary();
+                    this.columns.add(_elem84);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.startRow != null) {
+        oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+        oprot.writeBinary(this.startRow);
+        oprot.writeFieldEnd();
+      }
+      if (this.stopRow != null) {
+        oprot.writeFieldBegin(STOP_ROW_FIELD_DESC);
+        oprot.writeBinary(this.stopRow);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter85 : this.columns)
+          {
+            oprot.writeBinary(_iter85);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenWithStop_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("startRow:");
+      if (this.startRow == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.startRow);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("stopRow:");
+      if (this.stopRow == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.stopRow);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
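+  /*
+   * Illustrative usage sketch, not part of the generated code: how a caller
+   * might reach this args struct through the Hbase.Client wrapper assumed to
+   * be generated earlier in this file, using the org.apache.thrift transport
+   * and protocol classes. The host, port 9090, table "t1", row keys and the
+   * "info:" column below are illustrative values only.
+   *
+   *   TSocket transport = new TSocket("thrift-host", 9090);
+   *   Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+   *   transport.open();
+   *   List<byte[]> columns = new ArrayList<byte[]>();
+   *   columns.add("info:".getBytes());              // whole column family
+   *   int scannerId = client.scannerOpenWithStop(
+   *       "t1".getBytes(),                          // tableName
+   *       "row-0100".getBytes(),                    // startRow (inclusive)
+   *       "row-0200".getBytes(),                    // stopRow (excluded from results)
+   *       columns);
+   */
+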
+  public static class scannerOpenWithStop_result implements TBase<scannerOpenWithStop_result._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenWithStop_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStop_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public int success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenWithStop_result.class, metaDataMap);
+    }
+
+    public scannerOpenWithStop_result() {
+    }
+
+    public scannerOpenWithStop_result(
+      int success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenWithStop_result(scannerOpenWithStop_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public scannerOpenWithStop_result deepCopy() {
+      return new scannerOpenWithStop_result(this);
+    }
+
+    @Deprecated
+    public scannerOpenWithStop_result clone() {
+      return new scannerOpenWithStop_result(this);
+    }
+
+    public int getSuccess() {
+      return this.success;
+    }
+
+    public scannerOpenWithStop_result setSuccess(int success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerOpenWithStop_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Integer)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return new Integer(getSuccess());
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenWithStop_result)
+        return this.equals((scannerOpenWithStop_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenWithStop_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenWithStop_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenWithStop_result typedOther = (scannerOpenWithStop_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.I32) {
+                this.success = iprot.readI32();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
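+      // A method-result struct carries at most one branch on the wire: the
+      // i32 scanner id in "success" when the call succeeded, otherwise the
+      // IOError raised on the server side.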
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeI32(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenWithStop_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
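+  /*
+   * Illustrative sketch, not part of the generated code: through the assumed
+   * Hbase.Client wrapper, the "success" branch of this result becomes the
+   * returned scanner id and the "io" branch surfaces as a thrown IOError.
+   * scannerGet, scannerGetList and scannerClose are assumed companion calls
+   * from the same service definition.
+   *
+   *   try {
+   *     int scannerId = client.scannerOpenWithStop(table, startRow, stopRow, columns);
+   *     // fetch rows with scannerGet / scannerGetList, then scannerClose(scannerId)
+   *   } catch (IOError e) {
+   *     // server-side error reported through the Thrift interface
+   *   }
+   */
+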
+  public static class scannerOpenWithPrefix_args implements TBase<scannerOpenWithPrefix_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenWithPrefix_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithPrefix_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField START_AND_PREFIX_FIELD_DESC = new TField("startAndPrefix", TType.STRING, (short)2);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * the prefix (and thus start row) of the keys you want
+     */
+    public byte[] startAndPrefix;
+    /**
+     * the columns you want returned
+     */
+    public List<byte[]> columns;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * the prefix (and thus start row) of the keys you want
+       */
+      START_AND_PREFIX((short)2, "startAndPrefix"),
+      /**
+       * the columns you want returned
+       */
+      COLUMNS((short)3, "columns");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.START_AND_PREFIX, new FieldMetaData("startAndPrefix", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenWithPrefix_args.class, metaDataMap);
+    }
+
+    public scannerOpenWithPrefix_args() {
+    }
+
+    public scannerOpenWithPrefix_args(
+      byte[] tableName,
+      byte[] startAndPrefix,
+      List<byte[]> columns)
+    {
+      this();
+      this.tableName = tableName;
+      this.startAndPrefix = startAndPrefix;
+      this.columns = columns;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenWithPrefix_args(scannerOpenWithPrefix_args other) {
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetStartAndPrefix()) {
+        this.startAndPrefix = other.startAndPrefix;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+    }
+
+    public scannerOpenWithPrefix_args deepCopy() {
+      return new scannerOpenWithPrefix_args(this);
+    }
+
+    @Deprecated
+    public scannerOpenWithPrefix_args clone() {
+      return new scannerOpenWithPrefix_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public scannerOpenWithPrefix_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * the prefix (and thus start row) of the keys you want
+     */
+    public byte[] getStartAndPrefix() {
+      return this.startAndPrefix;
+    }
+
+    /**
+     * the prefix (and thus start row) of the keys you want
+     */
+    public scannerOpenWithPrefix_args setStartAndPrefix(byte[] startAndPrefix) {
+      this.startAndPrefix = startAndPrefix;
+      return this;
+    }
+
+    public void unsetStartAndPrefix() {
+      this.startAndPrefix = null;
+    }
+
+    /** Returns true if field startAndPrefix is set (has been assigned a value) and false otherwise */
+    public boolean isSetStartAndPrefix() {
+      return this.startAndPrefix != null;
+    }
+
+    public void setStartAndPrefixIsSet(boolean value) {
+      if (!value) {
+        this.startAndPrefix = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * the columns you want returned
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * the columns you want returned
+     */
+    public scannerOpenWithPrefix_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case START_AND_PREFIX:
+        if (value == null) {
+          unsetStartAndPrefix();
+        } else {
+          setStartAndPrefix((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case START_AND_PREFIX:
+        return getStartAndPrefix();
+
+      case COLUMNS:
+        return getColumns();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case START_AND_PREFIX:
+        return isSetStartAndPrefix();
+      case COLUMNS:
+        return isSetColumns();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenWithPrefix_args)
+        return this.equals((scannerOpenWithPrefix_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenWithPrefix_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_startAndPrefix = true && this.isSetStartAndPrefix();
+      boolean that_present_startAndPrefix = true && that.isSetStartAndPrefix();
+      if (this_present_startAndPrefix || that_present_startAndPrefix) {
+        if (!(this_present_startAndPrefix && that_present_startAndPrefix))
+          return false;
+        if (!java.util.Arrays.equals(this.startAndPrefix, that.startAndPrefix))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_startAndPrefix = true && (isSetStartAndPrefix());
+      builder.append(present_startAndPrefix);
+      if (present_startAndPrefix)
+        builder.append(startAndPrefix);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenWithPrefix_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenWithPrefix_args typedOther = (scannerOpenWithPrefix_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStartAndPrefix()).compareTo(typedOther.isSetStartAndPrefix());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(startAndPrefix, typedOther.startAndPrefix);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case START_AND_PREFIX:
+              if (field.type == TType.STRING) {
+                this.startAndPrefix = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list86 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list86.size);
+                  for (int _i87 = 0; _i87 < _list86.size; ++_i87)
+                  {
+                    byte[] _elem88;
+                    _elem88 = iprot.readBinary();
+                    this.columns.add(_elem88);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.startAndPrefix != null) {
+        oprot.writeFieldBegin(START_AND_PREFIX_FIELD_DESC);
+        oprot.writeBinary(this.startAndPrefix);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter89 : this.columns)
+          {
+            oprot.writeBinary(_iter89);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenWithPrefix_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("startAndPrefix:");
+      if (this.startAndPrefix == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.startAndPrefix);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
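+  /*
+   * Illustrative sketch, not part of the generated code: scannerOpenWithPrefix
+   * takes a single startAndPrefix value that is both the start row and the
+   * row-key prefix to match. The Hbase.Client wrapper is assumed from earlier
+   * in this file and the literals are illustrative only.
+   *
+   *   List<byte[]> columns = new ArrayList<byte[]>();
+   *   columns.add("info:name".getBytes());
+   *   int scannerId = client.scannerOpenWithPrefix(
+   *       "t1".getBytes(),            // tableName
+   *       "user_42_".getBytes(),      // startAndPrefix
+   *       columns);
+   */
+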
+  public static class scannerOpenWithPrefix_result implements TBase<scannerOpenWithPrefix_result._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenWithPrefix_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithPrefix_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public int success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenWithPrefix_result.class, metaDataMap);
+    }
+
+    public scannerOpenWithPrefix_result() {
+    }
+
+    public scannerOpenWithPrefix_result(
+      int success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenWithPrefix_result(scannerOpenWithPrefix_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public scannerOpenWithPrefix_result deepCopy() {
+      return new scannerOpenWithPrefix_result(this);
+    }
+
+    @Deprecated
+    public scannerOpenWithPrefix_result clone() {
+      return new scannerOpenWithPrefix_result(this);
+    }
+
+    public int getSuccess() {
+      return this.success;
+    }
+
+    public scannerOpenWithPrefix_result setSuccess(int success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerOpenWithPrefix_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Integer)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return new Integer(getSuccess());
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenWithPrefix_result)
+        return this.equals((scannerOpenWithPrefix_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenWithPrefix_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenWithPrefix_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenWithPrefix_result typedOther = (scannerOpenWithPrefix_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.I32) {
+                this.success = iprot.readI32();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeI32(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenWithPrefix_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+  public static class scannerOpenTs_args implements TBase<scannerOpenTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)3);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)4);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] startRow;
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> columns;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
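+    /*
+     * Illustrative sketch, not part of the generated code: opening a scanner
+     * with a timestamp bound. How the server interprets the bound is defined
+     * by the service implementation, not by this struct; the Hbase.Client
+     * wrapper and the literals are assumptions.
+     *
+     *   long cutoff = System.currentTimeMillis() - 60 * 60 * 1000L; // one hour ago
+     *   int scannerId = client.scannerOpenTs(
+     *       "t1".getBytes(),
+     *       "".getBytes(),            // empty startRow = begin at the first row
+     *       columns,                  // List<byte[]> of column names, e.g. "info:".getBytes()
+     *       cutoff);
+     */
+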
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * Starting row in table to scan.
+       * Send "" (empty string) to start at the first row.
+       */
+      START_ROW((short)2, "startRow"),
+      /**
+       * columns to scan. If column name is a column family, all
+       * columns of the specified column family are returned. It's also possible
+       * to pass a regex in the column qualifier.
+       */
+      COLUMNS((short)3, "columns"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)4, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it is not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it is not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.START_ROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenTs_args.class, metaDataMap);
+    }
+
+    public scannerOpenTs_args() {
+    }
+
+    public scannerOpenTs_args(
+      byte[] tableName,
+      byte[] startRow,
+      List<byte[]> columns,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.startRow = startRow;
+      this.columns = columns;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenTs_args(scannerOpenTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetStartRow()) {
+        this.startRow = other.startRow;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public scannerOpenTs_args deepCopy() {
+      return new scannerOpenTs_args(this);
+    }
+
+    @Deprecated
+    public scannerOpenTs_args clone() {
+      return new scannerOpenTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public scannerOpenTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] getStartRow() {
+      return this.startRow;
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public scannerOpenTs_args setStartRow(byte[] startRow) {
+      this.startRow = startRow;
+      return this;
+    }
+
+    public void unsetStartRow() {
+      this.startRow = null;
+    }
+
+    /** Returns true if field startRow is set (has been assigned a value) and false otherwise */
+    public boolean isSetStartRow() {
+      return this.startRow != null;
+    }
+
+    public void setStartRowIsSet(boolean value) {
+      if (!value) {
+        this.startRow = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public scannerOpenTs_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public scannerOpenTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case START_ROW:
+        if (value == null) {
+          unsetStartRow();
+        } else {
+          setStartRow((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case START_ROW:
+        return getStartRow();
+
+      case COLUMNS:
+        return getColumns();
+
+      case TIMESTAMP:
+        return new Long(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case START_ROW:
+        return isSetStartRow();
+      case COLUMNS:
+        return isSetColumns();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenTs_args)
+        return this.equals((scannerOpenTs_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_startRow = true && this.isSetStartRow();
+      boolean that_present_startRow = true && that.isSetStartRow();
+      if (this_present_startRow || that_present_startRow) {
+        if (!(this_present_startRow && that_present_startRow))
+          return false;
+        if (!java.util.Arrays.equals(this.startRow, that.startRow))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      builder.append(present_startRow);
+      if (present_startRow)
+        builder.append(startRow);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenTs_args typedOther = (scannerOpenTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStartRow()).compareTo(typedOther.isSetStartRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(startRow, typedOther.startRow);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case START_ROW:
+              if (field.type == TType.STRING) {
+                this.startRow = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list90 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list90.size);
+                  for (int _i91 = 0; _i91 < _list90.size; ++_i91)
+                  {
+                    byte[] _elem92;
+                    _elem92 = iprot.readBinary();
+                    this.columns.add(_elem92);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.startRow != null) {
+        oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+        oprot.writeBinary(this.startRow);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter93 : this.columns)
+          {
+            oprot.writeBinary(_iter93);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("startRow:");
+      if (this.startRow == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.startRow);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
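+
+  /*
+   * A minimal, hypothetical usage sketch for the argument struct above. It assumes the generated
+   * Hbase.Client exposes a scannerOpenTs(...) call whose parameters mirror scannerOpenTs_args
+   * (tableName, startRow, columns, timestamp) and that the sketch lives in the same package as
+   * this generated class; the host, port, table and column names are illustrative only and do
+   * not come from this file.
+   *
+   *   import java.util.Arrays;
+   *   import org.apache.thrift.protocol.TBinaryProtocol;
+   *   import org.apache.thrift.transport.TSocket;
+   *   import org.apache.thrift.transport.TTransport;
+   *
+   *   public class ScannerOpenTsSketch {
+   *     public static void main(String[] args) throws Exception {
+   *       TTransport transport = new TSocket("localhost", 9090);  // assumed Thrift gateway address
+   *       transport.open();
+   *       Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
+   *       int scannerId = client.scannerOpenTs(
+   *           "t1".getBytes(),                          // tableName
+   *           "".getBytes(),                            // startRow: "" starts at the first row
+   *           Arrays.asList("cf:".getBytes()),          // columns: a bare family name scans the family
+   *           1234567890L);                             // timestamp (see the field Javadoc above)
+   *       transport.close();
+   *     }
+   *   }
+   */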
+
+  public static class scannerOpenTs_result implements TBase<scannerOpenTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenTs_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public int success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenTs_result.class, metaDataMap);
+    }
+
+    public scannerOpenTs_result() {
+    }
+
+    public scannerOpenTs_result(
+      int success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenTs_result(scannerOpenTs_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public scannerOpenTs_result deepCopy() {
+      return new scannerOpenTs_result(this);
+    }
+
+    @Deprecated
+    public scannerOpenTs_result clone() {
+      return new scannerOpenTs_result(this);
+    }
+
+    public int getSuccess() {
+      return this.success;
+    }
+
+    public scannerOpenTs_result setSuccess(int success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerOpenTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Integer)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return new Integer(getSuccess());
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenTs_result)
+        return this.equals((scannerOpenTs_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenTs_result typedOther = (scannerOpenTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.I32) {
+                this.success = iprot.readI32();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeI32(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenTs_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
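+
+  /*
+   * Result structs such as the one above behave like a tagged union: after a call, exactly one of
+   * the declared fields is expected to be set. A small sketch of unwrapping such a struct directly,
+   * using only methods defined above; the helper name is illustrative, and it assumes IOError is
+   * the Thrift-generated exception type and therefore throwable.
+   *
+   *   static int unwrapScannerId(scannerOpenTs_result result) throws IOError {
+   *     if (result.isSetIo()) {
+   *       throw result.getIo();        // the server reported an IOError
+   *     }
+   *     if (result.isSetSuccess()) {
+   *       return result.getSuccess();  // the scanner id
+   *     }
+   *     throw new IllegalStateException("neither success nor io was set");
+   *   }
+   */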
+
+  public static class scannerOpenWithStopTs_args implements TBase<scannerOpenWithStopTs_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenWithStopTs_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStopTs_args");
+
+    private static final TField TABLE_NAME_FIELD_DESC = new TField("tableName", TType.STRING, (short)1);
+    private static final TField START_ROW_FIELD_DESC = new TField("startRow", TType.STRING, (short)2);
+    private static final TField STOP_ROW_FIELD_DESC = new TField("stopRow", TType.STRING, (short)3);
+    private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.LIST, (short)4);
+    private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)5);
+
+    /**
+     * name of table
+     */
+    public byte[] tableName;
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] startRow;
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    public byte[] stopRow;
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> columns;
+    /**
+     * timestamp
+     */
+    public long timestamp;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * name of table
+       */
+      TABLE_NAME((short)1, "tableName"),
+      /**
+       * Starting row in table to scan.
+       * Send "" (empty string) to start at the first row.
+       */
+      START_ROW((short)2, "startRow"),
+      /**
+       * row to stop scanning on. This row is *not* included in the
+       * scanner's results
+       */
+      STOP_ROW((short)3, "stopRow"),
+      /**
+       * columns to scan. If column name is a column family, all
+       * columns of the specified column family are returned. It's also possible
+       * to pass a regex in the column qualifier.
+       */
+      COLUMNS((short)4, "columns"),
+      /**
+       * timestamp
+       */
+      TIMESTAMP((short)5, "timestamp");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __TIMESTAMP_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.TABLE_NAME, new FieldMetaData("tableName", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.START_ROW, new FieldMetaData("startRow", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.STOP_ROW, new FieldMetaData("stopRow", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRING)));
+      put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new FieldValueMetaData(TType.STRING))));
+      put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I64)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenWithStopTs_args.class, metaDataMap);
+    }
+
+    public scannerOpenWithStopTs_args() {
+    }
+
+    public scannerOpenWithStopTs_args(
+      byte[] tableName,
+      byte[] startRow,
+      byte[] stopRow,
+      List<byte[]> columns,
+      long timestamp)
+    {
+      this();
+      this.tableName = tableName;
+      this.startRow = startRow;
+      this.stopRow = stopRow;
+      this.columns = columns;
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenWithStopTs_args(scannerOpenWithStopTs_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      if (other.isSetTableName()) {
+        this.tableName = other.tableName;
+      }
+      if (other.isSetStartRow()) {
+        this.startRow = other.startRow;
+      }
+      if (other.isSetStopRow()) {
+        this.stopRow = other.stopRow;
+      }
+      if (other.isSetColumns()) {
+        List<byte[]> __this__columns = new ArrayList<byte[]>();
+        for (byte[] other_element : other.columns) {
+          __this__columns.add(other_element);
+        }
+        this.columns = __this__columns;
+      }
+      this.timestamp = other.timestamp;
+    }
+
+    public scannerOpenWithStopTs_args deepCopy() {
+      return new scannerOpenWithStopTs_args(this);
+    }
+
+    @Deprecated
+    public scannerOpenWithStopTs_args clone() {
+      return new scannerOpenWithStopTs_args(this);
+    }
+
+    /**
+     * name of table
+     */
+    public byte[] getTableName() {
+      return this.tableName;
+    }
+
+    /**
+     * name of table
+     */
+    public scannerOpenWithStopTs_args setTableName(byte[] tableName) {
+      this.tableName = tableName;
+      return this;
+    }
+
+    public void unsetTableName() {
+      this.tableName = null;
+    }
+
+    /** Returns true if field tableName is set (has been assigned a value) and false otherwise */
+    public boolean isSetTableName() {
+      return this.tableName != null;
+    }
+
+    public void setTableNameIsSet(boolean value) {
+      if (!value) {
+        this.tableName = null;
+      }
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public byte[] getStartRow() {
+      return this.startRow;
+    }
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    public scannerOpenWithStopTs_args setStartRow(byte[] startRow) {
+      this.startRow = startRow;
+      return this;
+    }
+
+    public void unsetStartRow() {
+      this.startRow = null;
+    }
+
+    /** Returns true if field startRow is set (has been assigned a value) and false otherwise */
+    public boolean isSetStartRow() {
+      return this.startRow != null;
+    }
+
+    public void setStartRowIsSet(boolean value) {
+      if (!value) {
+        this.startRow = null;
+      }
+    }
+
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    public byte[] getStopRow() {
+      return this.stopRow;
+    }
+
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    public scannerOpenWithStopTs_args setStopRow(byte[] stopRow) {
+      this.stopRow = stopRow;
+      return this;
+    }
+
+    public void unsetStopRow() {
+      this.stopRow = null;
+    }
+
+    /** Returns true if field stopRow is set (has been assigned a value) and false otherwise */
+    public boolean isSetStopRow() {
+      return this.stopRow != null;
+    }
+
+    public void setStopRowIsSet(boolean value) {
+      if (!value) {
+        this.stopRow = null;
+      }
+    }
+
+    public int getColumnsSize() {
+      return (this.columns == null) ? 0 : this.columns.size();
+    }
+
+    public java.util.Iterator<byte[]> getColumnsIterator() {
+      return (this.columns == null) ? null : this.columns.iterator();
+    }
+
+    public void addToColumns(byte[] elem) {
+      if (this.columns == null) {
+        this.columns = new ArrayList<byte[]>();
+      }
+      this.columns.add(elem);
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public List<byte[]> getColumns() {
+      return this.columns;
+    }
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    public scannerOpenWithStopTs_args setColumns(List<byte[]> columns) {
+      this.columns = columns;
+      return this;
+    }
+
+    public void unsetColumns() {
+      this.columns = null;
+    }
+
+    /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+    public boolean isSetColumns() {
+      return this.columns != null;
+    }
+
+    public void setColumnsIsSet(boolean value) {
+      if (!value) {
+        this.columns = null;
+      }
+    }
+
+    /**
+     * timestamp
+     */
+    public long getTimestamp() {
+      return this.timestamp;
+    }
+
+    /**
+     * timestamp
+     */
+    public scannerOpenWithStopTs_args setTimestamp(long timestamp) {
+      this.timestamp = timestamp;
+      setTimestampIsSet(true);
+      return this;
+    }
+
+    public void unsetTimestamp() {
+      __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+    }
+
+    /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+    public boolean isSetTimestamp() {
+      return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+    }
+
+    public void setTimestampIsSet(boolean value) {
+      __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case TABLE_NAME:
+        if (value == null) {
+          unsetTableName();
+        } else {
+          setTableName((byte[])value);
+        }
+        break;
+
+      case START_ROW:
+        if (value == null) {
+          unsetStartRow();
+        } else {
+          setStartRow((byte[])value);
+        }
+        break;
+
+      case STOP_ROW:
+        if (value == null) {
+          unsetStopRow();
+        } else {
+          setStopRow((byte[])value);
+        }
+        break;
+
+      case COLUMNS:
+        if (value == null) {
+          unsetColumns();
+        } else {
+          setColumns((List<byte[]>)value);
+        }
+        break;
+
+      case TIMESTAMP:
+        if (value == null) {
+          unsetTimestamp();
+        } else {
+          setTimestamp((Long)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return getTableName();
+
+      case START_ROW:
+        return getStartRow();
+
+      case STOP_ROW:
+        return getStopRow();
+
+      case COLUMNS:
+        return getColumns();
+
+      case TIMESTAMP:
+        return new Long(getTimestamp());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case TABLE_NAME:
+        return isSetTableName();
+      case START_ROW:
+        return isSetStartRow();
+      case STOP_ROW:
+        return isSetStopRow();
+      case COLUMNS:
+        return isSetColumns();
+      case TIMESTAMP:
+        return isSetTimestamp();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenWithStopTs_args)
+        return this.equals((scannerOpenWithStopTs_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenWithStopTs_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_tableName = true && this.isSetTableName();
+      boolean that_present_tableName = true && that.isSetTableName();
+      if (this_present_tableName || that_present_tableName) {
+        if (!(this_present_tableName && that_present_tableName))
+          return false;
+        if (!java.util.Arrays.equals(this.tableName, that.tableName))
+          return false;
+      }
+
+      boolean this_present_startRow = true && this.isSetStartRow();
+      boolean that_present_startRow = true && that.isSetStartRow();
+      if (this_present_startRow || that_present_startRow) {
+        if (!(this_present_startRow && that_present_startRow))
+          return false;
+        if (!java.util.Arrays.equals(this.startRow, that.startRow))
+          return false;
+      }
+
+      boolean this_present_stopRow = true && this.isSetStopRow();
+      boolean that_present_stopRow = true && that.isSetStopRow();
+      if (this_present_stopRow || that_present_stopRow) {
+        if (!(this_present_stopRow && that_present_stopRow))
+          return false;
+        if (!java.util.Arrays.equals(this.stopRow, that.stopRow))
+          return false;
+      }
+
+      boolean this_present_columns = true && this.isSetColumns();
+      boolean that_present_columns = true && that.isSetColumns();
+      if (this_present_columns || that_present_columns) {
+        if (!(this_present_columns && that_present_columns))
+          return false;
+        if (!this.columns.equals(that.columns))
+          return false;
+      }
+
+      boolean this_present_timestamp = true;
+      boolean that_present_timestamp = true;
+      if (this_present_timestamp || that_present_timestamp) {
+        if (!(this_present_timestamp && that_present_timestamp))
+          return false;
+        if (this.timestamp != that.timestamp)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_tableName = true && (isSetTableName());
+      builder.append(present_tableName);
+      if (present_tableName)
+        builder.append(tableName);
+
+      boolean present_startRow = true && (isSetStartRow());
+      builder.append(present_startRow);
+      if (present_startRow)
+        builder.append(startRow);
+
+      boolean present_stopRow = true && (isSetStopRow());
+      builder.append(present_stopRow);
+      if (present_stopRow)
+        builder.append(stopRow);
+
+      boolean present_columns = true && (isSetColumns());
+      builder.append(present_columns);
+      if (present_columns)
+        builder.append(columns);
+
+      boolean present_timestamp = true;
+      builder.append(present_timestamp);
+      if (present_timestamp)
+        builder.append(timestamp);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenWithStopTs_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenWithStopTs_args typedOther = (scannerOpenWithStopTs_args)other;
+
+      lastComparison = Boolean.valueOf(isSetTableName()).compareTo(typedOther.isSetTableName());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(tableName, typedOther.tableName);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStartRow()).compareTo(typedOther.isSetStartRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(startRow, typedOther.startRow);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetStopRow()).compareTo(typedOther.isSetStopRow());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(stopRow, typedOther.stopRow);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetColumns()).compareTo(typedOther.isSetColumns());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(columns, typedOther.columns);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case TABLE_NAME:
+              if (field.type == TType.STRING) {
+                this.tableName = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case START_ROW:
+              if (field.type == TType.STRING) {
+                this.startRow = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case STOP_ROW:
+              if (field.type == TType.STRING) {
+                this.stopRow = iprot.readBinary();
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case COLUMNS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list94 = iprot.readListBegin();
+                  this.columns = new ArrayList<byte[]>(_list94.size);
+                  for (int _i95 = 0; _i95 < _list94.size; ++_i95)
+                  {
+                    byte[] _elem96;
+                    _elem96 = iprot.readBinary();
+                    this.columns.add(_elem96);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case TIMESTAMP:
+              if (field.type == TType.I64) {
+                this.timestamp = iprot.readI64();
+                setTimestampIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      if (this.tableName != null) {
+        oprot.writeFieldBegin(TABLE_NAME_FIELD_DESC);
+        oprot.writeBinary(this.tableName);
+        oprot.writeFieldEnd();
+      }
+      if (this.startRow != null) {
+        oprot.writeFieldBegin(START_ROW_FIELD_DESC);
+        oprot.writeBinary(this.startRow);
+        oprot.writeFieldEnd();
+      }
+      if (this.stopRow != null) {
+        oprot.writeFieldBegin(STOP_ROW_FIELD_DESC);
+        oprot.writeBinary(this.stopRow);
+        oprot.writeFieldEnd();
+      }
+      if (this.columns != null) {
+        oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRING, this.columns.size()));
+          for (byte[] _iter97 : this.columns)
+          {
+            oprot.writeBinary(_iter97);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+      oprot.writeI64(this.timestamp);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenWithStopTs_args(");
+      boolean first = true;
+
+      sb.append("tableName:");
+      if (this.tableName == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.tableName);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("startRow:");
+      if (this.startRow == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.startRow);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("stopRow:");
+      if (this.stopRow == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.stopRow);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("columns:");
+      if (this.columns == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.columns);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("timestamp:");
+      sb.append(this.timestamp);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
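+
+  /*
+   * A small sketch of building the argument struct above with its fluent setters; every value is
+   * illustrative. In normal use the generated client call (assumed to be
+   * Hbase.Client#scannerOpenWithStopTs, mirroring these fields) takes the same values directly,
+   * so the struct is rarely constructed by hand.
+   *
+   *   scannerOpenWithStopTs_args scanArgs = new scannerOpenWithStopTs_args()
+   *       .setTableName("t1".getBytes())
+   *       .setStartRow("row-000".getBytes())
+   *       .setStopRow("row-100".getBytes())      // exclusive upper bound, per the field Javadoc
+   *       .setTimestamp(1234567890L);
+   *   scanArgs.addToColumns("cf:qual".getBytes());
+   *   scanArgs.validate();                       // no required fields, so this is a no-op here
+   */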
+
+  public static class scannerOpenWithStopTs_result implements TBase<scannerOpenWithStopTs_result._Fields>, java.io.Serializable, Cloneable, Comparable<scannerOpenWithStopTs_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerOpenWithStopTs_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.I32, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+
+    public int success;
+    public IOError io;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __SUCCESS_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerOpenWithStopTs_result.class, metaDataMap);
+    }
+
+    public scannerOpenWithStopTs_result() {
+    }
+
+    public scannerOpenWithStopTs_result(
+      int success,
+      IOError io)
+    {
+      this();
+      this.success = success;
+      setSuccessIsSet(true);
+      this.io = io;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerOpenWithStopTs_result(scannerOpenWithStopTs_result other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.success = other.success;
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+    }
+
+    public scannerOpenWithStopTs_result deepCopy() {
+      return new scannerOpenWithStopTs_result(this);
+    }
+
+    @Deprecated
+    public scannerOpenWithStopTs_result clone() {
+      return new scannerOpenWithStopTs_result(this);
+    }
+
+    public int getSuccess() {
+      return this.success;
+    }
+
+    public scannerOpenWithStopTs_result setSuccess(int success) {
+      this.success = success;
+      setSuccessIsSet(true);
+      return this;
+    }
+
+    public void unsetSuccess() {
+      __isset_bit_vector.clear(__SUCCESS_ISSET_ID);
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return __isset_bit_vector.get(__SUCCESS_ISSET_ID);
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      __isset_bit_vector.set(__SUCCESS_ISSET_ID, value);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerOpenWithStopTs_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((Integer)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return new Integer(getSuccess());
+
+      case IO:
+        return getIo();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerOpenWithStopTs_result)
+        return this.equals((scannerOpenWithStopTs_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerOpenWithStopTs_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true;
+      boolean that_present_success = true;
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (this.success != that.success)
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true;
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerOpenWithStopTs_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerOpenWithStopTs_result typedOther = (scannerOpenWithStopTs_result)other;
+
+      lastComparison = Boolean.valueOf(isSetSuccess()).compareTo(typedOther.isSetSuccess());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(success, typedOther.success);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.I32) {
+                this.success = iprot.readI32();
+                setSuccessIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        oprot.writeI32(this.success);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerOpenWithStopTs_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      sb.append(this.success);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
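+
+  /*
+   * The write()/read() pair above performs the actual wire encoding. A hedged round-trip sketch
+   * using libthrift's TBinaryProtocol and TIOStreamTransport (assumed to be on the classpath with
+   * this generated code); the statements assume a surrounding method that declares throws TException.
+   *
+   *   import java.io.ByteArrayInputStream;
+   *   import java.io.ByteArrayOutputStream;
+   *   import org.apache.thrift.protocol.TBinaryProtocol;
+   *   import org.apache.thrift.transport.TIOStreamTransport;
+   *
+   *   ByteArrayOutputStream buffer = new ByteArrayOutputStream();
+   *   scannerOpenWithStopTs_result encoded = new scannerOpenWithStopTs_result().setSuccess(42);
+   *   encoded.write(new TBinaryProtocol(new TIOStreamTransport(buffer)));           // serialize
+   *
+   *   scannerOpenWithStopTs_result decoded = new scannerOpenWithStopTs_result();
+   *   decoded.read(new TBinaryProtocol(new TIOStreamTransport(
+   *       new ByteArrayInputStream(buffer.toByteArray()))));                        // deserialize
+   *   assert decoded.equals(encoded) && decoded.getSuccess() == 42;
+   */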
+
+  public static class scannerGet_args implements TBase<scannerGet_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerGet_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerGet_args");
+
+    private static final TField ID_FIELD_DESC = new TField("id", TType.I32, (short)1);
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public int id;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * id of a scanner returned by scannerOpen
+       */
+      ID((short)1, "id");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __ID_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerGet_args.class, metaDataMap);
+    }
+
+    public scannerGet_args() {
+    }
+
+    public scannerGet_args(
+      int id)
+    {
+      this();
+      this.id = id;
+      setIdIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerGet_args(scannerGet_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.id = other.id;
+    }
+
+    public scannerGet_args deepCopy() {
+      return new scannerGet_args(this);
+    }
+
+    @Deprecated
+    public scannerGet_args clone() {
+      return new scannerGet_args(this);
+    }
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public int getId() {
+      return this.id;
+    }
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public scannerGet_args setId(int id) {
+      this.id = id;
+      setIdIsSet(true);
+      return this;
+    }
+
+    public void unsetId() {
+      __isset_bit_vector.clear(__ID_ISSET_ID);
+    }
+
+    /** Returns true if field id is set (has been assigned a value) and false otherwise */
+    public boolean isSetId() {
+      return __isset_bit_vector.get(__ID_ISSET_ID);
+    }
+
+    public void setIdIsSet(boolean value) {
+      __isset_bit_vector.set(__ID_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case ID:
+        if (value == null) {
+          unsetId();
+        } else {
+          setId((Integer)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case ID:
+        return new Integer(getId());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case ID:
+        return isSetId();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerGet_args)
+        return this.equals((scannerGet_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerGet_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_id = true;
+      boolean that_present_id = true;
+      if (this_present_id || that_present_id) {
+        if (!(this_present_id && that_present_id))
+          return false;
+        if (this.id != that.id)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_id = true;
+      builder.append(present_id);
+      if (present_id)
+        builder.append(id);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerGet_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerGet_args typedOther = (scannerGet_args)other;
+
+      lastComparison = Boolean.valueOf(isSetId()).compareTo(typedOther.isSetId());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(id, typedOther.id);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case ID:
+              if (field.type == TType.I32) {
+                this.id = iprot.readI32();
+                setIdIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      oprot.writeFieldBegin(ID_FIELD_DESC);
+      oprot.writeI32(this.id);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerGet_args(");
+      boolean first = true;
+
+      sb.append("id:");
+      sb.append(this.id);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
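+
+  /*
+   * Illustrative usage sketch (not part of the generated API): a
+   * write()/read() round trip for scannerGet_args using libthrift's
+   * in-memory transport, assuming imports of
+   * org.apache.thrift.transport.TMemoryBuffer and
+   * org.apache.thrift.protocol.TBinaryProtocol; the scanner id 42 is a
+   * placeholder value.
+   *
+   *   TMemoryBuffer buffer = new TMemoryBuffer(64);
+   *   TBinaryProtocol proto = new TBinaryProtocol(buffer);
+   *   scannerGet_args out = new scannerGet_args(42);  // id from a scannerOpen* call
+   *   out.write(proto);                               // serialize into the buffer
+   *   scannerGet_args in = new scannerGet_args();
+   *   in.read(proto);                                 // deserialize it back
+   *   assert in.isSetId() && in.getId() == 42;
+   */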
+
+  public static class scannerGet_result implements TBase<scannerGet_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerGet_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public List<TRowResult> success;
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRowResult.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerGet_result.class, metaDataMap);
+    }
+
+    public scannerGet_result() {
+    }
+
+    public scannerGet_result(
+      List<TRowResult> success,
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerGet_result(scannerGet_result other) {
+      if (other.isSetSuccess()) {
+        List<TRowResult> __this__success = new ArrayList<TRowResult>();
+        for (TRowResult other_element : other.success) {
+          __this__success.add(new TRowResult(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public scannerGet_result deepCopy() {
+      return new scannerGet_result(this);
+    }
+
+    @Deprecated
+    public scannerGet_result clone() {
+      return new scannerGet_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRowResult> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRowResult elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRowResult>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRowResult> getSuccess() {
+      return this.success;
+    }
+
+    public scannerGet_result setSuccess(List<TRowResult> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerGet_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public scannerGet_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRowResult>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerGet_result)
+        return this.equals((scannerGet_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerGet_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list98 = iprot.readListBegin();
+                  this.success = new ArrayList<TRowResult>(_list98.size);
+                  for (int _i99 = 0; _i99 < _list98.size; ++_i99)
+                  {
+                    TRowResult _elem100;
+                    _elem100 = new TRowResult();
+                    _elem100.read(iprot);
+                    this.success.add(_elem100);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRowResult _iter101 : this.success)
+          {
+            _iter101.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerGet_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
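+
+  /*
+   * Illustrative usage sketch (not part of the generated API): how a caller
+   * can inspect a scannerGet_result after read(). At most one of success, io
+   * and ia is expected to be populated, which the isSetXxx() helpers above
+   * make explicit; proto is a TProtocol as in the scannerGet_args sketch.
+   *
+   *   scannerGet_result result = new scannerGet_result();
+   *   result.read(proto);
+   *   if (result.isSetIo()) {
+   *     // the server raised an IOError
+   *   } else if (result.isSetIa()) {
+   *     // the server raised an IllegalArgument
+   *   } else if (result.isSetSuccess()) {
+   *     for (TRowResult row : result.getSuccess()) {
+   *       // consume one row of scanner output
+   *     }
+   *   }
+   */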
+
+  public static class scannerGetList_args implements TBase<scannerGetList_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerGetList_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerGetList_args");
+
+    private static final TField ID_FIELD_DESC = new TField("id", TType.I32, (short)1);
+    private static final TField NB_ROWS_FIELD_DESC = new TField("nbRows", TType.I32, (short)2);
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public int id;
+    /**
+     * number of results to return
+     */
+    public int nbRows;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * id of a scanner returned by scannerOpen
+       */
+      ID((short)1, "id"),
+      /**
+       * number of results to return
+       */
+      NB_ROWS((short)2, "nbRows");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __ID_ISSET_ID = 0;
+    private static final int __NBROWS_ISSET_ID = 1;
+    private BitSet __isset_bit_vector = new BitSet(2);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+      put(_Fields.NB_ROWS, new FieldMetaData("nbRows", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerGetList_args.class, metaDataMap);
+    }
+
+    public scannerGetList_args() {
+    }
+
+    public scannerGetList_args(
+      int id,
+      int nbRows)
+    {
+      this();
+      this.id = id;
+      setIdIsSet(true);
+      this.nbRows = nbRows;
+      setNbRowsIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerGetList_args(scannerGetList_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.id = other.id;
+      this.nbRows = other.nbRows;
+    }
+
+    public scannerGetList_args deepCopy() {
+      return new scannerGetList_args(this);
+    }
+
+    @Deprecated
+    public scannerGetList_args clone() {
+      return new scannerGetList_args(this);
+    }
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public int getId() {
+      return this.id;
+    }
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public scannerGetList_args setId(int id) {
+      this.id = id;
+      setIdIsSet(true);
+      return this;
+    }
+
+    public void unsetId() {
+      __isset_bit_vector.clear(__ID_ISSET_ID);
+    }
+
+    /** Returns true if field id is set (has been assigned a value) and false otherwise */
+    public boolean isSetId() {
+      return __isset_bit_vector.get(__ID_ISSET_ID);
+    }
+
+    public void setIdIsSet(boolean value) {
+      __isset_bit_vector.set(__ID_ISSET_ID, value);
+    }
+
+    /**
+     * number of results to return
+     */
+    public int getNbRows() {
+      return this.nbRows;
+    }
+
+    /**
+     * number of results to return
+     */
+    public scannerGetList_args setNbRows(int nbRows) {
+      this.nbRows = nbRows;
+      setNbRowsIsSet(true);
+      return this;
+    }
+
+    public void unsetNbRows() {
+      __isset_bit_vector.clear(__NBROWS_ISSET_ID);
+    }
+
+    /** Returns true if field nbRows is set (has been assigned a value) and false otherwise */
+    public boolean isSetNbRows() {
+      return __isset_bit_vector.get(__NBROWS_ISSET_ID);
+    }
+
+    public void setNbRowsIsSet(boolean value) {
+      __isset_bit_vector.set(__NBROWS_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case ID:
+        if (value == null) {
+          unsetId();
+        } else {
+          setId((Integer)value);
+        }
+        break;
+
+      case NB_ROWS:
+        if (value == null) {
+          unsetNbRows();
+        } else {
+          setNbRows((Integer)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case ID:
+        return new Integer(getId());
+
+      case NB_ROWS:
+        return new Integer(getNbRows());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case ID:
+        return isSetId();
+      case NB_ROWS:
+        return isSetNbRows();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerGetList_args)
+        return this.equals((scannerGetList_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerGetList_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_id = true;
+      boolean that_present_id = true;
+      if (this_present_id || that_present_id) {
+        if (!(this_present_id && that_present_id))
+          return false;
+        if (this.id != that.id)
+          return false;
+      }
+
+      boolean this_present_nbRows = true;
+      boolean that_present_nbRows = true;
+      if (this_present_nbRows || that_present_nbRows) {
+        if (!(this_present_nbRows && that_present_nbRows))
+          return false;
+        if (this.nbRows != that.nbRows)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_id = true;
+      builder.append(present_id);
+      if (present_id)
+        builder.append(id);
+
+      boolean present_nbRows = true;
+      builder.append(present_nbRows);
+      if (present_nbRows)
+        builder.append(nbRows);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerGetList_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerGetList_args typedOther = (scannerGetList_args)other;
+
+      lastComparison = Boolean.valueOf(isSetId()).compareTo(typedOther.isSetId());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(id, typedOther.id);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetNbRows()).compareTo(typedOther.isSetNbRows());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(nbRows, typedOther.nbRows);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case ID:
+              if (field.type == TType.I32) {
+                this.id = iprot.readI32();
+                setIdIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case NB_ROWS:
+              if (field.type == TType.I32) {
+                this.nbRows = iprot.readI32();
+                setNbRowsIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      oprot.writeFieldBegin(ID_FIELD_DESC);
+      oprot.writeI32(this.id);
+      oprot.writeFieldEnd();
+      oprot.writeFieldBegin(NB_ROWS_FIELD_DESC);
+      oprot.writeI32(this.nbRows);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerGetList_args(");
+      boolean first = true;
+
+      sb.append("id:");
+      sb.append(this.id);
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("nbRows:");
+      sb.append(this.nbRows);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
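+
+  /*
+   * Illustrative usage sketch (not part of the generated API): the two
+   * equivalent ways of populating scannerGetList_args, via the fluent
+   * setters or the generic setFieldValue(_Fields, Object) path used by
+   * reflective callers; the id 7 and batch size 100 are placeholders.
+   *
+   *   scannerGetList_args bySetters = new scannerGetList_args()
+   *       .setId(7)
+   *       .setNbRows(100);
+   *
+   *   scannerGetList_args byFields = new scannerGetList_args();
+   *   byFields.setFieldValue(scannerGetList_args._Fields.ID, 7);
+   *   byFields.setFieldValue(scannerGetList_args._Fields.NB_ROWS, 100);
+   *
+   *   assert bySetters.equals(byFields);
+   */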
+
+  public static class scannerGetList_result implements TBase<scannerGetList_result._Fields>, java.io.Serializable, Cloneable   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerGetList_result");
+
+    private static final TField SUCCESS_FIELD_DESC = new TField("success", TType.LIST, (short)0);
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public List<TRowResult> success;
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      SUCCESS((short)0, "success"),
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.SUCCESS, new FieldMetaData("success", TFieldRequirementType.DEFAULT,
+          new ListMetaData(TType.LIST,
+              new StructMetaData(TType.STRUCT, TRowResult.class))));
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerGetList_result.class, metaDataMap);
+    }
+
+    public scannerGetList_result() {
+    }
+
+    public scannerGetList_result(
+      List<TRowResult> success,
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.success = success;
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerGetList_result(scannerGetList_result other) {
+      if (other.isSetSuccess()) {
+        List<TRowResult> __this__success = new ArrayList<TRowResult>();
+        for (TRowResult other_element : other.success) {
+          __this__success.add(new TRowResult(other_element));
+        }
+        this.success = __this__success;
+      }
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public scannerGetList_result deepCopy() {
+      return new scannerGetList_result(this);
+    }
+
+    @Deprecated
+    public scannerGetList_result clone() {
+      return new scannerGetList_result(this);
+    }
+
+    public int getSuccessSize() {
+      return (this.success == null) ? 0 : this.success.size();
+    }
+
+    public java.util.Iterator<TRowResult> getSuccessIterator() {
+      return (this.success == null) ? null : this.success.iterator();
+    }
+
+    public void addToSuccess(TRowResult elem) {
+      if (this.success == null) {
+        this.success = new ArrayList<TRowResult>();
+      }
+      this.success.add(elem);
+    }
+
+    public List<TRowResult> getSuccess() {
+      return this.success;
+    }
+
+    public scannerGetList_result setSuccess(List<TRowResult> success) {
+      this.success = success;
+      return this;
+    }
+
+    public void unsetSuccess() {
+      this.success = null;
+    }
+
+    /** Returns true if field success is set (has been assigned a value) and false otherwise */
+    public boolean isSetSuccess() {
+      return this.success != null;
+    }
+
+    public void setSuccessIsSet(boolean value) {
+      if (!value) {
+        this.success = null;
+      }
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerGetList_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public scannerGetList_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case SUCCESS:
+        if (value == null) {
+          unsetSuccess();
+        } else {
+          setSuccess((List<TRowResult>)value);
+        }
+        break;
+
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return getSuccess();
+
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case SUCCESS:
+        return isSetSuccess();
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerGetList_result)
+        return this.equals((scannerGetList_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerGetList_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_success = true && this.isSetSuccess();
+      boolean that_present_success = true && that.isSetSuccess();
+      if (this_present_success || that_present_success) {
+        if (!(this_present_success && that_present_success))
+          return false;
+        if (!this.success.equals(that.success))
+          return false;
+      }
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_success = true && (isSetSuccess());
+      builder.append(present_success);
+      if (present_success)
+        builder.append(success);
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case SUCCESS:
+              if (field.type == TType.LIST) {
+                {
+                  TList _list102 = iprot.readListBegin();
+                  this.success = new ArrayList<TRowResult>(_list102.size);
+                  for (int _i103 = 0; _i103 < _list102.size; ++_i103)
+                  {
+                    TRowResult _elem104;
+                    _elem104 = new TRowResult();
+                    _elem104.read(iprot);
+                    this.success.add(_elem104);
+                  }
+                  iprot.readListEnd();
+                }
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetSuccess()) {
+        oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
+        {
+          oprot.writeListBegin(new TList(TType.STRUCT, this.success.size()));
+          for (TRowResult _iter105 : this.success)
+          {
+            _iter105.write(oprot);
+          }
+          oprot.writeListEnd();
+        }
+        oprot.writeFieldEnd();
+      } else if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerGetList_result(");
+      boolean first = true;
+
+      sb.append("success:");
+      if (this.success == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.success);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
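+
+  /*
+   * Illustrative usage sketch (not part of the generated API): building a
+   * scannerGetList_result by hand (for example in a test double for the
+   * Thrift handler) and taking a defensive copy. deepCopy() above copies the
+   * TRowResult elements themselves, so the copy is unaffected by later
+   * mutation of the original list.
+   *
+   *   scannerGetList_result result = new scannerGetList_result();
+   *   result.addToSuccess(new TRowResult());      // row contents omitted here
+   *   scannerGetList_result snapshot = result.deepCopy();
+   *   result.getSuccess().clear();
+   *   assert snapshot.getSuccessSize() == 1;      // snapshot keeps its own copy
+   */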
+
+  public static class scannerClose_args implements TBase<scannerClose_args._Fields>, java.io.Serializable, Cloneable, Comparable<scannerClose_args>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerClose_args");
+
+    private static final TField ID_FIELD_DESC = new TField("id", TType.I32, (short)1);
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public int id;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      /**
+       * id of a scanner returned by scannerOpen
+       */
+      ID((short)1, "id");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+    private static final int __ID_ISSET_ID = 0;
+    private BitSet __isset_bit_vector = new BitSet(1);
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.I32)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerClose_args.class, metaDataMap);
+    }
+
+    public scannerClose_args() {
+    }
+
+    public scannerClose_args(
+      int id)
+    {
+      this();
+      this.id = id;
+      setIdIsSet(true);
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerClose_args(scannerClose_args other) {
+      __isset_bit_vector.clear();
+      __isset_bit_vector.or(other.__isset_bit_vector);
+      this.id = other.id;
+    }
+
+    public scannerClose_args deepCopy() {
+      return new scannerClose_args(this);
+    }
+
+    @Deprecated
+    public scannerClose_args clone() {
+      return new scannerClose_args(this);
+    }
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public int getId() {
+      return this.id;
+    }
+
+    /**
+     * id of a scanner returned by scannerOpen
+     */
+    public scannerClose_args setId(int id) {
+      this.id = id;
+      setIdIsSet(true);
+      return this;
+    }
+
+    public void unsetId() {
+      __isset_bit_vector.clear(__ID_ISSET_ID);
+    }
+
+    /** Returns true if field id is set (has been assigned a value) and false otherwise */
+    public boolean isSetId() {
+      return __isset_bit_vector.get(__ID_ISSET_ID);
+    }
+
+    public void setIdIsSet(boolean value) {
+      __isset_bit_vector.set(__ID_ISSET_ID, value);
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case ID:
+        if (value == null) {
+          unsetId();
+        } else {
+          setId((Integer)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case ID:
+        return new Integer(getId());
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case ID:
+        return isSetId();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerClose_args)
+        return this.equals((scannerClose_args)that);
+      return false;
+    }
+
+    public boolean equals(scannerClose_args that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_id = true;
+      boolean that_present_id = true;
+      if (this_present_id || that_present_id) {
+        if (!(this_present_id && that_present_id))
+          return false;
+        if (this.id != that.id)
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_id = true;
+      builder.append(present_id);
+      if (present_id)
+        builder.append(id);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerClose_args other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerClose_args typedOther = (scannerClose_args)other;
+
+      lastComparison = Boolean.valueOf(isSetId()).compareTo(typedOther.isSetId());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(id, typedOther.id);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case ID:
+              if (field.type == TType.I32) {
+                this.id = iprot.readI32();
+                setIdIsSet(true);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      validate();
+
+      oprot.writeStructBegin(STRUCT_DESC);
+      oprot.writeFieldBegin(ID_FIELD_DESC);
+      oprot.writeI32(this.id);
+      oprot.writeFieldEnd();
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerClose_args(");
+      boolean first = true;
+
+      sb.append("id:");
+      sb.append(this.id);
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
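+
+  /*
+   * Illustrative usage sketch (not part of the generated API):
+   * scannerClose_args implements Comparable<scannerClose_args>, so queued
+   * close requests can be ordered by scanner id with the standard
+   * collections API; the ids 9 and 3 are placeholders.
+   *
+   *   List<scannerClose_args> pending = new ArrayList<scannerClose_args>();
+   *   pending.add(new scannerClose_args(9));
+   *   pending.add(new scannerClose_args(3));
+   *   Collections.sort(pending);                  // uses compareTo(...) above
+   *   assert pending.get(0).getId() == 3;
+   */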
+
+  public static class scannerClose_result implements TBase<scannerClose_result._Fields>, java.io.Serializable, Cloneable, Comparable<scannerClose_result>   {
+    private static final TStruct STRUCT_DESC = new TStruct("scannerClose_result");
+
+    private static final TField IO_FIELD_DESC = new TField("io", TType.STRUCT, (short)1);
+    private static final TField IA_FIELD_DESC = new TField("ia", TType.STRUCT, (short)2);
+
+    public IOError io;
+    public IllegalArgument ia;
+
+    /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+    public enum _Fields implements TFieldIdEnum {
+      IO((short)1, "io"),
+      IA((short)2, "ia");
+
+      private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+      private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+      static {
+        for (_Fields field : EnumSet.allOf(_Fields.class)) {
+          byId.put((int)field._thriftId, field);
+          byName.put(field.getFieldName(), field);
+        }
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, or null if it's not found.
+       */
+      public static _Fields findByThriftId(int fieldId) {
+        return byId.get(fieldId);
+      }
+
+      /**
+       * Find the _Fields constant that matches fieldId, throwing an exception
+       * if it is not found.
+       */
+      public static _Fields findByThriftIdOrThrow(int fieldId) {
+        _Fields fields = findByThriftId(fieldId);
+        if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+        return fields;
+      }
+
+      /**
+       * Find the _Fields constant that matches name, or null if it's not found.
+       */
+      public static _Fields findByName(String name) {
+        return byName.get(name);
+      }
+
+      private final short _thriftId;
+      private final String _fieldName;
+
+      _Fields(short thriftId, String fieldName) {
+        _thriftId = thriftId;
+        _fieldName = fieldName;
+      }
+
+      public short getThriftFieldId() {
+        return _thriftId;
+      }
+
+      public String getFieldName() {
+        return _fieldName;
+      }
+    }
+
+    // isset id assignments
+
+    public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+      put(_Fields.IO, new FieldMetaData("io", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+      put(_Fields.IA, new FieldMetaData("ia", TFieldRequirementType.DEFAULT,
+          new FieldValueMetaData(TType.STRUCT)));
+    }});
+
+    static {
+      FieldMetaData.addStructMetaDataMap(scannerClose_result.class, metaDataMap);
+    }
+
+    public scannerClose_result() {
+    }
+
+    public scannerClose_result(
+      IOError io,
+      IllegalArgument ia)
+    {
+      this();
+      this.io = io;
+      this.ia = ia;
+    }
+
+    /**
+     * Performs a deep copy on <i>other</i>.
+     */
+    public scannerClose_result(scannerClose_result other) {
+      if (other.isSetIo()) {
+        this.io = new IOError(other.io);
+      }
+      if (other.isSetIa()) {
+        this.ia = new IllegalArgument(other.ia);
+      }
+    }
+
+    public scannerClose_result deepCopy() {
+      return new scannerClose_result(this);
+    }
+
+    @Deprecated
+    public scannerClose_result clone() {
+      return new scannerClose_result(this);
+    }
+
+    public IOError getIo() {
+      return this.io;
+    }
+
+    public scannerClose_result setIo(IOError io) {
+      this.io = io;
+      return this;
+    }
+
+    public void unsetIo() {
+      this.io = null;
+    }
+
+    /** Returns true if field io is set (has been assigned a value) and false otherwise */
+    public boolean isSetIo() {
+      return this.io != null;
+    }
+
+    public void setIoIsSet(boolean value) {
+      if (!value) {
+        this.io = null;
+      }
+    }
+
+    public IllegalArgument getIa() {
+      return this.ia;
+    }
+
+    public scannerClose_result setIa(IllegalArgument ia) {
+      this.ia = ia;
+      return this;
+    }
+
+    public void unsetIa() {
+      this.ia = null;
+    }
+
+    /** Returns true if field ia is set (has been assigned a value) and false otherwise */
+    public boolean isSetIa() {
+      return this.ia != null;
+    }
+
+    public void setIaIsSet(boolean value) {
+      if (!value) {
+        this.ia = null;
+      }
+    }
+
+    public void setFieldValue(_Fields field, Object value) {
+      switch (field) {
+      case IO:
+        if (value == null) {
+          unsetIo();
+        } else {
+          setIo((IOError)value);
+        }
+        break;
+
+      case IA:
+        if (value == null) {
+          unsetIa();
+        } else {
+          setIa((IllegalArgument)value);
+        }
+        break;
+
+      }
+    }
+
+    public void setFieldValue(int fieldID, Object value) {
+      setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+    }
+
+    public Object getFieldValue(_Fields field) {
+      switch (field) {
+      case IO:
+        return getIo();
+
+      case IA:
+        return getIa();
+
+      }
+      throw new IllegalStateException();
+    }
+
+    public Object getFieldValue(int fieldId) {
+      return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+    }
+
+    /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+    public boolean isSet(_Fields field) {
+      switch (field) {
+      case IO:
+        return isSetIo();
+      case IA:
+        return isSetIa();
+      }
+      throw new IllegalStateException();
+    }
+
+    public boolean isSet(int fieldID) {
+      return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+    }
+
+    @Override
+    public boolean equals(Object that) {
+      if (that == null)
+        return false;
+      if (that instanceof scannerClose_result)
+        return this.equals((scannerClose_result)that);
+      return false;
+    }
+
+    public boolean equals(scannerClose_result that) {
+      if (that == null)
+        return false;
+
+      boolean this_present_io = true && this.isSetIo();
+      boolean that_present_io = true && that.isSetIo();
+      if (this_present_io || that_present_io) {
+        if (!(this_present_io && that_present_io))
+          return false;
+        if (!this.io.equals(that.io))
+          return false;
+      }
+
+      boolean this_present_ia = true && this.isSetIa();
+      boolean that_present_ia = true && that.isSetIa();
+      if (this_present_ia || that_present_ia) {
+        if (!(this_present_ia && that_present_ia))
+          return false;
+        if (!this.ia.equals(that.ia))
+          return false;
+      }
+
+      return true;
+    }
+
+    @Override
+    public int hashCode() {
+      HashCodeBuilder builder = new HashCodeBuilder();
+
+      boolean present_io = true && (isSetIo());
+      builder.append(present_io);
+      if (present_io)
+        builder.append(io);
+
+      boolean present_ia = true && (isSetIa());
+      builder.append(present_ia);
+      if (present_ia)
+        builder.append(ia);
+
+      return builder.toHashCode();
+    }
+
+    public int compareTo(scannerClose_result other) {
+      if (!getClass().equals(other.getClass())) {
+        return getClass().getName().compareTo(other.getClass().getName());
+      }
+
+      int lastComparison = 0;
+      scannerClose_result typedOther = (scannerClose_result)other;
+
+      lastComparison = Boolean.valueOf(isSetIo()).compareTo(typedOther.isSetIo());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(io, typedOther.io);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = Boolean.valueOf(isSetIa()).compareTo(typedOther.isSetIa());
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      lastComparison = TBaseHelper.compareTo(ia, typedOther.ia);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+      return 0;
+    }
+
+    public void read(TProtocol iprot) throws TException {
+      TField field;
+      iprot.readStructBegin();
+      while (true)
+      {
+        field = iprot.readFieldBegin();
+        if (field.type == TType.STOP) {
+          break;
+        }
+        _Fields fieldId = _Fields.findByThriftId(field.id);
+        if (fieldId == null) {
+          TProtocolUtil.skip(iprot, field.type);
+        } else {
+          switch (fieldId) {
+            case IO:
+              if (field.type == TType.STRUCT) {
+                this.io = new IOError();
+                this.io.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+            case IA:
+              if (field.type == TType.STRUCT) {
+                this.ia = new IllegalArgument();
+                this.ia.read(iprot);
+              } else {
+                TProtocolUtil.skip(iprot, field.type);
+              }
+              break;
+          }
+          iprot.readFieldEnd();
+        }
+      }
+      iprot.readStructEnd();
+
+      // check for required fields of primitive type, which can't be checked in the validate method
+      validate();
+    }
+
+    public void write(TProtocol oprot) throws TException {
+      oprot.writeStructBegin(STRUCT_DESC);
+
+      if (this.isSetIo()) {
+        oprot.writeFieldBegin(IO_FIELD_DESC);
+        this.io.write(oprot);
+        oprot.writeFieldEnd();
+      } else if (this.isSetIa()) {
+        oprot.writeFieldBegin(IA_FIELD_DESC);
+        this.ia.write(oprot);
+        oprot.writeFieldEnd();
+      }
+      oprot.writeFieldStop();
+      oprot.writeStructEnd();
+    }
+
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder("scannerClose_result(");
+      boolean first = true;
+
+      sb.append("io:");
+      if (this.io == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.io);
+      }
+      first = false;
+      if (!first) sb.append(", ");
+      sb.append("ia:");
+      if (this.ia == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.ia);
+      }
+      first = false;
+      sb.append(")");
+      return sb.toString();
+    }
+
+    public void validate() throws TException {
+      // check for required fields
+    }
+
+  }
+
+}
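
Editor's note: scannerClose_result above is the wrapper the generated Hbase.Client unpacks after a scannerClose call; if its io or ia field comes back set, the client surfaces it as an exception. The sketch below is illustrative only: it assumes a Thrift gateway on localhost:9090 and the scannerOpen/scannerGet/scannerClose signatures used by this era's Hbase.thrift (byte[] table, row and column names); the table and column names are made up and none of these values come from the diff itself.

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.hbase.thrift.generated.*;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;

    public class ScannerSketch {
      public static void main(String[] args) throws Exception {
        TSocket transport = new TSocket("localhost", 9090);        // assumed gateway address
        Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
        transport.open();
        try {
          int scannerId = client.scannerOpen("t1".getBytes(), "".getBytes(),
              Arrays.asList("f1:".getBytes()));                    // whole column family f1
          List<TRowResult> rows = client.scannerGet(scannerId);
          System.out.println("fetched " + rows.size() + " row(s)");
          client.scannerClose(scannerId);     // IOError / IllegalArgument surface here
        } catch (IllegalArgument ia) {
          System.err.println("bad scanner id: " + ia.getMessage());
        } catch (IOError io) {
          System.err.println("HBase-side error: " + io.getMessage());
        } finally {
          transport.close();
        }
      }
    }
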
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
new file mode 100644
index 0000000..d2bfa10
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/IOError.java
@@ -0,0 +1,333 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An IOError exception signals that an error occurred while communicating
+ * with the HBase master or an HBase region server.  Also used to return
+ * more general HBase error conditions.
+ */
+public class IOError extends Exception implements TBase<IOError._Fields>, java.io.Serializable, Cloneable, Comparable<IOError> {
+  private static final TStruct STRUCT_DESC = new TStruct("IOError");
+
+  private static final TField MESSAGE_FIELD_DESC = new TField("message", TType.STRING, (short)1);
+
+  public String message;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    MESSAGE((short)1, "message");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.MESSAGE, new FieldMetaData("message", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(IOError.class, metaDataMap);
+  }
+
+  public IOError() {
+  }
+
+  public IOError(
+    String message)
+  {
+    this();
+    this.message = message;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public IOError(IOError other) {
+    if (other.isSetMessage()) {
+      this.message = other.message;
+    }
+  }
+
+  public IOError deepCopy() {
+    return new IOError(this);
+  }
+
+  @Deprecated
+  public IOError clone() {
+    return new IOError(this);
+  }
+
+  public String getMessage() {
+    return this.message;
+  }
+
+  public IOError setMessage(String message) {
+    this.message = message;
+    return this;
+  }
+
+  public void unsetMessage() {
+    this.message = null;
+  }
+
+  /** Returns true if field message is set (has been assigned a value) and false otherwise */
+  public boolean isSetMessage() {
+    return this.message != null;
+  }
+
+  public void setMessageIsSet(boolean value) {
+    if (!value) {
+      this.message = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case MESSAGE:
+      if (value == null) {
+        unsetMessage();
+      } else {
+        setMessage((String)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case MESSAGE:
+      return getMessage();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case MESSAGE:
+      return isSetMessage();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof IOError)
+      return this.equals((IOError)that);
+    return false;
+  }
+
+  public boolean equals(IOError that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_message = true && this.isSetMessage();
+    boolean that_present_message = true && that.isSetMessage();
+    if (this_present_message || that_present_message) {
+      if (!(this_present_message && that_present_message))
+        return false;
+      if (!this.message.equals(that.message))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_message = true && (isSetMessage());
+    builder.append(present_message);
+    if (present_message)
+      builder.append(message);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(IOError other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    IOError typedOther = (IOError)other;
+
+    lastComparison = Boolean.valueOf(isSetMessage()).compareTo(typedOther.isSetMessage());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(message, typedOther.message);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case MESSAGE:
+            if (field.type == TType.STRING) {
+              this.message = iprot.readString();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.message != null) {
+      oprot.writeFieldBegin(MESSAGE_FIELD_DESC);
+      oprot.writeString(this.message);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("IOError(");
+    boolean first = true;
+
+    sb.append("message:");
+    if (this.message == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.message);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
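Editor's note: IOError carries only a message string, and the getMessage() above overrides Exception's accessor to return that Thrift field. A minimal, hypothetical sketch of the server-side pattern that produces it; the real mapping lives in the Thrift gateway's handler, which is outside this hunk, and fetch() here is a made-up stand-in:

    import java.io.IOException;
    import org.apache.hadoop.hbase.thrift.generated.IOError;

    public class IOErrorSketch {
      // stand-in for a data-access call that can fail with a plain Java IOException
      static byte[] fetch() throws IOException { throw new IOException("region offline"); }

      static byte[] getOrThrow() throws IOError {
        try {
          return fetch();
        } catch (IOException e) {
          // wrap the local exception in the wire-level struct defined above
          throw new IOError(e.getMessage());
        }
      }

      public static void main(String[] args) {
        try { getOrThrow(); } catch (IOError io) { System.out.println("IOError: " + io.getMessage()); }
      }
    }
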
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
new file mode 100644
index 0000000..6eb2700
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/IllegalArgument.java
@@ -0,0 +1,332 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * An IllegalArgument exception indicates an illegal or invalid
+ * argument was passed into a procedure.
+ */
+public class IllegalArgument extends Exception implements TBase<IllegalArgument._Fields>, java.io.Serializable, Cloneable, Comparable<IllegalArgument> {
+  private static final TStruct STRUCT_DESC = new TStruct("IllegalArgument");
+
+  private static final TField MESSAGE_FIELD_DESC = new TField("message", TType.STRING, (short)1);
+
+  public String message;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    MESSAGE((short)1, "message");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.MESSAGE, new FieldMetaData("message", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(IllegalArgument.class, metaDataMap);
+  }
+
+  public IllegalArgument() {
+  }
+
+  public IllegalArgument(
+    String message)
+  {
+    this();
+    this.message = message;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public IllegalArgument(IllegalArgument other) {
+    if (other.isSetMessage()) {
+      this.message = other.message;
+    }
+  }
+
+  public IllegalArgument deepCopy() {
+    return new IllegalArgument(this);
+  }
+
+  @Deprecated
+  public IllegalArgument clone() {
+    return new IllegalArgument(this);
+  }
+
+  public String getMessage() {
+    return this.message;
+  }
+
+  public IllegalArgument setMessage(String message) {
+    this.message = message;
+    return this;
+  }
+
+  public void unsetMessage() {
+    this.message = null;
+  }
+
+  /** Returns true if field message is set (has been assigned a value) and false otherwise */
+  public boolean isSetMessage() {
+    return this.message != null;
+  }
+
+  public void setMessageIsSet(boolean value) {
+    if (!value) {
+      this.message = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case MESSAGE:
+      if (value == null) {
+        unsetMessage();
+      } else {
+        setMessage((String)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case MESSAGE:
+      return getMessage();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case MESSAGE:
+      return isSetMessage();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof IllegalArgument)
+      return this.equals((IllegalArgument)that);
+    return false;
+  }
+
+  public boolean equals(IllegalArgument that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_message = true && this.isSetMessage();
+    boolean that_present_message = true && that.isSetMessage();
+    if (this_present_message || that_present_message) {
+      if (!(this_present_message && that_present_message))
+        return false;
+      if (!this.message.equals(that.message))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_message = true && (isSetMessage());
+    builder.append(present_message);
+    if (present_message)
+      builder.append(message);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(IllegalArgument other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    IllegalArgument typedOther = (IllegalArgument)other;
+
+    lastComparison = Boolean.valueOf(isSetMessage()).compareTo(typedOther.isSetMessage());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(message, typedOther.message);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case MESSAGE:
+            if (field.type == TType.STRING) {
+              this.message = iprot.readString();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.message != null) {
+      oprot.writeFieldBegin(MESSAGE_FIELD_DESC);
+      oprot.writeString(this.message);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("IllegalArgument(");
+    boolean first = true;
+
+    sb.append("message:");
+    if (this.message == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.message);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
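Editor's note: IllegalArgument is the companion struct to IOError for caller mistakes, for example a scanner id the gateway no longer knows about. A hedged sketch of that validation pattern; the scanner map and its type are illustrative, not taken from the diff:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.hbase.thrift.generated.IllegalArgument;

    public class IllegalArgumentSketch {
      static void closeScanner(Map<Integer, Object> scanners, int id) throws IllegalArgument {
        if (!scanners.containsKey(id)) {
          // message-only exception struct, same shape as IOError
          throw new IllegalArgument("scanner id " + id + " is invalid");
        }
        scanners.remove(id);
      }

      public static void main(String[] args) {
        try {
          closeScanner(new HashMap<Integer, Object>(), 42);
        } catch (IllegalArgument ia) {
          System.out.println(ia.getMessage());
        }
      }
    }
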
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
new file mode 100644
index 0000000..65a2391
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/Mutation.java
@@ -0,0 +1,508 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * A Mutation object is used to either update or delete a single column value.
+ */
+public class Mutation implements TBase<Mutation._Fields>, java.io.Serializable, Cloneable, Comparable<Mutation> {
+  private static final TStruct STRUCT_DESC = new TStruct("Mutation");
+
+  private static final TField IS_DELETE_FIELD_DESC = new TField("isDelete", TType.BOOL, (short)1);
+  private static final TField COLUMN_FIELD_DESC = new TField("column", TType.STRING, (short)2);
+  private static final TField VALUE_FIELD_DESC = new TField("value", TType.STRING, (short)3);
+
+  public boolean isDelete;
+  public byte[] column;
+  public byte[] value;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    IS_DELETE((short)1, "isDelete"),
+    COLUMN((short)2, "column"),
+    VALUE((short)3, "value");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  private static final int __ISDELETE_ISSET_ID = 0;
+  private BitSet __isset_bit_vector = new BitSet(1);
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.IS_DELETE, new FieldMetaData("isDelete", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.BOOL)));
+    put(_Fields.COLUMN, new FieldMetaData("column", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.VALUE, new FieldMetaData("value", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(Mutation.class, metaDataMap);
+  }
+
+  public Mutation() {
+    this.isDelete = false;
+
+  }
+
+  public Mutation(
+    boolean isDelete,
+    byte[] column,
+    byte[] value)
+  {
+    this();
+    this.isDelete = isDelete;
+    setIsDeleteIsSet(true);
+    this.column = column;
+    this.value = value;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public Mutation(Mutation other) {
+    __isset_bit_vector.clear();
+    __isset_bit_vector.or(other.__isset_bit_vector);
+    this.isDelete = other.isDelete;
+    if (other.isSetColumn()) {
+      this.column = other.column;
+    }
+    if (other.isSetValue()) {
+      this.value = other.value;
+    }
+  }
+
+  public Mutation deepCopy() {
+    return new Mutation(this);
+  }
+
+  @Deprecated
+  public Mutation clone() {
+    return new Mutation(this);
+  }
+
+  public boolean isIsDelete() {
+    return this.isDelete;
+  }
+
+  public Mutation setIsDelete(boolean isDelete) {
+    this.isDelete = isDelete;
+    setIsDeleteIsSet(true);
+    return this;
+  }
+
+  public void unsetIsDelete() {
+    __isset_bit_vector.clear(__ISDELETE_ISSET_ID);
+  }
+
+  /** Returns true if field isDelete is set (has been assigned a value) and false otherwise */
+  public boolean isSetIsDelete() {
+    return __isset_bit_vector.get(__ISDELETE_ISSET_ID);
+  }
+
+  public void setIsDeleteIsSet(boolean value) {
+    __isset_bit_vector.set(__ISDELETE_ISSET_ID, value);
+  }
+
+  public byte[] getColumn() {
+    return this.column;
+  }
+
+  public Mutation setColumn(byte[] column) {
+    this.column = column;
+    return this;
+  }
+
+  public void unsetColumn() {
+    this.column = null;
+  }
+
+  /** Returns true if field column is set (has been assigned a value) and false otherwise */
+  public boolean isSetColumn() {
+    return this.column != null;
+  }
+
+  public void setColumnIsSet(boolean value) {
+    if (!value) {
+      this.column = null;
+    }
+  }
+
+  public byte[] getValue() {
+    return this.value;
+  }
+
+  public Mutation setValue(byte[] value) {
+    this.value = value;
+    return this;
+  }
+
+  public void unsetValue() {
+    this.value = null;
+  }
+
+  /** Returns true if field value is set (has been assigned a value) and false otherwise */
+  public boolean isSetValue() {
+    return this.value != null;
+  }
+
+  public void setValueIsSet(boolean value) {
+    if (!value) {
+      this.value = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case IS_DELETE:
+      if (value == null) {
+        unsetIsDelete();
+      } else {
+        setIsDelete((Boolean)value);
+      }
+      break;
+
+    case COLUMN:
+      if (value == null) {
+        unsetColumn();
+      } else {
+        setColumn((byte[])value);
+      }
+      break;
+
+    case VALUE:
+      if (value == null) {
+        unsetValue();
+      } else {
+        setValue((byte[])value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case IS_DELETE:
+      return new Boolean(isIsDelete());
+
+    case COLUMN:
+      return getColumn();
+
+    case VALUE:
+      return getValue();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case IS_DELETE:
+      return isSetIsDelete();
+    case COLUMN:
+      return isSetColumn();
+    case VALUE:
+      return isSetValue();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof Mutation)
+      return this.equals((Mutation)that);
+    return false;
+  }
+
+  public boolean equals(Mutation that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_isDelete = true;
+    boolean that_present_isDelete = true;
+    if (this_present_isDelete || that_present_isDelete) {
+      if (!(this_present_isDelete && that_present_isDelete))
+        return false;
+      if (this.isDelete != that.isDelete)
+        return false;
+    }
+
+    boolean this_present_column = true && this.isSetColumn();
+    boolean that_present_column = true && that.isSetColumn();
+    if (this_present_column || that_present_column) {
+      if (!(this_present_column && that_present_column))
+        return false;
+      if (!java.util.Arrays.equals(this.column, that.column))
+        return false;
+    }
+
+    boolean this_present_value = true && this.isSetValue();
+    boolean that_present_value = true && that.isSetValue();
+    if (this_present_value || that_present_value) {
+      if (!(this_present_value && that_present_value))
+        return false;
+      if (!java.util.Arrays.equals(this.value, that.value))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_isDelete = true;
+    builder.append(present_isDelete);
+    if (present_isDelete)
+      builder.append(isDelete);
+
+    boolean present_column = true && (isSetColumn());
+    builder.append(present_column);
+    if (present_column)
+      builder.append(column);
+
+    boolean present_value = true && (isSetValue());
+    builder.append(present_value);
+    if (present_value)
+      builder.append(value);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(Mutation other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    Mutation typedOther = (Mutation)other;
+
+    lastComparison = Boolean.valueOf(isSetIsDelete()).compareTo(typedOther.isSetIsDelete());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(isDelete, typedOther.isDelete);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetColumn()).compareTo(typedOther.isSetColumn());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(column, typedOther.column);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetValue()).compareTo(typedOther.isSetValue());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(value, typedOther.value);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case IS_DELETE:
+            if (field.type == TType.BOOL) {
+              this.isDelete = iprot.readBool();
+              setIsDeleteIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case COLUMN:
+            if (field.type == TType.STRING) {
+              this.column = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case VALUE:
+            if (field.type == TType.STRING) {
+              this.value = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    oprot.writeFieldBegin(IS_DELETE_FIELD_DESC);
+    oprot.writeBool(this.isDelete);
+    oprot.writeFieldEnd();
+    if (this.column != null) {
+      oprot.writeFieldBegin(COLUMN_FIELD_DESC);
+      oprot.writeBinary(this.column);
+      oprot.writeFieldEnd();
+    }
+    if (this.value != null) {
+      oprot.writeFieldBegin(VALUE_FIELD_DESC);
+      oprot.writeBinary(this.value);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("Mutation(");
+    boolean first = true;
+
+    sb.append("isDelete:");
+    sb.append(this.isDelete);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("column:");
+    if (this.column == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.column);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("value:");
+    if (this.value == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.value);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
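Editor's note: Mutation is the put-or-delete unit the Thrift API sends to mutateRow/mutateRows; isDelete is a primitive tracked through the __isset BitSet, while column and value travel as raw byte[]. A short illustrative sketch of building a batch follows; the mutateRow call in the trailing comment assumes a connected Hbase.Client and the byte[]-based signature of this era's Hbase.thrift, and the table, row, and column names are made up.

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.thrift.generated.Mutation;

    public class MutationSketch {
      static List<Mutation> exampleMutations() {
        List<Mutation> mutations = new ArrayList<Mutation>();
        // update: isDelete=false, column "family:qualifier", value bytes
        mutations.add(new Mutation(false, "f1:q1".getBytes(), "v1".getBytes()));
        // delete: isDelete=true; the value field is left unset
        mutations.add(new Mutation(true, "f1:old".getBytes(), null));
        return mutations;
      }

      public static void main(String[] args) {
        // with a live client this batch would be applied roughly as:
        // client.mutateRow("t1".getBytes(), "row1".getBytes(), exampleMutations());
        System.out.println(exampleMutations());
      }
    }
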
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
new file mode 100644
index 0000000..ed021d3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TCell.java
@@ -0,0 +1,420 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * TCell - Used to transport a cell value (byte[]) and the timestamp it was
+ * stored with together as a result for get and getRow methods. This promotes
+ * the timestamp of a cell to a first-class value, making it easy to take
+ * note of temporal data. Cell is used all the way from HStore up to HTable.
+ */
+public class TCell implements TBase<TCell._Fields>, java.io.Serializable, Cloneable, Comparable<TCell> {
+  private static final TStruct STRUCT_DESC = new TStruct("TCell");
+
+  private static final TField VALUE_FIELD_DESC = new TField("value", TType.STRING, (short)1);
+  private static final TField TIMESTAMP_FIELD_DESC = new TField("timestamp", TType.I64, (short)2);
+
+  public byte[] value;
+  public long timestamp;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    VALUE((short)1, "value"),
+    TIMESTAMP((short)2, "timestamp");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  private static final int __TIMESTAMP_ISSET_ID = 0;
+  private BitSet __isset_bit_vector = new BitSet(1);
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.VALUE, new FieldMetaData("value", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.TIMESTAMP, new FieldMetaData("timestamp", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.I64)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(TCell.class, metaDataMap);
+  }
+
+  public TCell() {
+  }
+
+  public TCell(
+    byte[] value,
+    long timestamp)
+  {
+    this();
+    this.value = value;
+    this.timestamp = timestamp;
+    setTimestampIsSet(true);
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public TCell(TCell other) {
+    __isset_bit_vector.clear();
+    __isset_bit_vector.or(other.__isset_bit_vector);
+    if (other.isSetValue()) {
+      this.value = other.value;
+    }
+    this.timestamp = other.timestamp;
+  }
+
+  public TCell deepCopy() {
+    return new TCell(this);
+  }
+
+  @Deprecated
+  public TCell clone() {
+    return new TCell(this);
+  }
+
+  public byte[] getValue() {
+    return this.value;
+  }
+
+  public TCell setValue(byte[] value) {
+    this.value = value;
+    return this;
+  }
+
+  public void unsetValue() {
+    this.value = null;
+  }
+
+  /** Returns true if field value is set (has been assigned a value) and false otherwise */
+  public boolean isSetValue() {
+    return this.value != null;
+  }
+
+  public void setValueIsSet(boolean value) {
+    if (!value) {
+      this.value = null;
+    }
+  }
+
+  public long getTimestamp() {
+    return this.timestamp;
+  }
+
+  public TCell setTimestamp(long timestamp) {
+    this.timestamp = timestamp;
+    setTimestampIsSet(true);
+    return this;
+  }
+
+  public void unsetTimestamp() {
+    __isset_bit_vector.clear(__TIMESTAMP_ISSET_ID);
+  }
+
+  /** Returns true if field timestamp is set (has been assigned a value) and false otherwise */
+  public boolean isSetTimestamp() {
+    return __isset_bit_vector.get(__TIMESTAMP_ISSET_ID);
+  }
+
+  public void setTimestampIsSet(boolean value) {
+    __isset_bit_vector.set(__TIMESTAMP_ISSET_ID, value);
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case VALUE:
+      if (value == null) {
+        unsetValue();
+      } else {
+        setValue((byte[])value);
+      }
+      break;
+
+    case TIMESTAMP:
+      if (value == null) {
+        unsetTimestamp();
+      } else {
+        setTimestamp((Long)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case VALUE:
+      return getValue();
+
+    case TIMESTAMP:
+      return new Long(getTimestamp());
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case VALUE:
+      return isSetValue();
+    case TIMESTAMP:
+      return isSetTimestamp();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof TCell)
+      return this.equals((TCell)that);
+    return false;
+  }
+
+  public boolean equals(TCell that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_value = true && this.isSetValue();
+    boolean that_present_value = true && that.isSetValue();
+    if (this_present_value || that_present_value) {
+      if (!(this_present_value && that_present_value))
+        return false;
+      if (!java.util.Arrays.equals(this.value, that.value))
+        return false;
+    }
+
+    boolean this_present_timestamp = true;
+    boolean that_present_timestamp = true;
+    if (this_present_timestamp || that_present_timestamp) {
+      if (!(this_present_timestamp && that_present_timestamp))
+        return false;
+      if (this.timestamp != that.timestamp)
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_value = true && (isSetValue());
+    builder.append(present_value);
+    if (present_value)
+      builder.append(value);
+
+    boolean present_timestamp = true;
+    builder.append(present_timestamp);
+    if (present_timestamp)
+      builder.append(timestamp);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(TCell other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    TCell typedOther = (TCell)other;
+
+    lastComparison = Boolean.valueOf(isSetValue()).compareTo(typedOther.isSetValue());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(value, typedOther.value);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetTimestamp()).compareTo(typedOther.isSetTimestamp());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(timestamp, typedOther.timestamp);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case VALUE:
+            if (field.type == TType.STRING) {
+              this.value = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case TIMESTAMP:
+            if (field.type == TType.I64) {
+              this.timestamp = iprot.readI64();
+              setTimestampIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.value != null) {
+      oprot.writeFieldBegin(VALUE_FIELD_DESC);
+      oprot.writeBinary(this.value);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldBegin(TIMESTAMP_FIELD_DESC);
+    oprot.writeI64(this.timestamp);
+    oprot.writeFieldEnd();
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("TCell(");
+    boolean first = true;
+
+    sb.append("value:");
+    if (this.value == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.value);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("timestamp:");
+    sb.append(this.timestamp);
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
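Editor's note: TCell pairs the raw cell bytes with the timestamp they were stored under, and it is how individual cell reads come back over the Thrift API (directly as a list, or nested inside row-level results). A small illustrative sketch of unpacking such a list; the list here is constructed locally rather than fetched from a server:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.hbase.thrift.generated.TCell;

    public class TCellSketch {
      static void printCells(List<TCell> cells) {
        for (TCell cell : cells) {
          // value is raw bytes; timestamp is the HBase cell timestamp (milliseconds)
          System.out.println(new String(cell.getValue()) + " @ " + cell.getTimestamp());
        }
      }

      public static void main(String[] args) {
        printCells(Arrays.asList(new TCell("v1".getBytes(), 1L)));
      }
    }
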
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
new file mode 100644
index 0000000..a397431
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRegionInfo.java
@@ -0,0 +1,678 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * A TRegionInfo contains information about an HTable region.
+ */
+public class TRegionInfo implements TBase<TRegionInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TRegionInfo> {
+  private static final TStruct STRUCT_DESC = new TStruct("TRegionInfo");
+
+  private static final TField START_KEY_FIELD_DESC = new TField("startKey", TType.STRING, (short)1);
+  private static final TField END_KEY_FIELD_DESC = new TField("endKey", TType.STRING, (short)2);
+  private static final TField ID_FIELD_DESC = new TField("id", TType.I64, (short)3);
+  private static final TField NAME_FIELD_DESC = new TField("name", TType.STRING, (short)4);
+  private static final TField VERSION_FIELD_DESC = new TField("version", TType.BYTE, (short)5);
+
+  public byte[] startKey;
+  public byte[] endKey;
+  public long id;
+  public byte[] name;
+  public byte version;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    START_KEY((short)1, "startKey"),
+    END_KEY((short)2, "endKey"),
+    ID((short)3, "id"),
+    NAME((short)4, "name"),
+    VERSION((short)5, "version");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+  private static final int __ID_ISSET_ID = 0;
+  private static final int __VERSION_ISSET_ID = 1;
+  private BitSet __isset_bit_vector = new BitSet(2);
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.START_KEY, new FieldMetaData("startKey", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.END_KEY, new FieldMetaData("endKey", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.ID, new FieldMetaData("id", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.I64)));
+    put(_Fields.NAME, new FieldMetaData("name", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.VERSION, new FieldMetaData("version", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.BYTE)));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(TRegionInfo.class, metaDataMap);
+  }
+
+  public TRegionInfo() {
+  }
+
+  public TRegionInfo(
+    byte[] startKey,
+    byte[] endKey,
+    long id,
+    byte[] name,
+    byte version)
+  {
+    this();
+    this.startKey = startKey;
+    this.endKey = endKey;
+    this.id = id;
+    setIdIsSet(true);
+    this.name = name;
+    this.version = version;
+    setVersionIsSet(true);
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public TRegionInfo(TRegionInfo other) {
+    __isset_bit_vector.clear();
+    __isset_bit_vector.or(other.__isset_bit_vector);
+    if (other.isSetStartKey()) {
+      this.startKey = other.startKey;
+    }
+    if (other.isSetEndKey()) {
+      this.endKey = other.endKey;
+    }
+    this.id = other.id;
+    if (other.isSetName()) {
+      this.name = other.name;
+    }
+    this.version = other.version;
+  }
+
+  public TRegionInfo deepCopy() {
+    return new TRegionInfo(this);
+  }
+
+  @Deprecated
+  public TRegionInfo clone() {
+    return new TRegionInfo(this);
+  }
+
+  public byte[] getStartKey() {
+    return this.startKey;
+  }
+
+  public TRegionInfo setStartKey(byte[] startKey) {
+    this.startKey = startKey;
+    return this;
+  }
+
+  public void unsetStartKey() {
+    this.startKey = null;
+  }
+
+  /** Returns true if field startKey is set (has been assigned a value) and false otherwise */
+  public boolean isSetStartKey() {
+    return this.startKey != null;
+  }
+
+  public void setStartKeyIsSet(boolean value) {
+    if (!value) {
+      this.startKey = null;
+    }
+  }
+
+  public byte[] getEndKey() {
+    return this.endKey;
+  }
+
+  public TRegionInfo setEndKey(byte[] endKey) {
+    this.endKey = endKey;
+    return this;
+  }
+
+  public void unsetEndKey() {
+    this.endKey = null;
+  }
+
+  /** Returns true if field endKey is set (has been assigned a value) and false otherwise */
+  public boolean isSetEndKey() {
+    return this.endKey != null;
+  }
+
+  public void setEndKeyIsSet(boolean value) {
+    if (!value) {
+      this.endKey = null;
+    }
+  }
+
+  public long getId() {
+    return this.id;
+  }
+
+  public TRegionInfo setId(long id) {
+    this.id = id;
+    setIdIsSet(true);
+    return this;
+  }
+
+  public void unsetId() {
+    __isset_bit_vector.clear(__ID_ISSET_ID);
+  }
+
+  /** Returns true if field id is set (has been assigned a value) and false otherwise */
+  public boolean isSetId() {
+    return __isset_bit_vector.get(__ID_ISSET_ID);
+  }
+
+  public void setIdIsSet(boolean value) {
+    __isset_bit_vector.set(__ID_ISSET_ID, value);
+  }
+
+  public byte[] getName() {
+    return this.name;
+  }
+
+  public TRegionInfo setName(byte[] name) {
+    this.name = name;
+    return this;
+  }
+
+  public void unsetName() {
+    this.name = null;
+  }
+
+  /** Returns true if field name is set (has been assigned a value) and false otherwise */
+  public boolean isSetName() {
+    return this.name != null;
+  }
+
+  public void setNameIsSet(boolean value) {
+    if (!value) {
+      this.name = null;
+    }
+  }
+
+  public byte getVersion() {
+    return this.version;
+  }
+
+  public TRegionInfo setVersion(byte version) {
+    this.version = version;
+    setVersionIsSet(true);
+    return this;
+  }
+
+  public void unsetVersion() {
+    __isset_bit_vector.clear(__VERSION_ISSET_ID);
+  }
+
+  /** Returns true if field version is set (has been assigned a value) and false otherwise */
+  public boolean isSetVersion() {
+    return __isset_bit_vector.get(__VERSION_ISSET_ID);
+  }
+
+  public void setVersionIsSet(boolean value) {
+    __isset_bit_vector.set(__VERSION_ISSET_ID, value);
+  }
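+  /*
+   * Editor's note (illustrative, not part of the generated code): primitive
+   * fields (id, version) cannot use null as a "not set" sentinel, so their
+   * state is tracked in __isset_bit_vector, while the byte[] fields rely on
+   * null checks. A minimal sketch:
+   *
+   *   TRegionInfo info = new TRegionInfo().setId(42L);
+   *   info.isSetId();       // true  - setId() flipped the bit vector flag
+   *   info.isSetStartKey(); // false - startKey is still null
+   */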
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case START_KEY:
+      if (value == null) {
+        unsetStartKey();
+      } else {
+        setStartKey((byte[])value);
+      }
+      break;
+
+    case END_KEY:
+      if (value == null) {
+        unsetEndKey();
+      } else {
+        setEndKey((byte[])value);
+      }
+      break;
+
+    case ID:
+      if (value == null) {
+        unsetId();
+      } else {
+        setId((Long)value);
+      }
+      break;
+
+    case NAME:
+      if (value == null) {
+        unsetName();
+      } else {
+        setName((byte[])value);
+      }
+      break;
+
+    case VERSION:
+      if (value == null) {
+        unsetVersion();
+      } else {
+        setVersion((Byte)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case START_KEY:
+      return getStartKey();
+
+    case END_KEY:
+      return getEndKey();
+
+    case ID:
+      return new Long(getId());
+
+    case NAME:
+      return getName();
+
+    case VERSION:
+      return new Byte(getVersion());
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case START_KEY:
+      return isSetStartKey();
+    case END_KEY:
+      return isSetEndKey();
+    case ID:
+      return isSetId();
+    case NAME:
+      return isSetName();
+    case VERSION:
+      return isSetVersion();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof TRegionInfo)
+      return this.equals((TRegionInfo)that);
+    return false;
+  }
+
+  public boolean equals(TRegionInfo that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_startKey = true && this.isSetStartKey();
+    boolean that_present_startKey = true && that.isSetStartKey();
+    if (this_present_startKey || that_present_startKey) {
+      if (!(this_present_startKey && that_present_startKey))
+        return false;
+      if (!java.util.Arrays.equals(this.startKey, that.startKey))
+        return false;
+    }
+
+    boolean this_present_endKey = true && this.isSetEndKey();
+    boolean that_present_endKey = true && that.isSetEndKey();
+    if (this_present_endKey || that_present_endKey) {
+      if (!(this_present_endKey && that_present_endKey))
+        return false;
+      if (!java.util.Arrays.equals(this.endKey, that.endKey))
+        return false;
+    }
+
+    boolean this_present_id = true;
+    boolean that_present_id = true;
+    if (this_present_id || that_present_id) {
+      if (!(this_present_id && that_present_id))
+        return false;
+      if (this.id != that.id)
+        return false;
+    }
+
+    boolean this_present_name = true && this.isSetName();
+    boolean that_present_name = true && that.isSetName();
+    if (this_present_name || that_present_name) {
+      if (!(this_present_name && that_present_name))
+        return false;
+      if (!java.util.Arrays.equals(this.name, that.name))
+        return false;
+    }
+
+    boolean this_present_version = true;
+    boolean that_present_version = true;
+    if (this_present_version || that_present_version) {
+      if (!(this_present_version && that_present_version))
+        return false;
+      if (this.version != that.version)
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_startKey = true && (isSetStartKey());
+    builder.append(present_startKey);
+    if (present_startKey)
+      builder.append(startKey);
+
+    boolean present_endKey = true && (isSetEndKey());
+    builder.append(present_endKey);
+    if (present_endKey)
+      builder.append(endKey);
+
+    boolean present_id = true;
+    builder.append(present_id);
+    if (present_id)
+      builder.append(id);
+
+    boolean present_name = true && (isSetName());
+    builder.append(present_name);
+    if (present_name)
+      builder.append(name);
+
+    boolean present_version = true;
+    builder.append(present_version);
+    if (present_version)
+      builder.append(version);
+
+    return builder.toHashCode();
+  }
+
+  public int compareTo(TRegionInfo other) {
+    if (!getClass().equals(other.getClass())) {
+      return getClass().getName().compareTo(other.getClass().getName());
+    }
+
+    int lastComparison = 0;
+    TRegionInfo typedOther = (TRegionInfo)other;
+
+    lastComparison = Boolean.valueOf(isSetStartKey()).compareTo(typedOther.isSetStartKey());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(startKey, typedOther.startKey);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetEndKey()).compareTo(typedOther.isSetEndKey());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(endKey, typedOther.endKey);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetId()).compareTo(typedOther.isSetId());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(id, typedOther.id);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetName()).compareTo(typedOther.isSetName());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(name, typedOther.name);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = Boolean.valueOf(isSetVersion()).compareTo(typedOther.isSetVersion());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    lastComparison = TBaseHelper.compareTo(version, typedOther.version);
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    return 0;
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case START_KEY:
+            if (field.type == TType.STRING) {
+              this.startKey = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case END_KEY:
+            if (field.type == TType.STRING) {
+              this.endKey = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case ID:
+            if (field.type == TType.I64) {
+              this.id = iprot.readI64();
+              setIdIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case NAME:
+            if (field.type == TType.STRING) {
+              this.name = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case VERSION:
+            if (field.type == TType.BYTE) {
+              this.version = iprot.readByte();
+              setVersionIsSet(true);
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.startKey != null) {
+      oprot.writeFieldBegin(START_KEY_FIELD_DESC);
+      oprot.writeBinary(this.startKey);
+      oprot.writeFieldEnd();
+    }
+    if (this.endKey != null) {
+      oprot.writeFieldBegin(END_KEY_FIELD_DESC);
+      oprot.writeBinary(this.endKey);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldBegin(ID_FIELD_DESC);
+    oprot.writeI64(this.id);
+    oprot.writeFieldEnd();
+    if (this.name != null) {
+      oprot.writeFieldBegin(NAME_FIELD_DESC);
+      oprot.writeBinary(this.name);
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldBegin(VERSION_FIELD_DESC);
+    oprot.writeByte(this.version);
+    oprot.writeFieldEnd();
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("TRegionInfo(");
+    boolean first = true;
+
+    sb.append("startKey:");
+    if (this.startKey == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.startKey);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("endKey:");
+    if (this.endKey == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.endKey);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("id:");
+    sb.append(this.id);
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("name:");
+    if (this.name == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.name);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("version:");
+    sb.append(this.version);
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
new file mode 100644
index 0000000..39d0f9b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/thrift/generated/TRowResult.java
@@ -0,0 +1,439 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift.generated;
+
+import org.apache.commons.lang.builder.HashCodeBuilder;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.EnumMap;
+import java.util.Set;
+import java.util.HashSet;
+import java.util.EnumSet;
+import java.util.Collections;
+import java.util.BitSet;
+import java.util.Arrays;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.thrift.*;
+import org.apache.thrift.meta_data.*;
+import org.apache.thrift.protocol.*;
+
+/**
+ * Holds row name and then a map of columns to cells.
+ */
+public class TRowResult implements TBase<TRowResult._Fields>, java.io.Serializable, Cloneable {
+  private static final TStruct STRUCT_DESC = new TStruct("TRowResult");
+
+  private static final TField ROW_FIELD_DESC = new TField("row", TType.STRING, (short)1);
+  private static final TField COLUMNS_FIELD_DESC = new TField("columns", TType.MAP, (short)2);
+
+  public byte[] row;
+  public Map<byte[],TCell> columns;
+
+  /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
+  public enum _Fields implements TFieldIdEnum {
+    ROW((short)1, "row"),
+    COLUMNS((short)2, "columns");
+
+    private static final Map<Integer, _Fields> byId = new HashMap<Integer, _Fields>();
+    private static final Map<String, _Fields> byName = new HashMap<String, _Fields>();
+
+    static {
+      for (_Fields field : EnumSet.allOf(_Fields.class)) {
+        byId.put((int)field._thriftId, field);
+        byName.put(field.getFieldName(), field);
+      }
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, or null if it is not found.
+     */
+    public static _Fields findByThriftId(int fieldId) {
+      return byId.get(fieldId);
+    }
+
+    /**
+     * Find the _Fields constant that matches fieldId, throwing an exception
+     * if it is not found.
+     */
+    public static _Fields findByThriftIdOrThrow(int fieldId) {
+      _Fields fields = findByThriftId(fieldId);
+      if (fields == null) throw new IllegalArgumentException("Field " + fieldId + " doesn't exist!");
+      return fields;
+    }
+
+    /**
+     * Find the _Fields constant that matches name, or null if it is not found.
+     */
+    public static _Fields findByName(String name) {
+      return byName.get(name);
+    }
+
+    private final short _thriftId;
+    private final String _fieldName;
+
+    _Fields(short thriftId, String fieldName) {
+      _thriftId = thriftId;
+      _fieldName = fieldName;
+    }
+
+    public short getThriftFieldId() {
+      return _thriftId;
+    }
+
+    public String getFieldName() {
+      return _fieldName;
+    }
+  }
+
+  // isset id assignments
+
+  public static final Map<_Fields, FieldMetaData> metaDataMap = Collections.unmodifiableMap(new EnumMap<_Fields, FieldMetaData>(_Fields.class) {{
+    put(_Fields.ROW, new FieldMetaData("row", TFieldRequirementType.DEFAULT,
+        new FieldValueMetaData(TType.STRING)));
+    put(_Fields.COLUMNS, new FieldMetaData("columns", TFieldRequirementType.DEFAULT,
+        new MapMetaData(TType.MAP,
+            new FieldValueMetaData(TType.STRING),
+            new StructMetaData(TType.STRUCT, TCell.class))));
+  }});
+
+  static {
+    FieldMetaData.addStructMetaDataMap(TRowResult.class, metaDataMap);
+  }
+
+  public TRowResult() {
+  }
+
+  public TRowResult(
+    byte[] row,
+    Map<byte[],TCell> columns)
+  {
+    this();
+    this.row = row;
+    this.columns = columns;
+  }
+
+  /**
+   * Performs a deep copy on <i>other</i>.
+   */
+  public TRowResult(TRowResult other) {
+    if (other.isSetRow()) {
+      this.row = other.row;
+    }
+    if (other.isSetColumns()) {
+      Map<byte[],TCell> __this__columns = new HashMap<byte[],TCell>();
+      for (Map.Entry<byte[], TCell> other_element : other.columns.entrySet()) {
+
+        byte[] other_element_key = other_element.getKey();
+        TCell other_element_value = other_element.getValue();
+
+        byte[] __this__columns_copy_key = other_element_key;
+
+        TCell __this__columns_copy_value = new TCell(other_element_value);
+
+        __this__columns.put(__this__columns_copy_key, __this__columns_copy_value);
+      }
+      this.columns = __this__columns;
+    }
+  }
+
+  public TRowResult deepCopy() {
+    return new TRowResult(this);
+  }
+
+  @Deprecated
+  public TRowResult clone() {
+    return new TRowResult(this);
+  }
+
+  public byte[] getRow() {
+    return this.row;
+  }
+
+  public TRowResult setRow(byte[] row) {
+    this.row = row;
+    return this;
+  }
+
+  public void unsetRow() {
+    this.row = null;
+  }
+
+  /** Returns true if field row is set (has been assigned a value) and false otherwise */
+  public boolean isSetRow() {
+    return this.row != null;
+  }
+
+  public void setRowIsSet(boolean value) {
+    if (!value) {
+      this.row = null;
+    }
+  }
+
+  public int getColumnsSize() {
+    return (this.columns == null) ? 0 : this.columns.size();
+  }
+
+  public void putToColumns(byte[] key, TCell val) {
+    if (this.columns == null) {
+      this.columns = new HashMap<byte[],TCell>();
+    }
+    this.columns.put(key, val);
+  }
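+  /*
+   * Editor's note (illustrative): columns is keyed by byte[], and HashMap
+   * uses the array's identity-based hashCode/equals, so a lookup with a
+   * different array instance holding the same bytes will miss. Sketch of
+   * the caveat:
+   *
+   *   TRowResult r = new TRowResult();
+   *   r.putToColumns("cf:q".getBytes(), new TCell());
+   *   r.getColumns().get("cf:q".getBytes()); // null - different array instance
+   */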
+
+  public Map<byte[],TCell> getColumns() {
+    return this.columns;
+  }
+
+  public TRowResult setColumns(Map<byte[],TCell> columns) {
+    this.columns = columns;
+    return this;
+  }
+
+  public void unsetColumns() {
+    this.columns = null;
+  }
+
+  /** Returns true if field columns is set (has been assigned a value) and false otherwise */
+  public boolean isSetColumns() {
+    return this.columns != null;
+  }
+
+  public void setColumnsIsSet(boolean value) {
+    if (!value) {
+      this.columns = null;
+    }
+  }
+
+  public void setFieldValue(_Fields field, Object value) {
+    switch (field) {
+    case ROW:
+      if (value == null) {
+        unsetRow();
+      } else {
+        setRow((byte[])value);
+      }
+      break;
+
+    case COLUMNS:
+      if (value == null) {
+        unsetColumns();
+      } else {
+        setColumns((Map<byte[],TCell>)value);
+      }
+      break;
+
+    }
+  }
+
+  public void setFieldValue(int fieldID, Object value) {
+    setFieldValue(_Fields.findByThriftIdOrThrow(fieldID), value);
+  }
+
+  public Object getFieldValue(_Fields field) {
+    switch (field) {
+    case ROW:
+      return getRow();
+
+    case COLUMNS:
+      return getColumns();
+
+    }
+    throw new IllegalStateException();
+  }
+
+  public Object getFieldValue(int fieldId) {
+    return getFieldValue(_Fields.findByThriftIdOrThrow(fieldId));
+  }
+
+  /** Returns true if field corresponding to fieldID is set (has been assigned a value) and false otherwise */
+  public boolean isSet(_Fields field) {
+    switch (field) {
+    case ROW:
+      return isSetRow();
+    case COLUMNS:
+      return isSetColumns();
+    }
+    throw new IllegalStateException();
+  }
+
+  public boolean isSet(int fieldID) {
+    return isSet(_Fields.findByThriftIdOrThrow(fieldID));
+  }
+
+  @Override
+  public boolean equals(Object that) {
+    if (that == null)
+      return false;
+    if (that instanceof TRowResult)
+      return this.equals((TRowResult)that);
+    return false;
+  }
+
+  public boolean equals(TRowResult that) {
+    if (that == null)
+      return false;
+
+    boolean this_present_row = true && this.isSetRow();
+    boolean that_present_row = true && that.isSetRow();
+    if (this_present_row || that_present_row) {
+      if (!(this_present_row && that_present_row))
+        return false;
+      if (!java.util.Arrays.equals(this.row, that.row))
+        return false;
+    }
+
+    boolean this_present_columns = true && this.isSetColumns();
+    boolean that_present_columns = true && that.isSetColumns();
+    if (this_present_columns || that_present_columns) {
+      if (!(this_present_columns && that_present_columns))
+        return false;
+      if (!this.columns.equals(that.columns))
+        return false;
+    }
+
+    return true;
+  }
+
+  @Override
+  public int hashCode() {
+    HashCodeBuilder builder = new HashCodeBuilder();
+
+    boolean present_row = true && (isSetRow());
+    builder.append(present_row);
+    if (present_row)
+      builder.append(row);
+
+    boolean present_columns = true && (isSetColumns());
+    builder.append(present_columns);
+    if (present_columns)
+      builder.append(columns);
+
+    return builder.toHashCode();
+  }
+
+  public void read(TProtocol iprot) throws TException {
+    TField field;
+    iprot.readStructBegin();
+    while (true)
+    {
+      field = iprot.readFieldBegin();
+      if (field.type == TType.STOP) {
+        break;
+      }
+      _Fields fieldId = _Fields.findByThriftId(field.id);
+      if (fieldId == null) {
+        TProtocolUtil.skip(iprot, field.type);
+      } else {
+        switch (fieldId) {
+          case ROW:
+            if (field.type == TType.STRING) {
+              this.row = iprot.readBinary();
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+          case COLUMNS:
+            if (field.type == TType.MAP) {
+              {
+                TMap _map4 = iprot.readMapBegin();
+                this.columns = new HashMap<byte[],TCell>(2*_map4.size);
+                for (int _i5 = 0; _i5 < _map4.size; ++_i5)
+                {
+                  byte[] _key6;
+                  TCell _val7;
+                  _key6 = iprot.readBinary();
+                  _val7 = new TCell();
+                  _val7.read(iprot);
+                  this.columns.put(_key6, _val7);
+                }
+                iprot.readMapEnd();
+              }
+            } else {
+              TProtocolUtil.skip(iprot, field.type);
+            }
+            break;
+        }
+        iprot.readFieldEnd();
+      }
+    }
+    iprot.readStructEnd();
+
+    // check for required fields of primitive type, which can't be checked in the validate method
+    validate();
+  }
+
+  public void write(TProtocol oprot) throws TException {
+    validate();
+
+    oprot.writeStructBegin(STRUCT_DESC);
+    if (this.row != null) {
+      oprot.writeFieldBegin(ROW_FIELD_DESC);
+      oprot.writeBinary(this.row);
+      oprot.writeFieldEnd();
+    }
+    if (this.columns != null) {
+      oprot.writeFieldBegin(COLUMNS_FIELD_DESC);
+      {
+        oprot.writeMapBegin(new TMap(TType.STRING, TType.STRUCT, this.columns.size()));
+        for (Map.Entry<byte[], TCell> _iter8 : this.columns.entrySet())
+        {
+          oprot.writeBinary(_iter8.getKey());
+          _iter8.getValue().write(oprot);
+        }
+        oprot.writeMapEnd();
+      }
+      oprot.writeFieldEnd();
+    }
+    oprot.writeFieldStop();
+    oprot.writeStructEnd();
+  }
+
+  @Override
+  public String toString() {
+    StringBuilder sb = new StringBuilder("TRowResult(");
+    boolean first = true;
+
+    sb.append("row:");
+    if (this.row == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.row);
+    }
+    first = false;
+    if (!first) sb.append(", ");
+    sb.append("columns:");
+    if (this.columns == null) {
+      sb.append("null");
+    } else {
+      sb.append(this.columns);
+    }
+    first = false;
+    sb.append(")");
+    return sb.toString();
+  }
+
+  public void validate() throws TException {
+    // check for required fields
+  }
+
+}
+
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Base64.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Base64.java
new file mode 100644
index 0000000..892f808
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Base64.java
@@ -0,0 +1,1643 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.FilterInputStream;
+import java.io.FilterOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.ObjectInputStream;
+import java.io.ObjectOutputStream;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.io.UnsupportedEncodingException;
+import java.util.zip.GZIPInputStream;
+import java.util.zip.GZIPOutputStream;
+
+/**
+ * Encodes and decodes to and from Base64 notation.
+ *
+ * <p>
+ * Homepage: <a href="http://iharder.net/base64">http://iharder.net/base64</a>.
+ * </p>
+ *
+ * <p>
+ * Change Log:
+ * </p>
+ * <ul>
+ *   <li>v2.2.1 - Fixed bug using URL_SAFE and ORDERED encodings. Fixed bug
+ *     when using very small files (~< 40 bytes).</li>
+ *   <li>v2.2 - Added some helper methods for encoding/decoding directly from
+ *     one file to the next. Also added a main() method to support command
+ *     line encoding/decoding from one file to the next. Also added these
+ *     Base64 dialects:
+ *     <ol>
+ *       <li>The default is RFC3548 format.</li>
+ *       <li>Using Base64.URL_SAFE generates URL and file name friendly format as
+ *         described in Section 4 of RFC3548.
+ *         http://www.faqs.org/rfcs/rfc3548.html</li>
+ *       <li>Using Base64.ORDERED generates URL and file name friendly format
+ *         that preserves lexical ordering as described in
+ *         http://www.faqs.org/qa/rfcc-1940.html</li>
+ *     </ol>
+ *     <p>
+ *     Special thanks to Jim Kellerman at <a href="http://www.powerset.com/">
+ *     http://www.powerset.com/</a> for contributing the new Base64 dialects.
+ *   </li>
+ *
+ *   <li>v2.1 - Cleaned up javadoc comments and unused variables and methods.
+ *     Added some convenience methods for reading and writing to and from files.
+ *   </li>
+ *   <li>v2.0.2 - Now specifies UTF-8 encoding in places where the code fails on
+ *     systems with other encodings (like EBCDIC).</li>
+ *   <li>v2.0.1 - Fixed an error when decoding a single byte, that is, when the
+ *     encoded data was a single byte.</li>
+ *   <li>v2.0 - I got rid of methods that used booleans to set options. Now
+ *     everything is more consolidated and cleaner. The code now detects when
+ *     data that's being decoded is gzip-compressed and will decompress it
+ *     automatically. Generally things are cleaner. You'll probably have to
+ *     change some method calls that you were making to support the new options
+ *     format (<tt>int</tt>s that you "OR" together).</li>
+ *   <li>v1.5.1 - Fixed bug when decompressing and decoding to a byte[] using
+ *     <tt>decode( String s, boolean gzipCompressed )</tt>. Added the ability to
+ *     "suspend" encoding in the Output Stream so you can turn on and off the
+ *     encoding if you need to embed base64 data in an otherwise "normal" stream
+ *     (like an XML file).</li>
+ *   <li>v1.5 - Output stream passes on the flush() command but doesn't do anything
+ *     itself. This helps when using GZIP streams. Added the ability to
+ *     GZip-compress objects before encoding them.</li>
+ *   <li>v1.4 - Added helper methods to read/write files.</li>
+ *   <li>v1.3.6 - Fixed OutputStream.flush() so that 'position' is reset.</li>
+ *   <li>v1.3.5 - Added flag to turn on and off line breaks. Fixed bug in input
+ *     stream where last buffer being read, if not completely full, was not
+ *     returned.</li>
+ *   <li>v1.3.4 - Fixed when "improperly padded stream" error was thrown at the
+ *     wrong time.</li>
+ *   <li>v1.3.3 - Fixed I/O streams which were totally messed up.</li>
+ * </ul>
+ *
+ * <p>
+ * I am placing this code in the Public Domain. Do with it as you will. This
+ * software comes with no guarantees or warranties but with plenty of
+ * well-wishing instead!
+ * <p>
+ * Please visit <a href="http://iharder.net/base64">http://iharder.net/base64</a>
+ * periodically to check for updates or to contribute improvements.
+ * <p>
+ * author: Robert Harder, rob@iharder.net
+ * <br>
+ * version: 2.2.1
+ */
+public class Base64 {
+
+  /* ******** P U B L I C   F I E L D S ******** */
+
+  /** No options specified. Value is zero. */
+  public final static int NO_OPTIONS = 0;
+
+  /** Specify encoding. */
+  public final static int ENCODE = 1;
+
+  /** Specify decoding. */
+  public final static int DECODE = 0;
+
+  /** Specify that data should be gzip-compressed. */
+  public final static int GZIP = 2;
+
+  /** Don't break lines when encoding (violates strict Base64 specification) */
+  public final static int DONT_BREAK_LINES = 8;
+
+  /**
+   * Encode using Base64-like encoding that is URL and Filename safe as
+   * described in Section 4 of RFC3548:
+   * <a href="http://www.faqs.org/rfcs/rfc3548.html">
+   * http://www.faqs.org/rfcs/rfc3548.html</a>.
+   * It is important to note that data encoded this way is <em>not</em>
+   * officially valid Base64, or at the very least should not be called Base64
+   * without also specifying that it was encoded using the URL and
+   * Filename safe dialect.
+   */
+  public final static int URL_SAFE = 16;
+
+  /**
+   * Encode using the special "ordered" dialect of Base64 described here:
+   * <a href="http://www.faqs.org/qa/rfcc-1940.html">
+   * http://www.faqs.org/qa/rfcc-1940.html</a>.
+   */
+  public final static int ORDERED = 32;
+
+  /* ******** P R I V A T E   F I E L D S ******** */
+
+  private static final Log LOG = LogFactory.getLog(Base64.class);
+
+  /** Maximum line length (76) of Base64 output. */
+  private final static int MAX_LINE_LENGTH = 76;
+
+  /** The equals sign (=) as a byte. */
+  private final static byte EQUALS_SIGN = (byte) '=';
+
+  /** The new line character (\n) as a byte. */
+  private final static byte NEW_LINE = (byte) '\n';
+
+  /** Preferred encoding. */
+  private final static String PREFERRED_ENCODING = "UTF-8";
+
+  private final static byte WHITE_SPACE_ENC = -5; // Indicates white space
+  private final static byte EQUALS_SIGN_ENC = -1; // Indicates equals sign
+
+  /* ******** S T A N D A R D   B A S E 6 4   A L P H A B E T ******** */
+
+  /** The 64 valid Base64 values. */
+
+  /*
+   * Host platform may be something funny like EBCDIC, so we hardcode these
+   * values.
+   */
+  private final static byte[] _STANDARD_ALPHABET = { (byte) 'A', (byte) 'B',
+    (byte) 'C', (byte) 'D', (byte) 'E', (byte) 'F', (byte) 'G', (byte) 'H',
+    (byte) 'I', (byte) 'J', (byte) 'K', (byte) 'L', (byte) 'M', (byte) 'N',
+    (byte) 'O', (byte) 'P', (byte) 'Q', (byte) 'R', (byte) 'S', (byte) 'T',
+    (byte) 'U', (byte) 'V', (byte) 'W', (byte) 'X', (byte) 'Y', (byte) 'Z',
+    (byte) 'a', (byte) 'b', (byte) 'c', (byte) 'd', (byte) 'e', (byte) 'f',
+    (byte) 'g', (byte) 'h', (byte) 'i', (byte) 'j', (byte) 'k', (byte) 'l',
+    (byte) 'm', (byte) 'n', (byte) 'o', (byte) 'p', (byte) 'q', (byte) 'r',
+    (byte) 's', (byte) 't', (byte) 'u', (byte) 'v', (byte) 'w', (byte) 'x',
+    (byte) 'y', (byte) 'z', (byte) '0', (byte) '1', (byte) '2', (byte) '3',
+    (byte) '4', (byte) '5', (byte) '6', (byte) '7', (byte) '8', (byte) '9',
+    (byte) '+', (byte) '/'
+  };
+
+  /**
+   * Translates a Base64 value to either its 6-bit reconstruction value or a
+   * negative number indicating some other meaning.
+   */
+  private final static byte[] _STANDARD_DECODABET = {
+    -9, -9, -9, -9, -9, -9, -9, -9, -9,             // Decimal 0 - 8
+    -5, -5,                                         // Whitespace: Tab, Newline
+    -9, -9,                                         // Decimal 11 - 12
+    -5,                                             // Whitespace: Return
+    -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 14 - 26
+    -9, -9, -9, -9, -9,                             // Decimal 27 - 31
+    -5,                                             // Whitespace: Space
+    -9, -9, -9, -9, -9, -9, -9, -9, -9, -9,         // Decimal 33 - 42
+    62,                                             // Plus sign at decimal 43
+    -9, -9, -9,                                     // Decimal 44 - 46
+    63,                                             // Slash at decimal 47
+    52, 53, 54, 55, 56, 57, 58, 59, 60, 61,         // Numbers zero - nine
+    -9, -9, -9,                                     // Decimal 58 - 60
+    -1,                                             // Equals sign at decimal 61
+    -9, -9, -9,                                     // Decimal 62 - 64
+    0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,   // Letters 'A' - 'N'
+    14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // Letters 'O' - 'Z'
+    -9, -9, -9, -9, -9, -9,                         // Decimal 91 - 96
+    26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, // Letters 'a' - 'm'
+    39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // Letters 'n' -'z'
+    -9, -9, -9, -9                                  // Decimal 123 - 126
+  };
+
+  /* ******** U R L   S A F E   B A S E 6 4   A L P H A B E T ******** */
+
+  /**
+   * Used in the URL and Filename safe dialect described in Section 4 of RFC3548
+   * <a href="http://www.faqs.org/rfcs/rfc3548.html">
+   * http://www.faqs.org/rfcs/rfc3548.html</a>.
+   * Notice that the last two bytes become "hyphen" and "underscore" instead of
+   * "plus" and "slash."
+   */
+  private final static byte[] _URL_SAFE_ALPHABET = { (byte) 'A', (byte) 'B',
+    (byte) 'C', (byte) 'D', (byte) 'E', (byte) 'F', (byte) 'G', (byte) 'H',
+    (byte) 'I', (byte) 'J', (byte) 'K', (byte) 'L', (byte) 'M', (byte) 'N',
+    (byte) 'O', (byte) 'P', (byte) 'Q', (byte) 'R', (byte) 'S', (byte) 'T',
+    (byte) 'U', (byte) 'V', (byte) 'W', (byte) 'X', (byte) 'Y', (byte) 'Z',
+    (byte) 'a', (byte) 'b', (byte) 'c', (byte) 'd', (byte) 'e', (byte) 'f',
+    (byte) 'g', (byte) 'h', (byte) 'i', (byte) 'j', (byte) 'k', (byte) 'l',
+    (byte) 'm', (byte) 'n', (byte) 'o', (byte) 'p', (byte) 'q', (byte) 'r',
+    (byte) 's', (byte) 't', (byte) 'u', (byte) 'v', (byte) 'w', (byte) 'x',
+    (byte) 'y', (byte) 'z', (byte) '0', (byte) '1', (byte) '2', (byte) '3',
+    (byte) '4', (byte) '5', (byte) '6', (byte) '7', (byte) '8', (byte) '9',
+    (byte) '-', (byte) '_'
+  };
+
+  /**
+   * Used in decoding URL and Filename safe dialects of Base64.
+   */
+  private final static byte[] _URL_SAFE_DECODABET = {
+    -9, -9, -9, -9, -9, -9, -9, -9, -9,                 // Decimal 0 - 8
+    -5, -5,                                             // Whitespace: Tab, Newline
+    -9, -9,                                             // Decimal 11 - 12
+    -5,                                                 // Whitespace: Return
+    -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 14 - 26
+    -9, -9, -9, -9, -9,                                 // Decimal 27 - 31
+    -5,                                                 // Whitespace: Space
+    -9, -9, -9, -9, -9, -9, -9, -9, -9, -9,             // Decimal 33 - 42
+    -9,                                                 // Plus sign at 43
+    -9,                                                 // Decimal 44
+    62,                                                 // Minus sign at 45
+    -9,                                                 // Decimal 46
+    -9,                                                 // Slash at 47
+    52, 53, 54, 55, 56, 57, 58, 59, 60, 61,             // Numbers 0 - 9
+    -9, -9, -9,                                         // Decimal 58 - 60
+    -1,                                                 // Equals sign at 61
+    -9, -9, -9,                                         // Decimal 62 - 64
+    0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,       // Letters 'A' - 'N'
+    14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,     // Letters 'O' - 'Z'
+    -9, -9, -9, -9,                                     // Decimal 91 - 94
+    63,                                                 // Underscore at 95
+    -9,                                                 // Decimal 96
+    26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, // Letters 'a' - 'm'
+    39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // Letters 'n' - 'z'
+    -9, -9, -9, -9                                      // Decimal 123 - 126
+  };
+
+  /* ******** O R D E R E D   B A S E 6 4   A L P H A B E T ******** */
+
+  /**
+   * In addition to being URL and file name friendly, this encoding preserves
+   * the sort order of encoded values. Whatever is input, be it string or
+   * just an array of bytes, when you use this encoding, the encoded value sorts
+   * exactly the same as the input value. It is described in the RFC change
+   * request: <a href="http://www.faqs.org/qa/rfcc-1940.html">
+   * http://www.faqs.org/qa/rfcc-1940.html</a>.
+   *
+   * It replaces "plus" and "slash" with "hyphen" and "underscore" and
+   * rearranges the alphabet so that the characters are in their natural sort
+   * order.
+   */
+  private final static byte[] _ORDERED_ALPHABET = { (byte) '-', (byte) '0',
+    (byte) '1', (byte) '2', (byte) '3', (byte) '4', (byte) '5', (byte) '6',
+    (byte) '7', (byte) '8', (byte) '9', (byte) 'A', (byte) 'B', (byte) 'C',
+    (byte) 'D', (byte) 'E', (byte) 'F', (byte) 'G', (byte) 'H', (byte) 'I',
+    (byte) 'J', (byte) 'K', (byte) 'L', (byte) 'M', (byte) 'N', (byte) 'O',
+    (byte) 'P', (byte) 'Q', (byte) 'R', (byte) 'S', (byte) 'T', (byte) 'U',
+    (byte) 'V', (byte) 'W', (byte) 'X', (byte) 'Y', (byte) 'Z', (byte) '_',
+    (byte) 'a', (byte) 'b', (byte) 'c', (byte) 'd', (byte) 'e', (byte) 'f',
+    (byte) 'g', (byte) 'h', (byte) 'i', (byte) 'j', (byte) 'k', (byte) 'l',
+    (byte) 'm', (byte) 'n', (byte) 'o', (byte) 'p', (byte) 'q', (byte) 'r',
+    (byte) 's', (byte) 't', (byte) 'u', (byte) 'v', (byte) 'w', (byte) 'x',
+    (byte) 'y', (byte) 'z'
+  };
+
+  /**
+   * Used in decoding the "ordered" dialect of Base64.
+   */
+  private final static byte[] _ORDERED_DECODABET = {
+    -9, -9, -9, -9, -9, -9, -9, -9, -9,                 // Decimal 0 - 8
+    -5, -5,                                             // Whitespace: Tab, Newline
+    -9, -9,                                             // Decimal 11 - 12
+    -5,                                                 // Whitespace: Return
+    -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, // Decimal 14 - 26
+    -9, -9, -9, -9, -9,                                 // Decimal 27 - 31
+    -5,                                                 // Whitespace: Space
+    -9, -9, -9, -9, -9, -9, -9, -9, -9, -9,             // Decimal 33 - 42
+    -9,                                                 // Plus sign at 43
+    -9,                                                 // Decimal 44
+    0,                                                  // Minus sign at 45
+    -9,                                                 // Decimal 46
+    -9,                                                 // Slash at decimal 47
+    1, 2, 3, 4, 5, 6, 7, 8, 9, 10,                      // Numbers 0 - 9
+    -9, -9, -9,                                         // Decimal 58 - 60
+    -1,                                                 // Equals sign at 61
+    -9, -9, -9,                                         // Decimal 62 - 64
+    11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, // Letters 'A' - 'M'
+    24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, // Letters 'N' - 'Z'
+    -9, -9, -9, -9,                                     // Decimal 91 - 94
+    37,                                                 // Underscore at 95
+    -9,                                                 // Decimal 96
+    38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, // Letters 'a' - 'm'
+    51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, // Letters 'n' - 'z'
+    -9, -9, -9, -9                                      // Decimal 123 - 126
+  };
+
+  /* ******** D E T E R M I N E   W H I C H   A L P H A B E T ******** */
+
+  /**
+   * Returns one of the _SOMETHING_ALPHABET byte arrays depending on the options
+   * specified. It's possible, though silly, to specify ORDERED and URL_SAFE in
+   * which case one of them will be picked, though there is no guarantee as to
+   * which one will be picked.
+   *
+   * @param options URL_SAFE or ORDERED
+   * @return alphabet array to use
+   */
+  protected static byte[] getAlphabet(int options) {
+    if ((options & URL_SAFE) == URL_SAFE) {
+      return _URL_SAFE_ALPHABET;
+
+    } else if ((options & ORDERED) == ORDERED) {
+      return _ORDERED_ALPHABET;
+
+    } else {
+      return _STANDARD_ALPHABET;
+    }
+  } // end getAlphabet
+
+  /**
+   * Returns one of the _SOMETHING_DECODABET byte arrays depending on the
+   * options specified. It's possible, though silly, to specify ORDERED and
+   * URL_SAFE in which case one of them will be picked, though there is no
+   * guarantee as to which one will be picked.
+   * @param options URL_SAFE or ORDERED
+   * @return alphabet array to use
+   */
+  protected static byte[] getDecodabet(int options) {
+    if ((options & URL_SAFE) == URL_SAFE) {
+      return _URL_SAFE_DECODABET;
+
+    } else if ((options & ORDERED) == ORDERED) {
+      return _ORDERED_DECODABET;
+
+    } else {
+      return _STANDARD_DECODABET;
+    }
+  } // end getDecodabet
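+  /*
+   * Editor's note (illustrative): the options are plain bit flags, so a
+   * dialect and formatting options are OR'ed together and tested with a
+   * mask, e.g.:
+   *
+   *   int opts = Base64.URL_SAFE | Base64.DONT_BREAK_LINES;
+   *   getAlphabet(opts);  // _URL_SAFE_ALPHABET ('-' and '_' instead of '+' and '/')
+   *   getDecodabet(opts); // the matching _URL_SAFE_DECODABET
+   */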
+
+  /** Defeats instantiation. */
+  private Base64() {}
+
+  /**
+   * Main program. Used for testing.
+   *
+   * Encodes or decodes a file from the command line, writing the result to a second file
+   *
+   * @param args command arguments
+   */
+  public static void main(String[] args) {
+    if (args.length < 3) {
+      usage("Not enough arguments.");
+
+    } else {
+      String flag = args[0];
+      String infile = args[1];
+      String outfile = args[2];
+      if (flag.equals("-e")) {                          // encode
+        encodeFileToFile(infile, outfile);
+
+      } else if (flag.equals("-d")) {                   // decode
+        decodeFileToFile(infile, outfile);
+
+      } else {
+        usage("Unknown flag: " + flag);
+      }
+    }
+  } // end main
+
+  /**
+   * Prints command line usage.
+   *
+   * @param msg A message to include with usage info.
+   */
+  private static void usage(String msg) {
+    System.err.println(msg);
+    System.err.println("Usage: java Base64 -e|-d inputfile outputfile");
+  } // end usage
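+  /*
+   * Editor's note (illustrative invocation; the jar name is a placeholder):
+   *
+   *   java -cp hbase.jar org.apache.hadoop.hbase.util.Base64 -e input.dat output.b64
+   *   java -cp hbase.jar org.apache.hadoop.hbase.util.Base64 -d output.b64 roundtrip.dat
+   */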
+
+  /* ******** E N C O D I N G   M E T H O D S ******** */
+
+  /**
+   * Encodes up to the first three bytes of array <var>threeBytes</var> and
+   * returns a four-byte array in Base64 notation. The actual number of
+   * significant bytes in your array is given by <var>numSigBytes</var>. The
+   * array <var>threeBytes</var> needs only be as big as <var>numSigBytes</var>.
+   * Code can reuse a byte array by passing a four-byte array as <var>b4</var>.
+   *
+   * @param b4 A reusable byte array to reduce array instantiation
+   * @param threeBytes the array to convert
+   * @param numSigBytes the number of significant bytes in your array
+   * @param options options for getAlphabet
+   * @return four byte array in Base64 notation.
+   * @since 1.5.1
+   */
+  protected static byte[] encode3to4(byte[] b4, byte[] threeBytes,
+      int numSigBytes, int options) {
+    encode3to4(threeBytes, 0, numSigBytes, b4, 0, options);
+    return b4;
+  } // end encode3to4
+
+  /**
+   * Encodes up to three bytes of the array <var>source</var> and writes the
+   * resulting four Base64 bytes to <var>destination</var>. The source and
+   * destination arrays can be manipulated anywhere along their length by
+   * specifying <var>srcOffset</var> and <var>destOffset</var>. This method
+   * does not check to make sure your arrays are large enough to accommodate
+   * <var>srcOffset</var> + 3 for the <var>source</var> array or
+   * <var>destOffset</var> + 4 for the <var>destination</var> array. The
+   * actual number of significant bytes in your array is given by
+   * <var>numSigBytes</var>.
+   * <p>
+   * This is the lowest level of the encoding methods with all possible
+   * parameters.
+   *
+   * @param source the array to convert
+   * @param srcOffset the index where conversion begins
+   * @param numSigBytes the number of significant bytes in your array
+   * @param destination the array to hold the conversion
+   * @param destOffset the index where output will be put
+   * @param options options for getAlphabet
+   * @return the <var>destination</var> array
+   * @since 1.3
+   */
+  protected static byte[] encode3to4(byte[] source, int srcOffset,
+      int numSigBytes, byte[] destination, int destOffset, int options) {
+    byte[] ALPHABET = getAlphabet(options);
+
+    //           1         2         3
+    // 01234567890123456789012345678901 Bit position
+    // --------000000001111111122222222 Array position from threeBytes
+    // --------|    ||    ||    ||    | Six bit groups to index ALPHABET
+    //          >>18  >>12  >> 6  >> 0  Right shift necessary
+    //                0x3f  0x3f  0x3f  Additional AND
+
+    // Create buffer with zero-padding if there are only one or two
+    // significant bytes passed in the array.
+    // We have to shift left 24 in order to flush out the 1's that appear
+    // when Java treats a value as negative that is cast from a byte to an int.
+    int inBuff =
+        (numSigBytes > 0 ? ((source[srcOffset] << 24) >>> 8) : 0)
+            | (numSigBytes > 1 ? ((source[srcOffset + 1] << 24) >>> 16) : 0)
+            | (numSigBytes > 2 ? ((source[srcOffset + 2] << 24) >>> 24) : 0);
+
+    switch (numSigBytes) {
+    case 3:
+      destination[destOffset] = ALPHABET[(inBuff >>> 18)];
+      destination[destOffset + 1] = ALPHABET[(inBuff >>> 12) & 0x3f];
+      destination[destOffset + 2] = ALPHABET[(inBuff >>> 6) & 0x3f];
+      destination[destOffset + 3] = ALPHABET[(inBuff) & 0x3f];
+      return destination;
+
+    case 2:
+      destination[destOffset] = ALPHABET[(inBuff >>> 18)];
+      destination[destOffset + 1] = ALPHABET[(inBuff >>> 12) & 0x3f];
+      destination[destOffset + 2] = ALPHABET[(inBuff >>> 6) & 0x3f];
+      destination[destOffset + 3] = EQUALS_SIGN;
+      return destination;
+
+    case 1:
+      destination[destOffset] = ALPHABET[(inBuff >>> 18)];
+      destination[destOffset + 1] = ALPHABET[(inBuff >>> 12) & 0x3f];
+      destination[destOffset + 2] = EQUALS_SIGN;
+      destination[destOffset + 3] = EQUALS_SIGN;
+      return destination;
+
+    default:
+      return destination;
+    } // end switch
+  } // end encode3to4
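+  /*
+   * Editor's note (worked example): for the three input bytes "Man"
+   * (0x4D 0x61 0x6E) the 24-bit buffer is 010011 010110 000101 101110,
+   * i.e. alphabet indexes 19, 22, 5, 46, so encode3to4 with the standard
+   * alphabet writes the four bytes "TWFu". With numSigBytes of 2 or 1 the
+   * trailing positions are filled with '=' padding, as in the switch above.
+   */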
+
+  /**
+   * Serializes an object and returns the Base64-encoded version of that
+   * serialized object. If the object cannot be serialized or there is another
+   * error, the method will return <tt>null</tt>. The object is not
+   * GZip-compressed before being encoded.
+   *
+   * @param serializableObject The object to encode
+   * @return The Base64-encoded object
+   * @since 1.4
+   */
+  public static String encodeObject(Serializable serializableObject) {
+    return encodeObject(serializableObject, NO_OPTIONS);
+  } // end encodeObject
+
+  /**
+   * Serializes an object and returns the Base64-encoded version of that
+   * serialized object. If the object cannot be serialized or there is another
+   * error, the method will return <tt>null</tt>.
+   * <p>
+   * Valid options:
+   * <ul>
+   *   <li>GZIP: gzip-compresses object before encoding it.</li>
+   *   <li>DONT_BREAK_LINES: don't break lines at 76 characters. <i>Note:
+   *     Technically, this makes your encoding non-compliant.</i></li>
+   * </ul>
+   * <p>
+   * Example: <code>encodeObject( myObj, Base64.GZIP )</code> or
+   * <p>
+   * Example:
+   * <code>encodeObject( myObj, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+   *
+   * @param serializableObject The object to encode
+   * @param options Specified options
+   * @see Base64#GZIP
+   * @see Base64#DONT_BREAK_LINES
+   * @return The Base64-encoded object
+   * @since 2.0
+   */
+  @SuppressWarnings({"ConstantConditions"})
+  public static String encodeObject(Serializable serializableObject,
+      int options) {
+
+    ByteArrayOutputStream baos = new ByteArrayOutputStream();
+    OutputStream b64os = null;
+    ObjectOutputStream oos = null;
+    try {
+      // ObjectOutputStream -> (GZIP) -> Base64 -> ByteArrayOutputStream
+      b64os = new Base64OutputStream(baos, ENCODE | options);
+
+      oos = ((options & GZIP) == GZIP) ?
+          new ObjectOutputStream(new GZIPOutputStream(b64os)) :
+            new ObjectOutputStream(b64os);
+
+      oos.writeObject(serializableObject);
+      return new String(baos.toByteArray(), PREFERRED_ENCODING);
+
+    } catch (UnsupportedEncodingException uue) {
+      return new String(baos.toByteArray());
+
+    } catch (IOException e) {
+      LOG.error("error encoding object", e);
+      return null;
+
+    } finally {
+      if (oos != null) {
+        try {
+          oos.close();
+        } catch (Exception e) {
+          LOG.error("error closing ObjectOutputStream", e);
+        }
+      }
+      if (b64os != null) {
+        try {
+          b64os.close();
+        } catch (Exception e) {
+          LOG.error("error closing Base64OutputStream", e);
+        }
+      }
+      try {
+        baos.close();
+      } catch (Exception e) {
+        LOG.error("error closing ByteArrayOutputStream", e);
+      }
+    } // end finally
+  } // end encodeObject
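+  /*
+   * Editor's note (illustrative usage; "myObject" is a placeholder for any
+   * java.io.Serializable value):
+   *
+   *   String plain  = Base64.encodeObject(myObject);               // serialize, then encode
+   *   String packed = Base64.encodeObject(myObject, Base64.GZIP);  // gzip before encoding
+   */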
+
+  /**
+   * Encodes a byte array into Base64 notation. Does not GZip-compress data.
+   *
+   * @param source The data to convert
+   * @return encoded byte array
+   * @since 1.4
+   */
+  public static String encodeBytes(byte[] source) {
+    return encodeBytes(source, 0, source.length, NO_OPTIONS);
+  } // end encodeBytes
+
+  /**
+   * Encodes a byte array into Base64 notation.
+   * <p>
+   * Valid options:
+   * <ul>
+   *   <li>GZIP: gzip-compresses object before encoding it.</li>
+   *   <li>DONT_BREAK_LINES: don't break lines at 76 characters. <i>Note:
+   *     Technically, this makes your encoding non-compliant.</i></li>
+   * </ul>
+   *
+   * <p>
+   * Example: <code>encodeBytes( myData, Base64.GZIP )</code> or
+   * <p>
+   * Example:
+   * <code>encodeBytes( myData, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+   *
+   * @param source The data to convert
+   * @param options Specified options
+   * @see Base64#GZIP
+   * @see Base64#DONT_BREAK_LINES
+   * @see Base64#URL_SAFE
+   * @see Base64#ORDERED
+   * @return encoded byte array
+   * @since 2.0
+   */
+  public static String encodeBytes(byte[] source, int options) {
+    return encodeBytes(source, 0, source.length, options);
+  } // end encodeBytes
+
+  /**
+   * Encodes a byte array into Base64 notation. Does not GZip-compress data.
+   *
+   * @param source The data to convert
+   * @param off Offset in array where conversion should begin
+   * @param len Length of data to convert
+   * @return encoded byte array
+   * @since 1.4
+   */
+  public static String encodeBytes(byte[] source, int off, int len) {
+    return encodeBytes(source, off, len, NO_OPTIONS);
+  } // end encodeBytes
+
+  /**
+   * Encodes a byte array into Base64 notation.
+   * <p>
+   * Valid options:
+   * <ul>
+   *   <li>GZIP: gzip-compresses object before encoding it.</li>
+   *   <li>DONT_BREAK_LINES: don't break lines at 76 characters. <i>Note:
+   *     Technically, this makes your encoding non-compliant.</i></li>
+   * </ul>
+   *
+   * <p>
+   * Example: <code>encodeBytes( myData, Base64.GZIP )</code> or
+   * <p>
+   * Example:
+   * <code>encodeBytes( myData, Base64.GZIP | Base64.DONT_BREAK_LINES )</code>
+   *
+   * @param source The data to convert
+   * @param off Offset in array where conversion should begin
+   * @param len Length of data to convert
+   * @param options Specified options
+   * @see Base64#GZIP
+   * @see Base64#DONT_BREAK_LINES
+   * @see Base64#URL_SAFE
+   * @see Base64#ORDERED
+   * @return encoded byte array
+   * @since 2.0
+   */
+  public static String encodeBytes(byte[] source, int off, int len, int options) {
+    if ((options & GZIP) == GZIP) {                             // Compress?
+      // GZip -> Base64 -> ByteArray
+      ByteArrayOutputStream baos = new ByteArrayOutputStream();
+      GZIPOutputStream gzos = null;
+
+      try {
+        gzos =
+          new GZIPOutputStream(new Base64OutputStream(baos, ENCODE | options));
+
+        gzos.write(source, off, len);
+        gzos.close();
+        gzos = null;
+        return new String(baos.toByteArray(), PREFERRED_ENCODING);
+
+      } catch (UnsupportedEncodingException uue) {
+        return new String(baos.toByteArray());
+
+      } catch (IOException e) {
+        LOG.error("error encoding byte array", e);
+        return null;
+
+      } finally {
+        if (gzos != null) {
+          try {
+            gzos.close();
+          } catch (Exception e) {
+            LOG.error("error closing GZIPOutputStream", e);
+          }
+        }
+        try {
+          baos.close();
+        } catch (Exception e) {
+          LOG.error("error closing ByteArrayOutputStream", e);
+        }
+      } // end finally
+
+    } // end Compress
+
+    // Don't compress. Better not to use streams at all then.
+
+    boolean breakLines = ((options & DONT_BREAK_LINES) == 0);
+
+    int len43 = len * 4 / 3;
+    byte[] outBuff =
+      new byte[(len43)                                          // Main 4:3
+               + ((len % 3) > 0 ? 4 : 0)                        // padding
+               + (breakLines ? (len43 / MAX_LINE_LENGTH) : 0)]; // New lines
+    int d = 0;
+    int e = 0;
+    int len2 = len - 2;
+    int lineLength = 0;
+    for (; d < len2; d += 3, e += 4) {
+      encode3to4(source, d + off, 3, outBuff, e, options);
+
+      lineLength += 4;
+      if (breakLines && lineLength == MAX_LINE_LENGTH) {
+        outBuff[e + 4] = NEW_LINE;
+        e++;
+        lineLength = 0;
+      } // end if: end of line
+    } // end for: each piece of array
+
+    if (d < len) {
+      encode3to4(source, d + off, len - d, outBuff, e, options);
+      e += 4;
+    } // end if: some padding needed
+
+    // Return value according to relevant encoding.
+    try {
+      return new String(outBuff, 0, e, PREFERRED_ENCODING);
+
+    } catch (UnsupportedEncodingException uue) {
+      return new String(outBuff, 0, e);
+    }
+  } // end encodeBytes
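+  /*
+   * Editor's note (illustrative usage; "data" is a placeholder byte[]):
+   *
+   *   String wrapped = Base64.encodeBytes(data);                          // 76-char lines
+   *   String oneLine = Base64.encodeBytes(data, Base64.DONT_BREAK_LINES); // single line
+   */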
+
+  /* ******** D E C O D I N G   M E T H O D S ******** */
+
+  /**
+   * Decodes four bytes from array <var>source</var> and writes the resulting
+   * bytes (up to three of them) to <var>destination</var>. The source and
+   * destination arrays can be manipulated anywhere along their length by
+   * specifying <var>srcOffset</var> and <var>destOffset</var>. This method
+   * does not check to make sure your arrays are large enough to accommodate
+   * <var>srcOffset</var> + 4 for the <var>source</var> array or
+   * <var>destOffset</var> + 3 for the <var>destination</var> array. This
+   * method returns the actual number of bytes that were converted from the
+   * Base64 encoding.
+   * <p>
+   * This is the lowest level of the decoding methods with all possible
+   * parameters.
+   * </p>
+   *
+   * @param source the array to convert
+   * @param srcOffset the index where conversion begins
+   * @param destination the array to hold the conversion
+   * @param destOffset the index where output will be put
+   * @param options options for getDecodabet
+   * @see Base64#URL_SAFE
+   * @see Base64#ORDERED
+   * @return the number of decoded bytes converted
+   * @since 1.3
+   */
+  @SuppressWarnings({"ConstantConditions"})
+  protected static int decode4to3(byte[] source, int srcOffset,
+      byte[] destination, int destOffset, int options) {
+    byte[] DECODABET = getDecodabet(options);
+
+    if (source[srcOffset + 2] == EQUALS_SIGN) {                 // Example: Dk==
+      // Two ways to do the same thing. Don't know which way I like best.
+      // int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+      // | ( ( DECODABET[ source[ srcOffset + 1] ] << 24 ) >>> 12 );
+      int outBuff =
+          ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+              | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12);
+
+      destination[destOffset] = (byte) (outBuff >>> 16);
+      return 1;
+
+    } else if (source[srcOffset + 3] == EQUALS_SIGN) {          // Example: DkL=
+      // Two ways to do the same thing. Don't know which way I like best.
+      // int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+      // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+      // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 );
+      int outBuff =
+          ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+              | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12)
+              | ((DECODABET[source[srcOffset + 2]] & 0xFF) << 6);
+
+      destination[destOffset] = (byte) (outBuff >>> 16);
+      destination[destOffset + 1] = (byte) (outBuff >>> 8);
+      return 2;
+
+    } else {                                                    // Example: DkLE
+      try {
+        // Two ways to do the same thing. Don't know which way I like best.
+        // int outBuff = ( ( DECODABET[ source[ srcOffset ] ] << 24 ) >>> 6 )
+        // | ( ( DECODABET[ source[ srcOffset + 1 ] ] << 24 ) >>> 12 )
+        // | ( ( DECODABET[ source[ srcOffset + 2 ] ] << 24 ) >>> 18 )
+        // | ( ( DECODABET[ source[ srcOffset + 3 ] ] << 24 ) >>> 24 );
+        int outBuff =
+            ((DECODABET[source[srcOffset]] & 0xFF) << 18)
+                | ((DECODABET[source[srcOffset + 1]] & 0xFF) << 12)
+                | ((DECODABET[source[srcOffset + 2]] & 0xFF) << 6)
+                | ((DECODABET[source[srcOffset + 3]] & 0xFF));
+
+        destination[destOffset] = (byte) (outBuff >> 16);
+        destination[destOffset + 1] = (byte) (outBuff >> 8);
+        destination[destOffset + 2] = (byte) (outBuff);
+
+        return 3;
+
+      } catch (Exception e) {
+        LOG.error("error decoding bytes at " + source[srcOffset] + ": " +
+            (DECODABET[source[srcOffset]]) + ", " + source[srcOffset + 1] +
+            ": " + (DECODABET[source[srcOffset + 1]]) + ", " +
+            source[srcOffset + 2] + ": " + (DECODABET[source[srcOffset + 2]]) +
+            ", " + source[srcOffset + 3] + ": " +
+            (DECODABET[source[srcOffset + 3]]), e);
+        return -1;
+      } // end catch
+    }
+  } // end decodeToBytes
+
+  /**
+   * Very low-level access to decoding ASCII characters in the form of a byte
+   * array. Does not support automatically gunzipping or any other "fancy"
+   * features.
+   *
+   * @param source The Base64 encoded data
+   * @param off The offset of where to begin decoding
+   * @param len The length of characters to decode
+   * @param options options for getDecodabet
+   * @see Base64#URL_SAFE
+   * @see Base64#ORDERED
+   * @return decoded data
+   * @since 1.3
+   */
+  public static byte[] decode(byte[] source, int off, int len, int options) {
+    byte[] DECODABET = getDecodabet(options);
+
+    int len34 = len * 3 / 4;
+    byte[] outBuff = new byte[len34];           // Upper limit on size of output
+    int outBuffPosn = 0;
+
+    byte[] b4 = new byte[4];
+    int b4Posn = 0;
+    int i;
+    byte sbiCrop;
+    byte sbiDecode;
+    for (i = off; i < off + len; i++) {
+      sbiCrop = (byte) (source[i] & 0x7f);      // Only the low seven bits
+      sbiDecode = DECODABET[sbiCrop];
+
+      if (sbiDecode >= WHITE_SPACE_ENC) {       // Whitespace, Equals or better
+        if (sbiDecode >= EQUALS_SIGN_ENC) {     // Equals or better
+          b4[b4Posn++] = sbiCrop;
+          if (b4Posn > 3) {
+            outBuffPosn += decode4to3(b4, 0, outBuff, outBuffPosn, options);
+            b4Posn = 0;
+
+            // If that was the equals sign, break out of 'for' loop
+            if (sbiCrop == EQUALS_SIGN)
+              break;
+          } // end if: quartet built
+        } // end if: equals sign or better
+      } else {
+        LOG.error("Bad Base64 input character at " + i + ": " + source[i] +
+            "(decimal)");
+        return null;
+      } // end else:
+    } // each input character
+
+    byte[] out = new byte[outBuffPosn];
+    System.arraycopy(outBuff, 0, out, 0, outBuffPosn);
+    return out;
+  } // end decode
+
+  /**
+   * Decodes data from Base64 notation, automatically detecting gzip-compressed
+   * data and decompressing it.
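+   * <p>
+   * Illustrative only (not part of the original docs); the literal below is a
+   * plain, uncompressed example:
+   * <pre>
+   * byte[] data = Base64.decode("SGVsbG8sIFdvcmxkIQ==");   // "Hello, World!"
+   * </pre>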
+   *
+   * @param s the string to decode
+   * @return the decoded data
+   * @since 1.4
+   */
+  public static byte[] decode(String s) {
+    return decode(s, NO_OPTIONS);
+  }
+
+  /**
+   * Decodes data from Base64 notation, automatically detecting gzip-compressed
+   * data and decompressing it.
+   *
+   * @param s the string to decode
+   * @param options options for decode
+   * @see Base64#URL_SAFE
+   * @see Base64#ORDERED
+   * @return the decoded data
+   * @since 1.4
+   */
+  public static byte[] decode(String s, int options) {
+    byte[] bytes;
+    try {
+      bytes = s.getBytes(PREFERRED_ENCODING);
+
+    } catch (UnsupportedEncodingException uee) {
+      bytes = s.getBytes();
+    } // end catch
+
+    // Decode
+
+    bytes = decode(bytes, 0, bytes.length, options);
+
+    // Check to see if it's gzip-compressed
+    // GZIP Magic Two-Byte Number: 0x8b1f (35615)
+
+    if (bytes != null && bytes.length >= 4) {
+      int head = (bytes[0] & 0xff) | ((bytes[1] << 8) & 0xff00);
+      if (GZIPInputStream.GZIP_MAGIC == head) {
+        GZIPInputStream gzis = null;
+        ByteArrayOutputStream baos = new ByteArrayOutputStream();
+        try {
+          gzis = new GZIPInputStream(new ByteArrayInputStream(bytes));
+
+          byte[] buffer = new byte[2048];
+          for (int length; (length = gzis.read(buffer)) >= 0; ) {
+            baos.write(buffer, 0, length);
+          } // end while: reading input
+
+          // No error? Get new bytes.
+          bytes = baos.toByteArray();
+
+        } catch (IOException e) {
+          // Just return originally-decoded bytes
+
+        } finally {
+          try {
+            baos.close();
+          } catch (Exception e) {
+            LOG.error("error closing ByteArrayOutputStream", e);
+          }
+          if (gzis != null) {
+            try {
+              gzis.close();
+            } catch (Exception e) {
+              LOG.error("error closing GZIPInputStream", e);
+            }
+          }
+        } // end finally
+      } // end if: gzipped
+    } // end if: bytes.length >= 4
+
+    return bytes;
+  } // end decode
+
+  /**
+   * Attempts to decode Base64 data and deserialize a Java Object within.
+   * Returns <tt>null</tt> if there was an error.
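+   * <p>
+   * A minimal usage sketch (illustrative; <code>encodedObject</code> is assumed
+   * to hold the Base64 encoding of a serialized <code>java.util.Date</code>):
+   * <pre>
+   * Object o = Base64.decodeToObject(encodedObject);
+   * if (o instanceof java.util.Date) {
+   *   System.out.println(((java.util.Date) o).getTime());
+   * }
+   * </pre>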
+   *
+   * @param encodedObject The Base64 data to decode
+   * @return The decoded and deserialized object
+   * @since 1.5
+   */
+  public static Object decodeToObject(String encodedObject) {
+    // Decode and gunzip if necessary
+    byte[] objBytes = decode(encodedObject);
+
+    Object obj = null;
+    ObjectInputStream ois = null;
+    try {
+      ois = new ObjectInputStream(new ByteArrayInputStream(objBytes));
+      obj = ois.readObject();
+
+    } catch (IOException e) {
+      LOG.error("error decoding object", e);
+
+    } catch (ClassNotFoundException e) {
+      LOG.error("error decoding object", e);
+
+    } finally {
+      if (ois != null) {
+        try {
+          ois.close();
+        } catch (Exception e) {
+          LOG.error("error closing ObjectInputStream", e);
+        }
+      }
+    } // end finally
+
+    return obj;
+  } // end decodeObject
+
+  /**
+   * Convenience method for encoding data to a file.
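+   * <p>
+   * Illustrative round-trip sketch (not part of the original docs; the file
+   * path is hypothetical):
+   * <pre>
+   * boolean ok = Base64.encodeToFile(Bytes.toBytes("payload"), "/tmp/payload.b64");
+   * byte[] roundTrip = Base64.decodeFromFile("/tmp/payload.b64");
+   * </pre>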
+   *
+   * @param dataToEncode byte array of data to encode in base64 form
+   * @param filename Filename for saving encoded data
+   * @return <tt>true</tt> if successful, <tt>false</tt> otherwise
+   *
+   * @since 2.1
+   */
+  public static boolean encodeToFile(byte[] dataToEncode, String filename) {
+    boolean success = false;
+    Base64OutputStream bos = null;
+    try {
+      bos = new Base64OutputStream(new FileOutputStream(filename), ENCODE);
+      bos.write(dataToEncode);
+      success = true;
+
+    } catch (IOException e) {
+      LOG.error("error encoding file: " + filename, e);
+      success = false;
+
+    } finally {
+      if (bos != null) {
+        try {
+          bos.close();
+        } catch (Exception e) {
+          LOG.error("error closing Base64OutputStream", e);
+        }
+      }
+    } // end finally
+
+    return success;
+  } // end encodeToFile
+
+  /**
+   * Convenience method for decoding data to a file.
+   *
+   * @param dataToDecode Base64-encoded data as a string
+   * @param filename Filename for saving decoded data
+   * @return <tt>true</tt> if successful, <tt>false</tt> otherwise
+   *
+   * @since 2.1
+   */
+  public static boolean decodeToFile(String dataToDecode, String filename) {
+    boolean success = false;
+    Base64OutputStream bos = null;
+    try {
+      bos = new Base64OutputStream(new FileOutputStream(filename), DECODE);
+      bos.write(dataToDecode.getBytes(PREFERRED_ENCODING));
+      success = true;
+
+    } catch (IOException e) {
+      LOG.error("error decoding to file: " + filename, e);
+      success = false;
+
+    } finally {
+      if (bos != null) {
+        try {
+          bos.close();
+        } catch (Exception e) {
+          LOG.error("error closing Base64OutputStream", e);
+        }
+      }
+    } // end finally
+
+    return success;
+  } // end decodeToFile
+
+  /**
+   * Convenience method for reading a base64-encoded file and decoding it.
+   *
+   * @param filename Filename for reading encoded data
+   * @return decoded byte array or null if unsuccessful
+   *
+   * @since 2.1
+   */
+  public static byte[] decodeFromFile(String filename) {
+    byte[] decodedData = null;
+    Base64InputStream bis = null;
+    try {
+      File file = new File(filename);
+      byte[] buffer;
+
+      // Check the size of file
+      if (file.length() > Integer.MAX_VALUE) {
+        LOG.fatal("File is too big for this convenience method (" +
+            file.length() + " bytes).");
+        return null;
+      } // end if: file too big for int index
+
+      buffer = new byte[(int) file.length()];
+
+      // Open a stream
+
+      bis = new Base64InputStream(new BufferedInputStream(
+          new FileInputStream(file)), DECODE);
+
+      // Read until done
+
+      int length = 0;
+      for (int numBytes; (numBytes = bis.read(buffer, length, 4096)) >= 0; ) {
+        length += numBytes;
+      }
+
+      // Save in a variable to return
+
+      decodedData = new byte[length];
+      System.arraycopy(buffer, 0, decodedData, 0, length);
+
+    } catch (IOException e) {
+      LOG.error("Error decoding from file " + filename, e);
+
+    } finally {
+      if (bis != null) {
+        try {
+          bis.close();
+        } catch (Exception e) {
+          LOG.error("error closing Base64InputStream", e);
+        }
+      }
+    } // end finally
+
+    return decodedData;
+  } // end decodeFromFile
+
+  /**
+   * Convenience method for reading a binary file and base64-encoding it.
+   *
+   * @param filename Filename for reading binary data
+   * @return base64-encoded string or null if unsuccessful
+   *
+   * @since 2.1
+   */
+  public static String encodeFromFile(String filename) {
+    String encodedData = null;
+    Base64InputStream bis = null;
+    try {
+      File file = new File(filename);
+
+      // Need max() for math on small files (v2.2.1)
+
+      byte[] buffer = new byte[Math.max((int) (file.length() * 1.4), 40)];
+
+      // Open a stream
+
+      bis = new Base64InputStream(new BufferedInputStream(
+              new FileInputStream(file)), ENCODE);
+
+      // Read until done
+      int length = 0;
+      for (int numBytes; (numBytes = bis.read(buffer, length, 4096)) >= 0; ) {
+        length += numBytes;
+      }
+
+      // Save in a variable to return
+
+      encodedData = new String(buffer, 0, length, PREFERRED_ENCODING);
+
+    } catch (IOException e) {
+      LOG.error("Error encoding from file " + filename, e);
+
+    } finally {
+      if (bis != null) {
+        try {
+          bis.close();
+        } catch (Exception e) {
+          LOG.error("error closing Base64InputStream", e);
+        }
+      }
+    } // end finally
+
+    return encodedData;
+  } // end encodeFromFile
+
+  /**
+   * Reads <tt>infile</tt> and encodes it to <tt>outfile</tt>.
+   *
+   * @param infile Input file
+   * @param outfile Output file
+   * @since 2.2
+   */
+  public static void encodeFileToFile(String infile, String outfile) {
+    String encoded = encodeFromFile(infile);
+    OutputStream out = null;
+    try {
+      out = new BufferedOutputStream(new FileOutputStream(outfile));
+      out.write(encoded.getBytes("US-ASCII")); // Strict, 7-bit output.
+
+    } catch (IOException e) {
+      LOG.error("error encoding from file " + infile + " to " + outfile, e);
+
+    } finally {
+      if (out != null) {
+        try {
+          out.close();
+        } catch (Exception e) {
+          LOG.error("error closing " + outfile, e);
+        }
+      }
+    } // end finally
+  } // end encodeFileToFile
+
+  /**
+   * Reads <tt>infile</tt> and decodes it to <tt>outfile</tt>.
+   *
+   * @param infile Input file
+   * @param outfile Output file
+   * @since 2.2
+   */
+  public static void decodeFileToFile(String infile, String outfile) {
+    byte[] decoded = decodeFromFile(infile);
+    OutputStream out = null;
+    try {
+      out = new BufferedOutputStream(new FileOutputStream(outfile));
+      out.write(decoded);
+
+    } catch (IOException e) {
+      LOG.error("error decoding from file " + infile + " to " + outfile, e);
+
+    } finally {
+      if (out != null) {
+        try {
+          out.close();
+        } catch (Exception e) {
+          LOG.error("error closing " + outfile, e);
+        }
+      }
+    } // end finally
+  } // end decodeFileToFile
+
+  /* ******** I N N E R   C L A S S   I N P U T S T R E A M ******** */
+
+  /**
+   * A {@link Base64.Base64InputStream} will read data from another
+   * <tt>InputStream</tt>, given in the constructor, and
+   * encode/decode to/from Base64 notation on the fly.
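+   * <p>
+   * A minimal decode-on-read sketch (illustrative; the wrapped file name and
+   * buffer are hypothetical):
+   * <pre>
+   * Base64.Base64InputStream b64in = new Base64.Base64InputStream(
+   *     new FileInputStream("data.b64"), Base64.DECODE);
+   * byte[] buf = new byte[1024];
+   * int n;
+   * while ((n = b64in.read(buf, 0, buf.length)) > 0) {
+   *   // buf[0..n) now holds decoded bytes
+   * }
+   * b64in.close();
+   * </pre>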
+   *
+   * @see Base64
+   * @since 1.3
+   */
+  public static class Base64InputStream extends FilterInputStream {
+    private boolean encode;                     // Encoding or decoding
+    private int position;                       // Current position in the buffer
+    private byte[] buffer;                      // Buffer holding converted data
+    private int bufferLength;                   // Length of buffer (3 or 4)
+    private int numSigBytes;                    // Meaningful bytes in the buffer
+    private int lineLength;
+    private boolean breakLines;                 // Break lines at MAX_LINE_LENGTH characters
+    private int options;                        // Record options
+    private byte[] decodabet;                   // Local copy avoids method calls
+
+    /**
+     * Constructs a {@link Base64InputStream} in DECODE mode.
+     *
+     * @param in the <tt>InputStream</tt> from which to read data.
+     * @since 1.3
+     */
+    public Base64InputStream(InputStream in) {
+      this(in, DECODE);
+    } // end constructor
+
+    /**
+     * Constructs a {@link Base64.Base64InputStream} in either ENCODE or DECODE mode.
+     * <p>
+     * Valid options:
+     *
+     * <pre>
+     *   ENCODE or DECODE: Encode or Decode as data is read.
+     *   DONT_BREAK_LINES: don't break lines at 76 characters
+     *     (only meaningful when encoding)
+     *     &lt;i&gt;Note: Technically, this makes your encoding non-compliant.&lt;/i&gt;
+     * </pre>
+     *
+     * <p>
+     * Example: <code>new Base64.Base64InputStream( in, Base64.DECODE )</code>
+     *
+     *
+     * @param in the <tt>InputStream</tt> from which to read data.
+     * @param options Specified options
+     * @see Base64#ENCODE
+     * @see Base64#DECODE
+     * @see Base64#DONT_BREAK_LINES
+     * @since 2.0
+     */
+    public Base64InputStream(InputStream in, int options) {
+      super(in);
+      this.breakLines = (options & DONT_BREAK_LINES) != DONT_BREAK_LINES;
+      this.encode = (options & ENCODE) == ENCODE;
+      this.bufferLength = encode ? 4 : 3;
+      this.buffer = new byte[bufferLength];
+      this.position = -1;
+      this.lineLength = 0;
+      this.options = options; // Record for later, mostly to determine which
+                              // alphabet to use
+      this.decodabet = getDecodabet(options);
+    } // end constructor
+
+    /**
+     * Reads enough of the input stream to convert to/from Base64 and returns
+     * the next byte.
+     *
+     * @return next byte
+     * @since 1.3
+     */
+    @Override
+    public int read() throws IOException {
+      // Do we need to get data?
+      if (position < 0) {
+        if (encode) {
+          byte[] b3 = new byte[3];
+          int numBinaryBytes = 0;
+          for (int i = 0; i < 3; i++) {
+            try {
+              int b = in.read();
+
+              // If end of stream, b is -1.
+              if (b >= 0) {
+                b3[i] = (byte) b;
+                numBinaryBytes++;
+              } // end if: not end of stream
+
+            } catch (IOException e) {
+              // Only a problem if we got no data at all.
+              if (i == 0)
+                throw e;
+
+            } // end catch
+          } // end for: each needed input byte
+
+          if (numBinaryBytes > 0) {
+            encode3to4(b3, 0, numBinaryBytes, buffer, 0, options);
+            position = 0;
+            numSigBytes = 4;
+
+          } else {
+            return -1;
+          } // end else
+
+        } else {
+          byte[] b4 = new byte[4];
+          int i;
+          for (i = 0; i < 4; i++) {
+            // Read four "meaningful" bytes:
+            int b;
+            do {
+              b = in.read();
+            } while (b >= 0 && decodabet[b & 0x7f] <= WHITE_SPACE_ENC);
+
+            if (b < 0) {
+              break; // Reads a -1 if end of stream
+            }
+
+            b4[i] = (byte) b;
+          } // end for: each needed input byte
+
+          if (i == 4) {
+            numSigBytes = decode4to3(b4, 0, buffer, 0, options);
+            position = 0;
+
+          } else if (i == 0) {
+            return -1;
+
+          } else {
+            // Must have broken out from above.
+            throw new IOException("Improperly padded Base64 input.");
+          } // end
+        } // end else: decode
+      } // end else: get data
+
+      // Got data?
+      if (position >= 0) {
+        // End of relevant data?
+        if ( /* !encode && */position >= numSigBytes) {
+          return -1;
+        }
+
+        if (encode && breakLines && lineLength >= MAX_LINE_LENGTH) {
+          lineLength = 0;
+          return '\n';
+
+        }
+        lineLength++;                   // This isn't important when decoding
+                                        // but throwing an extra "if" seems
+                                        // just as wasteful.
+
+        int b = buffer[position++];
+
+        if (position >= bufferLength)
+          position = -1;
+
+        return b & 0xFF;                // This is how you "cast" a byte that's
+                                        // intended to be unsigned.
+
+      }
+
+      // When JDK1.4 is more accepted, use an assertion here.
+      throw new IOException("Error in Base64 code reading stream.");
+
+    } // end read
+
+    /**
+     * Calls {@link #read()} repeatedly until the end of stream is reached or
+     * <var>len</var> bytes are read. Returns number of bytes read into array
+     * or -1 if end of stream is encountered.
+     *
+     * @param dest array to hold values
+     * @param off offset for array
+     * @param len max number of bytes to read into array
+     * @return bytes read into array or -1 if end of stream is encountered.
+     * @since 1.3
+     */
+    @Override
+    public int read(byte[] dest, int off, int len) throws IOException {
+      int i;
+      int b;
+      for (i = 0; i < len; i++) {
+        b = read();
+        if (b >= 0) {
+          dest[off + i] = (byte) b;
+        } else if (i == 0) {
+          return -1;
+        } else {
+          break; // Out of 'for' loop
+        }
+      } // end for: each byte read
+      return i;
+    } // end read
+
+  } // end inner class InputStream
+
+  /* ******** I N N E R   C L A S S   O U T P U T S T R E A M ******** */
+
+  /**
+   * A {@link Base64.Base64OutputStream} will write data to another
+   * <tt>OutputStream</tt>, given in the constructor, and
+   * encode/decode to/from Base64 notation on the fly.
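+   * <p>
+   * A minimal encode-on-write sketch (illustrative; the output path and the
+   * <code>rawBytes</code> payload are hypothetical):
+   * <pre>
+   * Base64.Base64OutputStream b64out = new Base64.Base64OutputStream(
+   *     new FileOutputStream("data.b64"), Base64.ENCODE);
+   * b64out.write(rawBytes);   // buffered and written out as Base64
+   * b64out.close();           // flushes the final, padded quartet
+   * </pre>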
+   *
+   * @see Base64
+   * @since 1.3
+   */
+  public static class Base64OutputStream extends FilterOutputStream {
+    private boolean encode;
+    private int position;
+    private byte[] buffer;
+    private int bufferLength;
+    private int lineLength;
+    private boolean breakLines;
+    private byte[] b4;                          // Scratch used in a few places
+    private boolean suspendEncoding;
+    private int options;                        // Record for later
+    private byte[] decodabet;                   // Local copy avoids method calls
+
+    /**
+     * Constructs a {@link Base64OutputStream} in ENCODE mode.
+     *
+     * @param out the <tt>OutputStream</tt> to which data will be written.
+     * @since 1.3
+     */
+    public Base64OutputStream(OutputStream out) {
+      this(out, ENCODE);
+    } // end constructor
+
+    /**
+     * Constructs a {@link Base64OutputStream} in either ENCODE or DECODE mode.
+     * <p>
+     * Valid options:
+     *
+     * <ul>
+     *   <li>ENCODE or DECODE: Encode or Decode as data is read.</li>
+     *   <li>DONT_BREAK_LINES: don't break lines at 76 characters (only
+     *     meaningful when encoding) <i>Note: Technically, this makes your
+     *     encoding non-compliant.</i></li>
+     * </ul>
+     *
+     * <p>
+     * Example: <code>new Base64.Base64OutputStream( out, Base64.ENCODE )</code>
+     *
+     * @param out the <tt>OutputStream</tt> to which data will be written.
+     * @param options Specified options.
+     * @see Base64#ENCODE
+     * @see Base64#DECODE
+     * @see Base64#DONT_BREAK_LINES
+     * @since 1.3
+     */
+    public Base64OutputStream(OutputStream out, int options) {
+      super(out);
+      this.breakLines = (options & DONT_BREAK_LINES) != DONT_BREAK_LINES;
+      this.encode = (options & ENCODE) == ENCODE;
+      this.bufferLength = encode ? 3 : 4;
+      this.buffer = new byte[bufferLength];
+      this.position = 0;
+      this.lineLength = 0;
+      this.suspendEncoding = false;
+      this.b4 = new byte[4];
+      this.options = options;
+      this.decodabet = getDecodabet(options);
+    } // end constructor
+
+    /**
+     * Writes the byte to the output stream after converting to/from Base64
+     * notation. When encoding, bytes are buffered three at a time before the
+     * output stream actually gets a write() call. When decoding, bytes are
+     * buffered four at a time.
+     *
+     * @param theByte the byte to write
+     * @since 1.3
+     */
+    @Override
+    public void write(int theByte) throws IOException {
+      // Encoding suspended?
+      if (suspendEncoding) {
+        super.out.write(theByte);
+        return;
+      } // end if: suspended
+
+      // Encode?
+      if (encode) {
+        buffer[position++] = (byte) theByte;
+        if (position >= bufferLength) {                 // Enough to encode.
+          out.write(encode3to4(b4, buffer, bufferLength, options));
+          lineLength += 4;
+          if (breakLines && lineLength >= MAX_LINE_LENGTH) {
+            out.write(NEW_LINE);
+            lineLength = 0;
+          } // end if: end of line
+
+          position = 0;
+        } // end if: enough to output
+
+      } else {
+        // Meaningful Base64 character?
+        if (decodabet[theByte & 0x7f] > WHITE_SPACE_ENC) {
+          buffer[position++] = (byte) theByte;
+          if (position >= bufferLength) {               // Enough to output.
+            int len = decode4to3(buffer, 0, b4, 0, options);
+            out.write(b4, 0, len);
+            position = 0;
+          } // end if: enough to output
+
+        } else if (decodabet[theByte & 0x7f] != WHITE_SPACE_ENC) {
+          throw new IOException("Invalid character in Base64 data.");
+        } // end else: not white space either
+      } // end else: decoding
+    } // end write
+
+    /**
+     * Calls {@link #write(int)} repeatedly until <var>len</var> bytes are
+     * written.
+     *
+     * @param theBytes array from which to read bytes
+     * @param off offset for array
+     * @param len max number of bytes to read into array
+     * @since 1.3
+     */
+    @Override
+    public void write(byte[] theBytes, int off, int len) throws IOException {
+      // Encoding suspended?
+      if (suspendEncoding) {
+        super.out.write(theBytes, off, len);
+        return;
+      } // end if: suspended
+
+      for (int i = 0; i < len; i++) {
+        write(theBytes[off + i]);
+      } // end for: each byte written
+
+    } // end write
+
+    /**
+     * Method added by PHIL. [Thanks, PHIL. -Rob] This pads the buffer without
+     * closing the stream.
+     *
+     * @throws IOException e
+     */
+    public void flushBase64() throws IOException {
+      if (position > 0) {
+        if (encode) {
+          out.write(encode3to4(b4, buffer, position, options));
+          position = 0;
+
+        } else {
+          throw new IOException("Base64 input not properly padded.");
+        } // end else: decoding
+      } // end if: buffer partially full
+
+    } // end flush
+
+    /**
+     * Flushes any remaining buffered data and closes the stream (the
+     * superclass close both flushes and closes the wrapped stream).
+     *
+     * @since 1.3
+     */
+    @Override
+    public void close() throws IOException {
+      // 1. Ensure that pending characters are written
+      flushBase64();
+
+      // 2. Actually close the stream
+      // Base class both flushes and closes.
+      super.close();
+
+      buffer = null;
+      out = null;
+    } // end close
+
+    /**
+     * Suspends encoding of the stream. May be helpful if you need to embed a
+     * piece of base64-encoded data in a stream.
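+     * <p>
+     * A minimal sketch (illustrative; the stream and byte-array names are
+     * hypothetical):
+     * <pre>
+     * b64out.suspendEncoding();
+     * b64out.write(headerBytes);    // passed through to the raw stream
+     * b64out.resumeEncoding();
+     * b64out.write(payloadBytes);   // Base64-encoded from here on
+     * </pre>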
+     *
+     * @throws IOException e
+     * @since 1.5.1
+     */
+    public void suspendEncoding() throws IOException {
+      flushBase64();
+      this.suspendEncoding = true;
+    } // end suspendEncoding
+
+    /**
+     * Resumes encoding of the stream. May be helpful if you need to embed a
+     * piece of base64-encoded data in a stream.
+     *
+     * @since 1.5.1
+     */
+    public void resumeEncoding() {
+      this.suspendEncoding = false;
+    } // end resumeEncoding
+
+  } // end inner class OutputStream
+
+} // end class Base64
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/BloomFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/BloomFilter.java
new file mode 100644
index 0000000..f100366
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/BloomFilter.java
@@ -0,0 +1,121 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.io.Writable;
+
+import java.nio.ByteBuffer;
+
+/**
+ * Defines the general behavior of a bloom filter.
+ * <p>
+ * The Bloom filter is a data structure that was introduced in 1970 and that has been adopted by
+ * the networking research community in the past decade thanks to the bandwidth efficiencies that it
+ * offers for the transmission of set membership information between networked hosts.  A sender encodes
+ * the information into a bit vector, the Bloom filter, that is more compact than a conventional
+ * representation. Computation and space costs for construction are linear in the number of elements.
+ * The receiver uses the filter to test whether various elements are members of the set. Though the
+ * filter will occasionally return a false positive, it will never return a false negative. When creating
+ * the filter, the sender can choose its desired point in a trade-off between the false positive rate and the size.
+ *
+ * <p>
+ * Originally created by
+ * <a href="http://www.one-lab.org">European Commission One-Lab Project 034819</a>.
+ *
+ * <p>
+ * It must be extended in order to define the real behavior.
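+ * <p>
+ * A minimal write-side sketch (illustrative; the parameter values are arbitrary,
+ * {@link ByteBloomFilter} is the concrete implementation in this package, and
+ * Hash.MURMUR_HASH is assumed to be one of the supported hash types):
+ * <pre>
+ * BloomFilter bf = new ByteBloomFilter(1000, 0.01f, Hash.MURMUR_HASH, 0);
+ * bf.allocBloom();
+ * bf.add(Bytes.toBytes("row-1"));
+ * </pre>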
+ */
+public interface BloomFilter {
+  /**
+   * Allocate memory for the bloom filter data.  Note that bloom data isn't
+   * allocated by default because it can grow large & reads would be better
+   * managed by the LRU cache.
+   */
+  void allocBloom();
+
+  /**
+   * Add the specified binary to the bloom filter.
+   *
+   * @param buf data to be added to the bloom
+   */
+  void add(byte []buf);
+
+  /**
+   * Add the specified binary to the bloom filter.
+   *
+   * @param buf data to be added to the bloom
+   * @param offset offset into the data to be added
+   * @param len length of the data to be added
+   */
+  void add(byte []buf, int offset, int len);
+
+  /**
+   * Check if the specified key is contained in the bloom filter.
+   *
+   * @param buf data to check for existence of
+   * @param bloom bloom filter data to search
+   * @return true if matched by bloom, false if not
+   */
+  boolean contains(byte [] buf, ByteBuffer bloom);
+
+  /**
+   * Check if the specified key is contained in the bloom filter.
+   *
+   * @param buf data to check for existence of
+   * @param offset offset into the data
+   * @param length length of the data
+   * @param bloom bloom filter data to search
+   * @return true if matched by bloom, false if not
+   */
+  boolean contains(byte [] buf, int offset, int length, ByteBuffer bloom);
+
+  /**
+   * @return The number of keys added to the bloom
+   */
+  int getKeyCount();
+
+  /**
+   * @return The max number of keys that can be inserted
+   *         to maintain the desired error rate
+   */
+  public int getMaxKeys();
+
+  /**
+   * @return Size of the bloom, in bytes
+   */
+  public int getByteSize();
+
+  /**
+   * Compact the bloom before writing metadata & data to disk
+   */
+  void compactBloom();
+
+  /**
+   * Get a writable interface into bloom filter meta data.
+   * @return writable class
+   */
+  Writable getMetaWriter();
+
+  /**
+   * Get a writable interface into bloom filter data (actual bloom).
+   * @return writable class
+   */
+  Writable getDataWriter();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/ByteBloomFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/ByteBloomFilter.java
new file mode 100644
index 0000000..7682834
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/ByteBloomFilter.java
@@ -0,0 +1,390 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+/**
+ * Implements a <i>Bloom filter</i>, as defined by Bloom in 1970.
+ * <p>
+ * The Bloom filter is a data structure that was introduced in 1970 and that has been adopted by
+ * the networking research community in the past decade thanks to the bandwidth efficiencies that it
+ * offers for the transmission of set membership information between networked hosts.  A sender encodes
+ * the information into a bit vector, the Bloom filter, that is more compact than a conventional
+ * representation. Computation and space costs for construction are linear in the number of elements.
+ * The receiver uses the filter to test whether various elements are members of the set. Though the
+ * filter will occasionally return a false positive, it will never return a false negative. When creating
+ * the filter, the sender can choose its desired point in a trade-off between the false positive rate and the size.
+ *
+ * <p>
+ * Originally inspired by
+ * <a href="http://www.one-lab.org">European Commission One-Lab Project 034819</a>.
+ *
+ * @see BloomFilter The general behavior of a filter
+ *
+ * @see <a href="http://portal.acm.org/citation.cfm?id=362692&dl=ACM&coll=portal">Space/Time Trade-Offs in Hash Coding with Allowable Errors</a>
+ */
+public class ByteBloomFilter implements BloomFilter {
+  /** Current file format version */
+  public static final int VERSION = 1;
+
+  /** Bytes (B) in the array */
+  protected long byteSize;
+  /** Number of hash functions */
+  protected final int hashCount;
+  /** Hash type */
+  protected final int hashType;
+  /** Hash Function */
+  protected final Hash hash;
+  /** Keys currently in the bloom */
+  protected int keyCount;
+  /** Max Keys expected for the bloom */
+  protected int maxKeys;
+  /** Bloom bits */
+  protected ByteBuffer bloom;
+
+  /** Bit-value lookup array to prevent doing the same work over and over */
+  private static final byte [] bitvals = {
+    (byte) 0x01,
+    (byte) 0x02,
+    (byte) 0x04,
+    (byte) 0x08,
+    (byte) 0x10,
+    (byte) 0x20,
+    (byte) 0x40,
+    (byte) 0x80
+    };
+
+  /**
+   * Loads bloom filter meta data from file input.
+   * @param meta stored bloom meta data
+   * @throws IllegalArgumentException meta data is invalid
+   */
+  public ByteBloomFilter(ByteBuffer meta)
+  throws IllegalArgumentException {
+    int version = meta.getInt();
+    if (version != VERSION) throw new IllegalArgumentException("Bad version");
+
+    this.byteSize = meta.getInt();
+    this.hashCount = meta.getInt();
+    this.hashType = meta.getInt();
+    this.keyCount = meta.getInt();
+    this.maxKeys = this.keyCount;
+
+    this.hash = Hash.getInstance(this.hashType);
+    sanityCheck();
+  }
+
+  /**
+   * Determines & initializes bloom filter meta data from user config.  Call
+   * {@link #allocBloom()} to allocate bloom filter data.
+   * @param maxKeys Maximum expected number of keys that will be stored in this bloom
+   * @param errorRate Desired false positive error rate.  Lower rate = more storage required
+   * @param hashType Type of hash function to use
+   * @param foldFactor When finished adding entries, you may be able to 'fold'
+   * this bloom to save space.  Tradeoff potentially excess bytes in bloom for
+   * ability to fold if maxKeys is exponentially greater than keyCount.
+   * @throws IllegalArgumentException
+   */
+  public ByteBloomFilter(int maxKeys, float errorRate, int hashType, int foldFactor)
+      throws IllegalArgumentException {
+    /*
+     * Bloom filters are very sensitive to the number of elements inserted
+     * into them. For HBase, the number of entries depends on the size of the
+     * data stored in the column. Currently the default region size is 256MB,
+     * so entry count ~= 256MB / (average value size for column).  Despite
+     * this rule of thumb, there is no efficient way to calculate the entry
+     * count after compactions.  Therefore, it is often easier to use a
+     * dynamic bloom filter that will add extra space instead of allowing the
+     * error rate to grow.
+     *
+     * ( http://www.eecs.harvard.edu/~michaelm/NEWWORK/postscripts/BloomFilterSurvey.pdf )
+     *
+     * m denotes the number of bits in the Bloom filter (bitSize)
+     * n denotes the number of elements inserted into the Bloom filter (maxKeys)
+     * k represents the number of hash functions used (nbHash)
+     * e represents the desired false positive rate for the bloom (err)
+     *
+     * If we fix the error rate (e) and know the number of entries, then
+     * the optimal bloom size m = -(n * ln(err)) / (ln(2)^2)
+     *                         ~= n * ln(err) / ln(0.6185)
+     *
+     * The probability of false positives is minimized when k = m/n ln(2).
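+     *
+     * Worked example (illustrative numbers, not from the original comment):
+     * for n = 1,000,000 keys and err = 0.01,
+     *   m = -n * ln(0.01) / (ln(2)^2) ~= 9,585,059 bits ~= 1.2 MB of bloom data
+     *   k = (m / n) * ln(2)           ~= 6.6, rounded up to 7 hash functions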
+     */
+    long bitSize = (long)Math.ceil(maxKeys * (Math.log(errorRate) / Math.log(0.6185)));
+    int functionCount = (int)Math.ceil(Math.log(2) * (bitSize / maxKeys));
+
+    // increase byteSize so folding is possible
+    long byteSize = (bitSize + 7) / 8;
+    int mask = (1 << foldFactor) - 1;
+    if ( (mask & byteSize) != 0) {
+      byteSize >>= foldFactor;
+      ++byteSize;
+      byteSize <<= foldFactor;
+    }
+
+    this.byteSize = byteSize;
+    this.hashCount = functionCount;
+    this.hashType = hashType;
+    this.keyCount = 0;
+    this.maxKeys = maxKeys;
+
+    this.hash = Hash.getInstance(hashType);
+    sanityCheck();
+  }
+
+  @Override
+  public void allocBloom() {
+    if (this.bloom != null) {
+      throw new IllegalArgumentException("can only create bloom once.");
+    }
+    this.bloom = ByteBuffer.allocate((int)this.byteSize);
+    assert this.bloom.hasArray();
+  }
+
+  void sanityCheck() throws IllegalArgumentException {
+    if(0 >= this.byteSize || this.byteSize > Integer.MAX_VALUE) {
+      throw new IllegalArgumentException("Invalid byteSize: " + this.byteSize);
+    }
+
+    if(this.hashCount <= 0) {
+      throw new IllegalArgumentException("Hash function count must be > 0");
+    }
+
+    if (this.hash == null) {
+      throw new IllegalArgumentException("hashType must be known");
+    }
+
+    if (this.keyCount < 0) {
+      throw new IllegalArgumentException("must have positive keyCount");
+    }
+  }
+
+  void bloomCheck(ByteBuffer bloom)  throws IllegalArgumentException {
+    if (this.byteSize != bloom.limit()) {
+      throw new IllegalArgumentException(
+          "Configured bloom length should match actual length");
+    }
+  }
+
+  @Override
+  public void add(byte [] buf) {
+    add(buf, 0, buf.length);
+  }
+
+  @Override
+  public void add(byte [] buf, int offset, int len) {
+    /*
+     * For faster hashing, use combinatorial generation
+     * http://www.eecs.harvard.edu/~kirsch/pubs/bbbf/esa06.pdf
+     */
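+    // Combinatorial generation: every probe position is derived from the two
+    // base hashes as g_i(x) = hash1(x) + i * hash2(x), taken modulo the number
+    // of bits in the bloom (byteSize * 8), as computed in the loop below.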
+    int hash1 = this.hash.hash(buf, offset, len, 0);
+    int hash2 = this.hash.hash(buf, offset, len, hash1);
+
+    for (int i = 0; i < this.hashCount; i++) {
+      long hashLoc = Math.abs((hash1 + i * hash2) % (this.byteSize * 8));
+      set(hashLoc);
+    }
+
+    ++this.keyCount;
+  }
+
+  /**
+   * Should only be used in tests when writing a bloom filter.
+   */
+  boolean contains(byte [] buf) {
+    return contains(buf, 0, buf.length, this.bloom);
+  }
+
+  /**
+   * Should only be used in tests when writing a bloom filter.
+   */
+  boolean contains(byte [] buf, int offset, int length) {
+    return contains(buf, offset, length, this.bloom);
+  }
+
+  @Override
+  public boolean contains(byte [] buf, ByteBuffer theBloom) {
+    return contains(buf, 0, buf.length, theBloom);
+  }
+
+  @Override
+  public boolean contains(byte [] buf, int offset, int length,
+      ByteBuffer theBloom) {
+
+    if(theBloom.limit() != this.byteSize) {
+      throw new IllegalArgumentException("Bloom does not match expected size");
+    }
+
+    int hash1 = this.hash.hash(buf, offset, length, 0);
+    int hash2 = this.hash.hash(buf, offset, length, hash1);
+
+    for (int i = 0; i < this.hashCount; i++) {
+      long hashLoc = Math.abs((hash1 + i * hash2) % (this.byteSize * 8));
+      if (!get(hashLoc, theBloom) ) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  //---------------------------------------------------------------------------
+  /** Private helpers */
+
+  /**
+   * Set the bit at the specified index to 1.
+   *
+   * @param pos index of bit
+   */
+  void set(long pos) {
+    int bytePos = (int)(pos / 8);
+    int bitPos = (int)(pos % 8);
+    byte curByte = bloom.get(bytePos);
+    curByte |= bitvals[bitPos];
+    bloom.put(bytePos, curByte);
+  }
+
+  /**
+   * Check if bit at specified index is 1.
+   *
+   * @param pos index of bit
+   * @return true if bit at specified index is 1, false if 0.
+   */
+  static boolean get(long pos, ByteBuffer theBloom) {
+    int bytePos = (int)(pos / 8);
+    int bitPos = (int)(pos % 8);
+    byte curByte = theBloom.get(bytePos);
+    curByte &= bitvals[bitPos];
+    return (curByte != 0);
+  }
+
+  @Override
+  public int getKeyCount() {
+    return this.keyCount;
+  }
+
+  @Override
+  public int getMaxKeys() {
+    return this.maxKeys;
+  }
+
+  @Override
+  public int getByteSize() {
+    return (int)this.byteSize;
+  }
+
+  @Override
+  public void compactBloom() {
+    // see if the actual size is exponentially smaller than expected.
+    if (this.keyCount > 0 && this.bloom.hasArray()) {
+      int pieces = 1;
+      int newByteSize = (int)this.byteSize;
+      int newMaxKeys = this.maxKeys;
+
+      // while exponentially smaller & folding is lossless
+      while ( (newByteSize & 1) == 0 && newMaxKeys > (this.keyCount<<1) ) {
+        pieces <<= 1;
+        newByteSize >>= 1;
+        newMaxKeys >>= 1;
+      }
+
+      // if we should fold these into pieces
+      if (pieces > 1) {
+        byte[] array = this.bloom.array();
+        int start = this.bloom.arrayOffset();
+        int end = start + newByteSize;
+        int off = end;
+        for(int p = 1; p < pieces; ++p) {
+          for(int pos = start; pos < end; ++pos) {
+            array[pos] |= array[off++];
+          }
+        }
+        // folding done, only use a subset of this array
+        this.bloom.rewind();
+        this.bloom.limit(newByteSize);
+        this.bloom = this.bloom.slice();
+        this.byteSize = newByteSize;
+        this.maxKeys = newMaxKeys;
+      }
+    }
+  }
+
+
+  //---------------------------------------------------------------------------
+
+  /**
+   * Writes just the bloom filter to the output array
+   * @param out OutputStream to place bloom
+   * @throws IOException Error writing bloom array
+   */
+  public void writeBloom(final DataOutput out) throws IOException {
+    if (!this.bloom.hasArray()) {
+      throw new IOException("Only writes ByteBuffer with underlying array.");
+    }
+    out.write(bloom.array(), bloom.arrayOffset(), bloom.limit());
+  }
+
+  @Override
+  public Writable getMetaWriter() {
+    return new MetaWriter();
+  }
+
+  @Override
+  public Writable getDataWriter() {
+    return new DataWriter();
+  }
+
+  private class MetaWriter implements Writable {
+    protected MetaWriter() {}
+    @Override
+    public void readFields(DataInput arg0) throws IOException {
+      throw new IOException("Can't read with this class.");
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      out.writeInt(VERSION);
+      out.writeInt((int)byteSize);
+      out.writeInt(hashCount);
+      out.writeInt(hashType);
+      out.writeInt(keyCount);
+    }
+  }
+
+  private class DataWriter implements Writable {
+    protected DataWriter() {}
+    @Override
+    public void readFields(DataInput arg0) throws IOException {
+      throw new IOException("Can't read with this class.");
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      writeBloom(out);
+    }
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Bytes.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
new file mode 100644
index 0000000..f2cc305
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
@@ -0,0 +1,1258 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.io.RawComparator;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.io.WritableUtils;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.math.BigInteger;
+import java.nio.ByteBuffer;
+import java.util.Comparator;
+import java.util.Iterator;
+
+/**
+ * Utility class that handles byte arrays, conversions to/from other types,
+ * comparisons, hash code generation, manufacturing keys for HashMaps or
+ * HashSets, etc.
+ */
+public class Bytes {
+
+  private static final Log LOG = LogFactory.getLog(Bytes.class);
+
+  /**
+   * Size of boolean in bytes
+   */
+  public static final int SIZEOF_BOOLEAN = Byte.SIZE / Byte.SIZE;
+
+  /**
+   * Size of byte in bytes
+   */
+  public static final int SIZEOF_BYTE = SIZEOF_BOOLEAN;
+
+  /**
+   * Size of char in bytes
+   */
+  public static final int SIZEOF_CHAR = Character.SIZE / Byte.SIZE;
+
+  /**
+   * Size of double in bytes
+   */
+  public static final int SIZEOF_DOUBLE = Double.SIZE / Byte.SIZE;
+
+  /**
+   * Size of float in bytes
+   */
+  public static final int SIZEOF_FLOAT = Float.SIZE / Byte.SIZE;
+
+  /**
+   * Size of int in bytes
+   */
+  public static final int SIZEOF_INT = Integer.SIZE / Byte.SIZE;
+
+  /**
+   * Size of long in bytes
+   */
+  public static final int SIZEOF_LONG = Long.SIZE / Byte.SIZE;
+
+  /**
+   * Size of short in bytes
+   */
+  public static final int SIZEOF_SHORT = Short.SIZE / Byte.SIZE;
+
+
+  /**
+   * Estimate of size cost to pay beyond payload in jvm for instance of byte [].
+   * Estimate based on study of jhat and jprofiler numbers.
+   */
+  // JHat says BU is 56 bytes.
+  // SizeOf which uses java.lang.instrument says 24 bytes. (3 longs?)
+  public static final int ESTIMATED_HEAP_TAX = 16;
+
+  /**
+   * Byte array comparator class.
+   */
+  public static class ByteArrayComparator implements RawComparator<byte []> {
+    /**
+     * Constructor
+     */
+    public ByteArrayComparator() {
+      super();
+    }
+    public int compare(byte [] left, byte [] right) {
+      return compareTo(left, right);
+    }
+    public int compare(byte [] b1, int s1, int l1, byte [] b2, int s2, int l2) {
+      return compareTo(b1, s1, l1, b2, s2, l2);
+    }
+  }
+
+  /**
+   * Pass this to TreeMaps where byte [] are keys.
+   */
+  public static Comparator<byte []> BYTES_COMPARATOR =
+    new ByteArrayComparator();
+
+  /**
+   * Use comparing byte arrays, byte-by-byte
+   */
+  public static RawComparator<byte []> BYTES_RAWCOMPARATOR =
+    new ByteArrayComparator();
+
+  /**
+   * Read byte-array written with a WritableUtils.vint prefix.
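+   * <p>
+   * Round-trip sketch (illustrative; the <code>DataOutput</code> and
+   * <code>DataInput</code> instances are hypothetical):
+   * <pre>
+   * Bytes.writeByteArray(dataOutput, Bytes.toBytes("value"));
+   * byte[] back = Bytes.readByteArray(dataInput);
+   * </pre>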
+   * @param in Input to read from.
+   * @return byte array read off <code>in</code>
+   * @throws IOException e
+   */
+  public static byte [] readByteArray(final DataInput in)
+  throws IOException {
+    int len = WritableUtils.readVInt(in);
+    if (len < 0) {
+      throw new NegativeArraySizeException(Integer.toString(len));
+    }
+    byte [] result = new byte[len];
+    in.readFully(result, 0, len);
+    return result;
+  }
+
+  /**
+   * Read byte-array written with a WritableUtils.vint prefix.
+   * IOException is converted to a RuntimeException.
+   * @param in Input to read from.
+   * @return byte array read off <code>in</code>
+   */
+  public static byte [] readByteArrayThrowsRuntime(final DataInput in) {
+    try {
+      return readByteArray(in);
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * Write byte-array with a WritableUtils.vint prefix.
+   * @param out output stream to be written to
+   * @param b array to write
+   * @throws IOException e
+   */
+  public static void writeByteArray(final DataOutput out, final byte [] b)
+  throws IOException {
+    if(b == null) {
+      WritableUtils.writeVInt(out, 0);
+    } else {
+      writeByteArray(out, b, 0, b.length);
+    }
+  }
+
+  /**
+   * Write byte-array to out with a vint length prefix.
+   * @param out output stream
+   * @param b array
+   * @param offset offset into array
+   * @param length length past offset
+   * @throws IOException e
+   */
+  public static void writeByteArray(final DataOutput out, final byte [] b,
+      final int offset, final int length)
+  throws IOException {
+    WritableUtils.writeVInt(out, length);
+    out.write(b, offset, length);
+  }
+
+  /**
+   * Write byte-array from src to tgt with a vint length prefix.
+   * @param tgt target array
+   * @param tgtOffset offset into target array
+   * @param src source array
+   * @param srcOffset source offset
+   * @param srcLength source length
+   * @return New offset in src array.
+   */
+  public static int writeByteArray(final byte [] tgt, final int tgtOffset,
+      final byte [] src, final int srcOffset, final int srcLength) {
+    byte [] vint = vintToBytes(srcLength);
+    System.arraycopy(vint, 0, tgt, tgtOffset, vint.length);
+    int offset = tgtOffset + vint.length;
+    System.arraycopy(src, srcOffset, tgt, offset, srcLength);
+    return offset + srcLength;
+  }
+
+  /**
+   * Put bytes at the specified byte array position.
+   * @param tgtBytes the byte array
+   * @param tgtOffset position in the array
+   * @param srcBytes array to write out
+   * @param srcOffset source offset
+   * @param srcLength source length
+   * @return incremented offset
+   */
+  public static int putBytes(byte[] tgtBytes, int tgtOffset, byte[] srcBytes,
+      int srcOffset, int srcLength) {
+    System.arraycopy(srcBytes, srcOffset, tgtBytes, tgtOffset, srcLength);
+    return tgtOffset + srcLength;
+  }
+
+  /**
+   * Write a single byte out to the specified byte array position.
+   * @param bytes the byte array
+   * @param offset position in the array
+   * @param b byte to write out
+   * @return incremented offset
+   */
+  public static int putByte(byte[] bytes, int offset, byte b) {
+    bytes[offset] = b;
+    return offset + 1;
+  }
+
+  /**
+   * Returns a new byte array, copied from the passed ByteBuffer.
+   * @param bb A ByteBuffer
+   * @return the byte array
+   */
+  public static byte[] toBytes(ByteBuffer bb) {
+    int length = bb.limit();
+    byte [] result = new byte[length];
+    System.arraycopy(bb.array(), bb.arrayOffset(), result, 0, length);
+    return result;
+  }
+
+  /**
+   * @param b Presumed UTF-8 encoded byte array.
+   * @return String made from <code>b</code>
+   */
+  public static String toString(final byte [] b) {
+    if (b == null) {
+      return null;
+    }
+    return toString(b, 0, b.length);
+  }
+
+  /**
+   * Joins two byte arrays together using a separator.
+   * @param b1 The first byte array.
+   * @param sep The separator to use.
+   * @param b2 The second byte array.
+   */
+  public static String toString(final byte [] b1,
+                                String sep,
+                                final byte [] b2) {
+    return toString(b1, 0, b1.length) + sep + toString(b2, 0, b2.length);
+  }
+
+  /**
+   * This method will convert UTF-8 encoded bytes into a string. If
+   * an UnsupportedEncodingException occurs, this method will eat it
+   * and return null instead.
+   *
+   * @param b Presumed UTF-8 encoded byte array.
+   * @param off offset into array
+   * @param len length of utf-8 sequence
+   * @return String made from <code>b</code> or null
+   */
+  public static String toString(final byte [] b, int off, int len) {
+    if (b == null) {
+      return null;
+    }
+    if (len == 0) {
+      return "";
+    }
+    try {
+      return new String(b, off, len, HConstants.UTF8_ENCODING);
+    } catch (UnsupportedEncodingException e) {
+      LOG.error("UTF-8 not supported?", e);
+      return null;
+    }
+  }
+
+  /**
+   * Write a printable representation of a byte array.
+   *
+   * @param b byte array
+   * @return string
+   * @see #toStringBinary(byte[], int, int)
+   */
+  public static String toStringBinary(final byte [] b) {
+    return toStringBinary(b, 0, b.length);
+  }
+
+  /**
+   * Write a printable representation of a byte array. Non-printable
+   * characters are hex escaped in the format \\x%02X, e.g.
+   * \x00, \x05, etc.
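+   * <p>
+   * Illustrative example (not part of the original docs):
+   * <pre>
+   * // Printable characters pass through; others are hex escaped:
+   * Bytes.toStringBinary(new byte[] {'h', 'i', 0x01, (byte) 0xFF});   // "hi\x01\xFF"
+   * </pre>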
+   *
+   * @param b array to write out
+   * @param off offset to start at
+   * @param len length to write
+   * @return string output
+   */
+  public static String toStringBinary(final byte [] b, int off, int len) {
+    StringBuilder result = new StringBuilder();
+    try {
+      String first = new String(b, off, len, "ISO-8859-1");
+      for (int i = 0; i < first.length() ; ++i ) {
+        int ch = first.charAt(i) & 0xFF;
+        if ( (ch >= '0' && ch <= '9')
+            || (ch >= 'A' && ch <= 'Z')
+            || (ch >= 'a' && ch <= 'z')
+            || " `~!@#$%^&*()-_=+[]{}\\|;:'\",.<>/?".indexOf(ch) >= 0 ) {
+          result.append(first.charAt(i));
+        } else {
+          result.append(String.format("\\x%02X", ch));
+        }
+      }
+    } catch (UnsupportedEncodingException e) {
+      LOG.error("ISO-8859-1 not supported?", e);
+    }
+    return result.toString();
+  }
+
+  private static boolean isHexDigit(char c) {
+    return
+        (c >= 'A' && c <= 'F') ||
+        (c >= '0' && c <= '9');
+  }
+
+  /**
+   * Takes an ASCII hex digit in the range A-F or 0-9 and returns
+   * the corresponding integer/ordinal value.
+   * @param ch  The hex digit.
+   * @return The converted hex value as a byte.
+   */
+  public static byte toBinaryFromHex(byte ch) {
+    if ( ch >= 'A' && ch <= 'F' )
+      return (byte) ((byte)10 + (byte) (ch - 'A'));
+    // else
+    return (byte) (ch - '0');
+  }
+
+  public static byte [] toBytesBinary(String in) {
+    // this may be bigger than we need, but let's be safe.
+    byte [] b = new byte[in.length()];
+    int size = 0;
+    for (int i = 0; i < in.length(); ++i) {
+      char ch = in.charAt(i);
+      if (ch == '\\') {
+        // begin hex escape:
+        char next = in.charAt(i+1);
+        if (next != 'x') {
+          // invalid escape sequence, ignore this one.
+          b[size++] = (byte)ch;
+          continue;
+        }
+        // ok, take next 2 hex digits.
+        char hd1 = in.charAt(i+2);
+        char hd2 = in.charAt(i+3);
+
+        // they need to be A-F0-9:
+        if (!isHexDigit(hd1) ||
+            !isHexDigit(hd2)) {
+          // bogus escape code, ignore:
+          continue;
+        }
+        // turn hex ASCII digit -> number
+        byte d = (byte) ((toBinaryFromHex((byte)hd1) << 4) + toBinaryFromHex((byte)hd2));
+
+        b[size++] = d;
+        i += 3; // skip 3
+      } else {
+        b[size++] = (byte) ch;
+      }
+    }
+    // resize:
+    byte [] b2 = new byte[size];
+    System.arraycopy(b, 0, b2, 0, size);
+    return b2;
+  }
+
+  /**
+   * Converts a string to a UTF-8 byte array.
+   * @param s string
+   * @return the byte array
+   */
+  public static byte[] toBytes(String s) {
+    try {
+      return s.getBytes(HConstants.UTF8_ENCODING);
+    } catch (UnsupportedEncodingException e) {
+      LOG.error("UTF-8 not supported?", e);
+      return null;
+    }
+  }
+
+  /**
+   * Convert a boolean to a byte array. True becomes -1
+   * and false becomes 0.
+   *
+   * @param b value
+   * @return <code>b</code> encoded in a byte array.
+   */
+  public static byte [] toBytes(final boolean b) {
+    return new byte[] { b ? (byte) -1 : (byte) 0 };
+  }
+
+  /**
+   * Reverses {@link #toBytes(boolean)}
+   * @param b array
+   * @return True or false.
+   */
+  public static boolean toBoolean(final byte [] b) {
+    if (b.length != 1) {
+      throw new IllegalArgumentException("Array has wrong size: " + b.length);
+    }
+    return b[0] != (byte) 0;
+  }
+
+  /**
+   * Convert a long value to a byte array using big-endian.
+   *
+   * @param val value to convert
+   * @return the byte array
+   */
+  public static byte[] toBytes(long val) {
+    byte [] b = new byte[8];
+    for (int i = 7; i > 0; i--) {
+      b[i] = (byte) val;
+      val >>>= 8;
+    }
+    b[0] = (byte) val;
+    return b;
+  }
+
+  /**
+   * Converts a byte array to a long value. Reverses
+   * {@link #toBytes(long)}
+   * @param bytes array
+   * @return the long value
+   */
+  public static long toLong(byte[] bytes) {
+    return toLong(bytes, 0, SIZEOF_LONG);
+  }
+
+  /**
+   * Converts a byte array to a long value. Assumes there will be
+   * {@link #SIZEOF_LONG} bytes available.
+   *
+   * @param bytes bytes
+   * @param offset offset
+   * @return the long value
+   */
+  public static long toLong(byte[] bytes, int offset) {
+    return toLong(bytes, offset, SIZEOF_LONG);
+  }
+
+  /**
+   * Converts a byte array to a long value.
+   *
+   * @param bytes array of bytes
+   * @param offset offset into array
+   * @param length length of data (must be {@link #SIZEOF_LONG})
+   * @return the long value
+   * @throws IllegalArgumentException if length is not {@link #SIZEOF_LONG} or
+   * if there's not enough room in the array at the offset indicated.
+   */
+  public static long toLong(byte[] bytes, int offset, final int length) {
+    if (length != SIZEOF_LONG || offset + length > bytes.length) {
+      throw explainWrongLengthOrOffset(bytes, offset, length, SIZEOF_LONG);
+    }
+    long l = 0;
+    for(int i = offset; i < offset + length; i++) {
+      l <<= 8;
+      l ^= bytes[i] & 0xFF;
+    }
+    return l;
+  }
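+
+  // Illustrative sketch (not part of the original file): longs are laid out
+  // big-endian, most significant byte first, so the two methods round-trip.
+  //   Bytes.toBytes(1L)   returns {0, 0, 0, 0, 0, 0, 0, 1}
+  //   Bytes.toBytes(258L) returns {0, 0, 0, 0, 0, 0, 1, 2}
+  //   Bytes.toLong(Bytes.toBytes(258L)) returns 258L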
+
+  private static IllegalArgumentException
+    explainWrongLengthOrOffset(final byte[] bytes,
+                               final int offset,
+                               final int length,
+                               final int expectedLength) {
+    String reason;
+    if (length != expectedLength) {
+      reason = "Wrong length: " + length + ", expected " + expectedLength;
+    } else {
+     reason = "offset (" + offset + ") + length (" + length + ") exceed the"
+        + " capacity of the array: " + bytes.length;
+    }
+    return new IllegalArgumentException(reason);
+  }
+
+  /**
+   * Put a long value out to the specified byte array position.
+   * @param bytes the byte array
+   * @param offset position in the array
+   * @param val long to write out
+   * @return incremented offset
+   * @throws IllegalArgumentException if the byte array given doesn't have
+   * enough room at the offset specified.
+   */
+  public static int putLong(byte[] bytes, int offset, long val) {
+    if (bytes.length - offset < SIZEOF_LONG) {
+      throw new IllegalArgumentException("Not enough room to put a long at"
+          + " offset " + offset + " in a " + bytes.length + " byte array");
+    }
+    for(int i = offset + 7; i > offset; i--) {
+      bytes[i] = (byte) val;
+      val >>>= 8;
+    }
+    bytes[offset] = (byte) val;
+    return offset + SIZEOF_LONG;
+  }
+
+  /**
+   * Presumes float encoded as IEEE 754 floating-point "single format"
+   * @param bytes byte array
+   * @return Float made from passed byte array.
+   */
+  public static float toFloat(byte [] bytes) {
+    return toFloat(bytes, 0);
+  }
+
+  /**
+   * Presumes float encoded as IEEE 754 floating-point "single format"
+   * @param bytes array to convert
+   * @param offset offset into array
+   * @return Float made from passed byte array.
+   */
+  public static float toFloat(byte [] bytes, int offset) {
+    return Float.intBitsToFloat(toInt(bytes, offset, SIZEOF_INT));
+  }
+
+  /**
+   * @param bytes byte array
+   * @param offset offset to write to
+   * @param f float value
+   * @return New offset in <code>bytes</code>
+   */
+  public static int putFloat(byte [] bytes, int offset, float f) {
+    return putInt(bytes, offset, Float.floatToRawIntBits(f));
+  }
+
+  /**
+   * @param f float value
+   * @return the float represented as byte []
+   */
+  public static byte [] toBytes(final float f) {
+    // Encode it as int
+    return Bytes.toBytes(Float.floatToRawIntBits(f));
+  }
+
+  /**
+   * @param bytes byte array
+   * @return Return double made from passed bytes.
+   */
+  public static double toDouble(final byte [] bytes) {
+    return toDouble(bytes, 0);
+  }
+
+  /**
+   * @param bytes byte array
+   * @param offset offset where double is
+   * @return Return double made from passed bytes.
+   */
+  public static double toDouble(final byte [] bytes, final int offset) {
+    return Double.longBitsToDouble(toLong(bytes, offset, SIZEOF_LONG));
+  }
+
+  /**
+   * @param bytes byte array
+   * @param offset offset to write to
+   * @param d value
+   * @return New offset into array <code>bytes</code>
+   */
+  public static int putDouble(byte [] bytes, int offset, double d) {
+    return putLong(bytes, offset, Double.doubleToLongBits(d));
+  }
+
+  /**
+   * Serialize a double as the IEEE 754 double format output. The resultant
+   * array will be 8 bytes long.
+   *
+   * @param d value
+   * @return the double represented as byte []
+   */
+  public static byte [] toBytes(final double d) {
+    // Encode it as a long
+    return Bytes.toBytes(Double.doubleToRawLongBits(d));
+  }
+
+  /**
+   * Convert an int value to a byte array
+   * @param val value
+   * @return the byte array
+   */
+  public static byte[] toBytes(int val) {
+    byte [] b = new byte[4];
+    for(int i = 3; i > 0; i--) {
+      b[i] = (byte) val;
+      val >>>= 8;
+    }
+    b[0] = (byte) val;
+    return b;
+  }
+
+  /**
+   * Converts a byte array to an int value
+   * @param bytes byte array
+   * @return the int value
+   */
+  public static int toInt(byte[] bytes) {
+    return toInt(bytes, 0, SIZEOF_INT);
+  }
+
+  /**
+   * Converts a byte array to an int value
+   * @param bytes byte array
+   * @param offset offset into array
+   * @return the int value
+   */
+  public static int toInt(byte[] bytes, int offset) {
+    return toInt(bytes, offset, SIZEOF_INT);
+  }
+
+  /**
+   * Converts a byte array to an int value
+   * @param bytes byte array
+   * @param offset offset into array
+   * @param length length of int (has to be {@link #SIZEOF_INT})
+   * @return the int value
+   * @throws IllegalArgumentException if length is not {@link #SIZEOF_INT} or
+   * if there's not enough room in the array at the offset indicated.
+   */
+  public static int toInt(byte[] bytes, int offset, final int length) {
+    if (length != SIZEOF_INT || offset + length > bytes.length) {
+      throw explainWrongLengthOrOffset(bytes, offset, length, SIZEOF_INT);
+    }
+    int n = 0;
+    for(int i = offset; i < (offset + length); i++) {
+      n <<= 8;
+      n ^= bytes[i] & 0xFF;
+    }
+    return n;
+  }
+
+  /**
+   * Put an int value out to the specified byte array position.
+   * @param bytes the byte array
+   * @param offset position in the array
+   * @param val int to write out
+   * @return incremented offset
+   * @throws IllegalArgumentException if the byte array given doesn't have
+   * enough room at the offset specified.
+   */
+  public static int putInt(byte[] bytes, int offset, int val) {
+    if (bytes.length - offset < SIZEOF_INT) {
+      throw new IllegalArgumentException("Not enough room to put an int at"
+          + " offset " + offset + " in a " + bytes.length + " byte array");
+    }
+    for(int i= offset + 3; i > offset; i--) {
+      bytes[i] = (byte) val;
+      val >>>= 8;
+    }
+    bytes[offset] = (byte) val;
+    return offset + SIZEOF_INT;
+  }
+
+  /**
+   * Convert a short value to a byte array of {@link #SIZEOF_SHORT} bytes long.
+   * @param val value
+   * @return the byte array
+   */
+  public static byte[] toBytes(short val) {
+    byte[] b = new byte[SIZEOF_SHORT];
+    b[1] = (byte) val;
+    val >>= 8;
+    b[0] = (byte) val;
+    return b;
+  }
+
+  /**
+   * Converts a byte array to a short value
+   * @param bytes byte array
+   * @return the short value
+   */
+  public static short toShort(byte[] bytes) {
+    return toShort(bytes, 0, SIZEOF_SHORT);
+  }
+
+  /**
+   * Converts a byte array to a short value
+   * @param bytes byte array
+   * @param offset offset into array
+   * @return the short value
+   */
+  public static short toShort(byte[] bytes, int offset) {
+    return toShort(bytes, offset, SIZEOF_SHORT);
+  }
+
+  /**
+   * Converts a byte array to a short value
+   * @param bytes byte array
+   * @param offset offset into array
+   * @param length length, has to be {@link #SIZEOF_SHORT}
+   * @return the short value
+   * @throws IllegalArgumentException if length is not {@link #SIZEOF_SHORT}
+   * or if there's not enough room in the array at the offset indicated.
+   */
+  public static short toShort(byte[] bytes, int offset, final int length) {
+    if (length != SIZEOF_SHORT || offset + length > bytes.length) {
+      throw explainWrongLengthOrOffset(bytes, offset, length, SIZEOF_SHORT);
+    }
+    short n = 0;
+    n ^= bytes[offset] & 0xFF;
+    n <<= 8;
+    n ^= bytes[offset+1] & 0xFF;
+    return n;
+  }
+
+  /**
+   * Put a short value out to the specified byte array position.
+   * @param bytes the byte array
+   * @param offset position in the array
+   * @param val short to write out
+   * @return incremented offset
+   * @throws IllegalArgumentException if the byte array given doesn't have
+   * enough room at the offset specified.
+   */
+  public static int putShort(byte[] bytes, int offset, short val) {
+    if (bytes.length - offset < SIZEOF_SHORT) {
+      throw new IllegalArgumentException("Not enough room to put a short at"
+          + " offset " + offset + " in a " + bytes.length + " byte array");
+    }
+    bytes[offset+1] = (byte) val;
+    val >>= 8;
+    bytes[offset] = (byte) val;
+    return offset + SIZEOF_SHORT;
+  }
+
+  /**
+   * @param vint Long value to encode as a vint.
+   * @return Vint as a byte array.
+   */
+  public static byte [] vintToBytes(final long vint) {
+    long i = vint;
+    int size = WritableUtils.getVIntSize(i);
+    byte [] result = new byte[size];
+    int offset = 0;
+    if (i >= -112 && i <= 127) {
+      result[offset] = (byte) i;
+      return result;
+    }
+
+    int len = -112;
+    if (i < 0) {
+      i ^= -1L; // take one's complement
+      len = -120;
+    }
+
+    long tmp = i;
+    while (tmp != 0) {
+      tmp = tmp >> 8;
+      len--;
+    }
+
+    result[offset++] = (byte) len;
+
+    len = (len < -120) ? -(len + 120) : -(len + 112);
+
+    for (int idx = len; idx != 0; idx--) {
+      int shiftbits = (idx - 1) * 8;
+      long mask = 0xFFL << shiftbits;
+      result[offset++] = (byte)((i & mask) >> shiftbits);
+    }
+    return result;
+  }
+
+  /**
+   * @param buffer buffer to convert
+   * @return vint bytes decoded as a long.
+   */
+  public static long bytesToVint(final byte [] buffer) {
+    int offset = 0;
+    byte firstByte = buffer[offset++];
+    int len = WritableUtils.decodeVIntSize(firstByte);
+    if (len == 1) {
+      return firstByte;
+    }
+    long i = 0;
+    for (int idx = 0; idx < len-1; idx++) {
+      byte b = buffer[offset++];
+      i = i << 8;
+      i = i | (b & 0xFF);
+    }
+    return (WritableUtils.isNegativeVInt(firstByte) ? ~i : i);
+  }
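+
+  // Illustrative sketch (not part of the original file), assuming Hadoop's
+  // WritableUtils vint encoding: values in [-112, 127] fit in a single byte,
+  // larger magnitudes get a length marker byte followed by the value bytes.
+  //   Bytes.vintToBytes(127L) returns {127}             (one byte)
+  //   Bytes.vintToBytes(128L) returns {-113, -128}      (marker + 0x80)
+  //   Bytes.bytesToVint(Bytes.vintToBytes(128L)) returns 128L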
+
+  /**
+   * Reads a zero-compressed encoded long from the given byte array and returns it.
+   * @param buffer Binary array
+   * @param offset Offset into array at which vint begins.
+   * @throws java.io.IOException e
+   * @return deserialized long from the buffer.
+   */
+  public static long readVLong(final byte [] buffer, final int offset)
+  throws IOException {
+    byte firstByte = buffer[offset];
+    int len = WritableUtils.decodeVIntSize(firstByte);
+    if (len == 1) {
+      return firstByte;
+    }
+    long i = 0;
+    for (int idx = 0; idx < len-1; idx++) {
+      byte b = buffer[offset + 1 + idx];
+      i = i << 8;
+      i = i | (b & 0xFF);
+    }
+    return (WritableUtils.isNegativeVInt(firstByte) ? ~i : i);
+  }
+
+  /**
+   * @param left left operand
+   * @param right right operand
+   * @return 0 if equal, < 0 if left is less than right, etc.
+   */
+  public static int compareTo(final byte [] left, final byte [] right) {
+    return compareTo(left, 0, left.length, right, 0, right.length);
+  }
+
+  /**
+   * Lexicographically compare two arrays.
+   *
+   * @param buffer1 left operand
+   * @param buffer2 right operand
+   * @param offset1 Where to start comparing in the left buffer
+   * @param offset2 Where to start comparing in the right buffer
+   * @param length1 How much to compare from the left buffer
+   * @param length2 How much to compare from the right buffer
+   * @return 0 if equal, < 0 if left is less than right, etc.
+   */
+  public static int compareTo(byte[] buffer1, int offset1, int length1,
+      byte[] buffer2, int offset2, int length2) {
+    // Bring WritableComparator code local
+    int end1 = offset1 + length1;
+    int end2 = offset2 + length2;
+    for (int i = offset1, j = offset2; i < end1 && j < end2; i++, j++) {
+      int a = (buffer1[i] & 0xff);
+      int b = (buffer2[j] & 0xff);
+      if (a != b) {
+        return a - b;
+      }
+    }
+    return length1 - length2;
+  }
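+
+  // Illustrative sketch (not part of the original file): comparison is
+  // unsigned and lexicographic, and a strict prefix sorts before the longer
+  // array.
+  //   Bytes.compareTo(new byte[] {1, 2, 3}, new byte[] {1, 2, 4}) < 0
+  //   Bytes.compareTo(new byte[] {1, 2}, new byte[] {1, 2, 3})    < 0
+  //   Bytes.compareTo(Bytes.toBytes("row"), Bytes.toBytes("row")) == 0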
+
+  /**
+   * @param left left operand
+   * @param right right operand
+   * @return True if equal
+   */
+  public static boolean equals(final byte [] left, final byte [] right) {
+    // Could use Arrays.equals?
+    //noinspection SimplifiableConditionalExpression
+    if (left == null && right == null) {
+      return true;
+    }
+    return (left == null || right == null || (left.length != right.length)
+            ? false : compareTo(left, right) == 0);
+  }
+
+  /**
+   * Return true if the byte array on the right is a prefix of the byte
+   * array on the left.
+   */
+  public static boolean startsWith(byte[] bytes, byte[] prefix) {
+    return bytes != null && prefix != null &&
+      bytes.length >= prefix.length &&
+      compareTo(bytes, 0, prefix.length, prefix, 0, prefix.length) == 0;      
+  }
+
+  /**
+   * @param b bytes to hash
+   * @return Runs {@link WritableComparator#hashBytes(byte[], int)} on the
+   * passed in array.  This method is what {@link org.apache.hadoop.io.Text} and
+   * {@link ImmutableBytesWritable} use when calculating hash code.
+   */
+  public static int hashCode(final byte [] b) {
+    return hashCode(b, b.length);
+  }
+
+  /**
+   * @param b value
+   * @param length length of the value
+   * @return Runs {@link WritableComparator#hashBytes(byte[], int)} on the
+   * passed in array.  This method is what {@link org.apache.hadoop.io.Text} and
+   * {@link ImmutableBytesWritable} use when calculating hash code.
+   */
+  public static int hashCode(final byte [] b, final int length) {
+    return WritableComparator.hashBytes(b, length);
+  }
+
+  /**
+   * @param b bytes to hash
+   * @return A hash of <code>b</code> as an Integer that can be used as key in
+   * Maps.
+   */
+  public static Integer mapKey(final byte [] b) {
+    return hashCode(b);
+  }
+
+  /**
+   * @param b bytes to hash
+   * @param length length to hash
+   * @return A hash of <code>b</code> as an Integer that can be used as key in
+   * Maps.
+   */
+  public static Integer mapKey(final byte [] b, final int length) {
+    return hashCode(b, length);
+  }
+
+  /**
+   * @param a lower half
+   * @param b upper half
+   * @return New array that has a in lower half and b in upper half.
+   */
+  public static byte [] add(final byte [] a, final byte [] b) {
+    return add(a, b, HConstants.EMPTY_BYTE_ARRAY);
+  }
+
+  /**
+   * @param a first third
+   * @param b second third
+   * @param c third third
+   * @return New array made from a, b and c
+   */
+  public static byte [] add(final byte [] a, final byte [] b, final byte [] c) {
+    byte [] result = new byte[a.length + b.length + c.length];
+    System.arraycopy(a, 0, result, 0, a.length);
+    System.arraycopy(b, 0, result, a.length, b.length);
+    System.arraycopy(c, 0, result, a.length + b.length, c.length);
+    return result;
+  }
+
+  /**
+   * @param a array
+   * @param length amount of bytes to grab
+   * @return First <code>length</code> bytes from <code>a</code>
+   */
+  public static byte [] head(final byte [] a, final int length) {
+    if (a.length < length) {
+      return null;
+    }
+    byte [] result = new byte[length];
+    System.arraycopy(a, 0, result, 0, length);
+    return result;
+  }
+
+  /**
+   * @param a array
+   * @param length amount of bytes to snarf
+   * @return Last <code>length</code> bytes from <code>a</code>
+   */
+  public static byte [] tail(final byte [] a, final int length) {
+    if (a.length < length) {
+      return null;
+    }
+    byte [] result = new byte[length];
+    System.arraycopy(a, a.length - length, result, 0, length);
+    return result;
+  }
+
+  /**
+   * @param a array
+   * @param length number of zero bytes to prepend
+   * @return Value in <code>a</code> plus <code>length</code> prepended 0 bytes
+   */
+  public static byte [] padHead(final byte [] a, final int length) {
+    byte [] padding = new byte[length];
+    for (int i = 0; i < length; i++) {
+      padding[i] = 0;
+    }
+    return add(padding,a);
+  }
+
+  /**
+   * @param a array
+   * @param length number of zero bytes to append
+   * @return Value in <code>a</code> plus <code>length</code> appended 0 bytes
+   */
+  public static byte [] padTail(final byte [] a, final int length) {
+    byte [] padding = new byte[length];
+    for (int i = 0; i < length; i++) {
+      padding[i] = 0;
+    }
+    return add(a,padding);
+  }
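+
+  // Illustrative sketch (not part of the original file): head/tail take a
+  // slice from either end, padHead/padTail add that many zero bytes.
+  //   Bytes.head(new byte[] {1, 2, 3}, 2)    returns {1, 2}
+  //   Bytes.tail(new byte[] {1, 2, 3}, 2)    returns {2, 3}
+  //   Bytes.padHead(new byte[] {1, 2}, 2)    returns {0, 0, 1, 2}
+  //   Bytes.padTail(new byte[] {1, 2}, 2)    returns {1, 2, 0, 0}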
+
+  /**
+   * Split passed range.  Relatively expensive operation.  Uses BigInteger math.
+   * Useful for splitting ranges for MapReduce jobs.
+   * @param a Beginning of range
+   * @param b End of range
+   * @param num Number of times to split range.  Pass 1 if you want to split
+   * the range in two; i.e. one split.
+   * @return Array of dividing values
+   */
+  public static byte [][] split(final byte [] a, final byte [] b, final int num) {
+    byte[][] ret = new byte[num+2][];
+    int i = 0;
+    Iterable<byte[]> iter = iterateOnSplits(a, b, num);
+    if (iter == null) return null;
+    for (byte[] elem : iter) {
+      ret[i++] = elem;
+    }
+    return ret;
+  }
+  
+  /**
+   * Iterate over keys within the passed inclusive range.
+   */
+  public static Iterable<byte[]> iterateOnSplits(
+      final byte[] a, final byte[]b, final int num)
+  {  
+    byte [] aPadded;
+    byte [] bPadded;
+    if (a.length < b.length) {
+      aPadded = padTail(a, b.length - a.length);
+      bPadded = b;
+    } else if (b.length < a.length) {
+      aPadded = a;
+      bPadded = padTail(b, a.length - b.length);
+    } else {
+      aPadded = a;
+      bPadded = b;
+    }
+    if (compareTo(aPadded,bPadded) >= 0) {
+      throw new IllegalArgumentException("b <= a");
+    }
+    if (num <= 0) {
+      throw new IllegalArgumentException("num cannot be < 0");
+    }
+    byte [] prependHeader = {1, 0};
+    final BigInteger startBI = new BigInteger(add(prependHeader, aPadded));
+    final BigInteger stopBI = new BigInteger(add(prependHeader, bPadded));
+    final BigInteger diffBI = stopBI.subtract(startBI);
+    final BigInteger splitsBI = BigInteger.valueOf(num + 1);
+    if(diffBI.compareTo(splitsBI) < 0) {
+      return null;
+    }
+    final BigInteger intervalBI;
+    try {
+      intervalBI = diffBI.divide(splitsBI);
+    } catch(Exception e) {
+      LOG.error("Exception caught during division", e);
+      return null;
+    }
+
+    final Iterator<byte[]> iterator = new Iterator<byte[]>() {
+      private int i = -1;
+      
+      @Override
+      public boolean hasNext() {
+        return i < num+1;
+      }
+
+      @Override
+      public byte[] next() {
+        i++;
+        if (i == 0) return a;
+        if (i == num + 1) return b;
+        
+        BigInteger curBI = startBI.add(intervalBI.multiply(BigInteger.valueOf(i)));
+        byte [] padded = curBI.toByteArray();
+        if (padded[1] == 0)
+          padded = tail(padded, padded.length - 2);
+        else
+          padded = tail(padded, padded.length - 1);
+        return padded;
+      }
+
+      @Override
+      public void remove() {
+        throw new UnsupportedOperationException();
+      }
+      
+    };
+    
+    return new Iterable<byte[]>() {
+      @Override
+      public Iterator<byte[]> iterator() {
+        return iterator;
+      }
+    };
+  }
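+
+  // Illustrative sketch (not part of the original file): splitting the range
+  // ["a", "z"] three times yields the two endpoints plus three evenly spaced
+  // boundaries, num + 2 values in total.
+  //   Bytes.split(Bytes.toBytes("a"), Bytes.toBytes("z"), 3)
+  //     returns {"a", "g", "m", "s", "z"} as byte arrays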
+
+  /**
+   * @param t operands
+   * @return Array of byte arrays made from passed array of Strings
+   */
+  public static byte [][] toByteArrays(final String [] t) {
+    byte [][] result = new byte[t.length][];
+    for (int i = 0; i < t.length; i++) {
+      result[i] = Bytes.toBytes(t[i]);
+    }
+    return result;
+  }
+
+  /**
+   * @param column operand
+   * @return An array of byte arrays whose first and only entry is
+   * <code>column</code>
+   */
+  public static byte [][] toByteArrays(final String column) {
+    return toByteArrays(toBytes(column));
+  }
+
+  /**
+   * @param column operand
+   * @return An array of byte arrays whose first and only entry is
+   * <code>column</code>
+   */
+  public static byte [][] toByteArrays(final byte [] column) {
+    byte [][] result = new byte[1][];
+    result[0] = column;
+    return result;
+  }
+
+  /**
+   * Binary search for keys in indexes.
+   * @param arr array of byte arrays to search for
+   * @param key the key you want to find
+   * @param offset the offset in the key you want to find
+   * @param length the length of the key
+   * @param comparator a comparator to compare with.
+   * @return index of key if found; otherwise a negative value encoding the insertion point
+   */
+  public static int binarySearch(byte [][]arr, byte []key, int offset,
+      int length, RawComparator<byte []> comparator) {
+    int low = 0;
+    int high = arr.length - 1;
+
+    while (low <= high) {
+      int mid = (low+high) >>> 1;
+      // we have to compare in this order, because the comparator order
+      // has special logic when the 'left side' is a special key.
+      int cmp = comparator.compare(key, offset, length,
+          arr[mid], 0, arr[mid].length);
+      // key lives above the midpoint
+      if (cmp > 0)
+        low = mid + 1;
+      // key lives below the midpoint
+      else if (cmp < 0)
+        high = mid - 1;
+      // BAM. how often does this really happen?
+      else
+        return mid;
+    }
+    return - (low+1);
+  }
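+
+  // Illustrative sketch (not part of the original file): like
+  // java.util.Arrays#binarySearch, a miss is reported as -(insertionPoint + 1),
+  // so callers can distinguish "found at index" from "would insert at index".
+  // sortedKeys, key and comparator below are placeholders for the example:
+  //   int idx = Bytes.binarySearch(sortedKeys, key, 0, key.length, comparator);
+  //   if (idx < 0) { int insertionPoint = -(idx + 1); /* key not present */ }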
+
+  /**
+   * Bytewise binary increment/decrement of the long contained in a byte array
+   * by the given amount.
+   *
+   * @param value - array of bytes containing long (length <= SIZEOF_LONG)
+   * @param amount amount by which the value will be incremented (decremented if negative)
+   * @return array of bytes containing incremented long (length == SIZEOF_LONG)
+   * @throws IllegalArgumentException - if value.length > SIZEOF_LONG
+   */
+  public static byte [] incrementBytes(byte[] value, long amount)
+  throws IOException {
+    byte[] val = value;
+    if (val.length < SIZEOF_LONG) {
+      // Hopefully this doesn't happen too often.
+      byte [] newvalue;
+      if (val[0] < 0) {
+        newvalue = new byte[]{-1, -1, -1, -1, -1, -1, -1, -1};
+      } else {
+        newvalue = new byte[SIZEOF_LONG];
+      }
+      System.arraycopy(val, 0, newvalue, newvalue.length - val.length,
+        val.length);
+      val = newvalue;
+    } else if (val.length > SIZEOF_LONG) {
+      throw new IllegalArgumentException("Increment Bytes - value too big: " +
+        val.length);
+    }
+    if(amount == 0) return val;
+    if(val[0] < 0){
+      return binaryIncrementNeg(val, amount);
+    }
+    return binaryIncrementPos(val, amount);
+  }
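+
+  // Illustrative sketch (not part of the original file): the array is treated
+  // as a big-endian long counter, and a negative amount decrements it.
+  //   Bytes.incrementBytes(Bytes.toBytes(5L),  3) returns Bytes.toBytes(8L)
+  //   Bytes.incrementBytes(Bytes.toBytes(5L), -2) returns Bytes.toBytes(3L)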
+
+  /* increment/decrement for positive value */
+  private static byte [] binaryIncrementPos(byte [] value, long amount) {
+    long amo = amount;
+    int sign = 1;
+    if (amount < 0) {
+      amo = -amount;
+      sign = -1;
+    }
+    for(int i=0;i<value.length;i++) {
+      int cur = ((int)amo % 256) * sign;
+      amo = (amo >> 8);
+      int val = value[value.length-i-1] & 0x0ff;
+      int total = val + cur;
+      if(total > 255) {
+        amo += sign;
+        total %= 256;
+      } else if (total < 0) {
+        amo -= sign;
+      }
+      value[value.length-i-1] = (byte)total;
+      if (amo == 0) return value;
+    }
+    return value;
+  }
+
+  /* increment/decrement for negative value */
+  private static byte [] binaryIncrementNeg(byte [] value, long amount) {
+    long amo = amount;
+    int sign = 1;
+    if (amount < 0) {
+      amo = -amount;
+      sign = -1;
+    }
+    for(int i=0;i<value.length;i++) {
+      int cur = ((int)amo % 256) * sign;
+      amo = (amo >> 8);
+      int val = ((~value[value.length-i-1]) & 0x0ff) + 1;
+      int total = cur - val;
+      if(total >= 0) {
+        amo += sign;
+      } else if (total < -256) {
+        amo -= sign;
+        total %= 256;
+      }
+      value[value.length-i-1] = (byte)total;
+      if (amo == 0) return value;
+    }
+    return value;
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
new file mode 100755
index 0000000..9ada18c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/ClassSize.java
@@ -0,0 +1,296 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import java.lang.reflect.Field;
+import java.lang.reflect.Modifier;
+import java.util.Properties;
+
+/**
+ * Class for determining the "size" of a class, an attempt to calculate the
+ * actual bytes that an object of this class will occupy in memory.
+ *
+ * The core of this class is taken from the Derby project
+ */
+public class ClassSize {
+  static final Log LOG = LogFactory.getLog(ClassSize.class);
+
+  private static int nrOfRefsPerObj = 2;
+
+  /** Array overhead */
+  public static final int ARRAY;
+
+  /** Overhead for ArrayList(0) */
+  public static final int ARRAYLIST;
+
+  /** Overhead for ByteBuffer */
+  public static final int BYTE_BUFFER;
+
+  /** Overhead for an Integer */
+  public static final int INTEGER;
+
+  /** Overhead for entry in map */
+  public static final int MAP_ENTRY;
+
+  /** Object overhead is minimum 2 * reference size (8 bytes on 64-bit) */
+  public static final int OBJECT;
+
+  /** Reference size is 8 bytes on 64-bit, 4 bytes on 32-bit */
+  public static final int REFERENCE;
+
+  /** String overhead */
+  public static final int STRING;
+
+  /** Overhead for TreeMap */
+  public static final int TREEMAP;
+
+  /** Overhead for ConcurrentHashMap */
+  public static final int CONCURRENT_HASHMAP;
+
+  /** Overhead for ConcurrentHashMap.Entry */
+  public static final int CONCURRENT_HASHMAP_ENTRY;
+
+  /** Overhead for ConcurrentHashMap.Segment */
+  public static final int CONCURRENT_HASHMAP_SEGMENT;
+
+  /** Overhead for ConcurrentSkipListMap */
+  public static final int CONCURRENT_SKIPLISTMAP;
+
+  /** Overhead for ConcurrentSkipListMap Entry */
+  public static final int CONCURRENT_SKIPLISTMAP_ENTRY;
+
+  /** Overhead for ReentrantReadWriteLock */
+  public static final int REENTRANT_LOCK;
+
+  /** Overhead for AtomicLong */
+  public static final int ATOMIC_LONG;
+
+  /** Overhead for AtomicInteger */
+  public static final int ATOMIC_INTEGER;
+
+  /** Overhead for AtomicBoolean */
+  public static final int ATOMIC_BOOLEAN;
+
+  /** Overhead for CopyOnWriteArraySet */
+  public static final int COPYONWRITE_ARRAYSET;
+
+  /** Overhead for CopyOnWriteArrayList */
+  public static final int COPYONWRITE_ARRAYLIST;
+
+  private static final String THIRTY_TWO = "32";
+
+  /**
+   * Method for reading the architecture settings and setting overheads according
+   * to 32-bit or 64-bit architecture.
+   */
+  static {
+    // Figure out whether this is a 32 or 64 bit machine.
+    Properties sysProps = System.getProperties();
+    String arcModel = sysProps.getProperty("sun.arch.data.model");
+
+    //Default value is set to 8, covering the case when arcModel is unknown or null
+    if (THIRTY_TWO.equals(arcModel)) {
+      REFERENCE = 4;
+    } else {
+      REFERENCE = 8;
+    }
+
+    OBJECT = 2 * REFERENCE;
+
+    ARRAY = align(3 * REFERENCE);
+
+    ARRAYLIST = align(OBJECT + align(REFERENCE) + align(ARRAY) +
+        (2 * Bytes.SIZEOF_INT));
+
+    //noinspection PointlessArithmeticExpression
+    BYTE_BUFFER = align(OBJECT + align(REFERENCE) + align(ARRAY) +
+        (5 * Bytes.SIZEOF_INT) +
+        (3 * Bytes.SIZEOF_BOOLEAN) + Bytes.SIZEOF_LONG);
+
+    INTEGER = align(OBJECT + Bytes.SIZEOF_INT);
+
+    MAP_ENTRY = align(OBJECT + 5 * REFERENCE + Bytes.SIZEOF_BOOLEAN);
+
+    TREEMAP = align(OBJECT + (2 * Bytes.SIZEOF_INT) + align(7 * REFERENCE));
+
+    STRING = align(OBJECT + ARRAY + REFERENCE + 3 * Bytes.SIZEOF_INT);
+
+    CONCURRENT_HASHMAP = align((2 * Bytes.SIZEOF_INT) + ARRAY +
+        (6 * REFERENCE) + OBJECT);
+
+    CONCURRENT_HASHMAP_ENTRY = align(REFERENCE + OBJECT + (3 * REFERENCE) +
+        (2 * Bytes.SIZEOF_INT));
+
+    CONCURRENT_HASHMAP_SEGMENT = align(REFERENCE + OBJECT +
+        (3 * Bytes.SIZEOF_INT) + Bytes.SIZEOF_FLOAT + ARRAY);
+
+    CONCURRENT_SKIPLISTMAP = align(Bytes.SIZEOF_INT + OBJECT + (8 * REFERENCE));
+
+    CONCURRENT_SKIPLISTMAP_ENTRY = align(
+        align(OBJECT + (3 * REFERENCE)) + /* one node per entry */
+        align((OBJECT + (3 * REFERENCE))/2)); /* one index per two entries */
+
+    REENTRANT_LOCK = align(OBJECT + (3 * REFERENCE));
+
+    ATOMIC_LONG = align(OBJECT + Bytes.SIZEOF_LONG);
+
+    ATOMIC_INTEGER = align(OBJECT + Bytes.SIZEOF_INT);
+
+    ATOMIC_BOOLEAN = align(OBJECT + Bytes.SIZEOF_BOOLEAN);
+
+    COPYONWRITE_ARRAYSET = align(OBJECT + REFERENCE);
+
+    COPYONWRITE_ARRAYLIST = align(OBJECT + (2 * REFERENCE) + ARRAY);
+  }
+
+  /**
+   * The estimate of the size of a class instance depends on whether the JVM
+   * uses 32 or 64 bit addresses; that is, it depends on the size of an object
+   * reference. It is a linear function of the size of a reference, e.g.
+   * 24 + 5*r where r is the size of a reference (usually 4 or 8 bytes).
+   *
+   * This method returns the coefficients of the linear function, e.g. {24, 5}
+   * in the above example.
+   *
+   * @param cl A class whose instance size is to be estimated
+   * @param debug debug flag
+   * @return an array of 3 integers. The first integer is the size of the
+   * primitives, the second the number of arrays and the third the number of
+   * references.
+   */
+  @SuppressWarnings("unchecked")
+  private static int [] getSizeCoefficients(Class cl, boolean debug) {
+    int primitives = 0;
+    int arrays = 0;
+    //The number of references that a new object takes
+    int references = nrOfRefsPerObj;
+
+    for ( ; null != cl; cl = cl.getSuperclass()) {
+      Field[] field = cl.getDeclaredFields();
+      if (null != field) {
+        for (Field aField : field) {
+          if (!Modifier.isStatic(aField.getModifiers())) {
+            Class fieldClass = aField.getType();
+            if (fieldClass.isArray()) {
+              arrays++;
+              references++;
+            } else if (!fieldClass.isPrimitive()) {
+              references++;
+            } else {// Is simple primitive
+              String name = fieldClass.getName();
+
+              if (name.equals("int") || name.equals("I"))
+                primitives += Bytes.SIZEOF_INT;
+              else if (name.equals("long") || name.equals("J"))
+                primitives += Bytes.SIZEOF_LONG;
+              else if (name.equals("boolean") || name.equals("Z"))
+                primitives += Bytes.SIZEOF_BOOLEAN;
+              else if (name.equals("short") || name.equals("S"))
+                primitives += Bytes.SIZEOF_SHORT;
+              else if (name.equals("byte") || name.equals("B"))
+                primitives += Bytes.SIZEOF_BYTE;
+              else if (name.equals("char") || name.equals("C"))
+                primitives += Bytes.SIZEOF_CHAR;
+              else if (name.equals("float") || name.equals("F"))
+                primitives += Bytes.SIZEOF_FLOAT;
+              else if (name.equals("double") || name.equals("D"))
+                primitives += Bytes.SIZEOF_DOUBLE;
+            }
+            if (debug) {
+              if (LOG.isDebugEnabled()) {
+                // Write out region name as string and its encoded name.
+                LOG.debug(aField.getName() + "\n\t" + aField.getType());
+              }
+            }
+          }
+        }
+      }
+    }
+    return new int [] {primitives, arrays, references};
+  }
+
+  /**
+   * Estimate the static space taken up by a class instance given the
+   * coefficients returned by getSizeCoefficients.
+   *
+   * @param coeff the coefficients
+   *
+   * @param debug debug flag
+   * @return the size estimate, in bytes
+   */
+  private static long estimateBaseFromCoefficients(int [] coeff, boolean debug) {
+    long size = coeff[0] + align(coeff[1]*ARRAY) + coeff[2]*REFERENCE;
+
+    // Round up to a multiple of 8
+    size = align(size);
+    if(debug) {
+      if (LOG.isDebugEnabled()) {
+        // Write out region name as string and its encoded name.
+        LOG.debug("Primitives " + coeff[0] + ", arrays " + coeff[1] +
+            ", references(includes " + nrOfRefsPerObj +
+            " for object overhead) " + coeff[2] + ", refSize " + REFERENCE +
+            ", size " + size);
+      }
+    }
+    return size;
+  }
+
+  /**
+   * Estimate the static space taken up by the fields of a class. This includes
+   * the space taken up by references (the pointer) but not by the referenced
+   * object. So the estimated size of an array field does not depend on the size
+   * of the array. Similarly the size of an object (reference) field does not
+   * depend on the object.
+   *
+   * @param cl class
+   * @param debug debug flag
+   * @return the size estimate in bytes.
+   */
+  @SuppressWarnings("unchecked")
+  public static long estimateBase(Class cl, boolean debug) {
+    return estimateBaseFromCoefficients( getSizeCoefficients(cl, debug), debug);
+  }
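+
+  // Illustrative sketch (not part of the original file): for a hypothetical
+  // class with one long field and two reference fields on a 64-bit JVM
+  // (REFERENCE = 8), the coefficients are {8, 0, 4} -- 8 bytes of primitives,
+  // no arrays, and 2 field references plus the 2 counted for object overhead.
+  // estimateBase() then returns align(8 + 0 + 4 * 8) = 40 bytes.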
+
+  /**
+   * Aligns a number to 8.
+   * @param num number to align to 8
+   * @return smallest number >= input that is a multiple of 8
+   */
+  public static int align(int num) {
+    return (int)(align((long)num));
+  }
+
+  /**
+   * Aligns a number to 8.
+   * @param num number to align to 8
+   * @return smallest number >= input that is a multiple of 8
+   */
+  public static long align(long num) {
+    //The 7 comes from the fact that the alignment size is 8, which is the
+    //number of bytes stored and sent together
+    return  ((num + 7) >> 3) << 3;
+  }
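+
+  // Illustrative sketch (not part of the original file): values are rounded
+  // up to the next multiple of 8, and exact multiples are left unchanged.
+  //   ClassSize.align(13) returns 16
+  //   ClassSize.align(16) returns 16
+  //   ClassSize.align(0)  returns 0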
+
+}
+
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
new file mode 100644
index 0000000..b15f756
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
@@ -0,0 +1,146 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.io.compress.Compressor;
+
+import java.io.IOException;
+import java.net.URI;
+
+/**
+ * Compression validation test.  Checks compression is working.  Be sure to run
+ * on every node in your cluster.
+ */
+public class CompressionTest {
+  static final Log LOG = LogFactory.getLog(CompressionTest.class);
+
+  public static boolean testCompression(String codec) {
+    codec = codec.toLowerCase();
+
+    Compression.Algorithm a;
+
+    try {
+      a = Compression.getCompressionAlgorithmByName(codec);
+    } catch (IllegalArgumentException e) {
+      LOG.warn("Codec type: " + codec + " is not known");
+      return false;
+    }
+
+    try {
+      testCompression(a);
+      return true;
+    } catch (IOException ignored) {
+      LOG.warn("Can't instantiate codec: " + codec, ignored);
+      return false;
+    }
+  }
+
+  private final static Boolean[] compressionTestResults
+      = new Boolean[Compression.Algorithm.values().length];
+  static {
+    for (int i = 0 ; i < compressionTestResults.length ; ++i) {
+      compressionTestResults[i] = null;
+    }
+  }
+
+  public static void testCompression(Compression.Algorithm algo)
+      throws IOException {
+    if (compressionTestResults[algo.ordinal()] != null) {
+      if (compressionTestResults[algo.ordinal()]) {
+        return; // already passed test, don't do it again.
+      } else {
+        // failed.
+        throw new IOException("Compression algorithm '" + algo.getName() + "'" +
+        " previously failed test.");
+      }
+    }
+
+    try {
+      Compressor c = algo.getCompressor();
+      algo.returnCompressor(c);
+      compressionTestResults[algo.ordinal()] = true; // passes
+    } catch (Throwable t) {
+      compressionTestResults[algo.ordinal()] = false; // failure
+      throw new IOException(t);
+    }
+  }
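+
+  // Illustrative sketch (not part of the original file): a caller could probe
+  // a codec by name before creating a store file with it, e.g.
+  //   if (!CompressionTest.testCompression("gz")) {
+  //     // fall back to no compression, or fail fast
+  //   }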
+
+  protected static Path path = new Path(".hfile-comp-test");
+
+  public static void usage() {
+    System.err.println("Usage: CompressionTest HDFS_PATH none|gz|lzo");
+    System.exit(1);
+  }
+
+  protected static DistributedFileSystem openConnection(String urlString)
+  throws java.net.URISyntaxException, java.io.IOException {
+    URI dfsUri = new URI(urlString);
+    Configuration dfsConf = new Configuration();
+    DistributedFileSystem dfs = new DistributedFileSystem();
+    dfs.initialize(dfsUri, dfsConf);
+    return dfs;
+  }
+
+  protected static boolean closeConnection(DistributedFileSystem dfs) {
+    if (dfs != null) {
+      try {
+        dfs.close();
+      } catch (Exception e) {
+        e.printStackTrace();
+      }
+    }
+    return dfs == null;
+  }
+
+  public static void main(String[] args) {
+    if (args.length != 2) usage();
+    try {
+      DistributedFileSystem dfs = openConnection(args[0]);
+      dfs.delete(path, false);
+      HFile.Writer writer = new HFile.Writer(dfs, path,
+        HFile.DEFAULT_BLOCKSIZE, args[1], null);
+      writer.append(Bytes.toBytes("testkey"), Bytes.toBytes("testval"));
+      writer.appendFileInfo(Bytes.toBytes("infokey"), Bytes.toBytes("infoval"));
+      writer.close();
+
+      HFile.Reader reader = new HFile.Reader(dfs, path, null, false);
+      reader.loadFileInfo();
+      byte[] key = reader.getFirstKey();
+      boolean rc = Bytes.toString(key).equals("testkey");
+      reader.close();
+
+      dfs.delete(path, false);
+      closeConnection(dfs);
+
+      if (rc) System.exit(0);
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+    System.out.println("FAILED");
+    System.exit(1);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/DefaultEnvironmentEdge.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/DefaultEnvironmentEdge.java
new file mode 100644
index 0000000..66f9192
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/DefaultEnvironmentEdge.java
@@ -0,0 +1,37 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Default implementation of an environment edge.
+ */
+public class DefaultEnvironmentEdge implements EnvironmentEdge {
+
+
+  /**
+   * {@inheritDoc}
+   * <p/>
+   * This implementation returns {@link System#currentTimeMillis()}
+   */
+  @Override
+  public long currentTimeMillis() {
+    return System.currentTimeMillis();
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/DynamicByteBloomFilter.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/DynamicByteBloomFilter.java
new file mode 100644
index 0000000..441167b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/DynamicByteBloomFilter.java
@@ -0,0 +1,302 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.io.Writable;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+/**
+ * Implements a <i>dynamic Bloom filter</i>, as defined in the INFOCOM 2006 paper.
+ * <p>
+ * A dynamic Bloom filter (DBF) makes use of a <code>s * m</code> bit matrix but
+ * each of the <code>s</code> rows is a standard Bloom filter. The creation
+ * process of a DBF is iterative. At the start, the DBF is a <code>1 * m</code>
+ * bit matrix, i.e., it is composed of a single standard Bloom filter.
+ * It assumes that <code>n<sub>r</sub></code> elements are recorded in the
+ * initial bit vector, where <code>n<sub>r</sub> <= n</code> (<code>n</code> is
+ * the cardinality of the set <code>A</code> to record in the filter).
+ * <p>
+ * As the size of <code>A</code> grows during the execution of the application,
+ * several keys must be inserted in the DBF.  When inserting a key into the DBF,
+ * one must first get an active Bloom filter in the matrix.  A Bloom filter is
+ * active when the number of recorded keys, <code>n<sub>r</sub></code>, is
+ * strictly less than the current cardinality of <code>A</code>, <code>n</code>.
+ * If an active Bloom filter is found, the key is inserted and
+ * <code>n<sub>r</sub></code> is incremented by one. On the other hand, if there
+ * is no active Bloom filter, a new one is created (i.e., a new row is added to
+ * the matrix) according to the current size of <code>A</code> and the element
+ * is added in this new Bloom filter and the <code>n<sub>r</sub></code> value of
+ * this new Bloom filter is set to one.  A given key is said to belong to the
+ * DBF if the <code>k</code> positions are set to one in one of the matrix rows.
+ * <p>
+ * Originally created by
+ * <a href="http://www.one-lab.org">European Commission One-Lab Project 034819</a>.
+ *
+ * @see BloomFilter A Bloom filter
+ *
+ * @see <a href="http://www.cse.fau.edu/~jie/research/publications/Publication_files/infocom2006.pdf">Theory and Network Applications of Dynamic Bloom Filters</a>
+ */
+public class DynamicByteBloomFilter implements BloomFilter {
+  /** Current file format version */
+  public static final int VERSION = 2;
+  /** Maximum number of keys in a dynamic Bloom filter row. */
+  protected final int keyInterval;
+  /** The maximum false positive rate per bloom */
+  protected final float errorRate;
+  /** Hash type */
+  protected final int hashType;
+  /** The number of keys recorded in the current Bloom filter. */
+  protected int curKeys;
+  /** expected size of bloom filter matrix (used during reads) */
+  protected int readMatrixSize;
+  /** The matrix of Bloom filters (contains bloom data only during writes). */
+  protected ByteBloomFilter[] matrix;
+
+  /**
+   * Normal read constructor.  Loads bloom filter meta data.
+   * @param meta stored bloom meta data
+   * @throws IllegalArgumentException meta data is invalid
+   */
+  public DynamicByteBloomFilter(ByteBuffer meta) throws IllegalArgumentException {
+    int version = meta.getInt();
+    if (version != VERSION) throw new IllegalArgumentException("Bad version");
+
+    this.keyInterval = meta.getInt();
+    this.errorRate  = meta.getFloat();
+    this.hashType = meta.getInt();
+    this.readMatrixSize = meta.getInt();
+    this.curKeys = meta.getInt();
+
+    readSanityCheck();
+
+    this.matrix = new ByteBloomFilter[1];
+    this.matrix[0] = new ByteBloomFilter(keyInterval, errorRate, hashType, 0);
+  }
+
+  /**
+   * Normal write constructor.  Note that this doesn't allocate bloom data by
+   * default.  Instead, call allocBloom() before adding entries.
+   * @param keyInterval Maximum number of keys to record per Bloom filter row.
+   * @param errorRate The maximum false positive rate per Bloom filter row.
+   * @param hashType type of the hashing function (see <code>org.apache.hadoop.util.hash.Hash</code>).
+   * @throws IllegalArgumentException The input parameters were invalid
+   */
+  public DynamicByteBloomFilter(int keyInterval, float errorRate, int hashType)
+  throws IllegalArgumentException {
+    this.keyInterval = keyInterval;
+    this.errorRate = errorRate;
+    this.hashType = hashType;
+    this.curKeys = 0;
+
+    if(keyInterval <= 0) {
+      throw new IllegalArgumentException("keyCount must be > 0");
+    }
+
+    this.matrix = new ByteBloomFilter[1];
+    this.matrix[0] = new ByteBloomFilter(keyInterval, errorRate, hashType, 0);
+  }
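+
+  // Illustrative write-path sketch (not part of the original file); the key
+  // interval, error rate, hash type (org.apache.hadoop.util.hash.Hash) and row
+  // key below are assumptions made only for the example:
+  //   DynamicByteBloomFilter dbf =
+  //       new DynamicByteBloomFilter(1000, 0.01f, Hash.MURMUR_HASH);
+  //   dbf.allocBloom();
+  //   dbf.add(Bytes.toBytes("row-1"));
+  //   // meta and bloom data are then persisted via getMetaWriter()/getDataWriter()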
+
+  @Override
+  public void allocBloom() {
+    this.matrix[0].allocBloom();
+  }
+
+  void readSanityCheck() throws IllegalArgumentException {
+    if (this.curKeys <= 0) {
+      throw new IllegalArgumentException("last bloom's key count invalid");
+    }
+
+    if (this.readMatrixSize <= 0) {
+      throw new IllegalArgumentException("matrix size must be known");
+    }
+  }
+
+  @Override
+  public void add(byte []buf, int offset, int len) {
+    BloomFilter bf = getCurBloom();
+
+    if (bf == null) {
+      addRow();
+      bf = matrix[matrix.length - 1];
+      curKeys = 0;
+    }
+
+    bf.add(buf, offset, len);
+    curKeys++;
+  }
+
+  @Override
+  public void add(byte []buf) {
+    add(buf, 0, buf.length);
+  }
+
+  /**
+   * Should only be used in tests when writing a bloom filter.
+   */
+  boolean contains(byte [] buf) {
+    return contains(buf, 0, buf.length);
+  }
+
+  /**
+   * Should only be used in tests when writing a bloom filter.
+   */
+  boolean contains(byte [] buf, int offset, int length) {
+    for (int i = 0; i < matrix.length; i++) {
+      if (matrix[i].contains(buf, offset, length)) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  @Override
+  public boolean contains(byte [] buf, ByteBuffer theBloom) {
+    return contains(buf, 0, buf.length, theBloom);
+  }
+
+  @Override
+  public boolean contains(byte[] buf, int offset, int length,
+      ByteBuffer theBloom) {
+    if(offset + length > buf.length) {
+      return false;
+    }
+
+    // current version assumes uniform size
+    int bytesPerBloom = this.matrix[0].getByteSize();
+
+    if(theBloom.limit() != bytesPerBloom * readMatrixSize) {
+      throw new IllegalArgumentException("Bloom does not match expected size");
+    }
+
+    ByteBuffer tmp = theBloom.duplicate();
+
+    // note: actually searching an array of blooms that have been serialized
+    for (int m = 0; m < readMatrixSize; ++m) {
+      tmp.position(m* bytesPerBloom);
+      tmp.limit(tmp.position() + bytesPerBloom);
+      boolean match = this.matrix[0].contains(buf, offset, length, tmp.slice());
+      if (match) {
+        return true;
+      }
+    }
+
+    // matched no bloom filters
+    return false;
+  }
+
+  int bloomCount() {
+    return Math.max(this.matrix.length, this.readMatrixSize);
+  }
+
+  @Override
+  public int getKeyCount() {
+    return (bloomCount()-1) * this.keyInterval + this.curKeys;
+  }
+
+  @Override
+  public int getMaxKeys() {
+    return bloomCount() * this.keyInterval;
+  }
+
+  @Override
+  public int getByteSize() {
+    return bloomCount() * this.matrix[0].getByteSize();
+  }
+
+  @Override
+  public void compactBloom() {
+  }
+
+  /**
+   * Adds a new row to <i>this</i> dynamic Bloom filter.
+   */
+  private void addRow() {
+    ByteBloomFilter[] tmp = new ByteBloomFilter[matrix.length + 1];
+
+    for (int i = 0; i < matrix.length; i++) {
+      tmp[i] = matrix[i];
+    }
+
+    tmp[tmp.length-1] = new ByteBloomFilter(keyInterval, errorRate, hashType, 0);
+    tmp[tmp.length-1].allocBloom();
+    matrix = tmp;
+  }
+
+  /**
+   * Returns the currently-unfilled row in the dynamic Bloom filter array.
+   * @return The active standard Bloom filter, or <code>null</code> if the
+   * current row is already full.
+   */
+  private BloomFilter getCurBloom() {
+    if (curKeys >= keyInterval) {
+      return null;
+    }
+
+    return matrix[matrix.length - 1];
+  }
+
+  @Override
+  public Writable getMetaWriter() {
+    return new MetaWriter();
+  }
+
+  @Override
+  public Writable getDataWriter() {
+    return new DataWriter();
+  }
+
+  private class MetaWriter implements Writable {
+    protected MetaWriter() {}
+    @Override
+    public void readFields(DataInput arg0) throws IOException {
+      throw new IOException("Cant read with this class.");
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      out.writeInt(VERSION);
+      out.writeInt(keyInterval);
+      out.writeFloat(errorRate);
+      out.writeInt(hashType);
+      out.writeInt(matrix.length);
+      out.writeInt(curKeys);
+    }
+  }
+
+  private class DataWriter implements Writable {
+    protected DataWriter() {}
+    @Override
+    public void readFields(DataInput arg0) throws IOException {
+      throw new IOException("Cant read with this class.");
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      for (int i = 0; i < matrix.length; ++i) {
+        matrix[i].writeBloom(out);
+      }
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/EnvironmentEdge.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/EnvironmentEdge.java
new file mode 100644
index 0000000..16e65d3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/EnvironmentEdge.java
@@ -0,0 +1,36 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Has some basic interaction with the environment. Alternate implementations
+ * can be used where required (e.g. in tests).
+ *
+ * @see EnvironmentEdgeManager
+ */
+public interface EnvironmentEdge {
+
+  /**
+   * Returns the current time in milliseconds.
+   *
+   * @return the current time in milliseconds.
+   */
+  long currentTimeMillis();
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/EnvironmentEdgeManager.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/EnvironmentEdgeManager.java
new file mode 100644
index 0000000..9984b4b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/EnvironmentEdgeManager.java
@@ -0,0 +1,75 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Manages a singleton instance of the environment edge. This class implements
+ * static versions of the {@link EnvironmentEdge} interface methods and defers
+ * to the delegate on invocation.
+ */
+public class EnvironmentEdgeManager {
+  private static volatile EnvironmentEdge delegate = new DefaultEnvironmentEdge();
+
+  private EnvironmentEdgeManager() {
+
+  }
+
+  /**
+   * Retrieves the singleton instance of the {@link EnvironmentEdge} that is
+   * being managed.
+   *
+   * @return the edge.
+   */
+  public static EnvironmentEdge getDelegate() {
+    return delegate;
+  }
+
+  /**
+   * Resets the managed instance to the default instance: {@link
+   * DefaultEnvironmentEdge}.
+   */
+  static void reset() {
+    injectEdge(new DefaultEnvironmentEdge());
+  }
+
+  /**
+   * Injects the given edge such that it becomes the managed entity. If null is
+   * passed to this method, the default type is assigned to the delegate.
+   *
+   * @param edge the new edge.
+   */
+  static void injectEdge(EnvironmentEdge edge) {
+    if (edge == null) {
+      reset();
+    } else {
+      delegate = edge;
+    }
+  }
+
+  /**
+   * Defers to the delegate and calls the
+   * {@link EnvironmentEdge#currentTimeMillis()} method.
+   *
+   * @return current time in millis according to the delegate.
+   */
+  public static long currentTimeMillis() {
+    return getDelegate().currentTimeMillis();
+  }
+}
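
A minimal usage sketch (hypothetical example class, not part of this patch). Note that injectEdge() and reset() are package-private, so the injection below only compiles from within org.apache.hadoop.hbase.util; production code simply calls the static currentTimeMillis().

    package org.apache.hadoop.hbase.util;

    public class EnvironmentEdgeManagerExample {
      public static void main(String[] args) {
        // Production code asks the manager instead of System.currentTimeMillis().
        System.out.println("now = " + EnvironmentEdgeManager.currentTimeMillis());

        // A test in this package can swap in a custom edge...
        EnvironmentEdgeManager.injectEdge(new EnvironmentEdge() {
          public long currentTimeMillis() {
            return 42L; // frozen clock
          }
        });
        System.out.println("frozen = " + EnvironmentEdgeManager.currentTimeMillis());

        // ...and restore the DefaultEnvironmentEdge afterwards.
        EnvironmentEdgeManager.reset();
      }
    }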
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
new file mode 100644
index 0000000..c88b320
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java
@@ -0,0 +1,660 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathFilter;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException;
+import org.apache.hadoop.hdfs.protocol.FSConstants;
+import org.apache.hadoop.io.SequenceFile;
+
+import java.io.DataInputStream;
+import java.io.EOFException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * Utility methods for interacting with the underlying file system.
+ */
+public class FSUtils {
+  private static final Log LOG = LogFactory.getLog(FSUtils.class);
+
+  /**
+   * Not instantiable
+   */
+  private FSUtils() {
+    super();
+  }
+
+  /**
+   * Delete if exists.
+   * @param fs filesystem object
+   * @param dir directory to delete
+   * @return True if deleted <code>dir</code>
+   * @throws IOException e
+   */
+  public static boolean deleteDirectory(final FileSystem fs, final Path dir)
+  throws IOException {
+    return fs.exists(dir) && fs.delete(dir, true);
+  }
+
+  /**
+   * Check if directory exists.  If it does not, create it.
+   * @param fs filesystem object
+   * @param dir path to check
+   * @return Path
+   * @throws IOException e
+   */
+  public static Path checkdir(final FileSystem fs, final Path dir) throws IOException {
+    if (!fs.exists(dir)) {
+      fs.mkdirs(dir);
+    }
+    return dir;
+  }
+
+  /**
+   * Create file.
+   * @param fs filesystem object
+   * @param p path to create
+   * @return Path
+   * @throws IOException e
+   */
+  public static Path create(final FileSystem fs, final Path p)
+  throws IOException {
+    if (fs.exists(p)) {
+      throw new IOException("File already exists " + p.toString());
+    }
+    if (!fs.createNewFile(p)) {
+      throw new IOException("Failed create of " + p);
+    }
+    return p;
+  }
+
+  /**
+   * Checks to see if the specified file system is available
+   *
+   * @param fs filesystem
+   * @throws IOException e
+   */
+  public static void checkFileSystemAvailable(final FileSystem fs)
+  throws IOException {
+    if (!(fs instanceof DistributedFileSystem)) {
+      return;
+    }
+    IOException exception = null;
+    DistributedFileSystem dfs = (DistributedFileSystem) fs;
+    try {
+      if (dfs.exists(new Path("/"))) {
+        return;
+      }
+    } catch (IOException e) {
+      exception = RemoteExceptionHandler.checkIOException(e);
+    }
+    try {
+      fs.close();
+    } catch (Exception e) {
+        LOG.error("file system close failed: ", e);
+    }
+    IOException io = new IOException("File system is not available");
+    io.initCause(exception);
+    throw io;
+  }
+
+  /**
+   * Returns the version of the file system as recorded in its version file.
+   *
+   * @param fs filesystem object
+   * @param rootdir root hbase directory
+   * @return null if no version file exists, version string otherwise.
+   * @throws IOException e
+   */
+  public static String getVersion(FileSystem fs, Path rootdir)
+  throws IOException {
+    Path versionFile = new Path(rootdir, HConstants.VERSION_FILE_NAME);
+    String version = null;
+    if (fs.exists(versionFile)) {
+      FSDataInputStream s =
+        fs.open(versionFile);
+      try {
+        version = DataInputStream.readUTF(s);
+      } catch (EOFException eof) {
+        LOG.warn("Version file was empty, odd, will try to set it.");
+      } finally {
+        s.close();
+      }
+    }
+    return version;
+  }
+
+  /**
+   * Verifies current version of file system
+   *
+   * @param fs file system
+   * @param rootdir root directory of HBase installation
+   * @param message if true, issues a message on System.out
+   *
+   * @throws IOException e
+   */
+  public static void checkVersion(FileSystem fs, Path rootdir,
+      boolean message) throws IOException {
+    String version = getVersion(fs, rootdir);
+
+    if (version == null) {
+      if (!rootRegionExists(fs, rootdir)) {
+        // rootDir is empty (no version file and no root region)
+        // just create new version file (HBASE-1195)
+        FSUtils.setVersion(fs, rootdir);
+        return;
+      }
+    } else if (version.compareTo(HConstants.FILE_SYSTEM_VERSION) == 0)
+        return;
+
+    // Version is out of date; migration is required.
+    // Output on stdout so user sees it in terminal.
+    String msg = "File system needs to be upgraded."
+      + "  You have version " + version
+      + " and I want version " + HConstants.FILE_SYSTEM_VERSION
+      + ".  Run the '${HBASE_HOME}/bin/hbase migrate' script.";
+    if (message) {
+      System.out.println("WARNING! " + msg);
+    }
+    throw new FileSystemVersionException(msg);
+  }
+
+  /**
+   * Sets version of file system
+   *
+   * @param fs filesystem object
+   * @param rootdir hbase root
+   * @throws IOException e
+   */
+  public static void setVersion(FileSystem fs, Path rootdir)
+  throws IOException {
+    setVersion(fs, rootdir, HConstants.FILE_SYSTEM_VERSION);
+  }
+
+  /**
+   * Sets version of file system
+   *
+   * @param fs filesystem object
+   * @param rootdir hbase root directory
+   * @param version version to set
+   * @throws IOException e
+   */
+  public static void setVersion(FileSystem fs, Path rootdir, String version)
+  throws IOException {
+    FSDataOutputStream s =
+      fs.create(new Path(rootdir, HConstants.VERSION_FILE_NAME));
+    s.writeUTF(version);
+    s.close();
+    LOG.debug("Created version file at " + rootdir.toString() + " set its version at:" + version);
+  }
+
+  /**
+   * Verifies root directory path is a valid URI with a scheme
+   *
+   * @param root root directory path
+   * @return Passed <code>root</code> argument.
+   * @throws IOException if not a valid URI with a scheme
+   */
+  public static Path validateRootPath(Path root) throws IOException {
+    try {
+      URI rootURI = new URI(root.toString());
+      String scheme = rootURI.getScheme();
+      if (scheme == null) {
+        throw new IOException("Root directory does not have a scheme");
+      }
+      return root;
+    } catch (URISyntaxException e) {
+      IOException io = new IOException("Root directory path is not a valid " +
+        "URI -- check your " + HConstants.HBASE_DIR + " configuration");
+      io.initCause(e);
+      throw io;
+    }
+  }
+
+  /**
+   * If DFS, check safe mode and if so, wait until we clear it.
+   * @param conf configuration
+   * @param wait Sleep between retries
+   * @throws IOException e
+   */
+  public static void waitOnSafeMode(final Configuration conf,
+    final long wait)
+  throws IOException {
+    FileSystem fs = FileSystem.get(conf);
+    if (!(fs instanceof DistributedFileSystem)) return;
+    DistributedFileSystem dfs = (DistributedFileSystem)fs;
+    // Are there any data nodes up yet?
+    // Currently the safe mode check falls through if the namenode is up but no
+    // datanodes have reported in yet.
+    try {
+      while (dfs.getDataNodeStats().length == 0) {
+        LOG.info("Waiting for dfs to come up...");
+        try {
+          Thread.sleep(wait);
+        } catch (InterruptedException e) {
+          //continue
+        }
+      }
+    } catch (IOException e) {
+      // getDataNodeStats can fail if superuser privilege is required to run
+      // the datanode report, just ignore it
+    }
+    // Make sure dfs is not in safe mode
+    while (dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET)) {
+      LOG.info("Waiting for dfs to exit safe mode...");
+      try {
+        Thread.sleep(wait);
+      } catch (InterruptedException e) {
+        //continue
+      }
+    }
+  }
+
+  /**
+   * Return the 'path' component of a Path.  In Hadoop, Path is a URI.  This
+   * method returns the 'path' component of a Path's URI: e.g. if a Path is
+   * <code>hdfs://example.org:9000/hbase_trunk/TestTable/compaction.dir</code>,
+   * this method returns <code>/hbase_trunk/TestTable/compaction.dir</code>.
+   * This method is useful if you want to print out a Path without the
+   * qualifying FileSystem instance.
+   * @param p Filesystem Path whose 'path' component we are to return.
+   * @return the 'path' component of the Path's URI
+   */
+  public static String getPath(Path p) {
+    return p.toUri().getPath();
+  }
+
+  /**
+   * @param c configuration
+   * @return Path to hbase root directory: i.e. <code>hbase.rootdir</code> from
+   * configuration as a Path.
+   * @throws IOException e
+   */
+  public static Path getRootDir(final Configuration c) throws IOException {
+    return new Path(c.get(HConstants.HBASE_DIR));
+  }
+
+  /**
+   * Checks if root region exists
+   *
+   * @param fs file system
+   * @param rootdir root directory of HBase installation
+   * @return true if exists
+   * @throws IOException e
+   */
+  public static boolean rootRegionExists(FileSystem fs, Path rootdir)
+  throws IOException {
+    Path rootRegionDir =
+      HRegion.getRegionDir(rootdir, HRegionInfo.ROOT_REGIONINFO);
+    return fs.exists(rootRegionDir);
+  }
+
+  /**
+   * Runs through the hbase rootdir and checks all stores have only
+   * one file in them -- that is, they've been major compacted.  Looks
+   * at root and meta tables too.
+   * @param fs filesystem
+   * @param hbaseRootDir hbase root directory
+   * @return True if this hbase install is major compacted.
+   * @throws IOException e
+   */
+  public static boolean isMajorCompacted(final FileSystem fs,
+      final Path hbaseRootDir)
+  throws IOException {
+    // Presumes any directory under hbase.rootdir is a table.
+    FileStatus [] tableDirs = fs.listStatus(hbaseRootDir, new DirFilter(fs));
+    for (FileStatus tableDir : tableDirs) {
+      // Skip the .log directory.  All others should be tables.  Inside a table,
+      // there are compaction.dir directories to skip.  Otherwise, all else
+      // should be regions.  Then in each region, should only be family
+      // directories.  Under each of these, should be one file only.
+      Path d = tableDir.getPath();
+      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) {
+        continue;
+      }
+      FileStatus[] regionDirs = fs.listStatus(d, new DirFilter(fs));
+      for (FileStatus regionDir : regionDirs) {
+        Path dd = regionDir.getPath();
+        if (dd.getName().equals(HConstants.HREGION_COMPACTIONDIR_NAME)) {
+          continue;
+        }
+        // Else its a region name.  Now look in region for families.
+        FileStatus[] familyDirs = fs.listStatus(dd, new DirFilter(fs));
+        for (FileStatus familyDir : familyDirs) {
+          Path family = familyDir.getPath();
+          // Now in family make sure only one file.
+          FileStatus[] familyStatus = fs.listStatus(family);
+          if (familyStatus.length > 1) {
+            LOG.debug(family.toString() + " has " + familyStatus.length +
+                " files.");
+            return false;
+          }
+        }
+      }
+    }
+    return true;
+  }
+
+  // TODO move this method OUT of FSUtils. No dependencies to HMaster
+  /**
+   * Returns the total overall fragmentation percentage. Includes .META. and
+   * -ROOT- as well.
+   *
+   * @param master  The master defining the HBase root and file system.
+   * @return The overall fragmentation percentage, or -1 if it cannot be determined.
+   * @throws IOException When scanning the directory fails.
+   */
+  public static int getTotalTableFragmentation(final HMaster master)
+  throws IOException {
+    Map<String, Integer> map = getTableFragmentation(master);
+    return map != null && map.size() > 0 ? map.get("-TOTAL-") : -1;
+  }
+
+  /**
+   * Runs through the HBase rootdir and checks how many stores for each table
+   * have more than one file in them. Checks -ROOT- and .META. too. The total
+   * percentage across all tables is stored under the special key "-TOTAL-".
+   *
+   * @param master  The master defining the HBase root and file system.
+   * @return A map for each table and its percentage.
+   * @throws IOException When scanning the directory fails.
+   */
+  public static Map<String, Integer> getTableFragmentation(
+    final HMaster master)
+  throws IOException {
+    Path path = getRootDir(master.getConfiguration());
+    // since HMaster.getFileSystem() is package private
+    FileSystem fs = path.getFileSystem(master.getConfiguration());
+    return getTableFragmentation(fs, path);
+  }
+
+  /**
+   * Runs through the HBase rootdir and checks how many stores for each table
+   * have more than one file in them. Checks -ROOT- and .META. too. The total
+   * percentage across all tables is stored under the special key "-TOTAL-".
+   *
+   * @param fs  The file system to use.
+   * @param hbaseRootDir  The root directory to scan.
+   * @return A map for each table and its percentage.
+   * @throws IOException When scanning the directory fails.
+   */
+  public static Map<String, Integer> getTableFragmentation(
+    final FileSystem fs, final Path hbaseRootDir)
+  throws IOException {
+    Map<String, Integer> frags = new HashMap<String, Integer>();
+    int cfCountTotal = 0;
+    int cfFragTotal = 0;
+    DirFilter df = new DirFilter(fs);
+    // presumes any directory under hbase.rootdir is a table
+    FileStatus [] tableDirs = fs.listStatus(hbaseRootDir, df);
+    for (FileStatus tableDir : tableDirs) {
+      // Skip the .log directory.  All others should be tables.  Inside a table,
+      // there are compaction.dir directories to skip.  Otherwise, all else
+      // should be regions.  Then in each region, should only be family
+      // directories.  Under each of these, should be one file only.
+      Path d = tableDir.getPath();
+      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) {
+        continue;
+      }
+      int cfCount = 0;
+      int cfFrag = 0;
+      FileStatus[] regionDirs = fs.listStatus(d, df);
+      for (FileStatus regionDir : regionDirs) {
+        Path dd = regionDir.getPath();
+        if (dd.getName().equals(HConstants.HREGION_COMPACTIONDIR_NAME)) {
+          continue;
+        }
+        // else its a region name, now look in region for families
+        FileStatus[] familyDirs = fs.listStatus(dd, df);
+        for (FileStatus familyDir : familyDirs) {
+          cfCount++;
+          cfCountTotal++;
+          Path family = familyDir.getPath();
+          // now in family make sure only one file
+          FileStatus[] familyStatus = fs.listStatus(family);
+          if (familyStatus.length > 1) {
+            cfFrag++;
+            cfFragTotal++;
+          }
+        }
+      }
+      // compute percentage per table and store in result list
+      frags.put(d.getName(), Math.round((float) cfFrag / cfCount * 100));
+    }
+    // set overall percentage for all tables
+    frags.put("-TOTAL-", Math.round((float) cfFragTotal / cfCountTotal * 100));
+    return frags;
+  }
+
+  /**
+   * Expects to find -ROOT- directory.
+   * @param fs filesystem
+   * @param hbaseRootDir hbase root directory
+   * @return True if this is a pre-0.20 layout.
+   * @throws IOException e
+   */
+  public static boolean isPre020FileLayout(final FileSystem fs,
+    final Path hbaseRootDir)
+  throws IOException {
+    Path mapfiles = new Path(new Path(new Path(new Path(hbaseRootDir, "-ROOT-"),
+      "70236052"), "info"), "mapfiles");
+    return fs.exists(mapfiles);
+  }
+
+  /**
+   * Runs through the hbase rootdir and checks all stores have only
+   * one file in them -- that is, they've been major compacted.  Looks
+   * at root and meta tables too.  This version differs from
+   * {@link #isMajorCompacted(FileSystem, Path)} in that it expects a
+   * pre-0.20.0 hbase layout on the filesystem.  Used when migrating.
+   * @param fs filesystem
+   * @param hbaseRootDir hbase root directory
+   * @return True if this hbase install is major compacted.
+   * @throws IOException e
+   */
+  public static boolean isMajorCompactedPre020(final FileSystem fs,
+      final Path hbaseRootDir)
+  throws IOException {
+    // Presumes any directory under hbase.rootdir is a table.
+    FileStatus [] tableDirs = fs.listStatus(hbaseRootDir, new DirFilter(fs));
+    for (FileStatus tableDir : tableDirs) {
+      // Inside a table, there are compaction.dir directories to skip.
+      // Otherwise, all else should be regions.  Then in each region, should
+      // only be family directories.  Under each of these, should be a mapfile
+      // and info directory and in these only one file.
+      Path d = tableDir.getPath();
+      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) {
+        continue;
+      }
+      FileStatus[] regionDirs = fs.listStatus(d, new DirFilter(fs));
+      for (FileStatus regionDir : regionDirs) {
+        Path dd = regionDir.getPath();
+        if (dd.getName().equals(HConstants.HREGION_COMPACTIONDIR_NAME)) {
+          continue;
+        }
+        // Else its a region name.  Now look in region for families.
+        FileStatus[] familyDirs = fs.listStatus(dd, new DirFilter(fs));
+        for (FileStatus familyDir : familyDirs) {
+          Path family = familyDir.getPath();
+          FileStatus[] infoAndMapfile = fs.listStatus(family);
+          // Assert that only info and mapfile in family dir.
+          if (infoAndMapfile.length != 0 && infoAndMapfile.length != 2) {
+            LOG.debug(family.toString() +
+                " has more than just info and mapfile: " + infoAndMapfile.length);
+            return false;
+          }
+          // Make sure directory named info or mapfile.
+          for (int ll = 0; ll < infoAndMapfile.length; ll++) {
+            if (infoAndMapfile[ll].getPath().getName().equals("info") ||
+                infoAndMapfile[ll].getPath().getName().equals("mapfiles"))
+              continue;
+            LOG.debug("Unexpected directory name: " +
+                infoAndMapfile[ll].getPath());
+            return false;
+          }
+          // Now in family, there are 'mapfile' and 'info' subdirs.  Just
+          // look in the 'mapfile' subdir.
+          FileStatus[] familyStatus =
+              fs.listStatus(new Path(family, "mapfiles"));
+          if (familyStatus.length > 1) {
+            LOG.debug(family.toString() + " has " + familyStatus.length +
+                " files.");
+            return false;
+          }
+        }
+      }
+    }
+    return true;
+  }
+
+  /**
+   * A {@link PathFilter} that returns directories.
+   */
+  public static class DirFilter implements PathFilter {
+    private final FileSystem fs;
+
+    public DirFilter(final FileSystem fs) {
+      this.fs = fs;
+    }
+
+    public boolean accept(Path p) {
+      boolean isdir = false;
+      try {
+        isdir = this.fs.getFileStatus(p).isDir();
+      } catch (IOException e) {
+        e.printStackTrace();
+      }
+      return isdir;
+    }
+  }
+
+  /**
+   * Heuristic to determine whether it is safe to open a file for append.
+   * Looks at dfs.support.append and uses reflection to search for
+   * SequenceFile.Writer.syncFs() or FSDataOutputStream.hflush().
+   * @param conf configuration
+   * @return True if append is supported
+   */
+  public static boolean isAppendSupported(final Configuration conf) {
+    boolean append = conf.getBoolean("dfs.support.append", false);
+    if (append) {
+      try {
+        // TODO: The implementation that comes back when we do a createWriter
+        // may not be using SequenceFile so the below is not a definitive test.
+        // Will do for now (hdfs-200).
+        SequenceFile.Writer.class.getMethod("syncFs", new Class<?> []{});
+        append = true;
+      } catch (SecurityException e) {
+      } catch (NoSuchMethodException e) {
+        append = false;
+      }
+    } else {
+      // Probe for the newer hflush() API; treat its presence as append support.
+      try {
+        FSDataOutputStream.class.getMethod("hflush", new Class<?> []{});
+        append = true;
+      } catch (NoSuchMethodException e) {
+        append = false;
+      }
+    }
+    return append;
+  }
+
+  /**
+   * @param conf
+   * @return True if the configured filesystem's scheme is 'hdfs'.
+   * @throws IOException
+   */
+  public static boolean isHDFS(final Configuration conf) throws IOException {
+    FileSystem fs = FileSystem.get(conf);
+    String scheme = fs.getUri().getScheme();
+    return scheme.equalsIgnoreCase("hdfs");
+  }
+
+  /*
+   * Recover the lease on a file. Used when a file may have been left open by
+   * another, now-dead process.
+   * @param fs filesystem object
+   * @param p path of the file whose lease should be recovered
+   * @param conf configuration
+   * @throws IOException e
+   */
+  public static void recoverFileLease(final FileSystem fs, final Path p, Configuration conf)
+  throws IOException{
+    if (!isAppendSupported(conf)) {
+      LOG.warn("Running on HDFS without append enabled may result in data loss");
+      return;
+    }
+    // lease recovery not needed for local file system case.
+    // currently, local file system doesn't implement append either.
+    if (!(fs instanceof DistributedFileSystem)) {
+      return;
+    }
+    LOG.info("Recovering file " + p);
+    long startWaiting = System.currentTimeMillis();
+
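+    // Lease recovery piggybacks on append(): the NameNode will only let the
+    // append succeed once the previous writer's lease has been recovered, so
+    // a successful append()+close() below means the file is safe to read.
+    // Until then, append() typically fails with AlreadyBeingCreatedException
+    // and we retry.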
+    // Trying recovery
+    boolean recovered = false;
+    while (!recovered) {
+      try {
+        FSDataOutputStream out = fs.append(p);
+        out.close();
+        recovered = true;
+      } catch (IOException e) {
+        e = RemoteExceptionHandler.checkIOException(e);
+        if (e instanceof AlreadyBeingCreatedException) {
+          // We expect that we'll get this message while the lease is still
+          // within its soft limit, but if we get it past that, it means
+          // that the RS is holding onto the file even though it lost its
+          // znode. We could potentially abort after some time here.
+          long waitedFor = System.currentTimeMillis() - startWaiting;
+          if (waitedFor > FSConstants.LEASE_SOFTLIMIT_PERIOD) {
+            LOG.warn("Waited " + waitedFor + "ms for lease recovery on " + p +
+              ":" + e.getMessage());
+          }
+          try {
+            Thread.sleep(1000);
+          } catch (InterruptedException ex) {
+            // ignore it and try again
+          }
+        } else {
+          throw new IOException("Failed to open " + p + " for append", e);
+        }
+      }
+    }
+    LOG.info("Finished lease recover attempt for " + p);
+  }
+
+}
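
A minimal usage sketch (hypothetical driver class, not part of this patch), showing the typical call pattern with only the methods defined above: resolve hbase.rootdir to a Path, check that the filesystem is reachable, and read the recorded layout version.

    package org.apache.hadoop.hbase.util;

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FSUtilsExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        Path rootDir = FSUtils.getRootDir(conf);          // hbase.rootdir as a Path
        FileSystem fs = rootDir.getFileSystem(conf);

        FSUtils.checkFileSystemAvailable(fs);             // throws if DFS is unreachable
        String version = FSUtils.getVersion(fs, rootDir); // null if no version file yet
        System.out.println("rootdir=" + rootDir + " version=" + version
            + " appendSupported=" + FSUtils.isAppendSupported(conf));
      }
    }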
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java
new file mode 100644
index 0000000..5235121
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/FileSystemVersionException.java
@@ -0,0 +1,39 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+
+/** Thrown when the file system needs to be upgraded */
+public class FileSystemVersionException extends IOException {
+  private static final long serialVersionUID = 1004053363L;
+
+  /** default constructor */
+  public FileSystemVersionException() {
+    super();
+  }
+
+  /** @param s message */
+  public FileSystemVersionException(String s) {
+    super(s);
+  }
+
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseConfTool.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseConfTool.java
new file mode 100644
index 0000000..225f92c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseConfTool.java
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+/**
+ * Tool that prints out a configuration.
+ * Pass the configuration key on the command-line.
+ */
+public class HBaseConfTool {
+  public static void main(String args[]) {
+    if (args.length < 1) {
+      System.err.println("Usage: HBaseConfTool <CONFIGURATION_KEY>");
+      System.exit(1);
+      return;
+    }
+
+    Configuration conf = HBaseConfiguration.create();
+    System.out.println(conf.get(args[0]));
+  }
+}
\ No newline at end of file
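
The tool is trivial to drive, either via the launcher script (e.g. bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.rootdir, assuming the standard script is on hand) or programmatically, as in this hypothetical sketch:

    package org.apache.hadoop.hbase.util;

    public class HBaseConfToolExample {
      public static void main(String[] args) {
        // Prints the resolved value of hbase.rootdir from the merged configuration.
        HBaseConfTool.main(new String[] { "hbase.rootdir" });
      }
    }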
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
new file mode 100644
index 0000000..bf5c663
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
@@ -0,0 +1,985 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.MetaScanner;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.zookeeper.ZKTable;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+
+/**
+ * Check consistency among the in-memory states of the master and the
+ * region server(s) and the state of data in HDFS.
+ */
+public class HBaseFsck {
+  public static final long DEFAULT_TIME_LAG = 60000; // default value of 1 minute
+
+  private static final Log LOG = LogFactory.getLog(HBaseFsck.class.getName());
+  private Configuration conf;
+
+  private ClusterStatus status;
+  private HConnection connection;
+
+  private TreeMap<String, HbckInfo> regionInfo = new TreeMap<String, HbckInfo>();
+  private TreeMap<String, TInfo> tablesInfo = new TreeMap<String, TInfo>();
+  private TreeSet<byte[]> disabledTables =
+    new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+  ErrorReporter errors = new PrintingErrorReporter();
+
+  private static boolean details = false; // do we display the full report
+  private long timelag = DEFAULT_TIME_LAG; // entries modified within this many ms are considered in flux and skipped
+  private boolean fix = false; // do we want to try fixing the errors?
+  private boolean rerun = false; // if we tried to fix something rerun hbck
+  private static boolean summary = false; // if we want to print less output
+  // Empty regioninfo qualifiers in .META.
+  private Set<Result> emptyRegionInfoQualifiers = new HashSet<Result>();
+
+  /**
+   * Constructor
+   *
+   * @param conf Configuration object
+   * @throws MasterNotRunningException if the master is not running
+   * @throws ZooKeeperConnectionException if unable to connect to zookeeper
+   */
+  public HBaseFsck(Configuration conf)
+    throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
+    this.conf = conf;
+
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    status = admin.getMaster().getClusterStatus();
+    connection = admin.getConnection();
+  }
+
+  /**
+   * Contacts the master and prints out cluster-wide information
+   * @throws IOException if a remote or network exception occurs
+   * @return 0 on success, non-zero on failure
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  int doWork() throws IOException, KeeperException, InterruptedException {
+    // print hbase server version
+    errors.print("Version: " + status.getHBaseVersion());
+
+    // Make sure regionInfo is empty before starting
+    regionInfo.clear();
+    tablesInfo.clear();
+    emptyRegionInfoQualifiers.clear();
+    disabledTables.clear();
+
+    // get a list of all regions from the master. This involves
+    // scanning the META table
+    if (!recordRootRegion()) {
+      // Will remove later if we can fix it
+      errors.reportError("Encountered fatal error. Exitting...");
+      return -1;
+    }
+    getMetaEntries();
+
+    // Check if .META. is found only once and on the right place
+    if (!checkMetaEntries()) {
+      // Will remove later if we can fix it
+      errors.reportError("Encountered fatal error. Exitting...");
+      return -1;
+    }
+
+    // get a list of all tables that have not changed recently.
+    AtomicInteger numSkipped = new AtomicInteger(0);
+    HTableDescriptor[] allTables = getTables(numSkipped);
+    errors.print("Number of Tables: " + allTables.length);
+    if (details) {
+      if (numSkipped.get() > 0) {
+        errors.detail("Number of Tables in flux: " + numSkipped.get());
+      }
+      for (HTableDescriptor td : allTables) {
+        String tableName = td.getNameAsString();
+        errors.detail("  Table: " + tableName + "\t" +
+                           (td.isReadOnly() ? "ro" : "rw") + "\t" +
+                           (td.isRootRegion() ? "ROOT" :
+                            (td.isMetaRegion() ? "META" : "    ")) + "\t" +
+                           " families: " + td.getFamilies().size());
+      }
+    }
+
+    // From the master, get a list of all known live region servers
+    Collection<HServerInfo> regionServers = status.getServerInfo();
+    errors.print("Number of live region servers: " +
+                       regionServers.size());
+    if (details) {
+      for (HServerInfo rsinfo: regionServers) {
+        errors.print("  " + rsinfo.getServerName());
+      }
+    }
+
+    // From the master, get a list of all dead region servers
+    Collection<String> deadRegionServers = status.getDeadServerNames();
+    errors.print("Number of dead region servers: " +
+                       deadRegionServers.size());
+    if (details) {
+      for (String name: deadRegionServers) {
+        errors.print("  " + name);
+      }
+    }
+
+    // Determine what's deployed
+    processRegionServers(regionServers);
+
+    // Determine what's on HDFS
+    checkHdfs();
+
+    // Empty cells in .META.?
+    errors.print("Number of empty REGIONINFO_QUALIFIER rows in .META.: " +
+      emptyRegionInfoQualifiers.size());
+    if (details) {
+      for (Result r: emptyRegionInfoQualifiers) {
+        errors.print("  " + r);
+      }
+    }
+
+    // Get disabled tables from ZooKeeper
+    loadDisabledTables();
+
+    // Check consistency
+    checkConsistency();
+
+    // Check integrity
+    checkIntegrity();
+
+    // Print table summary
+    printTableSummary();
+
+    return errors.summarize();
+  }
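+
+  // A minimal usage sketch (hypothetical; the command-line entry point is not
+  // shown in this hunk):
+  //
+  //   Configuration conf = HBaseConfiguration.create();
+  //   HBaseFsck fsck = new HBaseFsck(conf); // connects to the master
+  //   int status = fsck.doWork();           // 0 when consistent, non-zero otherwise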
+
+  /**
+   * Load the list of disabled tables in ZK into local set.
+   * @throws ZooKeeperConnectionException
+   * @throws IOException
+   * @throws KeeperException
+   */
+  private void loadDisabledTables()
+  throws ZooKeeperConnectionException, IOException, KeeperException {
+    ZooKeeperWatcher zkw =
+      HConnectionManager.getConnection(conf).getZooKeeperWatcher();
+    for (String tableName : ZKTable.getDisabledOrDisablingTables(zkw)) {
+      disabledTables.add(Bytes.toBytes(tableName));
+    }
+  }
+
+  /**
+   * Check if the specified region's table is disabled.
+   */
+  private boolean isTableDisabled(HRegionInfo regionInfo) {
+    return disabledTables.contains(regionInfo.getTableDesc().getName());
+  }
+
+  /**
+   * Scan HDFS for all regions, recording their information into
+   * regionInfo
+   */
+  void checkHdfs() throws IOException {
+    Path rootDir = new Path(conf.get(HConstants.HBASE_DIR));
+    FileSystem fs = rootDir.getFileSystem(conf);
+
+    // list all tables from HDFS
+    List<FileStatus> tableDirs = Lists.newArrayList();
+
+    boolean foundVersionFile = false;
+    FileStatus[] files = fs.listStatus(rootDir);
+    for (FileStatus file : files) {
+      if (file.getPath().getName().equals(HConstants.VERSION_FILE_NAME)) {
+        foundVersionFile = true;
+      } else {
+        tableDirs.add(file);
+      }
+    }
+
+    // verify that version file exists
+    if (!foundVersionFile) {
+      errors.reportError("Version file does not exist in root dir " + rootDir);
+    }
+
+    // level 1:  <HBASE_DIR>/*
+    for (FileStatus tableDir : tableDirs) {
+      String tableName = tableDir.getPath().getName();
+      // ignore hidden files
+      if (tableName.startsWith(".") &&
+          !tableName.equals( Bytes.toString(HConstants.META_TABLE_NAME)))
+        continue;
+      // level 2: <HBASE_DIR>/<table>/*
+      FileStatus[] regionDirs = fs.listStatus(tableDir.getPath());
+      for (FileStatus regionDir : regionDirs) {
+        String encodedName = regionDir.getPath().getName();
+        // ignore directories that aren't hexadecimal
+        if (!encodedName.toLowerCase().matches("[0-9a-f]+")) continue;
+
+        HbckInfo hbi = getOrCreateInfo(encodedName);
+        hbi.foundRegionDir = regionDir;
+
+        // Set a flag if this region contains only edits
+        // This is a special case for a region left behind after a split
+        hbi.onlyEdits = true;
+        FileStatus[] subDirs = fs.listStatus(regionDir.getPath());
+        if (subDirs != null) {
+          Path ePath = HLog.getRegionDirRecoveredEditsDir(regionDir.getPath());
+          for (FileStatus subDir : subDirs) {
+            String sdName = subDir.getPath().getName();
+            if (!sdName.startsWith(".") && !sdName.equals(ePath.getName())) {
+              hbi.onlyEdits = false;
+              break;
+            }
+          }
+        }
+      }
+    }
+  }
+
+  /**
+   * Record the location of the ROOT region as found in ZooKeeper,
+   * as if it were in a META table. This is so that we can check
+   * deployment of ROOT.
+   */
+  boolean recordRootRegion() throws IOException {
+    HRegionLocation rootLocation = connection.locateRegion(
+      HConstants.ROOT_TABLE_NAME, HConstants.EMPTY_START_ROW);
+
+    // Check if Root region is valid and existing
+    if (rootLocation == null || rootLocation.getRegionInfo() == null ||
+        rootLocation.getServerAddress() == null) {
+      errors.reportError("Root Region or some of its attributes is null.");
+      return false;
+    }
+
+    MetaEntry m = new MetaEntry(rootLocation.getRegionInfo(),
+      rootLocation.getServerAddress(), null, System.currentTimeMillis());
+    HbckInfo hbInfo = new HbckInfo(m);
+    regionInfo.put(rootLocation.getRegionInfo().getEncodedName(), hbInfo);
+    return true;
+  }
+
+  /**
+   * Contacts each regionserver and fetches metadata about regions.
+   * @param regionServerList - the list of region servers to connect to
+   * @throws IOException if a remote or network exception occurs
+   */
+  void processRegionServers(Collection<HServerInfo> regionServerList)
+    throws IOException {
+
+    // loop to contact each region server
+    for (HServerInfo rsinfo: regionServerList) {
+      errors.progress();
+      try {
+        HRegionInterface server = connection.getHRegionConnection(
+                                    rsinfo.getServerAddress());
+
+        // list all online regions from this region server
+        List<HRegionInfo> regions = server.getOnlineRegions();
+        if (details) {
+          errors.detail("RegionServer: " + rsinfo.getServerName() +
+                           " number of regions: " + regions.size());
+          for (HRegionInfo rinfo: regions) {
+            errors.detail("  " + rinfo.getRegionNameAsString() +
+                             " id: " + rinfo.getRegionId() +
+                             " encoded_name: " + rinfo.getEncodedName() +
+                             " start: " + Bytes.toStringBinary(rinfo.getStartKey()) +
+                             " end: " + Bytes.toStringBinary(rinfo.getEndKey()));
+          }
+        }
+
+        // check to see if the existence of this region matches the region in META
+        for (HRegionInfo r:regions) {
+          HbckInfo hbi = getOrCreateInfo(r.getEncodedName());
+          hbi.deployedOn.add(rsinfo.getServerAddress());
+        }
+      } catch (IOException e) {          // unable to connect to the region server.
+        errors.reportError("\nRegionServer:" + rsinfo.getServerName() +
+                           " Unable to fetch region information. " + e);
+      }
+    }
+  }
+
+  /**
+   * Check consistency of all regions that have been found in previous phases.
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  void checkConsistency()
+  throws IOException, KeeperException, InterruptedException {
+    for (java.util.Map.Entry<String, HbckInfo> e: regionInfo.entrySet()) {
+      doConsistencyCheck(e.getKey(), e.getValue());
+    }
+  }
+
+  /**
+   * Check a single region for consistency and correct deployment.
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  void doConsistencyCheck(final String key, final HbckInfo hbi)
+  throws IOException, KeeperException, InterruptedException {
+    String descriptiveName = hbi.toString();
+
+    boolean inMeta = hbi.metaEntry != null;
+    boolean inHdfs = hbi.foundRegionDir != null;
+    boolean hasMetaAssignment = inMeta && hbi.metaEntry.regionServer != null;
+    boolean isDeployed = !hbi.deployedOn.isEmpty();
+    boolean isMultiplyDeployed = hbi.deployedOn.size() > 1;
+    boolean deploymentMatchesMeta =
+      hasMetaAssignment && isDeployed && !isMultiplyDeployed &&
+      hbi.metaEntry.regionServer.equals(hbi.deployedOn.get(0));
+    boolean splitParent =
+      (hbi.metaEntry == null)? false: hbi.metaEntry.isSplit() && hbi.metaEntry.isOffline();
+    boolean shouldBeDeployed = inMeta && !isTableDisabled(hbi.metaEntry);
+    boolean recentlyModified = hbi.foundRegionDir != null &&
+      hbi.foundRegionDir.getModificationTime() + timelag > System.currentTimeMillis();
+
+    // ========== First the healthy cases =============
+    if (hbi.onlyEdits) {
+      return;
+    }
+    if (inMeta && inHdfs && isDeployed && deploymentMatchesMeta && shouldBeDeployed) {
+      return;
+    } else if (inMeta && !isDeployed && splitParent) {
+      // Offline regions shouldn't cause complaints
+      LOG.debug("Region " + descriptiveName + " offline, split, parent, ignoring.");
+      return;
+    } else if (inMeta && !shouldBeDeployed && !isDeployed) {
+      // offline regions shouldn't cause complaints
+      LOG.debug("Region " + descriptiveName + " offline, ignoring.");
+      return;
+    } else if (recentlyModified) {
+      LOG.info("Region " + descriptiveName + " was recently modified -- skipping");
+      return;
+    }
+    // ========== Cases where the region is not in META =============
+    else if (!inMeta && !inHdfs && !isDeployed) {
+      // We shouldn't have record of this region at all then!
+      assert false : "Entry for region with no data";
+    } else if (!inMeta && !inHdfs && isDeployed) {
+      errors.reportError("Region " + descriptiveName + ", key=" + key + ", not on HDFS or in META but " +
+        "deployed on " + Joiner.on(", ").join(hbi.deployedOn));
+    } else if (!inMeta && inHdfs && !isDeployed) {
+      errors.reportError("Region " + descriptiveName + " on HDFS, but not listed in META " +
+        "or deployed on any region server.");
+    } else if (!inMeta && inHdfs && isDeployed) {
+      errors.reportError("Region " + descriptiveName + " not in META, but deployed on " +
+        Joiner.on(", ").join(hbi.deployedOn));
+
+    // ========== Cases where the region is in META =============
+    } else if (inMeta && !inHdfs && !isDeployed) {
+      errors.reportError("Region " + descriptiveName + " found in META, but not in HDFS " +
+        "or deployed on any region server.");
+    } else if (inMeta && !inHdfs && isDeployed) {
+      errors.reportError("Region " + descriptiveName + " found in META, but not in HDFS, " +
+        "and deployed on " + Joiner.on(", ").join(hbi.deployedOn));
+    } else if (inMeta && inHdfs && !isDeployed && shouldBeDeployed) {
+      errors.reportError("Region " + descriptiveName + " not deployed on any region server.");
+      // If we are trying to fix the errors
+      if (shouldFix()) {
+        errors.print("Trying to fix unassigned region...");
+        setShouldRerun();
+        HBaseFsckRepair.fixUnassigned(this.conf, hbi.metaEntry);
+      }
+    } else if (inMeta && inHdfs && isDeployed && !shouldBeDeployed) {
+      errors.reportError("Region " + descriptiveName + " should not be deployed according " +
+        "to META, but is deployed on " + Joiner.on(", ").join(hbi.deployedOn));
+    } else if (inMeta && inHdfs && isMultiplyDeployed) {
+      errors.reportError("Region " + descriptiveName + " is listed in META on region server " +
+        hbi.metaEntry.regionServer + " but is multiply assigned to region servers " +
+        Joiner.on(", ").join(hbi.deployedOn));
+      // If we are trying to fix the errors
+      if (shouldFix()) {
+        errors.print("Trying to fix assignment error...");
+        setShouldRerun();
+        HBaseFsckRepair.fixDupeAssignment(this.conf, hbi.metaEntry, hbi.deployedOn);
+      }
+    } else if (inMeta && inHdfs && isDeployed && !deploymentMatchesMeta) {
+      errors.reportError("Region " + descriptiveName + " listed in META on region server " +
+        hbi.metaEntry.regionServer + " but found on region server " +
+        hbi.deployedOn.get(0));
+      // If we are trying to fix the errors
+      if (shouldFix()) {
+        errors.print("Trying to fix assignment error...");
+        setShouldRerun();
+        HBaseFsckRepair.fixDupeAssignment(this.conf, hbi.metaEntry, hbi.deployedOn);
+      }
+    } else {
+      errors.reportError("Region " + descriptiveName + " is in an unforeseen state:" +
+        " inMeta=" + inMeta +
+        " inHdfs=" + inHdfs +
+        " isDeployed=" + isDeployed +
+        " isMultiplyDeployed=" + isMultiplyDeployed +
+        " deploymentMatchesMeta=" + deploymentMatchesMeta +
+        " shouldBeDeployed=" + shouldBeDeployed);
+    }
+  }
+
+  /**
+   * Checks tables integrity. Goes over all regions and scans the tables.
+   * Collects all the pieces for each table and checks if there are missing,
+   * repeated or overlapping ones.
+   */
+  void checkIntegrity() {
+    for (HbckInfo hbi : regionInfo.values()) {
+      // Check only valid, working regions
+      if (hbi.metaEntry == null) continue;
+      if (hbi.metaEntry.regionServer == null) continue;
+      if (hbi.foundRegionDir == null) continue;
+      if (hbi.deployedOn.size() != 1) continue;
+      if (hbi.onlyEdits) continue;
+
+      // We should be safe here
+      String tableName = hbi.metaEntry.getTableDesc().getNameAsString();
+      TInfo modTInfo = tablesInfo.get(tableName);
+      if (modTInfo == null) {
+        modTInfo = new TInfo(tableName);
+      }
+      for (HServerAddress server : hbi.deployedOn) {
+        modTInfo.addServer(server);
+      }
+      modTInfo.addEdge(hbi.metaEntry.getStartKey(), hbi.metaEntry.getEndKey());
+      tablesInfo.put(tableName, modTInfo);
+    }
+
+    for (TInfo tInfo : tablesInfo.values()) {
+      if (!tInfo.check()) {
+        errors.reportError("Found inconsistency in table " + tInfo.getName());
+      }
+    }
+  }
+
+  /**
+   * Maintain information about a particular table.
+   */
+  private class TInfo {
+    String tableName;
+    TreeMap <byte[], byte[]> edges;
+    TreeSet <HServerAddress> deployedOn;
+
+    TInfo(String name) {
+      this.tableName = name;
+      edges = new TreeMap <byte[], byte[]> (Bytes.BYTES_COMPARATOR);
+      deployedOn = new TreeSet <HServerAddress>();
+    }
+
+    public void addEdge(byte[] fromNode, byte[] toNode) {
+      this.edges.put(fromNode, toNode);
+    }
+
+    public void addServer(HServerAddress server) {
+      this.deployedOn.add(server);
+    }
+
+    public String getName() {
+      return tableName;
+    }
+
+    public int getNumRegions() {
+      return edges.size();
+    }
+
+    public boolean check() {
+      byte[] last = new byte[0];
+      byte[] next = new byte[0];
+      TreeSet <byte[]> visited = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+      // Each table should start with a zero-length byte[] and end at a
+      // zero-length byte[]. Just follow the edges to see if this is true
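+      // Example: regions [ "", "b" ), [ "b", "c" ), [ "c", "" ) produce the
+      // edges {"" -> "b", "b" -> "c", "c" -> ""}; walking from "" back to ""
+      // while visiting every edge exactly once means the chain is complete.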
+      while (true) {
+        // Check if chain is broken
+        if (!edges.containsKey(last)) {
+          errors.detail("Chain of regions in table " + tableName +
+            " is broken; edges does not contain " + Bytes.toString(last));
+          return false;
+        }
+        next = edges.get(last);
+        // Found a cycle
+        if (visited.contains(next)) {
+          errors.detail("Chain of regions in table " + tableName +
+            " has a cycle around " + Bytes.toString(next));
+          return false;
+        }
+        // Mark next node as visited
+        visited.add(next);
+        // If next is zero-length byte[] we are possibly at the end of the chain
+        if (next.length == 0) {
+          // If we have visited all elements we are fine
+          if (edges.size() != visited.size()) {
+            errors.detail("Chain of regions in table " + tableName +
+              " contains less elements than are listed in META; visited=" + visited.size() +
+              ", edges=" + edges.size());
+            return false;
+          }
+          return true;
+        }
+        last = next;
+      }
+      // How did we get here?
+    }
+  }
+
+  /**
+   * Return a list of user-space table names whose metadata have not been
+   * modified in the last few milliseconds specified by timelag.
+   * If none of REGIONINFO_QUALIFIER, SERVER_QUALIFIER, STARTCODE_QUALIFIER,
+   * SPLITA_QUALIFIER or SPLITB_QUALIFIER has changed in the last timelag
+   * milliseconds, the table is a candidate to be returned.
+   * @param numSkipped - incremented for each table skipped because it is still in flux
+   * @return tables that have not been modified recently
+   */
+  HTableDescriptor[] getTables(AtomicInteger numSkipped) {
+    TreeSet<HTableDescriptor> uniqueTables = new TreeSet<HTableDescriptor>();
+    long now = System.currentTimeMillis();
+
+    for (HbckInfo hbi : regionInfo.values()) {
+      MetaEntry info = hbi.metaEntry;
+
+      // if the start key is zero, then we have found the first region of a table.
+      // pick only those tables that were not modified in the last few milliseconds.
+      if (info != null && info.getStartKey().length == 0 && !info.isMetaRegion()) {
+        if (info.modTime + timelag < now) {
+          uniqueTables.add(info.getTableDesc());
+        } else {
+          numSkipped.incrementAndGet(); // one more in-flux table
+        }
+      }
+    }
+    return uniqueTables.toArray(new HTableDescriptor[uniqueTables.size()]);
+  }
+
+  /**
+   * Gets the entry in regionInfo corresponding to the given encoded
+   * region name. If the region has not been seen yet, a new entry is added
+   * and returned.
+   */
+  private HbckInfo getOrCreateInfo(String name) {
+    HbckInfo hbi = regionInfo.get(name);
+    if (hbi == null) {
+      hbi = new HbckInfo(null);
+      regionInfo.put(name, hbi);
+    }
+    return hbi;
+  }
+
+  /**
+   * Check values in regionInfo for .META.
+   * Check if zero or more than one regions with META are found.
+   * If there are inconsistencies (i.e. zero or more than one regions
+   * pretend to be holding the .META.) try to fix that and report an error.
+   * @throws IOException from HBaseFsckRepair functions
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  boolean checkMetaEntries()
+  throws IOException, KeeperException, InterruptedException {
+    List <HbckInfo> metaRegions = Lists.newArrayList();
+    for (HbckInfo value : regionInfo.values()) {
+      if (value.metaEntry.isMetaTable()) {
+        metaRegions.add(value);
+      }
+    }
+
+    // If something is wrong
+    if (metaRegions.size() != 1) {
+      HRegionLocation rootLocation = connection.locateRegion(
+        HConstants.ROOT_TABLE_NAME, HConstants.EMPTY_START_ROW);
+      HbckInfo root =
+          regionInfo.get(rootLocation.getRegionInfo().getEncodedName());
+
+      // If there is no region holding .META.
+      if (metaRegions.size() == 0) {
+        errors.reportError(".META. is not found on any region.");
+        if (shouldFix()) {
+          errors.print("Trying to fix a problem with .META...");
+          setShouldRerun();
+          // try to fix it (treat it as unassigned region)
+          HBaseFsckRepair.fixUnassigned(conf, root.metaEntry);
+        }
+      }
+      // If there are more than one regions pretending to hold the .META.
+      else if (metaRegions.size() > 1) {
+        errors.reportError(".META. is found on more than one region.");
+        if (shouldFix()) {
+          errors.print("Trying to fix a problem with .META...");
+          setShouldRerun();
+          // try to fix it (treat it as a dupe assignment)
+          List <HServerAddress> deployedOn = Lists.newArrayList();
+          for (HbckInfo mRegion : metaRegions) {
+            deployedOn.add(mRegion.metaEntry.regionServer);
+          }
+          HBaseFsckRepair.fixDupeAssignment(conf, root.metaEntry, deployedOn);
+        }
+      }
+      // rerun hbck with hopefully fixed META
+      return false;
+    }
+    // no errors, so continue normally
+    return true;
+  }
+
+  /**
+   * Scan .META. and -ROOT-, adding all regions found to the regionInfo map.
+   * @throws IOException if an error is encountered
+   */
+  void getMetaEntries() throws IOException {
+    MetaScannerVisitor visitor = new MetaScannerVisitor() {
+      int countRecord = 1;
+
+      // comparator ordering KeyValues by timestamp so we can pick the latest
+      final Comparator<KeyValue> comp = new Comparator<KeyValue>() {
+        public int compare(KeyValue k1, KeyValue k2) {
+          // compare the longs directly; casting the difference to an int can
+          // overflow when timestamps are far apart
+          long t1 = k1.getTimestamp(), t2 = k2.getTimestamp();
+          return t1 < t2 ? -1 : (t1 > t2 ? 1 : 0);
+        }
+      };
+
+      public boolean processRow(Result result) throws IOException {
+        try {
+
+          // record the latest modification of this META record
+          long ts =  Collections.max(result.list(), comp).getTimestamp();
+
+          // record region details
+          byte [] value = result.getValue(HConstants.CATALOG_FAMILY,
+            HConstants.REGIONINFO_QUALIFIER);
+          if (value == null || value.length == 0) {
+            emptyRegionInfoQualifiers.add(result);
+            return true;
+          }
+          HRegionInfo info = Writables.getHRegionInfo(value);
+          HServerAddress server = null;
+          byte[] startCode = null;
+
+          // record assigned region server
+          value = result.getValue(HConstants.CATALOG_FAMILY,
+                                     HConstants.SERVER_QUALIFIER);
+          if (value != null && value.length > 0) {
+            String address = Bytes.toString(value);
+            server = new HServerAddress(address);
+          }
+
+          // record region's start code
+          value = result.getValue(HConstants.CATALOG_FAMILY,
+                                  HConstants.STARTCODE_QUALIFIER);
+          if (value != null) {
+            startCode = value;
+          }
+          MetaEntry m = new MetaEntry(info, server, startCode, ts);
+          HbckInfo hbInfo = new HbckInfo(m);
+          HbckInfo previous = regionInfo.put(info.getEncodedName(), hbInfo);
+          if (previous != null) {
+            throw new IOException("Two entries in META are same " + previous);
+          }
+
+          // show proof of progress to the user, once for every 100 records.
+          if (countRecord % 100 == 0) {
+            errors.progress();
+          }
+          countRecord++;
+          return true;
+        } catch (RuntimeException e) {
+          LOG.error("Result=" + result);
+          throw e;
+        }
+      }
+    };
+
+    // Scan -ROOT- to pick up META regions
+    MetaScanner.metaScan(conf, visitor, null, null,
+      Integer.MAX_VALUE, HConstants.ROOT_TABLE_NAME);
+
+    // Scan .META. to pick up user regions
+    MetaScanner.metaScan(conf, visitor);
+    errors.print("");
+  }
+
+  /**
+   * Stores the entries scanned from META
+   */
+  private static class MetaEntry extends HRegionInfo {
+    HServerAddress regionServer;   // server hosting this region
+    long modTime;          // timestamp of the most recent modification of this META record
+
+    public MetaEntry(HRegionInfo rinfo, HServerAddress regionServer,
+                     byte[] startCode, long modTime) {
+      super(rinfo);
+      this.regionServer = regionServer;
+      this.modTime = modTime;
+    }
+  }
+
+  /**
+   * Maintain information about a particular region.
+   */
+  static class HbckInfo {
+    boolean onlyEdits = false;
+    MetaEntry metaEntry = null;
+    FileStatus foundRegionDir = null;
+    List<HServerAddress> deployedOn = Lists.newArrayList();
+
+    HbckInfo(MetaEntry metaEntry) {
+      this.metaEntry = metaEntry;
+    }
+
+    public String toString() {
+      if (metaEntry != null) {
+        return metaEntry.getRegionNameAsString();
+      } else if (foundRegionDir != null) {
+        return foundRegionDir.getPath().toString();
+      } else {
+        return "UNKNOWN_REGION on " + Joiner.on(", ").join(deployedOn);
+      }
+    }
+  }
+
+  /**
+   * Prints summary of all tables found on the system.
+   */
+  private void printTableSummary() {
+    System.out.println("Summary:");
+    for (TInfo tInfo : tablesInfo.values()) {
+      if (tInfo.check()) {
+        System.out.println("  " + tInfo.getName() + " is okay.");
+      } else {
+        System.out.println("Table " + tInfo.getName() + " is inconsistent.");
+      }
+      System.out.println("    Number of regions: " + tInfo.getNumRegions());
+      System.out.print("    Deployed on: ");
+      for (HServerAddress server : tInfo.deployedOn) {
+        System.out.print(" " + server.toString());
+      }
+      System.out.println();
+    }
+  }
+
+  interface ErrorReporter {
+    public void reportError(String message);
+    public int summarize();
+    public void detail(String details);
+    public void progress();
+    public void print(String message);
+  }
+
+  private static class PrintingErrorReporter implements ErrorReporter {
+    public int errorCount = 0;
+    private int showProgress;
+
+    public void reportError(String message) {
+      if (!summary) {
+        System.out.println("ERROR: " + message);
+      }
+      errorCount++;
+      showProgress = 0;
+    }
+
+    public int summarize() {
+      System.out.println(Integer.toString(errorCount) +
+                         " inconsistencies detected.");
+      if (errorCount == 0) {
+        System.out.println("Status: OK");
+        return 0;
+      } else {
+        System.out.println("Status: INCONSISTENT");
+        return -1;
+      }
+    }
+
+    public void print(String message) {
+      if (!summary) {
+        System.out.println(message);
+      }
+    }
+
+    public void detail(String message) {
+      if (details) {
+        System.out.println(message);
+      }
+      showProgress = 0;
+    }
+
+    public void progress() {
+      if (showProgress++ == 10) {
+        if (!summary) {
+          System.out.print(".");
+        }
+        showProgress = 0;
+      }
+    }
+  }
+
+  /**
+   * Display the full report from fsck.
+   * This displays all live and dead region servers, and all known regions.
+   */
+  void displayFullReport() {
+    details = true;
+  }
+
+  /**
+   * Set summary mode.
+   * Print only summary of the tables and status (OK or INCONSISTENT)
+   */
+  void setSummary() {
+    summary = true;
+  }
+
+  /**
+   * Mark that fsck should be rerun.  This is set after we have tried to fix
+   * something, so that the tool is run once more to verify the fix.
+   */
+  void setShouldRerun() {
+    rerun = true;
+  }
+
+  boolean shouldRerun() {
+    return rerun;
+  }
+
+  /**
+   * Fix inconsistencies found by fsck. This should try to fix errors (if any)
+   * found by fsck utility.
+   */
+  void setFixErrors() {
+    fix = true;
+  }
+
+  boolean shouldFix() {
+    return fix;
+  }
+
+  /**
+   * Only inspect regions whose state in META has not changed during the last
+   * few seconds, as specified by hbase.admin.fsck.timelag.
+   * @param seconds - the time in seconds
+   */
+  void setTimeLag(long seconds) {
+    timelag = seconds * 1000; // convert to milliseconds
+  }
+
+  protected static void printUsageAndExit() {
+    System.err.println("Usage: fsck [opts] ");
+    System.err.println(" where [opts] are:");
+    System.err.println("   -details Display full report of all regions.");
+    System.err.println("   -timelag {timeInSeconds}  Process only regions that " +
+                       " have not experienced any metadata updates in the last " +
+                       " {{timeInSeconds} seconds.");
+    System.err.println("   -fix Try to fix some of the errors.");
+    System.err.println("   -summary Print only summary of the tables and status.");
+
+    Runtime.getRuntime().exit(-2);
+  }
+
+  /**
+   * Main program
+   * @param args
+   * @throws Exception
+   */
+  public static void main(String [] args) throws Exception {
+
+    // create a fsck object
+    Configuration conf = HBaseConfiguration.create();
+    conf.set("fs.defaultFS", conf.get("hbase.rootdir"));
+    HBaseFsck fsck = new HBaseFsck(conf);
+
+    // Process command-line args.
+    for (int i = 0; i < args.length; i++) {
+      String cmd = args[i];
+      if (cmd.equals("-details")) {
+        fsck.displayFullReport();
+      } else if (cmd.equals("-timelag")) {
+        if (i == args.length - 1) {
+          System.err.println("HBaseFsck: -timelag needs a value.");
+          printUsageAndExit();
+        }
+        try {
+          long timelag = Long.parseLong(args[i+1]);
+          fsck.setTimeLag(timelag);
+        } catch (NumberFormatException e) {
+          System.err.println("-timelag needs a numeric value.");
+          printUsageAndExit();
+        }
+        i++;
+      } else if (cmd.equals("-fix")) {
+        fsck.setFixErrors();
+      } else if (cmd.equals("-summary")) {
+        fsck.setSummary();
+      } else {
+        String str = "Unknown command line option : " + cmd;
+        LOG.info(str);
+        System.out.println(str);
+        printUsageAndExit();
+      }
+    }
+    // do the real work of fsck
+    int code = fsck.doWork();
+    // If we have changed the HBase state it is better to run fsck again
+    // to see if we haven't broken something else in the process.
+    // We run it only once more because otherwise we can easily fall into
+    // an infinite loop.
+    if (fsck.shouldRerun()) {
+      code = fsck.doWork();
+    }
+
+    Runtime.getRuntime().exit(code);
+  }
+}
+
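A minimal sketch of driving HBaseFsck programmatically, mirroring the command-line handling in main() above. The class name FsckDriver is invented for illustration; it sits in the same package because the option setters shown in the patch are package-private, and the return-code and shouldRerun() semantics are taken directly from the code above.

package org.apache.hadoop.hbase.util;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FsckDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseFsck fsck = new HBaseFsck(conf);
    fsck.displayFullReport();   // same effect as -details
    fsck.setTimeLag(60);        // same effect as -timelag 60
    fsck.setFixErrors();        // same effect as -fix
    int code = fsck.doWork();   // 0 == OK, -1 == INCONSISTENT
    if (fsck.shouldRerun()) {
      // a fix was attempted; run once more to check nothing else broke
      code = fsck.doWork();
    }
    System.exit(code);
  }
}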
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
new file mode 100644
index 0000000..b624d28
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
@@ -0,0 +1,108 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.zookeeper.KeeperException;
+
+public class HBaseFsckRepair {
+
+  /**
+   * Fix dupe assignment by doing silent closes on each RS hosting the region
+   * and then force ZK unassigned node to OFFLINE to trigger assignment by
+   * master.
+   * @param conf
+   * @param region
+   * @param servers
+   * @throws IOException
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  public static void fixDupeAssignment(Configuration conf, HRegionInfo region,
+      List<HServerAddress> servers)
+  throws IOException, KeeperException, InterruptedException {
+
+    HRegionInfo actualRegion = new HRegionInfo(region);
+
+    // Close region on the servers silently
+    for(HServerAddress server : servers) {
+      closeRegionSilentlyAndWait(conf, server, actualRegion);
+    }
+
+    // Force ZK node to OFFLINE so master assigns
+    forceOfflineInZK(conf, actualRegion);
+  }
+
+  /**
+   * Fix unassigned by creating/transition the unassigned ZK node for this
+   * region to OFFLINE state with a special flag to tell the master that this
+   * is a forced operation by HBCK.
+   * @param conf
+   * @param region
+   * @throws IOException
+   * @throws KeeperException
+   */
+  public static void fixUnassigned(Configuration conf, HRegionInfo region)
+  throws IOException, KeeperException {
+    HRegionInfo actualRegion = new HRegionInfo(region);
+
+    // Force ZK node to OFFLINE so master assigns
+    forceOfflineInZK(conf, actualRegion);
+  }
+
+  private static void forceOfflineInZK(Configuration conf, HRegionInfo region)
+  throws ZooKeeperConnectionException, KeeperException, IOException {
+    ZKAssign.createOrForceNodeOffline(
+        HConnectionManager.getConnection(conf).getZooKeeperWatcher(),
+        region, HConstants.HBCK_CODE_NAME);
+  }
+
+  private static void closeRegionSilentlyAndWait(Configuration conf,
+      HServerAddress server, HRegionInfo region)
+  throws IOException, InterruptedException {
+    HRegionInterface rs =
+      HConnectionManager.getConnection(conf).getHRegionConnection(server);
+    rs.closeRegion(region, false);
+    long timeout = conf.getLong("hbase.hbck.close.timeout", 120000);
+    long expiration = timeout + System.currentTimeMillis();
+    while (System.currentTimeMillis() < expiration) {
+      try {
+        HRegionInfo rsRegion = rs.getRegionInfo(region.getRegionName());
+        if (rsRegion == null) throw new NotServingRegionException();
+      } catch (Exception e) {
+        return;
+      }
+      Thread.sleep(1000);
+    }
+    throw new IOException("Region " + region + " failed to close within" +
+        " timeout " + timeout);
+  }
+}
\ No newline at end of file
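A hedged sketch of how the repair entry points above might be invoked, mirroring the calls HBaseFsck makes into HBaseFsckRepair. The class RepairSketch and its parameters are invented for illustration; the hbase.hbck.close.timeout key is the one read by closeRegionSilentlyAndWait() above.

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.util.HBaseFsckRepair;

public class RepairSketch {
  // regionInfo and hostingServers would come from an hbck scan of META,
  // as in HBaseFsck above; they are parameters here to keep the sketch small.
  static void repair(HRegionInfo regionInfo, List<HServerAddress> hostingServers)
      throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // closeRegionSilentlyAndWait() polls the region server for up to this long
    conf.setLong("hbase.hbck.close.timeout", 60 * 1000);
    if (hostingServers.size() > 1) {
      // region deployed on several servers: close everywhere, then force OFFLINE
      HBaseFsckRepair.fixDupeAssignment(conf, regionInfo, hostingServers);
    } else {
      // region not deployed anywhere: force its ZK node OFFLINE for reassignment
      HBaseFsckRepair.fixUnassigned(conf, regionInfo);
    }
  }
}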
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/HMerge.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/HMerge.java
new file mode 100644
index 0000000..c447287
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/HMerge.java
@@ -0,0 +1,429 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.RemoteExceptionHandler;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.Random;
+
+/**
+ * A non-instantiable class that has a static method capable of compacting
+ * a table by merging adjacent regions.
+ */
+class HMerge {
+  static final Log LOG = LogFactory.getLog(HMerge.class);
+  static final Random rand = new Random();
+
+  /*
+   * Not instantiable
+   */
+  private HMerge() {
+    super();
+  }
+
+  /**
+   * Scans the table and merges two adjacent regions if they are small. This
+   * only happens when a lot of rows are deleted.
+   *
+   * When merging the META region, the HBase instance must be offline.
+   * When merging a normal table, the HBase instance must be online, but the
+   * table must be disabled.
+   *
+   * @param conf        - configuration object for HBase
+   * @param fs          - FileSystem where regions reside
+   * @param tableName   - Table to be compacted
+   * @throws IOException
+   */
+  public static void merge(Configuration conf, FileSystem fs,
+    final byte [] tableName)
+  throws IOException {
+    merge(conf, fs, tableName, true);
+  }
+
+  /**
+   * Scans the table and merges two adjacent regions if they are small. This
+   * only happens when a lot of rows are deleted.
+   *
+   * When merging the META region, the HBase instance must be offline.
+   * When merging a normal table, the HBase instance must be online, but the
+   * table must be disabled.
+   *
+   * @param conf        - configuration object for HBase
+   * @param fs          - FileSystem where regions reside
+   * @param tableName   - Table to be compacted
+   * @param testMasterRunning True if we should check whether the master is
+   * running before merging (it must be down for META, up for a user table)
+   * @throws IOException
+   */
+  public static void merge(Configuration conf, FileSystem fs,
+    final byte [] tableName, final boolean testMasterRunning)
+  throws IOException {
+    boolean masterIsRunning = false;
+    if (testMasterRunning) {
+      HConnection connection = HConnectionManager.getConnection(conf);
+      masterIsRunning = connection.isMasterRunning();
+    }
+    HConnectionManager.deleteConnection(conf, true);
+    if (Bytes.equals(tableName, HConstants.META_TABLE_NAME)) {
+      if (masterIsRunning) {
+        throw new IllegalStateException(
+            "Cannot merge the META table while the HBase instance is online");
+      }
+      new OfflineMerger(conf, fs).process();
+    } else {
+      if(!masterIsRunning) {
+        throw new IllegalStateException(
+            "HBase instance must be running to merge a normal table");
+      }
+      HBaseAdmin admin = new HBaseAdmin(conf);
+      if (!admin.isTableDisabled(tableName)) {
+        throw new TableNotDisabledException(tableName);
+      }
+      new OnlineMerger(conf, fs, tableName).process();
+    }
+  }
+
+  private static abstract class Merger {
+    protected final Configuration conf;
+    protected final FileSystem fs;
+    protected final Path tabledir;
+    protected final HLog hlog;
+    private final long maxFilesize;
+
+
+    protected Merger(Configuration conf, FileSystem fs,
+      final byte [] tableName)
+    throws IOException {
+      this.conf = conf;
+      this.fs = fs;
+      this.maxFilesize = conf.getLong("hbase.hregion.max.filesize",
+          HConstants.DEFAULT_MAX_FILE_SIZE);
+
+      this.tabledir = new Path(
+          fs.makeQualified(new Path(conf.get(HConstants.HBASE_DIR))),
+          Bytes.toString(tableName)
+      );
+      Path logdir = new Path(tabledir, "merge_" + System.currentTimeMillis() +
+          HConstants.HREGION_LOGDIR_NAME);
+      Path oldLogDir = new Path(tabledir, HConstants.HREGION_OLDLOGDIR_NAME);
+      this.hlog = new HLog(fs, logdir, oldLogDir, conf);
+    }
+
+    void process() throws IOException {
+      try {
+        for (HRegionInfo[] regionsToMerge = next();
+            regionsToMerge != null;
+            regionsToMerge = next()) {
+          if (!merge(regionsToMerge)) {
+            return;
+          }
+        }
+      } finally {
+        try {
+          hlog.closeAndDelete();
+
+        } catch(IOException e) {
+          LOG.error(e);
+        }
+      }
+    }
+
+    protected boolean merge(final HRegionInfo[] info) throws IOException {
+      if (info.length < 2) {
+        LOG.info("only one region - nothing to merge");
+        return false;
+      }
+
+      HRegion currentRegion = null;
+      long currentSize = 0;
+      HRegion nextRegion = null;
+      long nextSize = 0;
+      for (int i = 0; i < info.length - 1; i++) {
+        if (currentRegion == null) {
+          currentRegion =
+            HRegion.newHRegion(tabledir, hlog, fs, conf, info[i], null);
+          currentRegion.initialize();
+          currentSize = currentRegion.getLargestHStoreSize();
+        }
+        nextRegion =
+          HRegion.newHRegion(tabledir, hlog, fs, conf, info[i + 1], null);
+        nextRegion.initialize();
+        nextSize = nextRegion.getLargestHStoreSize();
+
+        if ((currentSize + nextSize) <= (maxFilesize / 2)) {
+          // We merge two adjacent regions if their total size is less than
+          // one half of the desired maximum size
+          LOG.info("Merging regions " + currentRegion.getRegionNameAsString() +
+            " and " + nextRegion.getRegionNameAsString());
+          HRegion mergedRegion =
+            HRegion.mergeAdjacent(currentRegion, nextRegion);
+          updateMeta(currentRegion.getRegionName(), nextRegion.getRegionName(),
+              mergedRegion);
+          break;
+        }
+        LOG.info("not merging regions " + Bytes.toString(currentRegion.getRegionName())
+            + " and " + Bytes.toString(nextRegion.getRegionName()));
+        currentRegion.close();
+        currentRegion = nextRegion;
+        currentSize = nextSize;
+      }
+      if(currentRegion != null) {
+        currentRegion.close();
+      }
+      return true;
+    }
+
+    protected abstract HRegionInfo[] next() throws IOException;
+
+    protected abstract void updateMeta(final byte [] oldRegion1,
+      final byte [] oldRegion2, HRegion newRegion)
+    throws IOException;
+
+  }
+
+  /** Instantiated to compact a normal user table */
+  private static class OnlineMerger extends Merger {
+    private final byte [] tableName;
+    private final HTable table;
+    private final ResultScanner metaScanner;
+    private HRegionInfo latestRegion;
+
+    OnlineMerger(Configuration conf, FileSystem fs,
+      final byte [] tableName)
+    throws IOException {
+      super(conf, fs, tableName);
+      this.tableName = tableName;
+      this.table = new HTable(conf, HConstants.META_TABLE_NAME);
+      this.metaScanner = table.getScanner(HConstants.CATALOG_FAMILY,
+          HConstants.REGIONINFO_QUALIFIER);
+      this.latestRegion = null;
+    }
+
+    private HRegionInfo nextRegion() throws IOException {
+      try {
+        Result results = getMetaRow();
+        if (results == null) {
+          return null;
+        }
+        byte[] regionInfoValue = results.getValue(HConstants.CATALOG_FAMILY,
+            HConstants.REGIONINFO_QUALIFIER);
+        if (regionInfoValue == null || regionInfoValue.length == 0) {
+          throw new NoSuchElementException("meta region entry missing " +
+              Bytes.toString(HConstants.CATALOG_FAMILY) + ":" +
+              Bytes.toString(HConstants.REGIONINFO_QUALIFIER));
+        }
+        HRegionInfo region = Writables.getHRegionInfo(regionInfoValue);
+        if (!Bytes.equals(region.getTableDesc().getName(), this.tableName)) {
+          return null;
+        }
+        return region;
+      } catch (IOException e) {
+        e = RemoteExceptionHandler.checkIOException(e);
+        LOG.error("meta scanner error", e);
+        metaScanner.close();
+        throw e;
+      }
+    }
+
+    /*
+     * Check that the current row has an HRegionInfo.  Skip to the next row if
+     * the HRI is empty.
+     * @return the Result for the row, or null if we are off the end
+     * @throws IOException
+     */
+    private Result getMetaRow() throws IOException {
+      Result currentRow = metaScanner.next();
+      boolean foundResult = false;
+      while (currentRow != null) {
+        LOG.info("Row: <" + Bytes.toString(currentRow.getRow()) + ">");
+        byte[] regionInfoValue = currentRow.getValue(HConstants.CATALOG_FAMILY,
+            HConstants.REGIONINFO_QUALIFIER);
+        if (regionInfoValue == null || regionInfoValue.length == 0) {
+          currentRow = metaScanner.next();
+          continue;
+        }
+        foundResult = true;
+        break;
+      }
+      return foundResult ? currentRow : null;
+    }
+
+    @Override
+    protected HRegionInfo[] next() throws IOException {
+      List<HRegionInfo> regions = new ArrayList<HRegionInfo>();
+      if(latestRegion == null) {
+        latestRegion = nextRegion();
+      }
+      if(latestRegion != null) {
+        regions.add(latestRegion);
+      }
+      latestRegion = nextRegion();
+      if(latestRegion != null) {
+        regions.add(latestRegion);
+      }
+      return regions.toArray(new HRegionInfo[regions.size()]);
+    }
+
+    @Override
+    protected void updateMeta(final byte [] oldRegion1,
+        final byte [] oldRegion2,
+      HRegion newRegion)
+    throws IOException {
+      byte[][] regionsToDelete = {oldRegion1, oldRegion2};
+      for (int r = 0; r < regionsToDelete.length; r++) {
+        if(Bytes.equals(regionsToDelete[r], latestRegion.getRegionName())) {
+          latestRegion = null;
+        }
+        Delete delete = new Delete(regionsToDelete[r]);
+        table.delete(delete);
+        if(LOG.isDebugEnabled()) {
+          LOG.debug("updated columns in row: " + Bytes.toString(regionsToDelete[r]));
+        }
+      }
+      newRegion.getRegionInfo().setOffline(true);
+
+      Put put = new Put(newRegion.getRegionName());
+      put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(newRegion.getRegionInfo()));
+      table.put(put);
+
+      if(LOG.isDebugEnabled()) {
+        LOG.debug("updated columns in row: "
+            + Bytes.toString(newRegion.getRegionName()));
+      }
+    }
+  }
+
+  /** Instantiated to compact the meta region */
+  private static class OfflineMerger extends Merger {
+    private final List<HRegionInfo> metaRegions = new ArrayList<HRegionInfo>();
+    private final HRegion root;
+
+    OfflineMerger(Configuration conf, FileSystem fs)
+        throws IOException {
+      super(conf, fs, HConstants.META_TABLE_NAME);
+
+      Path rootTableDir = HTableDescriptor.getTableDir(
+          fs.makeQualified(new Path(conf.get(HConstants.HBASE_DIR))),
+          HConstants.ROOT_TABLE_NAME);
+
+      // Scan root region to find all the meta regions
+
+      root = HRegion.newHRegion(rootTableDir, hlog, fs, conf,
+          HRegionInfo.ROOT_REGIONINFO, null);
+      root.initialize();
+
+      Scan scan = new Scan();
+      scan.addColumn(HConstants.CATALOG_FAMILY,
+          HConstants.REGIONINFO_QUALIFIER);
+      InternalScanner rootScanner =
+        root.getScanner(scan);
+
+      try {
+        List<KeyValue> results = new ArrayList<KeyValue>();
+        while(rootScanner.next(results)) {
+          for(KeyValue kv: results) {
+            HRegionInfo info = Writables.getHRegionInfoOrNull(kv.getValue());
+            if (info != null) {
+              metaRegions.add(info);
+            }
+          }
+        }
+      } finally {
+        rootScanner.close();
+        try {
+          root.close();
+
+        } catch(IOException e) {
+          LOG.error(e);
+        }
+      }
+    }
+
+    @Override
+    protected HRegionInfo[] next() {
+      HRegionInfo[] results = null;
+      if (metaRegions.size() > 0) {
+        results = metaRegions.toArray(new HRegionInfo[metaRegions.size()]);
+        metaRegions.clear();
+      }
+      return results;
+    }
+
+    @Override
+    protected void updateMeta(final byte [] oldRegion1,
+      final byte [] oldRegion2, HRegion newRegion)
+    throws IOException {
+      byte[][] regionsToDelete = {oldRegion1, oldRegion2};
+      for(int r = 0; r < regionsToDelete.length; r++) {
+        Delete delete = new Delete(regionsToDelete[r]);
+        delete.deleteColumns(HConstants.CATALOG_FAMILY,
+            HConstants.REGIONINFO_QUALIFIER);
+        delete.deleteColumns(HConstants.CATALOG_FAMILY,
+            HConstants.SERVER_QUALIFIER);
+        delete.deleteColumns(HConstants.CATALOG_FAMILY,
+            HConstants.STARTCODE_QUALIFIER);
+        delete.deleteColumns(HConstants.CATALOG_FAMILY,
+            HConstants.SPLITA_QUALIFIER);
+        delete.deleteColumns(HConstants.CATALOG_FAMILY,
+            HConstants.SPLITB_QUALIFIER);
+        root.delete(delete, null, true);
+
+        if(LOG.isDebugEnabled()) {
+          LOG.debug("updated columns in row: " + Bytes.toString(regionsToDelete[r]));
+        }
+      }
+      HRegionInfo newInfo = newRegion.getRegionInfo();
+      newInfo.setOffline(true);
+      Put put = new Put(newRegion.getRegionName());
+      put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+          Writables.getBytes(newInfo));
+      root.put(put);
+      if(LOG.isDebugEnabled()) {
+        LOG.debug("updated columns in row: " + Bytes.toString(newRegion.getRegionName()));
+      }
+    }
+  }
+}
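A sketch of invoking HMerge on a user table, following the requirements stated in the javadoc above (the table must be disabled while the instance is online). HMerge is package-private, so the illustrative MergeSketch class has to live in org.apache.hadoop.hbase.util; the table name is a placeholder.

package org.apache.hadoop.hbase.util;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class MergeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    byte[] tableName = Bytes.toBytes("mytable");

    // A user table must be disabled before HMerge will touch it.
    HBaseAdmin admin = new HBaseAdmin(conf);
    if (!admin.isTableDisabled(tableName)) {
      admin.disableTable(tableName);
    }
    // Scans the table and merges pairs of small adjacent regions.
    HMerge.merge(conf, fs, tableName);
  }
}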
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Hash.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Hash.java
new file mode 100644
index 0000000..0a533d9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Hash.java
@@ -0,0 +1,134 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * This class represents a common API for hashing functions.
+ */
+public abstract class Hash {
+  /** Constant to denote invalid hash type. */
+  public static final int INVALID_HASH = -1;
+  /** Constant to denote {@link JenkinsHash}. */
+  public static final int JENKINS_HASH = 0;
+  /** Constant to denote {@link MurmurHash}. */
+  public static final int MURMUR_HASH  = 1;
+
+  /**
+   * This utility method converts String representation of hash function name
+   * to a symbolic constant. Currently two function types are supported,
+   * "jenkins" and "murmur".
+   * @param name hash function name
+   * @return one of the predefined constants
+   */
+  public static int parseHashType(String name) {
+    if ("jenkins".equalsIgnoreCase(name)) {
+      return JENKINS_HASH;
+    } else if ("murmur".equalsIgnoreCase(name)) {
+      return MURMUR_HASH;
+    } else {
+      return INVALID_HASH;
+    }
+  }
+
+  /**
+   * This utility method converts the name of the configured
+   * hash type to a symbolic constant.
+   * @param conf configuration
+   * @return one of the predefined constants
+   */
+  public static int getHashType(Configuration conf) {
+    String name = conf.get("hbase.hash.type", "murmur");
+    return parseHashType(name);
+  }
+
+  /**
+   * Get a singleton instance of hash function of a given type.
+   * @param type predefined hash type
+   * @return hash function instance, or null if type is invalid
+   */
+  public static Hash getInstance(int type) {
+    switch(type) {
+    case JENKINS_HASH:
+      return JenkinsHash.getInstance();
+    case MURMUR_HASH:
+      return MurmurHash.getInstance();
+    default:
+      return null;
+    }
+  }
+
+  /**
+   * Get a singleton instance of hash function of a type
+   * defined in the configuration.
+   * @param conf current configuration
+   * @return defined hash type, or null if type is invalid
+   */
+  public static Hash getInstance(Configuration conf) {
+    int type = getHashType(conf);
+    return getInstance(type);
+  }
+
+  /**
+   * Calculate a hash using all bytes from the input argument, and
+   * a seed of -1.
+   * @param bytes input bytes
+   * @return hash value
+   */
+  public int hash(byte[] bytes) {
+    return hash(bytes, bytes.length, -1);
+  }
+
+  /**
+   * Calculate a hash using all bytes from the input argument,
+   * and a provided seed value.
+   * @param bytes input bytes
+   * @param initval seed value
+   * @return hash value
+   */
+  public int hash(byte[] bytes, int initval) {
+    return hash(bytes, 0, bytes.length, initval);
+  }
+
+  /**
+   * Calculate a hash using bytes from 0 to <code>length</code>, and
+   * the provided seed value
+   * @param bytes input bytes
+   * @param length length of the valid bytes after offset to consider
+   * @param initval seed value
+   * @return hash value
+   */
+  public int hash(byte[] bytes, int length, int initval) {
+    return hash(bytes, 0, length, initval);
+  }
+
+  /**
+   * Calculate a hash using bytes from 0 to <code>length</code>, and
+   * the provided seed value
+   * @param bytes input bytes
+   * @param offset the offset into the array to start consideration
+   * @param length length of the valid bytes after offset to consider
+   * @param initval seed value
+   * @return hash value
+   */
+  public abstract int hash(byte[] bytes, int offset, int length, int initval);
+}
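A small sketch exercising the Hash API declared above: selecting the implementation via hbase.hash.type, then hashing a key with the default seed and chaining with an explicit seed. The class name HashSketch and the sample key are invented for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Hash;

public class HashSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "hbase.hash.type" defaults to "murmur"; "jenkins" selects JenkinsHash
    conf.set("hbase.hash.type", "jenkins");

    Hash hash = Hash.getInstance(conf);   // singleton for the configured type
    byte[] key = Bytes.toBytes("row-0001");

    int h1 = hash.hash(key);              // all bytes, seed -1
    int h2 = hash.hash(key, h1);          // chained: previous hash as seed
    System.out.println(h1 + " " + h2);
  }
}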
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/IncrementingEnvironmentEdge.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/IncrementingEnvironmentEdge.java
new file mode 100644
index 0000000..e105b77
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/IncrementingEnvironmentEdge.java
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Uses an incrementing algorithm instead of the default.
+ */
+public class IncrementingEnvironmentEdge implements EnvironmentEdge {
+
+  private long timeIncrement = 1;
+
+  /**
+   * {@inheritDoc}
+   * <p/>
+   * This method increments a known value for the current time each time this
+   * method is called. The first value is 1.
+   */
+  @Override
+  public synchronized long currentTimeMillis() {
+    return timeIncrement++;
+  }
+}
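A sketch demonstrating the deterministic clock behaviour documented above: each call returns the next value in an incrementing sequence, starting at 1. How the edge gets injected into HBase (for example via an environment-edge manager in tests) is outside this file and not shown; the class name EdgeSketch is invented.

import org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge;

public class EdgeSketch {
  public static void main(String[] args) {
    IncrementingEnvironmentEdge edge = new IncrementingEnvironmentEdge();
    System.out.println(edge.currentTimeMillis());  // 1
    System.out.println(edge.currentTimeMillis());  // 2
    System.out.println(edge.currentTimeMillis());  // 3
  }
}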
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/InfoServer.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/InfoServer.java
new file mode 100644
index 0000000..6ed9fe6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/InfoServer.java
@@ -0,0 +1,123 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.http.HttpServer;
+import org.mortbay.jetty.handler.ContextHandlerCollection;
+import org.mortbay.jetty.servlet.Context;
+import org.mortbay.jetty.servlet.DefaultServlet;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.Map;
+
+/**
+ * Create a Jetty embedded server to answer http requests. The primary goal
+ * is to serve up status information for the server.
+ * There are three contexts:
+ *   "/stacks/" -> points to stack trace
+ *   "/static/" -> points to common static files (src/hbase-webapps/static)
+ *   "/" -> the jsp server code from (src/hbase-webapps/<name>)
+ */
+public class InfoServer extends HttpServer {
+  /**
+   * Create a status server on the given port.
+   * The jsp scripts are taken from src/hbase-webapps/<code>name</code>.
+   * @param name The name of the server
+   * @param bindAddress address to bind to
+   * @param port The port to use on the server
+   * @param findPort whether the server should start at the given port and
+   * increment by 1 until it finds a free port.
+   * @throws IOException e
+   */
+  public InfoServer(String name, String bindAddress, int port, boolean findPort)
+  throws IOException {
+    super(name, bindAddress, port, findPort, HBaseConfiguration.create());
+    webServer.addHandler(new ContextHandlerCollection());
+  }
+
+  protected void addDefaultApps(ContextHandlerCollection parent, String appDir)
+  throws IOException {
+    super.addDefaultApps(parent, appDir);
+    // Must be same as up in hadoop.
+    final String logsContextPath = "/logs";
+    // Now, put my logs in place of hadoops... disable old one first.
+    Context oldLogsContext = null;
+    for (Map.Entry<Context, Boolean> e : defaultContexts.entrySet()) {
+      if (e.getKey().getContextPath().equals(logsContextPath)) {
+        oldLogsContext = e.getKey();
+        break;
+      }
+    }
+    if (oldLogsContext != null) {
+      this.defaultContexts.put(oldLogsContext, Boolean.FALSE);
+    }
+    // Now do my logs.
+    // set up the context for "/logs/" if "hadoop.log.dir" property is defined.
+    String logDir = System.getProperty("hbase.log.dir");
+    if (logDir != null) {
+      Context logContext = new Context(parent, "/logs");
+      logContext.setResourceBase(logDir);
+      logContext.addServlet(DefaultServlet.class, "/");
+      defaultContexts.put(logContext, true);
+    }
+  }
+
+  /**
+   * Get the pathname to the <code>path</code> files.
+   * @return the pathname as a URL
+   */
+  @Override
+  protected String getWebAppsPath() throws IOException {
+    // Hack: webapps is not a unique enough element to find in CLASSPATH
+    // We'll more than likely find the hadoop webapps dir.  So, instead
+    // look for the 'master' webapp in the webapps subdir.  That should
+    // get us the hbase context.  Presumption is that place where the
+    // master webapp resides is where we want this InfoServer picking up
+    // web applications.
+    final String master = "master";
+    String p = getWebAppDir(master);
+    // Now strip master + the separator off the end of our context
+    return p.substring(0, p.length() - (master.length() + 1/* The separator*/));
+  }
+
+  private static String getWebAppsPath(final String path)
+  throws IOException {
+    URL url = InfoServer.class.getClassLoader().getResource(path);
+    if (url == null)
+      throw new IOException("hbase-webapps not found in CLASSPATH: " + path);
+    return url.toString();
+  }
+
+  /**
+   * Get the path for this web app
+   * @param webappName web app
+   * @return path
+   * @throws IOException e
+   */
+  public static String getWebAppDir(final String webappName)
+  throws IOException {
+    String webappDir;
+    webappDir = getWebAppsPath("hbase-webapps/" + webappName);
+    return webappDir;
+  }
+}
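A hedged sketch of standing up an InfoServer with the constructor above. The bind address and port are placeholders, start() and stop() are assumed to be inherited from the Hadoop HttpServer base class, and the InfoServerSketch class is invented for illustration.

import org.apache.hadoop.hbase.util.InfoServer;

public class InfoServerSketch {
  public static void main(String[] args) throws Exception {
    // Serves /stacks/, /static/ and the "master" webapp under hbase-webapps/
    InfoServer info = new InfoServer("master", "0.0.0.0", 60010, false);
    info.start();
    // ... serve status pages until shutdown ...
    info.stop();
  }
}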
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/JVMClusterUtil.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/JVMClusterUtil.java
new file mode 100644
index 0000000..baf0c27
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/JVMClusterUtil.java
@@ -0,0 +1,249 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+
+/**
+ * Utility for running a cluster all in one JVM.
+ */
+public class JVMClusterUtil {
+  private static final Log LOG = LogFactory.getLog(JVMClusterUtil.class);
+
+  /**
+   * Datastructure to hold RegionServer Thread and RegionServer instance
+   */
+  public static class RegionServerThread extends Thread {
+    private final HRegionServer regionServer;
+
+    public RegionServerThread(final HRegionServer r, final int index) {
+      super(r, "RegionServer:" + index + ";" + r.getServerName());
+      this.regionServer = r;
+    }
+
+    /** @return the region server */
+    public HRegionServer getRegionServer() {
+      return this.regionServer;
+    }
+
+    /**
+     * Block until the region server has come online, indicating it is ready
+     * to be used.
+     */
+    public void waitForServerOnline() {
+      // The server is marked online after the init method completes inside of
+      // the HRS#run method.  HRS#init can fail for whatever reason.  In those
+      // cases, we'll jump out of run without setting the online flag.  Check
+      // stopRequested so we don't wait here on a flag that will never be flipped.
+      while (!this.regionServer.isOnline() &&
+          !this.regionServer.isStopped()) {
+        try {
+          Thread.sleep(1000);
+        } catch (InterruptedException e) {
+          // continue waiting
+        }
+      }
+    }
+  }
+
+  /**
+   * Creates a {@link RegionServerThread}.
+   * Call 'start' on the returned thread to make it run.
+   * @param c Configuration to use.
+   * @param hrsc Class to create.
+   * @param index Used to distinguish the returned object.
+   * @throws IOException
+   * @return Region server added.
+   */
+  public static JVMClusterUtil.RegionServerThread createRegionServerThread(
+      final Configuration c, final Class<? extends HRegionServer> hrsc,
+      final int index)
+  throws IOException {
+    HRegionServer server;
+    try {
+      server = hrsc.getConstructor(Configuration.class).newInstance(c);
+    } catch (InvocationTargetException ite) {
+      Throwable target = ite.getTargetException();
+      throw new RuntimeException("Failed construction of RegionServer: " +
+        hrsc.toString() + ((target.getCause() != null)?
+          target.getCause().getMessage(): ""), target);
+    } catch (Exception e) {
+      IOException ioe = new IOException();
+      ioe.initCause(e);
+      throw ioe;
+    }
+    return new JVMClusterUtil.RegionServerThread(server, index);
+  }
+
+
+  /**
+   * Datastructure to hold Master Thread and Master instance
+   */
+  public static class MasterThread extends Thread {
+    private final HMaster master;
+
+    public MasterThread(final HMaster m, final int index) {
+      super(m, "Master:" + index + ";" + m.getServerName());
+      this.master = m;
+    }
+
+    /** @return the master */
+    public HMaster getMaster() {
+      return this.master;
+    }
+
+    /**
+     * Block until the master has come online, indicating it is ready
+     * to be used.
+     */
+    public void waitForServerOnline() {
+      // The server is marked online after init begins but before the race to
+      // become the active master.
+      while (!this.master.isMasterRunning() && !this.master.isStopped()) {
+        try {
+          Thread.sleep(1000);
+        } catch (InterruptedException e) {
+          // continue waiting
+        }
+      }
+    }
+  }
+
+  /**
+   * Creates a {@link MasterThread}.
+   * Call 'start' on the returned thread to make it run.
+   * @param c Configuration to use.
+   * @param hmc Class to create.
+   * @param index Used to distinguish the returned object.
+   * @throws IOException
+   * @return Master added.
+   */
+  public static JVMClusterUtil.MasterThread createMasterThread(
+      final Configuration c, final Class<? extends HMaster> hmc,
+      final int index)
+  throws IOException {
+    HMaster server;
+    try {
+      server = hmc.getConstructor(Configuration.class).newInstance(c);
+    } catch (InvocationTargetException ite) {
+      Throwable target = ite.getTargetException();
+      throw new RuntimeException("Failed construction of RegionServer: " +
+        hmc.toString() + ((target.getCause() != null)?
+          target.getCause().getMessage(): ""), target);
+    } catch (Exception e) {
+      IOException ioe = new IOException();
+      ioe.initCause(e);
+      throw ioe;
+    }
+    return new JVMClusterUtil.MasterThread(server, index);
+  }
+
+  /**
+   * Start the cluster.  Waits until there is a primary master and returns its
+   * address.
+   * @param masters
+   * @param regionservers
+   * @return Address to use contacting primary master.
+   */
+  public static String startup(final List<JVMClusterUtil.MasterThread> masters,
+      final List<JVMClusterUtil.RegionServerThread> regionservers) {
+    if (masters != null) {
+      for (JVMClusterUtil.MasterThread t : masters) {
+        t.start();
+      }
+    }
+    if (regionservers != null) {
+      for (JVMClusterUtil.RegionServerThread t: regionservers) {
+        t.start();
+      }
+    }
+    if (masters == null || masters.isEmpty()) {
+      return null;
+    }
+    // Wait for an active master
+    while (true) {
+      for (JVMClusterUtil.MasterThread t : masters) {
+        if (t.master.isActiveMaster()) {
+          return t.master.getMasterAddress().toString();
+        }
+      }
+      try {
+        Thread.sleep(1000);
+      } catch(InterruptedException e) {
+        // Keep waiting
+      }
+    }
+  }
+
+  /**
+   * @param masters
+   * @param regionservers
+   */
+  public static void shutdown(final List<MasterThread> masters,
+      final List<RegionServerThread> regionservers) {
+    LOG.debug("Shutting down HBase Cluster");
+    if (masters != null) {
+      for (JVMClusterUtil.MasterThread t : masters) {
+        if (t.master.isActiveMaster()) {
+          t.master.shutdown();
+        } else {
+          t.master.stopMaster();
+        }
+      }
+    }
+    // The regionservers list is never null because the threads are created
+    // when the cluster is constructed.
+    for (Thread t : regionservers) {
+      if (t.isAlive()) {
+        try {
+          t.join();
+        } catch (InterruptedException e) {
+          // continue
+        }
+      }
+    }
+    if (masters != null) {
+      for (JVMClusterUtil.MasterThread t : masters) {
+        while (t.master.isAlive()) {
+          try {
+            // A plain t.master.join() was replaced with the call below to
+            // debug occasional hangs at the end of tests.
+            Threads.threadDumpingIsAlive(t.master);
+          } catch(InterruptedException e) {
+            // continue
+          }
+        }
+      }
+    }
+    LOG.info("Shutdown of " +
+      ((masters != null) ? masters.size() : "0") + " master(s) and " +
+      ((regionservers != null) ? regionservers.size() : "0") +
+      " regionserver(s) complete");
+  }
+}
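A sketch of using the helpers above to spin up one master and two region servers in a single JVM, roughly what a mini cluster does. The class name LocalClusterSketch and the server count are invented; note that startup() itself starts the threads and blocks until one master becomes active.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.master.HMaster;
import org.apache.hadoop.hbase.regionserver.HRegionServer;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class LocalClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    List<JVMClusterUtil.MasterThread> masters =
        new ArrayList<JVMClusterUtil.MasterThread>();
    masters.add(JVMClusterUtil.createMasterThread(conf, HMaster.class, 0));

    List<JVMClusterUtil.RegionServerThread> regionservers =
        new ArrayList<JVMClusterUtil.RegionServerThread>();
    for (int i = 0; i < 2; i++) {
      regionservers.add(
          JVMClusterUtil.createRegionServerThread(conf, HRegionServer.class, i));
    }

    // Starts all threads and blocks until an active master is elected.
    String masterAddress = JVMClusterUtil.startup(masters, regionservers);
    System.out.println("Active master at " + masterAddress);

    JVMClusterUtil.shutdown(masters, regionservers);
  }
}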
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
new file mode 100644
index 0000000..1e67371
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java
@@ -0,0 +1,263 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.FileInputStream;
+import java.io.IOException;
+
+/**
+ * Produces 32-bit hash for hash table lookup.
+ *
+ * <pre>lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ *
+ * You can use this free for any purpose.  It's in the public domain.
+ * It has no warranty.
+ * </pre>
+ *
+ * @see <a href="http://burtleburtle.net/bob/c/lookup3.c">lookup3.c</a>
+ * @see <a href="http://www.ddj.com/184410284">Hash Functions (and how this
+ * function compares to others such as CRC, MD?, etc)</a>
+ * @see <a href="http://burtleburtle.net/bob/hash/doobs.html">An update on the
+ * Dr. Dobbs article</a>
+ */
+public class JenkinsHash extends Hash {
+  private static long INT_MASK  = 0x00000000ffffffffL;
+  private static long BYTE_MASK = 0x00000000000000ffL;
+
+  private static JenkinsHash _instance = new JenkinsHash();
+
+  public static Hash getInstance() {
+    return _instance;
+  }
+
+  private static long rot(long val, int pos) {
+    return ((Integer.rotateLeft(
+        (int)(val & INT_MASK), pos)) & INT_MASK);
+  }
+
+  /**
+   * taken from  hashlittle() -- hash a variable-length key into a 32-bit value
+   *
+   * @param key the key (the unaligned variable-length array of bytes)
+   * @param nbytes number of bytes to include in hash
+   * @param initval can be any integer value
+   * @return a 32-bit value.  Every bit of the key affects every bit of the
+   * return value.  Two keys differing by one or two bits will have totally
+   * different hash values.
+   *
+   * <p>The best hash table sizes are powers of 2.  There is no need to do mod
+   * a prime (mod is sooo slow!).  If you need less than 32 bits, use a bitmask.
+   * For example, if you need only 10 bits, do
+   * <code>h = (h & hashmask(10));</code>
+   * In which case, the hash table should have hashsize(10) elements.
+   *
+   * <p>If you are hashing n strings byte[][] k, do it like this:
+   * for (int i = 0, h = 0; i < n; ++i) h = hash( k[i], h);
+   *
+   * <p>By Bob Jenkins, 2006.  bob_jenkins@burtleburtle.net.  You may use this
+   * code any way you wish, private, educational, or commercial.  It's free.
+   *
+   * <p>Use for hash table lookup, or anything where one collision in 2^^32 is
+   * acceptable.  Do NOT use for cryptographic purposes.
+  */
+  @Override
+  @SuppressWarnings("fallthrough")
+  public int hash(byte[] key, int off, int nbytes, int initval) {
+    int length = nbytes;
+    long a, b, c;       // We use longs because we don't have unsigned ints
+    a = b = c = (0x00000000deadbeefL + length + initval) & INT_MASK;
+    int offset = off;
+    for (; length > 12; offset += 12, length -= 12) {
+      //noinspection PointlessArithmeticExpression
+      a = (a + (key[offset + 0]    & BYTE_MASK)) & INT_MASK;
+      a = (a + (((key[offset + 1]  & BYTE_MASK) <<  8) & INT_MASK)) & INT_MASK;
+      a = (a + (((key[offset + 2]  & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+      a = (a + (((key[offset + 3]  & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+      b = (b + (key[offset + 4]    & BYTE_MASK)) & INT_MASK;
+      b = (b + (((key[offset + 5]  & BYTE_MASK) <<  8) & INT_MASK)) & INT_MASK;
+      b = (b + (((key[offset + 6]  & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+      b = (b + (((key[offset + 7]  & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+      c = (c + (key[offset + 8]    & BYTE_MASK)) & INT_MASK;
+      c = (c + (((key[offset + 9]  & BYTE_MASK) <<  8) & INT_MASK)) & INT_MASK;
+      c = (c + (((key[offset + 10] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+      c = (c + (((key[offset + 11] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+
+      /*
+       * mix -- mix 3 32-bit values reversibly.
+       * This is reversible, so any information in (a,b,c) before mix() is
+       * still in (a,b,c) after mix().
+       *
+       * If four pairs of (a,b,c) inputs are run through mix(), or through
+       * mix() in reverse, there are at least 32 bits of the output that
+       * are sometimes the same for one pair and different for another pair.
+       *
+       * This was tested for:
+       * - pairs that differed by one bit, by two bits, in any combination
+       *   of top bits of (a,b,c), or in any combination of bottom bits of
+       *   (a,b,c).
+       * - "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+       *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+       *    is commonly produced by subtraction) look like a single 1-bit
+       *    difference.
+       * - the base values were pseudorandom, all zero but one bit set, or
+       *   all zero plus a counter that starts at zero.
+       *
+       * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+       * satisfy this are
+       *     4  6  8 16 19  4
+       *     9 15  3 18 27 15
+       *    14  9  3  7 17  3
+       * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing for
+       * "differ" defined as + with a one-bit base and a two-bit delta.  I
+       * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+       * the operations, constants, and arrangements of the variables.
+       *
+       * This does not achieve avalanche.  There are input bits of (a,b,c)
+       * that fail to affect some output bits of (a,b,c), especially of a.
+       * The most thoroughly mixed value is c, but it doesn't really even
+       * achieve avalanche in c.
+       *
+       * This allows some parallelism.  Read-after-writes are good at doubling
+       * the number of bits affected, so the goal of mixing pulls in the
+       * opposite direction as the goal of parallelism.  I did what I could.
+       * Rotates seem to cost as much as shifts on every machine I could lay
+       * my hands on, and rotates are much kinder to the top and bottom bits,
+       * so I used rotates.
+       *
+       * #define mix(a,b,c) \
+       * { \
+       *   a -= c;  a ^= rot(c, 4);  c += b; \
+       *   b -= a;  b ^= rot(a, 6);  a += c; \
+       *   c -= b;  c ^= rot(b, 8);  b += a; \
+       *   a -= c;  a ^= rot(c,16);  c += b; \
+       *   b -= a;  b ^= rot(a,19);  a += c; \
+       *   c -= b;  c ^= rot(b, 4);  b += a; \
+       * }
+       *
+       * mix(a,b,c);
+       */
+      a = (a - c) & INT_MASK;  a ^= rot(c, 4);  c = (c + b) & INT_MASK;
+      b = (b - a) & INT_MASK;  b ^= rot(a, 6);  a = (a + c) & INT_MASK;
+      c = (c - b) & INT_MASK;  c ^= rot(b, 8);  b = (b + a) & INT_MASK;
+      a = (a - c) & INT_MASK;  a ^= rot(c,16);  c = (c + b) & INT_MASK;
+      b = (b - a) & INT_MASK;  b ^= rot(a,19);  a = (a + c) & INT_MASK;
+      c = (c - b) & INT_MASK;  c ^= rot(b, 4);  b = (b + a) & INT_MASK;
+    }
+
+    //-------------------------------- last block: affect all 32 bits of (c)
+    switch (length) {                   // all the case statements fall through
+    case 12:
+      c = (c + (((key[offset + 11] & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+    case 11:
+      c = (c + (((key[offset + 10] & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+    case 10:
+      c = (c + (((key[offset + 9]  & BYTE_MASK) <<  8) & INT_MASK)) & INT_MASK;
+    case  9:
+      c = (c + (key[offset + 8]    & BYTE_MASK)) & INT_MASK;
+    case  8:
+      b = (b + (((key[offset + 7]  & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+    case  7:
+      b = (b + (((key[offset + 6]  & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+    case  6:
+      b = (b + (((key[offset + 5]  & BYTE_MASK) <<  8) & INT_MASK)) & INT_MASK;
+    case  5:
+      b = (b + (key[offset + 4]    & BYTE_MASK)) & INT_MASK;
+    case  4:
+      a = (a + (((key[offset + 3]  & BYTE_MASK) << 24) & INT_MASK)) & INT_MASK;
+    case  3:
+      a = (a + (((key[offset + 2]  & BYTE_MASK) << 16) & INT_MASK)) & INT_MASK;
+    case  2:
+      a = (a + (((key[offset + 1]  & BYTE_MASK) <<  8) & INT_MASK)) & INT_MASK;
+    case  1:
+      //noinspection PointlessArithmeticExpression
+      a = (a + (key[offset + 0]    & BYTE_MASK)) & INT_MASK;
+      break;
+    case  0:
+      return (int)(c & INT_MASK);
+    }
+    /*
+     * final -- final mixing of 3 32-bit values (a,b,c) into c
+     *
+     * Pairs of (a,b,c) values differing in only a few bits will usually
+     * produce values of c that look totally different.  This was tested for
+     * - pairs that differed by one bit, by two bits, in any combination
+     *   of top bits of (a,b,c), or in any combination of bottom bits of
+     *   (a,b,c).
+     *
+     * - "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+     *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+     *   is commonly produced by subtraction) look like a single 1-bit
+     *   difference.
+     *
+     * - the base values were pseudorandom, all zero but one bit set, or
+     *   all zero plus a counter that starts at zero.
+     *
+     * These constants passed:
+     *   14 11 25 16 4 14 24
+     *   12 14 25 16 4 14 24
+     * and these came close:
+     *    4  8 15 26 3 22 24
+     *   10  8 15 26 3 22 24
+     *   11  8 15 26 3 22 24
+     *
+     * #define final(a,b,c) \
+     * { \
+     *   c ^= b; c -= rot(b,14); \
+     *   a ^= c; a -= rot(c,11); \
+     *   b ^= a; b -= rot(a,25); \
+     *   c ^= b; c -= rot(b,16); \
+     *   a ^= c; a -= rot(c,4);  \
+     *   b ^= a; b -= rot(a,14); \
+     *   c ^= b; c -= rot(b,24); \
+     * }
+     *
+     */
+    c ^= b; c = (c - rot(b,14)) & INT_MASK;
+    a ^= c; a = (a - rot(c,11)) & INT_MASK;
+    b ^= a; b = (b - rot(a,25)) & INT_MASK;
+    c ^= b; c = (c - rot(b,16)) & INT_MASK;
+    a ^= c; a = (a - rot(c,4))  & INT_MASK;
+    b ^= a; b = (b - rot(a,14)) & INT_MASK;
+    c ^= b; c = (c - rot(b,24)) & INT_MASK;
+
+    return (int)(c & INT_MASK);
+  }
+
+  /**
+   * Compute the hash of the specified file
+   * @param args name of file to compute hash of.
+   * @throws IOException e
+   */
+  public static void main(String[] args) throws IOException {
+    if (args.length != 1) {
+      System.err.println("Usage: JenkinsHash filename");
+      System.exit(-1);
+    }
+    FileInputStream in = new FileInputStream(args[0]);
+    byte[] bytes = new byte[512];
+    int value = 0;
+    JenkinsHash hash = new JenkinsHash();
+    for (int length = in.read(bytes); length > 0 ; length = in.read(bytes)) {
+      value = hash.hash(bytes, length, value);
+    }
+    System.out.println(Math.abs(value));
+  }
+}
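The class above is exercised the same way its own main() method does it: construct an instance and hash a byte range with an initial value. Below is a minimal sketch under the assumption that the no-arg constructor used by main() is accessible from outside the class; the key bytes and the initial value of 0 are arbitrary.

import org.apache.hadoop.hbase.util.JenkinsHash;

public class JenkinsHashExample {
  public static void main(String[] args) {
    byte[] key = "row-0001".getBytes();   // made-up key
    JenkinsHash hash = new JenkinsHash(); // assumes the constructor is accessible
    // Hash the whole array, seeding with 0 the same way main() above seeds
    // its first call; the result is a 32-bit value suitable for bucketing.
    int h = hash.hash(key, key.length, 0);
    System.out.println(h);
  }
}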
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/JvmVersion.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/JvmVersion.java
new file mode 100644
index 0000000..b7eb7e5
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/JvmVersion.java
@@ -0,0 +1,43 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Certain JVM versions are known to be unstable with HBase. This
+ * class has a utility function to determine whether the current JVM
+ * is known to be unstable.
+ */
+public abstract class JvmVersion {
+  private static Set<String> BAD_JVM_VERSIONS = new HashSet<String>();
+  static {
+    BAD_JVM_VERSIONS.add("1.6.0_18");
+  }
+
+  /**
+   * Return true if the current JVM is known to be unstable.
+   */
+  public static boolean isBadJvmVersion() {
+    String version = System.getProperty("java.version");
+    return version != null && BAD_JVM_VERSIONS.contains(version);
+  }
+}
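A minimal sketch of how a startup path might consult the check above; the warning text is illustrative only.

import org.apache.hadoop.hbase.util.JvmVersion;

public class JvmVersionCheck {
  public static void main(String[] args) {
    // Warn (rather than abort) if this JVM is on the known-bad list.
    if (JvmVersion.isBadJvmVersion()) {
      System.err.println("JVM " + System.getProperty("java.version") +
          " is known to be unstable with HBase; consider upgrading.");
    }
  }
}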
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Keying.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Keying.java
new file mode 100644
index 0000000..d3b83f4
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Keying.java
@@ -0,0 +1,115 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.util.StringTokenizer;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+/**
+ * Utility for creating hbase-friendly keys.
+ * Use when fabricating row names or column qualifiers.
+ * <p>TODO: Add createSchemeless key, a key that doesn't care if scheme is
+ * http or https.
+ * @see Bytes#split(byte[], byte[], int)
+ */
+public class Keying {
+  private static final String SCHEME = "r:";
+  private static final Pattern URI_RE_PARSER =
+    Pattern.compile("^([^:/?#]+://(?:[^/?#@]+@)?)([^:/?#]+)(.*)$");
+
+  /**
+   * Makes a key out of passed URI for use as row name or column qualifier.
+   *
+   * This method runs transforms on the passed URI so it sits better
+   * as a key (or portion-of-a-key) in hbase.  The <code>host</code> portion of
+   * the URI authority is reversed so subdomains sort under their parent
+   * domain.  The returned String is an opaque URI of an artificial
+   * <code>r:</code> scheme to prevent the result being considered a URI of
+   * the original scheme.  Here is an example of the transform: the URL
+   * <code>http://lucene.apache.org/index.html?query=something#middle</code> is
+   * returned as
+   * <code>r:http://org.apache.lucene/index.html?query=something#middle</code>.
+   * The transforms are reversible.  No transform is done if the passed URI is
+   * not hierarchical.
+   *
+   * <p>If authority <code>userinfo</code> is present, will mess up the sort
+   * (until we do more work).</p>
+   *
+   * @param u URL to transform.
+   * @return An opaque URI of artificial 'r' scheme with host portion of URI
+   * authority reversed (if present).
+   * @see #keyToUri(String)
+   * @see <a href="http://www.ietf.org/rfc/rfc2396.txt">RFC2396</a>
+   */
+  public static String createKey(final String u) {
+    if (u.startsWith(SCHEME)) {
+      throw new IllegalArgumentException("Starts with " + SCHEME);
+    }
+    Matcher m = getMatcher(u);
+    if (m == null || !m.matches()) {
+      // If no match, return original String.
+      return u;
+    }
+    return SCHEME + m.group(1) + reverseHostname(m.group(2)) + m.group(3);
+  }
+
+  /**
+   * Reverse the {@link #createKey(String)} transform.
+   *
+   * @param s <code>URI</code> made by {@link #createKey(String)}.
+   * @return 'Restored' URI made by reversing the {@link #createKey(String)}
+   * transform.
+   */
+  public static String keyToUri(final String s) {
+    if (!s.startsWith(SCHEME)) {
+      return s;
+    }
+    Matcher m = getMatcher(s.substring(SCHEME.length()));
+    if (m == null || !m.matches()) {
+      // If no match, return original String.
+      return s;
+    }
+    return m.group(1) + reverseHostname(m.group(2)) + m.group(3);
+  }
+
+  private static Matcher getMatcher(final String u) {
+    if (u == null || u.length() <= 0) {
+      return null;
+    }
+    return URI_RE_PARSER.matcher(u);
+  }
+
+  private static String reverseHostname(final String hostname) {
+    if (hostname == null) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder(hostname.length());
+    for (StringTokenizer st = new StringTokenizer(hostname, ".", false);
+        st.hasMoreElements();) {
+      Object next = st.nextElement();
+      if (sb.length() > 0) {
+        sb.insert(0, ".");
+      }
+      sb.insert(0, next);
+    }
+    return sb.toString();
+  }
+}
\ No newline at end of file
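The class javadoc above walks through the transform with a concrete URL; this sketch simply round-trips that same example through createKey and keyToUri.

import org.apache.hadoop.hbase.util.Keying;

public class KeyingExample {
  public static void main(String[] args) {
    String url = "http://lucene.apache.org/index.html?query=something#middle";
    // Host portion is reversed and the artificial "r:" scheme is prepended:
    // r:http://org.apache.lucene/index.html?query=something#middle
    String key = Keying.createKey(url);
    // The transform is reversible.
    String restored = Keying.keyToUri(key);
    System.out.println(key);
    System.out.println(url.equals(restored));   // expected: true
  }
}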
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/MD5Hash.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/MD5Hash.java
new file mode 100644
index 0000000..b2998c9
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/MD5Hash.java
@@ -0,0 +1,67 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+
+import org.apache.commons.codec.binary.Hex;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Utility class for MD5
+ * MD5 hash produces a 128-bit digest.
+ */
+public class MD5Hash {
+  private static final Log LOG = LogFactory.getLog(MD5Hash.class);
+
+  /**
+   * Given a byte array, returns its MD5 hash as a hex string.
+   * @param key the key to hash (variable length byte array)
+   * @return MD5 hash as a 32 character hex string.
+   */
+  public static String getMD5AsHex(byte[] key) {
+    return getMD5AsHex(key, 0, key.length);
+  }
+  
+  /**
+   * Given a byte array, returns its MD5 hash as a hex string.
+   * Only "length" number of bytes starting at "offset" within the
+   * byte array are used.
+   *
+   * @param key the key to hash (variable length byte array)
+   * @param offset the offset into the byte array at which hashing starts
+   * @param length the number of bytes, starting at <code>offset</code>, to hash
+   * @return MD5 hash as a 32 character hex string.
+   */
+  public static String getMD5AsHex(byte[] key, int offset, int length) {
+    try {
+      MessageDigest md = MessageDigest.getInstance("MD5");
+      md.update(key, offset, length);
+      byte[] digest = md.digest();
+      return new String(Hex.encodeHex(digest));
+    } catch (NoSuchAlgorithmException e) {
+      // this should never happen unless the JDK is messed up.
+      throw new RuntimeException("Error computing MD5 hash", e);
+    }
+  }
+}
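A small sketch of the two overloads above; the row key is made up and Bytes.toBytes is used only for convenience.

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;

public class MD5HashExample {
  public static void main(String[] args) {
    byte[] key = Bytes.toBytes("some-row-key");   // made-up key
    // 32-character hex digest of the whole array...
    String whole = MD5Hash.getMD5AsHex(key);
    // ...or of just a sub-range of it.
    String prefix = MD5Hash.getMD5AsHex(key, 0, 4);
    System.out.println(whole + " " + prefix);
  }
}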
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/ManualEnvironmentEdge.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/ManualEnvironmentEdge.java
new file mode 100644
index 0000000..d698df1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/ManualEnvironmentEdge.java
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * An environment edge that uses a manually set value. This is useful for
+ * testing events that are supposed to happen in the same millisecond.
+ */
+public class ManualEnvironmentEdge implements EnvironmentEdge {
+
+  // Sometimes 0 ts might have a special value, so let's start with 1
+  protected long value = 1L;
+
+  public void setValue(long newValue) {
+    value = newValue;
+  }
+
+  @Override
+  public long currentTimeMillis() {
+    return this.value;
+  }
+}
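A minimal sketch of the manual edge in isolation; in practice a test would typically inject it through the environment-edge manager, which is not part of this file.

import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;

public class ManualEdgeExample {
  public static void main(String[] args) {
    ManualEnvironmentEdge edge = new ManualEnvironmentEdge();
    edge.setValue(42L);
    // Every caller of this edge now sees the same frozen timestamp, which
    // lets tests exercise same-millisecond scenarios deterministically.
    System.out.println(edge.currentTimeMillis());   // prints 42
    edge.setValue(43L);
    System.out.println(edge.currentTimeMillis());   // prints 43
  }
}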
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Merge.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Merge.java
new file mode 100644
index 0000000..4551982
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Merge.java
@@ -0,0 +1,386 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MasterNotRunningException;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.io.WritableComparator;
+import org.apache.hadoop.util.GenericOptionsParser;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Utility that can merge any two regions in the same table: adjacent,
+ * overlapping or disjoint.
+ */
+public class Merge extends Configured implements Tool {
+  static final Log LOG = LogFactory.getLog(Merge.class);
+  private Path rootdir;
+  private volatile MetaUtils utils;
+  private byte [] tableName;               // Name of table
+  private volatile byte [] region1;        // Name of region 1
+  private volatile byte [] region2;        // Name of region 2
+  private volatile boolean isMetaTable;
+  private volatile HRegionInfo mergeInfo;
+
+  /** default constructor */
+  public Merge() {
+    super();
+  }
+
+  /**
+   * @param conf configuration
+   */
+  public Merge(Configuration conf) {
+    this.mergeInfo = null;
+    setConf(conf);
+  }
+
+  public int run(String[] args) throws Exception {
+    if (parseArgs(args) != 0) {
+      return -1;
+    }
+
+    // Verify file system is up.
+    FileSystem fs = FileSystem.get(getConf());              // get DFS handle
+    LOG.info("Verifying that file system is available...");
+    try {
+      FSUtils.checkFileSystemAvailable(fs);
+    } catch (IOException e) {
+      LOG.fatal("File system is not available", e);
+      return -1;
+    }
+
+    // Verify HBase is down
+    LOG.info("Verifying that HBase is not running...");
+    try {
+      HBaseAdmin.checkHBaseAvailable(getConf());
+      LOG.fatal("HBase cluster must be off-line.");
+      return -1;
+    } catch (ZooKeeperConnectionException zkce) {
+      // If no zk, presume no master.
+    } catch (MasterNotRunningException e) {
+      // Expected. Ignore.
+    }
+
+    // Initialize MetaUtils and get the root of the HBase installation
+
+    this.utils = new MetaUtils(getConf());
+    this.rootdir = FSUtils.getRootDir(getConf());
+    try {
+      if (isMetaTable) {
+        mergeTwoMetaRegions();
+      } else {
+        mergeTwoRegions();
+      }
+      return 0;
+    } catch (Exception e) {
+      LOG.fatal("Merge failed", e);
+      utils.scanMetaRegion(HRegionInfo.FIRST_META_REGIONINFO,
+          new MetaUtils.ScannerListener() {
+            public boolean processRow(HRegionInfo info) {
+              System.err.println(info.toString());
+              return true;
+            }
+          }
+      );
+
+      return -1;
+
+    } finally {
+      if (this.utils != null) {
+        this.utils.shutdown();
+      }
+    }
+  }
+
+  /** @return HRegionInfo for merge result */
+  HRegionInfo getMergedHRegionInfo() {
+    return this.mergeInfo;
+  }
+
+  /*
+   * Merge two meta regions. This is unlikely to be needed soon as we have only
+   * seen the meta table split once and that was with 64MB regions. With 256MB
+   * regions, it will be some time before someone has enough data in HBase to
+   * split the meta region and even less likely that a merge of two meta
+   * regions will be needed, but it is included for completeness.
+   */
+  private void mergeTwoMetaRegions() throws IOException {
+    HRegion rootRegion = utils.getRootRegion();
+    Get get = new Get(region1);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    List<KeyValue> cells1 =  rootRegion.get(get, null).list();
+    HRegionInfo info1 = Writables.getHRegionInfo((cells1 == null)? null: cells1.get(0).getValue());
+
+    get = new Get(region2);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    List<KeyValue> cells2 =  rootRegion.get(get, null).list();
+    HRegionInfo info2 = Writables.getHRegionInfo((cells2 == null)? null: cells2.get(0).getValue());
+    HRegion merged = merge(info1, rootRegion, info2, rootRegion);
+    LOG.info("Adding " + merged.getRegionInfo() + " to " +
+        rootRegion.getRegionInfo());
+    HRegion.addRegionToMETA(rootRegion, merged);
+    merged.close();
+  }
+
+  private static class MetaScannerListener
+  implements MetaUtils.ScannerListener {
+    private final byte [] region1;
+    private final byte [] region2;
+    private HRegionInfo meta1 = null;
+    private HRegionInfo meta2 = null;
+
+    MetaScannerListener(final byte [] region1, final byte [] region2) {
+      this.region1 = region1;
+      this.region2 = region2;
+    }
+
+    public boolean processRow(HRegionInfo info) {
+      if (meta1 == null && HRegion.rowIsInRange(info, region1)) {
+        meta1 = info;
+      }
+      if (region2 != null && meta2 == null &&
+          HRegion.rowIsInRange(info, region2)) {
+        meta2 = info;
+      }
+      return meta1 == null || (region2 != null && meta2 == null);
+    }
+
+    HRegionInfo getMeta1() {
+      return meta1;
+    }
+
+    HRegionInfo getMeta2() {
+      return meta2;
+    }
+  }
+
+  /*
+   * Merges two regions from a user table.
+   */
+  private void mergeTwoRegions() throws IOException {
+    LOG.info("Merging regions " + Bytes.toString(this.region1) + " and " +
+        Bytes.toString(this.region2) + " in table " + Bytes.toString(this.tableName));
+    // Scan the root region for all the meta regions that contain the regions
+    // we're merging.
+    MetaScannerListener listener = new MetaScannerListener(region1, region2);
+    this.utils.scanRootRegion(listener);
+    HRegionInfo meta1 = listener.getMeta1();
+    if (meta1 == null) {
+      throw new IOException("Could not find meta region for " + Bytes.toString(region1));
+    }
+    HRegionInfo meta2 = listener.getMeta2();
+    if (meta2 == null) {
+      throw new IOException("Could not find meta region for " + Bytes.toString(region2));
+    }
+    LOG.info("Found meta for region1 " + Bytes.toString(meta1.getRegionName()) +
+      ", meta for region2 " + Bytes.toString(meta2.getRegionName()));
+    HRegion metaRegion1 = this.utils.getMetaRegion(meta1);
+    Get get = new Get(region1);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    List<KeyValue> cells1 =  metaRegion1.get(get, null).list();
+    HRegionInfo info1 = Writables.getHRegionInfo((cells1 == null)? null: cells1.get(0).getValue());
+    if (info1 == null) {
+      throw new NullPointerException("info1 is null using key " +
+          Bytes.toString(region1) + " in " + meta1);
+    }
+
+    HRegion metaRegion2;
+    if (Bytes.equals(meta1.getRegionName(), meta2.getRegionName())) {
+      metaRegion2 = metaRegion1;
+    } else {
+      metaRegion2 = utils.getMetaRegion(meta2);
+    }
+    get = new Get(region2);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    List<KeyValue> cells2 =  metaRegion2.get(get, null).list();
+    HRegionInfo info2 = Writables.getHRegionInfo((cells2 == null)? null: cells2.get(0).getValue());
+    if (info2 == null) {
+      throw new NullPointerException("info2 is null using key " + meta2);
+    }
+    HRegion merged = merge(info1, metaRegion1, info2, metaRegion2);
+
+    // Now find the meta region which will contain the newly merged region
+
+    listener = new MetaScannerListener(merged.getRegionName(), null);
+    utils.scanRootRegion(listener);
+    HRegionInfo mergedInfo = listener.getMeta1();
+    if (mergedInfo == null) {
+      throw new IOException("Could not find meta region for " +
+          Bytes.toString(merged.getRegionName()));
+    }
+    HRegion mergeMeta;
+    if (Bytes.equals(mergedInfo.getRegionName(), meta1.getRegionName())) {
+      mergeMeta = metaRegion1;
+    } else if (Bytes.equals(mergedInfo.getRegionName(), meta2.getRegionName())) {
+      mergeMeta = metaRegion2;
+    } else {
+      mergeMeta = utils.getMetaRegion(mergedInfo);
+    }
+    LOG.info("Adding " + merged.getRegionInfo() + " to " +
+        mergeMeta.getRegionInfo());
+
+    HRegion.addRegionToMETA(mergeMeta, merged);
+    merged.close();
+  }
+
+  /*
+   * Actually merge two regions and update their info in the meta region(s).
+   * If the meta is split, meta1 may be different from meta2 (and we may have
+   * to scan the meta if the resulting merged region does not go in either).
+   * Returns the HRegion object for the newly merged region.
+   */
+  private HRegion merge(HRegionInfo info1, HRegion meta1, HRegionInfo info2,
+      HRegion meta2)
+  throws IOException {
+    if (info1 == null) {
+      throw new IOException("Could not find " + Bytes.toString(region1) + " in " +
+          Bytes.toString(meta1.getRegionName()));
+    }
+    if (info2 == null) {
+      throw new IOException("Cound not find " + Bytes.toString(region2) + " in " +
+          Bytes.toString(meta2.getRegionName()));
+    }
+    HRegion merged = null;
+    HLog log = utils.getLog();
+    HRegion r1 = HRegion.openHRegion(info1, log, getConf());
+    try {
+      HRegion r2 = HRegion.openHRegion(info2, log, getConf());
+      try {
+        merged = HRegion.merge(r1, r2);
+      } finally {
+        if (!r2.isClosed()) {
+          r2.close();
+        }
+      }
+    } finally {
+      if (!r1.isClosed()) {
+        r1.close();
+      }
+    }
+
+    // Remove the old regions from meta.
+    // HRegion.merge has already deleted their files
+
+    removeRegionFromMeta(meta1, info1);
+    removeRegionFromMeta(meta2, info2);
+
+    this.mergeInfo = merged.getRegionInfo();
+    return merged;
+  }
+
+  /*
+   * Removes a region's meta information from the passed <code>meta</code>
+   * region.
+   *
+   * @param meta META HRegion to be updated
+   * @param regioninfo HRegionInfo of region to remove from <code>meta</code>
+   *
+   * @throws IOException
+   */
+  private void removeRegionFromMeta(HRegion meta, HRegionInfo regioninfo)
+  throws IOException {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Removing region: " + regioninfo + " from " + meta);
+    }
+
+    Delete delete  = new Delete(regioninfo.getRegionName(),
+        System.currentTimeMillis(), null);
+    meta.delete(delete, null, true);
+  }
+
+  /*
+   * Parses the command-line arguments: the name of the table and the names
+   * of the two regions to merge.
+   *
+   * @param args command-line arguments
+   * @return 0 if the arguments name two distinct regions of the given table,
+   * -1 otherwise
+   * @throws IOException
+   */
+  private int parseArgs(String[] args) throws IOException {
+    GenericOptionsParser parser =
+      new GenericOptionsParser(getConf(), args);
+
+    String[] remainingArgs = parser.getRemainingArgs();
+    if (remainingArgs.length != 3) {
+      usage();
+      return -1;
+    }
+    tableName = Bytes.toBytes(remainingArgs[0]);
+    isMetaTable = Bytes.compareTo(tableName, HConstants.META_TABLE_NAME) == 0;
+
+    region1 = Bytes.toBytesBinary(remainingArgs[1]);
+    region2 = Bytes.toBytesBinary(remainingArgs[2]);
+    int status = 0;
+    if (notInTable(tableName, region1) || notInTable(tableName, region2)) {
+      status = -1;
+    } else if (Bytes.equals(region1, region2)) {
+      LOG.error("Can't merge a region with itself");
+      status = -1;
+    }
+    return status;
+  }
+
+  private boolean notInTable(final byte [] tn, final byte [] rn) {
+    if (WritableComparator.compareBytes(tn, 0, tn.length, rn, 0, tn.length) != 0) {
+      LOG.error("Region " + Bytes.toString(rn) + " does not belong to table " +
+        Bytes.toString(tn));
+      return true;
+    }
+    return false;
+  }
+
+  private void usage() {
+    System.err.println(
+        "Usage: bin/hbase merge <table-name> <region-1> <region-2>\n");
+  }
+
+  public static void main(String[] args) {
+    int status;
+    try {
+      status = ToolRunner.run(HBaseConfiguration.create(), new Merge(), args);
+    } catch (Exception e) {
+      LOG.error("exiting due to error", e);
+      status = -1;
+    }
+    System.exit(status);
+  }
+}
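For reference, the tool can also be driven programmatically exactly as main() does; the table and region names below are placeholders, and the cluster must be off-line as run() verifies.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.Merge;
import org.apache.hadoop.util.ToolRunner;

public class MergeExample {
  public static void main(String[] args) throws Exception {
    // Equivalent to: bin/hbase merge <table-name> <region-1> <region-2>
    // The region names here are placeholders, not real region names.
    int status = ToolRunner.run(HBaseConfiguration.create(), new Merge(),
        new String[] {"mytable", "mytable,,1285209619029",
                      "mytable,row500,1285209619029"});
    System.exit(status);
  }
}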
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/MetaUtils.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/MetaUtils.java
new file mode 100644
index 0000000..10d6b92
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/MetaUtils.java
@@ -0,0 +1,478 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+/**
+ * Contains utility methods for manipulating HBase meta tables.
+ * Be sure to call {@link #shutdown()} when done with this class so it closes
+ * resources opened during meta processing (ROOT, META, etc.).  Be careful
+ * how you use this class.  If used during migrations, take extra care when
+ * checking whether a migration is actually needed.
+ */
+public class MetaUtils {
+  private static final Log LOG = LogFactory.getLog(MetaUtils.class);
+  private final Configuration conf;
+  private FileSystem fs;
+  private Path rootdir;
+  private HLog log;
+  private HRegion rootRegion;
+  private Map<byte [], HRegion> metaRegions = Collections.synchronizedSortedMap(
+    new TreeMap<byte [], HRegion>(Bytes.BYTES_COMPARATOR));
+
+  /** Default constructor
+   * @throws IOException e
+   */
+  public MetaUtils() throws IOException {
+    this(HBaseConfiguration.create());
+  }
+
+  /**
+   * @param conf Configuration
+   * @throws IOException e
+   */
+  public MetaUtils(Configuration conf) throws IOException {
+    this.conf = conf;
+    conf.setInt("hbase.client.retries.number", 1);
+    this.rootRegion = null;
+    initialize();
+  }
+
+  /**
+   * Sets up the file system handle and the HBase root directory.
+   * @throws IOException e
+   */
+  private void initialize() throws IOException {
+    this.fs = FileSystem.get(this.conf);
+    // Get root directory of HBase installation
+    this.rootdir = FSUtils.getRootDir(this.conf);
+  }
+
+  /**
+   * @return the HLog
+   * @throws IOException e
+   */
+  public synchronized HLog getLog() throws IOException {
+    if (this.log == null) {
+      Path logdir = new Path(this.fs.getHomeDirectory(),
+          HConstants.HREGION_LOGDIR_NAME + "_" + System.currentTimeMillis());
+      Path oldLogDir = new Path(this.fs.getHomeDirectory(),
+          HConstants.HREGION_OLDLOGDIR_NAME);
+      this.log = new HLog(this.fs, logdir, oldLogDir, this.conf);
+    }
+    return this.log;
+  }
+
+  /**
+   * @return HRegion for root region
+   * @throws IOException e
+   */
+  public HRegion getRootRegion() throws IOException {
+    if (this.rootRegion == null) {
+      openRootRegion();
+    }
+    return this.rootRegion;
+  }
+
+  /**
+   * Open or return cached opened meta region
+   *
+   * @param metaInfo HRegionInfo for meta region
+   * @return meta HRegion
+   * @throws IOException e
+   */
+  public HRegion getMetaRegion(HRegionInfo metaInfo) throws IOException {
+    HRegion meta = metaRegions.get(metaInfo.getRegionName());
+    if (meta == null) {
+      meta = openMetaRegion(metaInfo);
+      LOG.info("OPENING META " + meta.toString());
+      this.metaRegions.put(metaInfo.getRegionName(), meta);
+    }
+    return meta;
+  }
+
+  /**
+   * Closes catalog regions if open. Also closes and deletes the HLog. You
+   * must call this method if you want to persist changes made during a
+   * MetaUtils edit session.
+   */
+  public void shutdown() {
+    if (this.rootRegion != null) {
+      try {
+        this.rootRegion.close();
+      } catch (IOException e) {
+        LOG.error("closing root region", e);
+      } finally {
+        this.rootRegion = null;
+      }
+    }
+    try {
+      for (HRegion r: metaRegions.values()) {
+        LOG.info("CLOSING META " + r.toString());
+        r.close();
+      }
+    } catch (IOException e) {
+      LOG.error("closing meta region", e);
+    } finally {
+      metaRegions.clear();
+    }
+    try {
+      if (this.log != null) {
+        this.log.rollWriter();
+        this.log.closeAndDelete();
+      }
+    } catch (IOException e) {
+      LOG.error("closing HLog", e);
+    } finally {
+      this.log = null;
+    }
+  }
+
+  /**
+   * Used by scanRootRegion and scanMetaRegion to call back the caller so it
+   * can process the data for a row.
+   */
+  public interface ScannerListener {
+    /**
+     * Callback so client of scanner can process row contents
+     *
+     * @param info HRegionInfo for row
+     * @return false to terminate the scan
+     * @throws IOException e
+     */
+    public boolean processRow(HRegionInfo info) throws IOException;
+  }
+
+  /**
+   * Scans the root region. For every meta region found, calls the listener with
+   * the HRegionInfo of the meta region.
+   *
+   * @param listener method to be called for each meta region found
+   * @throws IOException e
+   */
+  public void scanRootRegion(ScannerListener listener) throws IOException {
+    // Open root region so we can scan it
+    if (this.rootRegion == null) {
+      openRootRegion();
+    }
+    scanMetaRegion(this.rootRegion, listener);
+  }
+
+  /**
+   * Scan the passed in meta region <code>r</code> invoking the passed
+   * <code>listener</code> per row found.
+   * @param r region
+   * @param listener scanner listener
+   * @throws IOException e
+   */
+  public void scanMetaRegion(final HRegion r, final ScannerListener listener)
+  throws IOException {
+    Scan scan = new Scan();
+    scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    InternalScanner s = r.getScanner(scan);
+    try {
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      boolean hasNext = true;
+      do {
+        hasNext = s.next(results);
+        HRegionInfo info = null;
+        for (KeyValue kv: results) {
+          info = Writables.getHRegionInfoOrNull(kv.getValue());
+          if (info == null) {
+            LOG.warn("Region info is null for row " +
+              Bytes.toString(kv.getRow()) + " in table " +
+              r.getTableDesc().getNameAsString());
+          }
+          continue;
+        }
+        if (!listener.processRow(info)) {
+          break;
+        }
+        results.clear();
+      } while (hasNext);
+    } finally {
+      s.close();
+    }
+  }
+
+  /**
+   * Scans a meta region. For every region found, calls the listener with
+   * the HRegionInfo of the region.
+   * TODO: Use Visitor rather than Listener pattern.  Allow multiple Visitors.
+   * Use this everywhere we scan meta regions: e.g. in metascanners, in close
+   * handling, etc.  Have it pass in the whole row, not just HRegionInfo.
+   * <p>Use for reading meta only.  Does not close region when done.
+   * Use {@link #getMetaRegion(HRegionInfo)} instead if writing.  Adds
+   * meta region to list that will get a close on {@link #shutdown()}.
+   *
+   * @param metaRegionInfo HRegionInfo for meta region
+   * @param listener method to be called for each meta region found
+   * @throws IOException e
+   */
+  public void scanMetaRegion(HRegionInfo metaRegionInfo,
+    ScannerListener listener)
+  throws IOException {
+    // Open meta region so we can scan it
+    HRegion metaRegion = openMetaRegion(metaRegionInfo);
+    scanMetaRegion(metaRegion, listener);
+  }
+
+  private synchronized HRegion openRootRegion() throws IOException {
+    if (this.rootRegion != null) {
+      return this.rootRegion;
+    }
+    this.rootRegion = HRegion.openHRegion(HRegionInfo.ROOT_REGIONINFO, getLog(),
+      this.conf);
+    this.rootRegion.compactStores();
+    return this.rootRegion;
+  }
+
+  private HRegion openMetaRegion(HRegionInfo metaInfo) throws IOException {
+    HRegion meta = HRegion.openHRegion(metaInfo, getLog(), this.conf);
+    meta.compactStores();
+    return meta;
+  }
+
+  /**
+   * Set a single region on/offline.
+   * This is a tool to repair tables that have offlined tables in their midst.
+   * Can happen on occasion.  Use at your own risk.  Call from a bit of java
+   * or jython script.  This method is 'expensive' in that it creates a
+   * {@link HTable} instance per invocation to go against <code>.META.</code>
+   * @param c A configuration that has its <code>hbase.master</code>
+   * properly set.
+   * @param row Row in the catalog .META. table whose HRegionInfo's offline
+   * status we want to change.
+   * @param onlineOffline Pass <code>true</code> to OFFLINE the region.
+   * @throws IOException e
+   */
+  public static void changeOnlineStatus (final Configuration c,
+      final byte [] row, final boolean onlineOffline)
+  throws IOException {
+    HTable t = new HTable(c, HConstants.META_TABLE_NAME);
+    Get get = new Get(row);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    Result res = t.get(get);
+    KeyValue [] kvs = res.raw();
+    if(kvs.length <= 0) {
+      throw new IOException("no information for row " + Bytes.toString(row));
+    }
+    byte [] value = kvs[0].getValue();
+    if (value == null) {
+      throw new IOException("no information for row " + Bytes.toString(row));
+    }
+    HRegionInfo info = Writables.getHRegionInfo(value);
+    Put put = new Put(row);
+    info.setOffline(onlineOffline);
+    put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(info));
+    t.put(put);
+
+    Delete delete = new Delete(row);
+    delete.deleteColumns(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+    delete.deleteColumns(HConstants.CATALOG_FAMILY,
+        HConstants.STARTCODE_QUALIFIER);
+
+    t.delete(delete);
+  }
+
+  /**
+   * Offline version of the online TableOperation,
+   * org.apache.hadoop.hbase.master.AddColumn.
+   * @param tableName table name
+   * @param hcd Add this column to <code>tableName</code>
+   * @throws IOException e
+   */
+  public void addColumn(final byte [] tableName,
+      final HColumnDescriptor hcd)
+  throws IOException {
+    List<HRegionInfo> metas = getMETARows(tableName);
+    for (HRegionInfo hri: metas) {
+      final HRegion m = getMetaRegion(hri);
+      scanMetaRegion(m, new ScannerListener() {
+        private boolean inTable = true;
+
+        @SuppressWarnings("synthetic-access")
+        public boolean processRow(HRegionInfo info) throws IOException {
+          LOG.debug("Testing " + Bytes.toString(tableName) + " against " +
+            Bytes.toString(info.getTableDesc().getName()));
+          if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+            this.inTable = false;
+            info.getTableDesc().addFamily(hcd);
+            updateMETARegionInfo(m, info);
+            return true;
+          }
+          // If we get here, the row is not in the table.  If we have not yet
+          // encountered the table, inTable is still true and we keep scanning;
+          // once we have passed beyond the table, inTable is false and the
+          // scanner stops.
+          return this.inTable;
+        }});
+    }
+  }
+
+  /**
+   * Offline version of the online TableOperation,
+   * org.apache.hadoop.hbase.master.DeleteColumn.
+   * @param tableName table name
+   * @param columnFamily Name of column family to remove.
+   * @throws IOException e
+   */
+  public void deleteColumn(final byte [] tableName,
+      final byte [] columnFamily) throws IOException {
+    List<HRegionInfo> metas = getMETARows(tableName);
+    for (HRegionInfo hri: metas) {
+      final HRegion m = getMetaRegion(hri);
+      scanMetaRegion(m, new ScannerListener() {
+        private boolean inTable = true;
+
+        @SuppressWarnings("synthetic-access")
+        public boolean processRow(HRegionInfo info) throws IOException {
+          if (Bytes.equals(info.getTableDesc().getName(), tableName)) {
+            this.inTable = false;
+            info.getTableDesc().removeFamily(columnFamily);
+            updateMETARegionInfo(m, info);
+            Path tabledir = new Path(rootdir,
+              info.getTableDesc().getNameAsString());
+            Path p = Store.getStoreHomedir(tabledir, info.getEncodedName(),
+              columnFamily);
+            if (!fs.delete(p, true)) {
+              LOG.warn("Failed delete of " + p);
+            }
+            return false;
+          }
+          // If we get here, the row is not in the table.  If we have not yet
+          // encountered the table, inTable is still true and we keep scanning;
+          // once we have passed beyond the table, inTable is false and the
+          // scanner stops.
+          return this.inTable;
+        }});
+    }
+  }
+
+  /**
+   * Update COL_REGIONINFO in meta region r with HRegionInfo hri
+   *
+   * @param r region
+   * @param hri region info
+   * @throws IOException e
+   */
+  public void updateMETARegionInfo(HRegion r, final HRegionInfo hri)
+  throws IOException {
+    if (LOG.isDebugEnabled()) {
+      Get get = new Get(hri.getRegionName());
+      get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+      Result res = r.get(get, null);
+      KeyValue [] kvs = res.raw();
+      if(kvs.length <= 0) {
+        return;
+      }
+      byte [] value = kvs[0].getValue();
+      if (value == null) {
+        return;
+      }
+      HRegionInfo h = Writables.getHRegionInfoOrNull(value);
+
+      LOG.debug("Old " + Bytes.toString(HConstants.CATALOG_FAMILY) + ":" +
+          Bytes.toString(HConstants.REGIONINFO_QUALIFIER) + " for " +
+          hri.toString() + " in " + r.toString() + " is: " + h.toString());
+    }
+
+    Put put = new Put(hri.getRegionName());
+    put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(hri));
+    r.put(put);
+
+    if (LOG.isDebugEnabled()) {
+      Get get = new Get(hri.getRegionName());
+      get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+      Result res = r.get(get, null);
+      KeyValue [] kvs = res.raw();
+      if(kvs.length <= 0) {
+        return;
+      }
+      byte [] value = kvs[0].getValue();
+      if (value == null) {
+        return;
+      }
+      HRegionInfo h = Writables.getHRegionInfoOrNull(value);
+        LOG.debug("New " + Bytes.toString(HConstants.CATALOG_FAMILY) + ":" +
+            Bytes.toString(HConstants.REGIONINFO_QUALIFIER) + " for " +
+            hri.toString() + " in " + r.toString() + " is: " +  h.toString());
+    }
+  }
+
+  /**
+   * @return List of {@link HRegionInfo} rows found in the ROOT or META
+   * catalog table.
+   * @param tableName Name of table to go looking for.
+   * @throws IOException e
+   * @see #getMetaRegion(HRegionInfo)
+   */
+  public List<HRegionInfo> getMETARows(final byte [] tableName)
+  throws IOException {
+    final List<HRegionInfo> result = new ArrayList<HRegionInfo>();
+    // If the passed table name is META, then return the root region.
+    if (Bytes.equals(HConstants.META_TABLE_NAME, tableName)) {
+      result.add(openRootRegion().getRegionInfo());
+      return result;
+    }
+    // Return all meta regions that contain the passed tablename.
+    scanRootRegion(new ScannerListener() {
+      private final Log SL_LOG = LogFactory.getLog(this.getClass());
+
+      public boolean processRow(HRegionInfo info) throws IOException {
+        SL_LOG.debug("Testing " + info);
+        if (Bytes.equals(info.getTableDesc().getName(),
+            HConstants.META_TABLE_NAME)) {
+          result.add(info);
+          return false;
+        }
+        return true;
+      }});
+    return result;
+  }
+}
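A minimal sketch of an off-line scan using the listener callback described above; note the shutdown() call the class javadoc asks for.

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.util.MetaUtils;

public class MetaUtilsExample {
  public static void main(String[] args) throws Exception {
    MetaUtils utils = new MetaUtils();   // the cluster must be off-line
    try {
      // Print every meta region registered in the root region.
      utils.scanRootRegion(new MetaUtils.ScannerListener() {
        public boolean processRow(HRegionInfo info) {
          System.out.println(info);
          return true;   // keep scanning
        }
      });
    } finally {
      utils.shutdown();   // closes ROOT/META regions and the temporary HLog
    }
  }
}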
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
new file mode 100644
index 0000000..085bf1e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/MurmurHash.java
@@ -0,0 +1,88 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+/**
+ * This is a very fast, non-cryptographic hash suitable for general hash-based
+ * lookup.  See http://murmurhash.googlepages.com/ for more details.
+ *
+ * <p>The C version of MurmurHash 2.0 found at that site was ported
+ * to Java by Andrzej Bialecki (ab at getopt org).</p>
+ */
+public class MurmurHash extends Hash {
+  private static MurmurHash _instance = new MurmurHash();
+
+  public static Hash getInstance() {
+    return _instance;
+  }
+
+  @Override
+  public int hash(byte[] data, int offset, int length, int seed) {
+    int m = 0x5bd1e995;
+    int r = 24;
+
+    int h = seed ^ length;
+
+    int len_4 = length >> 2;
+
+    for (int i = 0; i < len_4; i++) {
+      int i_4 = (i << 2) + offset;
+      int k = data[i_4 + 3];
+      k = k << 8;
+      k = k | (data[i_4 + 2] & 0xff);
+      k = k << 8;
+      k = k | (data[i_4 + 1] & 0xff);
+      k = k << 8;
+      //noinspection PointlessArithmeticExpression
+      k = k | (data[i_4 + 0] & 0xff);
+      k *= m;
+      k ^= k >>> r;
+      k *= m;
+      h *= m;
+      h ^= k;
+    }
+
+    // avoid calculating modulo
+    int len_m = len_4 << 2;
+    int left = length - len_m;
+    int i_m = len_m + offset;
+
+    if (left != 0) {
+      if (left >= 3) {
+        h ^= data[i_m + 2] << 16;
+      }
+      if (left >= 2) {
+        h ^= data[i_m + 1] << 8;
+      }
+      if (left >= 1) {
+        h ^= data[i_m];
+      }
+
+      h *= m;
+    }
+
+    h ^= h >>> 13;
+    h *= m;
+    h ^= h >>> 15;
+
+    return h;
+  }
+}
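A small sketch of the hash above; the input bytes and the seed of -1 are arbitrary.

import org.apache.hadoop.hbase.util.Hash;
import org.apache.hadoop.hbase.util.MurmurHash;

public class MurmurHashExample {
  public static void main(String[] args) {
    byte[] data = "some-row-key".getBytes();   // made-up input
    Hash hash = MurmurHash.getInstance();
    // Same bytes and same seed always produce the same 32-bit value.
    int h = hash.hash(data, 0, data.length, -1);
    System.out.println(h);
  }
}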
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Pair.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Pair.java
new file mode 100644
index 0000000..ff296b6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Pair.java
@@ -0,0 +1,119 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.Serializable;
+
+/**
+ * A generic class for pairs.
+ * @param <T1> type of the first element
+ * @param <T2> type of the second element
+ */
+public class Pair<T1, T2> implements Serializable
+{
+  private static final long serialVersionUID = -3986244606585552569L;
+  protected T1 first = null;
+  protected T2 second = null;
+
+  /**
+   * Default constructor.
+   */
+  public Pair()
+  {
+  }
+
+  /**
+   * Constructor
+   * @param a operand
+   * @param b operand
+   */
+  public Pair(T1 a, T2 b)
+  {
+    this.first = a;
+    this.second = b;
+  }
+
+  /**
+   * Replace the first element of the pair.
+   * @param a operand
+   */
+  public void setFirst(T1 a)
+  {
+    this.first = a;
+  }
+
+  /**
+   * Replace the second element of the pair.
+   * @param b operand
+   */
+  public void setSecond(T2 b)
+  {
+    this.second = b;
+  }
+
+  /**
+   * Return the first element stored in the pair.
+   * @return T1
+   */
+  public T1 getFirst()
+  {
+    return first;
+  }
+
+  /**
+   * Return the second element stored in the pair.
+   * @return T2
+   */
+  public T2 getSecond()
+  {
+    return second;
+  }
+
+  private static boolean equals(Object x, Object y)
+  {
+     return (x == null && y == null) || (x != null && x.equals(y));
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public boolean equals(Object other)
+  {
+    return other instanceof Pair && equals(first, ((Pair)other).first) &&
+      equals(second, ((Pair)other).second);
+  }
+
+  @Override
+  public int hashCode()
+  {
+    if (first == null)
+      return (second == null) ? 0 : second.hashCode() + 1;
+    else if (second == null)
+      return first.hashCode() + 2;
+    else
+      return first.hashCode() * 17 + second.hashCode();
+  }
+
+  @Override
+  public String toString()
+  {
+    return "{" + getFirst() + "," + getSecond() + "}";
+  }
+}
\ No newline at end of file
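A tiny usage sketch; the element values are arbitrary.

import org.apache.hadoop.hbase.util.Pair;

public class PairExample {
  public static void main(String[] args) {
    Pair<String, Long> p = new Pair<String, Long>("region-count", 42L);
    System.out.println(p.getFirst() + "=" + p.getSecond());   // region-count=42
    System.out.println(p);                                    // {region-count,42}
  }
}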
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/PairOfSameType.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/PairOfSameType.java
new file mode 100644
index 0000000..ddee30d
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/PairOfSameType.java
@@ -0,0 +1,112 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.util.Iterator;
+
+import org.apache.commons.lang.NotImplementedException;
+
+/**
+ * A generic, immutable class for pairs of objects both of type <code>T</code>.
+ * @param <T> type of both elements
+ * @see Pair if the types differ.
+ */
+public class PairOfSameType<T> implements Iterable<T> {
+  private final T first;
+  private final T second;
+
+  /**
+   * Constructor
+   * @param a operand
+   * @param b operand
+   */
+  public PairOfSameType(T a, T b) {
+    this.first = a;
+    this.second = b;
+  }
+
+  /**
+   * Return the first element stored in the pair.
+   * @return T
+   */
+  public T getFirst() {
+    return first;
+  }
+
+  /**
+   * Return the second element stored in the pair.
+   * @return T
+   */
+  public T getSecond() {
+    return second;
+  }
+
+  private static boolean equals(Object x, Object y) {
+     return (x == null && y == null) || (x != null && x.equals(y));
+  }
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public boolean equals(Object other) {
+    return other instanceof PairOfSameType &&
+      equals(first, ((PairOfSameType)other).first) &&
+      equals(second, ((PairOfSameType)other).second);
+  }
+
+  @Override
+  public int hashCode() {
+    if (first == null)
+      return (second == null) ? 0 : second.hashCode() + 1;
+    else if (second == null)
+      return first.hashCode() + 2;
+    else
+      return first.hashCode() * 17 + second.hashCode();
+  }
+
+  @Override
+  public String toString() {
+    return "{" + getFirst() + "," + getSecond() + "}";
+  }
+
+  @Override
+  public Iterator<T> iterator() {
+    return new Iterator<T>() {
+      private int returned = 0;
+
+      @Override
+      public boolean hasNext() {
+        return this.returned < 2;
+      }
+
+      @Override
+      public T next() {
+        if (++this.returned == 1) return getFirst();
+        else if (this.returned == 2) return getSecond();
+        else throw new IllegalAccessError("this.returned=" + this.returned);
+      }
+
+      @Override
+      public void remove() {
+        throw new NotImplementedException();
+      }
+    };
+  }
+}
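A tiny sketch of the iteration this class enables because both elements share a type; the values are placeholders.

import org.apache.hadoop.hbase.util.PairOfSameType;

public class PairOfSameTypeExample {
  public static void main(String[] args) {
    PairOfSameType<String> daughters =
        new PairOfSameType<String>("regionA", "regionB");
    // Unlike Pair, this pair is Iterable, so a for-each visits both elements.
    for (String r : daughters) {
      System.out.println(r);
    }
  }
}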
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/ServerCommandLine.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/ServerCommandLine.java
new file mode 100644
index 0000000..b2f3770
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/ServerCommandLine.java
@@ -0,0 +1,82 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.lang.management.RuntimeMXBean;
+import java.lang.management.ManagementFactory;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.conf.Configured;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.util.Tool;
+import org.apache.hadoop.util.ToolRunner;
+
+/**
+ * Base class for command lines that start up various HBase daemons.
+ */
+public abstract class ServerCommandLine extends Configured implements Tool {
+  private static final Log LOG = LogFactory.getLog(ServerCommandLine.class);
+
+  /**
+   * Implementing subclasses should return a usage string to print out.
+   */
+  protected abstract String getUsage();
+
+  /**
+   * Print usage information for this command line.
+   *
+   * @param message if not null, print this message before the usage info.
+   */
+  protected void usage(String message) {
+    if (message != null) {
+      System.err.println(message);
+      System.err.println("");
+    }
+
+    System.err.println(getUsage());
+  }
+
+  /**
+   * Log information about the currently running JVM.
+   */
+  public static void logJVMInfo() {
+    // Print out vm stats before starting up.
+    RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
+    if (runtime != null) {
+      LOG.info("vmName=" + runtime.getVmName() + ", vmVendor=" +
+               runtime.getVmVendor() + ", vmVersion=" + runtime.getVmVersion());
+      LOG.info("vmInputArguments=" + runtime.getInputArguments());
+    }
+  }
+
+  /**
+   * Parse and run the given command line. This may exit the JVM if
+   * a nonzero exit code is returned from <code>run()</code>.
+   */
+  public void doMain(String args[]) throws Exception {
+    int ret = ToolRunner.run(
+      HBaseConfiguration.create(), this, args);
+    if (ret != 0) {
+      System.exit(ret);
+    }
+  }
+}
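A hedged sketch of what a subclass might look like; DummyServerCommandLine and its daemon are hypothetical, and only the getUsage()/run()/doMain() wiring mirrors the base class above.

import org.apache.hadoop.hbase.util.ServerCommandLine;

// Hypothetical daemon command line; the real subclasses (for the master,
// region server, etc.) live elsewhere in the code base.
public class DummyServerCommandLine extends ServerCommandLine {
  @Override
  protected String getUsage() {
    return "Usage: dummyserver start|stop";
  }

  public int run(String[] args) throws Exception {
    if (args.length != 1) {
      usage("Wrong number of arguments");
      return -1;
    }
    logJVMInfo();   // log VM name/vendor/version and input arguments
    // ... start or stop the (hypothetical) daemon here ...
    return 0;
  }

  public static void main(String[] args) throws Exception {
    new DummyServerCommandLine().doMain(args);
  }
}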
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java
new file mode 100644
index 0000000..011dcbe
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java
@@ -0,0 +1,114 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Stoppable;
+
+/**
+ * Sleeper for current thread.
+ * Sleeps for the passed period.  Also checks the passed Stoppable; if the
+ * sleeper is interrupted while the stop flag is set, it returns rather than
+ * going back to sleep for the remainder of the period.
+ */
+public class Sleeper {
+  private final Log LOG = LogFactory.getLog(this.getClass().getName());
+  private final int period;
+  private final Stoppable stopper;
+  private static final long MINIMAL_DELTA_FOR_LOGGING = 10000;
+
+  private final Object sleepLock = new Object();
+  private boolean triggerWake = false;
+
+  /**
+   * @param sleep sleep time in milliseconds
+   * @param stopper When {@link Stoppable#isStopped()} is true, this thread will
+   * clean up and exit cleanly.
+   */
+  public Sleeper(final int sleep, final Stoppable stopper) {
+    this.period = sleep;
+    this.stopper = stopper;
+  }
+
+  /**
+   * Sleep for period.
+   */
+  public void sleep() {
+    sleep(System.currentTimeMillis());
+  }
+
+  /**
+   * If currently asleep, stops sleeping; if not asleep, will skip the next
+   * sleep cycle.
+   */
+  public void skipSleepCycle() {
+    synchronized (sleepLock) {
+      triggerWake = true;
+      sleepLock.notify();
+    }
+  }
+
+  /**
+   * Sleep for the period, adjusted by the passed <code>startTime</code>.
+   * @param startTime Time some task started previous to now.  The time to
+   * sleep is docked by current time minus the passed <code>startTime</code>.
+   */
+  public void sleep(final long startTime) {
+    if (this.stopper.isStopped()) {
+      return;
+    }
+    long now = System.currentTimeMillis();
+    long waitTime = this.period - (now - startTime);
+    if (waitTime > this.period) {
+      LOG.warn("Calculated wait time > " + this.period +
+        "; setting to this.period: " + System.currentTimeMillis() + ", " +
+        startTime);
+      waitTime = this.period;
+    }
+    while (waitTime > 0) {
+      long woke = -1;
+      try {
+        synchronized (sleepLock) {
+          if (triggerWake) break;
+          sleepLock.wait(waitTime);
+        }
+        woke = System.currentTimeMillis();
+        long slept = woke - now;
+        if (slept - this.period > MINIMAL_DELTA_FOR_LOGGING) {
+          LOG.warn("We slept " + slept + "ms instead of " + this.period +
+              "ms, this is likely due to a long " +
+              "garbage collecting pause and it's usually bad, " +
+              "see http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A9");
+        }
+      } catch(InterruptedException iex) {
+        // Were we interrupted because we're meant to stop?  If not, just
+        // continue, ignoring the interruption.
+        if (this.stopper.isStopped()) {
+          return;
+        }
+      }
+      // Recalculate waitTime.
+      woke = (woke == -1)? System.currentTimeMillis(): woke;
+      waitTime = this.period - (woke - startTime);
+    }
+    triggerWake = false;
+  }
+}
\ No newline at end of file
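
A minimal sketch of the intended chore-style use, assuming a Stoppable named stopper is available from the enclosing service; only isStopped() from that interface is exercised here:

  Sleeper sleeper = new Sleeper(1000, stopper);   // one-second period
  while (!stopper.isStopped()) {
    long start = System.currentTimeMillis();
    // do the periodic work here ...
    sleeper.sleep(start);   // sleep only for the remainder of the period
  }
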
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/SoftValueSortedMap.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/SoftValueSortedMap.java
new file mode 100644
index 0000000..4d1a552
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/SoftValueSortedMap.java
@@ -0,0 +1,196 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.lang.ref.ReferenceQueue;
+import java.lang.ref.SoftReference;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedMap;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+/**
+ * A SortedMap implementation that uses Soft Reference values
+ * internally to make it play well with the GC when in a low-memory
+ * situation. Use as a cache where you also need SortedMap functionality.
+ *
+ * @param <K> key class
+ * @param <V> value class
+ */
+public class SoftValueSortedMap<K,V> implements SortedMap<K,V> {
+  private final SortedMap<K, SoftValue<K,V>> internalMap;
+  private final ReferenceQueue rq = new ReferenceQueue();
+
+  /** Constructor */
+  public SoftValueSortedMap() {
+    this(new TreeMap<K, SoftValue<K,V>>());
+  }
+
+  /**
+   * Constructor
+   * @param c comparator
+   */
+  public SoftValueSortedMap(final Comparator<K> c) {
+    this(new TreeMap<K, SoftValue<K,V>>(c));
+  }
+
+  /** For headMap and tailMap support
+   * @param original object to wrap
+   */
+  private SoftValueSortedMap(SortedMap<K,SoftValue<K,V>> original) {
+    this.internalMap = original;
+  }
+
+  /**
+   * Checks the reference queue and removes from the internal map any entry
+   * whose value has been cleared by the garbage collector.  get/put and the
+   * other accessors call this on each access.
+   * @return How many references were cleared.
+   */
+  private int checkReferences() {
+    int i = 0;
+    for (Object obj; (obj = this.rq.poll()) != null;) {
+      i++;
+      //noinspection unchecked
+      this.internalMap.remove(((SoftValue<K,V>)obj).key);
+    }
+    return i;
+  }
+
+  public synchronized V put(K key, V value) {
+    checkReferences();
+    SoftValue<K,V> oldValue = this.internalMap.put(key,
+      new SoftValue<K,V>(key, value, this.rq));
+    return oldValue == null ? null : oldValue.get();
+  }
+
+  @SuppressWarnings("unchecked")
+  public synchronized void putAll(Map map) {
+    throw new RuntimeException("Not implemented");
+  }
+
+  @SuppressWarnings({"SuspiciousMethodCalls"})
+  public synchronized V get(Object key) {
+    checkReferences();
+    SoftValue<K,V> value = this.internalMap.get(key);
+    if (value == null) {
+      return null;
+    }
+    if (value.get() == null) {
+      this.internalMap.remove(key);
+      return null;
+    }
+    return value.get();
+  }
+
+  public synchronized V remove(Object key) {
+    checkReferences();
+    SoftValue<K,V> value = this.internalMap.remove(key);
+    return value == null ? null : value.get();
+  }
+
+  public synchronized boolean containsKey(Object key) {
+    checkReferences();
+    return this.internalMap.containsKey(key);
+  }
+
+  public synchronized boolean containsValue(Object value) {
+/*    checkReferences();
+    return internalMap.containsValue(value);*/
+    throw new UnsupportedOperationException("Don't support containsValue!");
+  }
+
+  public synchronized K firstKey() {
+    checkReferences();
+    return internalMap.firstKey();
+  }
+
+  public synchronized K lastKey() {
+    checkReferences();
+    return internalMap.lastKey();
+  }
+
+  public synchronized SoftValueSortedMap<K,V> headMap(K key) {
+    checkReferences();
+    return new SoftValueSortedMap<K,V>(this.internalMap.headMap(key));
+  }
+
+  public synchronized SoftValueSortedMap<K,V> tailMap(K key) {
+    checkReferences();
+    return new SoftValueSortedMap<K,V>(this.internalMap.tailMap(key));
+  }
+
+  public synchronized SoftValueSortedMap<K,V> subMap(K fromKey, K toKey) {
+    checkReferences();
+    return new SoftValueSortedMap<K,V>(this.internalMap.subMap(fromKey, toKey));
+  }
+
+  public synchronized boolean isEmpty() {
+    checkReferences();
+    return this.internalMap.isEmpty();
+  }
+
+  public synchronized int size() {
+    checkReferences();
+    return this.internalMap.size();
+  }
+
+  public synchronized void clear() {
+    checkReferences();
+    this.internalMap.clear();
+  }
+
+  public synchronized Set<K> keySet() {
+    checkReferences();
+    return this.internalMap.keySet();
+  }
+
+  @SuppressWarnings("unchecked")
+  public synchronized Comparator comparator() {
+    return this.internalMap.comparator();
+  }
+
+  public synchronized Set<Map.Entry<K,V>> entrySet() {
+    throw new RuntimeException("Not implemented");
+  }
+
+  public synchronized Collection<V> values() {
+    checkReferences();
+    Collection<SoftValue<K,V>> softValues = this.internalMap.values();
+    ArrayList<V> hardValues = new ArrayList<V>();
+    for(SoftValue<K,V> softValue : softValues) {
+      hardValues.add(softValue.get());
+    }
+    return hardValues;
+  }
+
+  private static class SoftValue<K,V> extends SoftReference<V> {
+    final K key;
+
+    SoftValue(K key, V value, ReferenceQueue q) {
+      super(value, q);
+      this.key = key;
+    }
+  }
+}
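
A short sketch of the cache-style usage the class comment describes; the key and value types here are arbitrary:

  SoftValueSortedMap<String, byte[]> cache =
      new SoftValueSortedMap<String, byte[]>();
  cache.put("row-0001", new byte[1024]);
  byte[] cached = cache.get("row-0001");
  if (cached == null) {
    // Never cached, or the soft reference was cleared under memory pressure;
    // reload the value and put it back.
  }
  // SortedMap range views are available too:
  SoftValueSortedMap<String, byte[]> head = cache.headMap("row-0500");
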
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Strings.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Strings.java
new file mode 100644
index 0000000..c2cad2e
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Strings.java
@@ -0,0 +1,61 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Utility for Strings.
+ */
+public class Strings {
+  public final static String DEFAULT_SEPARATOR = "=";
+  public final static String DEFAULT_KEYVALUE_SEPARATOR = ", ";
+
+  /**
+   * Append to a StringBuilder a key/value.
+   * Uses default separators.
+   * @param sb StringBuilder to use
+   * @param key Key to append.
+   * @param value Value to append.
+   * @return Passed <code>sb</code> populated with key/value.
+   */
+  public static StringBuilder appendKeyValue(final StringBuilder sb,
+      final String key, final Object value) {
+    return appendKeyValue(sb, key, value, DEFAULT_SEPARATOR,
+      DEFAULT_KEYVALUE_SEPARATOR);
+  }
+
+  /**
+   * Append to a StringBuilder a key/value.
+   * Uses the passed separators.
+   * @param sb StringBuilder to use
+   * @param key Key to append.
+   * @param value Value to append.
+   * @param separator Value to use between key and value.
+   * @param keyValueSeparator Value to use between key/value sets.
+   * @return Passed <code>sb</code> populated with key/value.
+   */
+  public static StringBuilder appendKeyValue(final StringBuilder sb,
+      final String key, final Object value, final String separator,
+      final String keyValueSeparator) {
+    if (sb.length() > 0) {
+      sb.append(keyValueSeparator);
+    }
+    return sb.append(key).append(separator).append(value);
+  }
+}
\ No newline at end of file
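
A quick sketch of appendKeyValue building the usual "key=value, key=value" form; the keys and values are illustrative:

  StringBuilder sb = new StringBuilder();
  Strings.appendKeyValue(sb, "host", "example.org");
  Strings.appendKeyValue(sb, "port", 60020);
  // sb.toString() is now: host=example.org, port=60020
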
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Threads.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Threads.java
new file mode 100644
index 0000000..3b89433
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Threads.java
@@ -0,0 +1,131 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import java.io.PrintWriter;
+import org.apache.hadoop.util.ReflectionUtils;
+
+import java.lang.Thread.UncaughtExceptionHandler;
+
+/**
+ * Thread Utility
+ */
+public class Threads {
+  protected static final Log LOG = LogFactory.getLog(Threads.class);
+
+  /**
+   * Utility method that sets name, daemon status and starts passed thread.
+   * @param t thread to run
+   * @return Returns the passed Thread <code>t</code>.
+   */
+  public static Thread setDaemonThreadRunning(final Thread t) {
+    return setDaemonThreadRunning(t, t.getName());
+  }
+
+  /**
+   * Utility method that sets name, daemon status and starts passed thread.
+   * @param t thread to frob
+   * @param name new name
+   * @return Returns the passed Thread <code>t</code>.
+   */
+  public static Thread setDaemonThreadRunning(final Thread t,
+    final String name) {
+    return setDaemonThreadRunning(t, name, null);
+  }
+
+  /**
+   * Utility method that sets name, daemon status and starts passed thread.
+   * @param t thread to frob
+   * @param name new name
+   * @param handler A handler to set on the thread.  Pass null if you want to
+   * use the default handler.
+   * @return Returns the passed Thread <code>t</code>.
+   */
+  public static Thread setDaemonThreadRunning(final Thread t,
+    final String name, final UncaughtExceptionHandler handler) {
+    t.setName(name);
+    if (handler != null) {
+      t.setUncaughtExceptionHandler(handler);
+    }
+    t.setDaemon(true);
+    t.start();
+    return t;
+  }
+
+  /**
+   * Shutdown passed thread using isAlive and join.
+   * @param t Thread to shutdown
+   */
+  public static void shutdown(final Thread t) {
+    shutdown(t, 0);
+  }
+
+  /**
+   * Shutdown passed thread using isAlive and join.
+   * @param t Thread to shutdown
+   * @param joinwait How long to wait on each join; pass 0 to wait forever.
+   */
+  public static void shutdown(final Thread t, final long joinwait) {
+    if (t == null) return;
+    while (t.isAlive()) {
+      try {
+        t.join(joinwait);
+      } catch (InterruptedException e) {
+        LOG.warn(t.getName() + "; joinwait=" + joinwait, e);
+      }
+    }
+  }
+
+
+  /**
+   * Waits on the passed thread to die, dumping a thread dump every minute
+   * while it is alive.
+   * @param t Thread to wait on.
+   * @throws InterruptedException
+   */
+  public static void threadDumpingIsAlive(final Thread t)
+  throws InterruptedException {
+    if (t == null) {
+      return;
+    }
+    long startTime = System.currentTimeMillis();
+    while (t.isAlive()) {
+      Thread.sleep(1000);
+      if (System.currentTimeMillis() - startTime > 60000) {
+        startTime = System.currentTimeMillis();
+        ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+            "Automatic Stack Trace every 60 seconds waiting on " +
+            t.getName());
+      }
+    }
+  }
+
+  /**
+   * @param millis How long to sleep for in milliseconds.
+   */
+  public static void sleep(int millis) {
+    try {
+      Thread.sleep(millis);
+    } catch (InterruptedException e) {
+      e.printStackTrace();
+    }
+  }
+}
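
A brief sketch of naming and daemonizing a worker thread and later waiting it down; the Runnable body and thread name are placeholders:

  Thread worker = new Thread(new Runnable() {
    public void run() {
      // background work ...
    }
  });
  Threads.setDaemonThreadRunning(worker, "example.worker");
  // ... later, on shutdown:
  Threads.shutdown(worker, 1000);   // re-join every second until the thread dies
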
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
new file mode 100644
index 0000000..24e98df
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java
@@ -0,0 +1,91 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.VersionAnnotation;
+
+/**
+ * This class finds the package info for hbase and the VersionAnnotation
+ * information.  Taken from hadoop.  Only name of annotation is different.
+ */
+public class VersionInfo {
+  private static Package myPackage;
+  private static VersionAnnotation version;
+
+  static {
+    myPackage = VersionAnnotation.class.getPackage();
+    version = myPackage.getAnnotation(VersionAnnotation.class);
+  }
+
+  /**
+   * Get the meta-data for the hbase package.
+   * @return package
+   */
+  static Package getPackage() {
+    return myPackage;
+  }
+
+  /**
+   * Get the hbase version.
+   * @return the hbase version string, eg. "0.6.3-dev"
+   */
+  public static String getVersion() {
+    return version != null ? version.version() : "Unknown";
+  }
+
+  /**
+   * Get the subversion revision number for the root directory
+   * @return the revision number, eg. "451451"
+   */
+  public static String getRevision() {
+    return version != null ? version.revision() : "Unknown";
+  }
+
+  /**
+   * The date that hbase was compiled.
+   * @return the compilation date in unix date format
+   */
+  public static String getDate() {
+    return version != null ? version.date() : "Unknown";
+  }
+
+  /**
+   * The user that compiled hbase.
+   * @return the username of the user
+   */
+  public static String getUser() {
+    return version != null ? version.user() : "Unknown";
+  }
+
+  /**
+   * Get the subversion URL for the root hbase directory.
+   * @return the url
+   */
+  public static String getUrl() {
+    return version != null ? version.url() : "Unknown";
+  }
+
+  public static void main(String[] args) {
+    System.out.println("HBase " + getVersion());
+    System.out.println("Subversion " + getUrl() + " -r " + getRevision());
+    System.out.println("Compiled by " + getUser() + " on " + getDate());
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/util/Writables.java b/0.90/src/main/java/org/apache/hadoop/hbase/util/Writables.java
new file mode 100644
index 0000000..4bff615
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/util/Writables.java
@@ -0,0 +1,163 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.apache.hadoop.io.Writable;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+/**
+ * Utility class with methods for manipulating Writable objects
+ */
+public class Writables {
+  /**
+   * @param w writable
+   * @return The bytes of <code>w</code> gotten by running its
+   * {@link Writable#write(java.io.DataOutput)} method.
+   * @throws IOException e
+   * @see #getWritable(byte[], Writable)
+   */
+  public static byte [] getBytes(final Writable w) throws IOException {
+    if (w == null) {
+      throw new IllegalArgumentException("Writable cannot be null");
+    }
+    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(byteStream);
+    try {
+      w.write(out);
+      out.close();
+      out = null;
+      return byteStream.toByteArray();
+    } finally {
+      if (out != null) {
+        out.close();
+      }
+    }
+  }
+
+  /**
+   * Set bytes into the passed Writable by calling its
+   * {@link Writable#readFields(java.io.DataInput)}.
+   * @param bytes serialized bytes
+   * @param w An empty Writable (usually made by calling the null-arg
+   * constructor).
+   * @return The passed Writable after its readFields has been fed the
+   * passed <code>bytes</code> array.
+   * @throws IOException e
+   * @throws IllegalArgumentException if passed a null or empty
+   * <code>bytes</code> array.
+   */
+  public static Writable getWritable(final byte [] bytes, final Writable w)
+  throws IOException {
+    return getWritable(bytes, 0, bytes.length, w);
+  }
+
+  /**
+   * Set bytes into the passed Writable by calling its
+   * {@link Writable#readFields(java.io.DataInput)}.
+   * @param bytes serialized bytes
+   * @param offset offset into array
+   * @param length length of data
+   * @param w An empty Writable (usually made by calling the null-arg
+   * constructor).
+   * @return The passed Writable after its readFields has been fed the
+   * passed <code>bytes</code> array.
+   * @throws IOException e
+   * @throws IllegalArgumentException if passed a null or empty
+   * <code>bytes</code> array.
+   */
+  public static Writable getWritable(final byte [] bytes, final int offset,
+    final int length, final Writable w)
+  throws IOException {
+    if (bytes == null || length <=0) {
+      throw new IllegalArgumentException("Can't build a writable with empty " +
+        "bytes array");
+    }
+    if (w == null) {
+      throw new IllegalArgumentException("Writable cannot be null");
+    }
+    DataInputBuffer in = new DataInputBuffer();
+    try {
+      in.reset(bytes, offset, length);
+      w.readFields(in);
+      return w;
+    } finally {
+      in.close();
+    }
+  }
+
+  /**
+   * @param bytes serialized bytes
+   * @return A HRegionInfo instance built out of passed <code>bytes</code>.
+   * @throws IOException e
+   */
+  public static HRegionInfo getHRegionInfo(final byte [] bytes)
+  throws IOException {
+    return (HRegionInfo)getWritable(bytes, new HRegionInfo());
+  }
+
+  /**
+   * @param bytes serialized bytes
+   * @return A HRegionInfo instance built out of passed <code>bytes</code>
+   * or <code>null</code> if passed bytes are null or an empty array.
+   * @throws IOException e
+   */
+  public static HRegionInfo getHRegionInfoOrNull(final byte [] bytes)
+  throws IOException {
+    return (bytes == null || bytes.length <= 0)?
+        null : getHRegionInfo(bytes);
+  }
+
+  /**
+   * Copy one Writable to another.  Copies bytes using data streams.
+   * @param src Source Writable
+   * @param tgt Target Writable
+   * @return The target Writable.
+   * @throws IOException e
+   */
+  public static Writable copyWritable(final Writable src, final Writable tgt)
+  throws IOException {
+    return copyWritable(getBytes(src), tgt);
+  }
+
+  /**
+   * Copy one Writable to another.  Copies bytes using data streams.
+   * @param bytes Source Writable
+   * @param tgt Target Writable
+   * @return The target Writable.
+   * @throws IOException e
+   */
+  public static Writable copyWritable(final byte [] bytes, final Writable tgt)
+  throws IOException {
+    DataInputStream dis = new DataInputStream(new ByteArrayInputStream(bytes));
+    try {
+      tgt.readFields(dis);
+    } finally {
+      dis.close();
+    }
+    return tgt;
+  }
+}
\ No newline at end of file
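
A small round-trip sketch, reusing the HRegionInfo.FIRST_META_REGIONINFO constant referenced elsewhere in this patch (the calls below throw IOException):

  HRegionInfo info = HRegionInfo.FIRST_META_REGIONINFO;
  byte[] bytes = Writables.getBytes(info);
  HRegionInfo copy = Writables.getHRegionInfo(bytes);
  // copy now equals info; a null or empty array would instead make
  // Writables.getHRegionInfoOrNull(bytes) return null rather than throw.
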
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java
new file mode 100644
index 0000000..5974681
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java
@@ -0,0 +1,86 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Tracker for cluster settings kept up in zookeeper.
+ * This is not related to {@link ClusterStatus}.  That class is a data structure
+ * holding a snapshot of the current view of the cluster.  This class tracks
+ * cluster attributes stored up in zookeeper.
+ *
+ */
+public class ClusterStatusTracker extends ZooKeeperNodeTracker {
+  private static final Log LOG = LogFactory.getLog(ClusterStatusTracker.class);
+
+  /**
+   * Creates a cluster status tracker.
+   *
+   * <p>After construction, use {@link #start} to kick off tracking.
+   *
+   * @param watcher
+   * @param abortable
+   */
+  public ClusterStatusTracker(ZooKeeperWatcher watcher, Abortable abortable) {
+    super(watcher, watcher.clusterStateZNode, abortable);
+  }
+
+  /**
+   * Checks if the cluster is up.
+   * @return true if the cluster-state znode has data, false if not
+   */
+  public boolean isClusterUp() {
+    return super.getData() != null;
+  }
+
+  /**
+   * Sets the cluster as up.
+   * @throws KeeperException unexpected zk exception
+   */
+  public void setClusterUp()
+  throws KeeperException {
+    byte [] upData = Bytes.toBytes(new java.util.Date().toString());
+    try {
+      ZKUtil.createAndWatch(watcher, watcher.clusterStateZNode, upData);
+    } catch(KeeperException.NodeExistsException nee) {
+      ZKUtil.setData(watcher, watcher.clusterStateZNode, upData);
+    }
+  }
+
+  /**
+   * Sets the cluster as down by deleting the znode.
+   * @throws KeeperException unexpected zk exception
+   */
+  public void setClusterDown()
+  throws KeeperException {
+    try {
+      ZKUtil.deleteNode(watcher, watcher.clusterStateZNode);
+    } catch(KeeperException.NoNodeException nne) {
+      LOG.warn("Attempted to set cluster as down but already down, cluster " +
+          "state node (" + watcher.clusterStateZNode + ") not found");
+    }
+  }
+}
\ No newline at end of file
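
A minimal lifecycle sketch, assuming a ZooKeeperWatcher and an Abortable have already been constructed elsewhere (their setup is outside the scope of this class):

  ClusterStatusTracker tracker = new ClusterStatusTracker(watcher, abortable);
  tracker.start();            // kick off tracking, per the class comment above
  if (!tracker.isClusterUp()) {
    tracker.setClusterUp();   // creates or rewrites the cluster-state znode
  }
  // ... on a controlled shutdown:
  tracker.setClusterDown();   // deletes the znode
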
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
new file mode 100644
index 0000000..d551c6f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java
@@ -0,0 +1,150 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+import java.net.InetAddress;
+import java.net.NetworkInterface;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.Properties;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.net.DNS;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.zookeeper.server.ServerConfig;
+import org.apache.zookeeper.server.ZooKeeperServerMain;
+import org.apache.zookeeper.server.quorum.QuorumPeerConfig;
+import org.apache.zookeeper.server.quorum.QuorumPeerMain;
+
+/**
+ * HBase's version of ZooKeeper's QuorumPeer. When HBase is set to manage
+ * ZooKeeper, this class is used to start up QuorumPeer instances. By doing
+ * things in here rather than calling directly into ZooKeeper, we have more
+ * control over the process. This class uses {@link ZKConfig} to parse the
+ * zoo.cfg and inject variables from HBase's site.xml configuration.
+ */
+public class HQuorumPeer {
+  
+  /**
+   * Parse ZooKeeper configuration from HBase XML config and run a QuorumPeer.
+   * @param args String[] of command line arguments. Not used.
+   */
+  public static void main(String[] args) {
+    Configuration conf = HBaseConfiguration.create();
+    try {
+      Properties zkProperties = ZKConfig.makeZKProps(conf);
+      writeMyID(zkProperties);
+      QuorumPeerConfig zkConfig = new QuorumPeerConfig();
+      zkConfig.parseProperties(zkProperties);
+      runZKServer(zkConfig);
+    } catch (Exception e) {
+      e.printStackTrace();
+      System.exit(-1);
+    }
+  }
+
+  private static void runZKServer(QuorumPeerConfig zkConfig) throws UnknownHostException, IOException {
+    if (zkConfig.isDistributed()) {
+      QuorumPeerMain qp = new QuorumPeerMain();
+      qp.runFromConfig(zkConfig);
+    } else {
+      ZooKeeperServerMain zk = new ZooKeeperServerMain();
+      ServerConfig serverConfig = new ServerConfig();
+      serverConfig.readFrom(zkConfig);
+      zk.runFromConfig(serverConfig);
+    }
+  }
+
+  private static boolean addressIsLocalHost(String address) {
+    return address.equals("localhost") || address.equals("127.0.0.1");
+  }
+
+  static void writeMyID(Properties properties) throws IOException {
+    long myId = -1;
+
+    Configuration conf = HBaseConfiguration.create();
+    String myAddress = DNS.getDefaultHost(
+        conf.get("hbase.zookeeper.dns.interface","default"),
+        conf.get("hbase.zookeeper.dns.nameserver","default"));
+
+    List<String> ips = new ArrayList<String>();
+
+    // Add what could be the best (configured) match
+    ips.add(myAddress.contains(".") ?
+        myAddress :
+        StringUtils.simpleHostname(myAddress));
+
+    // For all nics get all hostnames and IPs
+    Enumeration<?> nics = NetworkInterface.getNetworkInterfaces();
+    while(nics.hasMoreElements()) {
+      Enumeration<?> rawAdrs =
+          ((NetworkInterface)nics.nextElement()).getInetAddresses();
+      while(rawAdrs.hasMoreElements()) {
+        InetAddress inet = (InetAddress) rawAdrs.nextElement();
+        ips.add(StringUtils.simpleHostname(inet.getHostName()));
+        ips.add(inet.getHostAddress());
+      }
+    }
+
+    for (Entry<Object, Object> entry : properties.entrySet()) {
+      String key = entry.getKey().toString().trim();
+      String value = entry.getValue().toString().trim();
+      if (key.startsWith("server.")) {
+        int dot = key.indexOf('.');
+        long id = Long.parseLong(key.substring(dot + 1));
+        String[] parts = value.split(":");
+        String address = parts[0];
+        if (addressIsLocalHost(address) || ips.contains(address)) {
+          myId = id;
+          break;
+        }
+      }
+    }
+
+    // Set the max session timeout from the provided client-side timeout
+    properties.setProperty("maxSessionTimeout",
+        conf.get("zookeeper.session.timeout", "180000"));
+
+    if (myId == -1) {
+      throw new IOException("Could not find my address: " + myAddress +
+                            " in list of ZooKeeper quorum servers");
+    }
+
+    String dataDirStr = properties.get("dataDir").toString().trim();
+    File dataDir = new File(dataDirStr);
+    if (!dataDir.isDirectory()) {
+      if (!dataDir.mkdirs()) {
+        throw new IOException("Unable to create data dir " + dataDir);
+      }
+    }
+
+    File myIdFile = new File(dataDir, "myid");
+    PrintWriter w = new PrintWriter(myIdFile);
+    w.println(myId);
+    w.close();
+  }
+}
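
The Properties handed to writeMyID() and QuorumPeerConfig above come from ZKConfig.makeZKProps(conf), which folds hbase.zookeeper.property.* keys from the HBase configuration into standard zoo.cfg settings. A hedged sketch of that flow (the dataDir path is illustrative):

  Configuration conf = HBaseConfiguration.create();
  conf.set("hbase.zookeeper.property.dataDir", "/var/zookeeper");
  conf.set("hbase.zookeeper.property.clientPort", "2181");
  Properties zkProps = ZKConfig.makeZKProps(conf);
  // zkProps now carries dataDir, clientPort, server.N entries and so on,
  // in the same shape main() above passes to QuorumPeerConfig.
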
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaNodeTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaNodeTracker.java
new file mode 100644
index 0000000..55257b3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaNodeTracker.java
@@ -0,0 +1,70 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+
+/**
+ * Tracks the unassigned zookeeper node used by the META table.
+ *
+ * A callback is made into the passed {@link CatalogTracker} when
+ * <code>.META.</code> completes a new assignment.
+ * <p>
+ * If META is already assigned when instantiating this class, you will not
+ * receive any notification for that assignment.  You will receive a
+ * notification after META has been successfully assigned to a new location.
+ */
+public class MetaNodeTracker extends ZooKeeperNodeTracker {
+  private static final Log LOG = LogFactory.getLog(MetaNodeTracker.class);
+
+  /** Catalog tracker to notify when META has a new assignment completed. */
+  private final CatalogTracker catalogTracker;
+
+  /**
+   * Creates a meta node tracker.
+   * @param watcher
+   * @param abortable
+   */
+  public MetaNodeTracker(final ZooKeeperWatcher watcher,
+      final CatalogTracker catalogTracker, final Abortable abortable) {
+    super(watcher, ZKUtil.joinZNode(watcher.assignmentZNode,
+        HRegionInfo.FIRST_META_REGIONINFO.getEncodedName()), abortable);
+    this.catalogTracker = catalogTracker;
+  }
+
+  @Override
+  public void nodeDeleted(String path) {
+    super.nodeDeleted(path);
+    if (!path.equals(node)) return;
+    LOG.info("Detected completed assignment of META, notifying catalog tracker");
+    try {
+      this.catalogTracker.waitForMetaServerConnectionDefault();
+    } catch (IOException e) {
+      LOG.warn("Tried to reset META server location after seeing the " +
+        "completion of a new META assignment but got an IOE", e);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
new file mode 100644
index 0000000..559a2f0
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java
@@ -0,0 +1,225 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.OutputStream;
+import java.io.Reader;
+import java.net.BindException;
+import java.net.InetSocketAddress;
+import java.net.Socket;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.zookeeper.server.NIOServerCnxn;
+import org.apache.zookeeper.server.ZooKeeperServer;
+import org.apache.zookeeper.server.persistence.FileTxnLog;
+
+/**
+ * TODO: Most of the code in this class is ripped from ZooKeeper tests. Instead
+ * of redoing it, we should contribute updates to their code which let us more
+ * easily access testing helper objects.
+ */
+public class MiniZooKeeperCluster {
+  private static final Log LOG = LogFactory.getLog(MiniZooKeeperCluster.class);
+
+  private static final int TICK_TIME = 2000;
+  private static final int CONNECTION_TIMEOUT = 30000;
+
+  private boolean started;
+  private int clientPort = 21818; // use non-standard port
+
+  private NIOServerCnxn.Factory standaloneServerFactory;
+  private int tickTime = 0;
+
+  /** Create mini ZooKeeper cluster. */
+  public MiniZooKeeperCluster() {
+    this.started = false;
+  }
+
+  public void setClientPort(int clientPort) {
+    this.clientPort = clientPort;
+  }
+
+  public int getClientPort() {
+    return clientPort;
+  }
+
+  public void setTickTime(int tickTime) {
+    this.tickTime = tickTime;
+  }
+
+  // XXX: From o.a.zk.t.ClientBase
+  private static void setupTestEnv() {
+    // During the tests we run with 100K prealloc in the logs.
+    // On Windows systems a prealloc of 64M was seen to take ~15 seconds,
+    // resulting in test failure (client timeout on first session).
+    // Set the system property and call the setter directly to handle
+    // static init/gc issues.
+    System.setProperty("zookeeper.preAllocSize", "100");
+    FileTxnLog.setPreallocSize(100);
+  }
+
+  /**
+   * @param baseDir
+   * @return The client port the server bound to.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  public int startup(File baseDir) throws IOException,
+      InterruptedException {
+
+    setupTestEnv();
+
+    shutdown();
+
+    File dir = new File(baseDir, "zookeeper").getAbsoluteFile();
+    recreateDir(dir);
+
+    int tickTimeToUse;
+    if (this.tickTime > 0) {
+      tickTimeToUse = this.tickTime;
+    } else {
+      tickTimeToUse = TICK_TIME;
+    }
+    ZooKeeperServer server = new ZooKeeperServer(dir, dir, tickTimeToUse);
+    while (true) {
+      try {
+        standaloneServerFactory =
+          new NIOServerCnxn.Factory(new InetSocketAddress(clientPort));
+      } catch (BindException e) {
+        LOG.info("Failed binding ZK Server to client port: " + clientPort);
+        //this port is already in use. try to use another
+        clientPort++;
+        continue;
+      }
+      break;
+    }
+    standaloneServerFactory.startup(server);
+
+    if (!waitForServerUp(clientPort, CONNECTION_TIMEOUT)) {
+      throw new IOException("Waiting for startup of standalone server");
+    }
+
+    started = true;
+    LOG.info("Started MiniZK Server on client port: " + clientPort);
+    return clientPort;
+  }
+
+  private void recreateDir(File dir) throws IOException {
+    if (dir.exists()) {
+      FileUtil.fullyDelete(dir);
+    }
+    try {
+      dir.mkdirs();
+    } catch (SecurityException e) {
+      throw new IOException("creating dir: " + dir, e);
+    }
+  }
+
+  /**
+   * @throws IOException
+   */
+  public void shutdown() throws IOException {
+    if (!started) {
+      return;
+    }
+
+    standaloneServerFactory.shutdown();
+    if (!waitForServerDown(clientPort, CONNECTION_TIMEOUT)) {
+      throw new IOException("Waiting for shutdown of standalone server");
+    }
+
+    started = false;
+  }
+
+  // XXX: From o.a.zk.t.ClientBase
+  private static boolean waitForServerDown(int port, long timeout) {
+    long start = System.currentTimeMillis();
+    while (true) {
+      try {
+        Socket sock = new Socket("localhost", port);
+        try {
+          OutputStream outstream = sock.getOutputStream();
+          outstream.write("stat".getBytes());
+          outstream.flush();
+        } finally {
+          sock.close();
+        }
+      } catch (IOException e) {
+        return true;
+      }
+
+      if (System.currentTimeMillis() > start + timeout) {
+        break;
+      }
+      try {
+        Thread.sleep(250);
+      } catch (InterruptedException e) {
+        // ignore
+      }
+    }
+    return false;
+  }
+
+  // XXX: From o.a.zk.t.ClientBase
+  private static boolean waitForServerUp(int port, long timeout) {
+    long start = System.currentTimeMillis();
+    while (true) {
+      try {
+        Socket sock = new Socket("localhost", port);
+        BufferedReader reader = null;
+        try {
+          OutputStream outstream = sock.getOutputStream();
+          outstream.write("stat".getBytes());
+          outstream.flush();
+
+          Reader isr = new InputStreamReader(sock.getInputStream());
+          reader = new BufferedReader(isr);
+          String line = reader.readLine();
+          if (line != null && line.startsWith("Zookeeper version:")) {
+            return true;
+          }
+        } finally {
+          sock.close();
+          if (reader != null) {
+            reader.close();
+          }
+        }
+      } catch (IOException e) {
+        // ignore as this is expected
+        LOG.info("server localhost:" + port + " not up " + e);
+      }
+
+      if (System.currentTimeMillis() > start + timeout) {
+        break;
+      }
+      try {
+        Thread.sleep(250);
+      } catch (InterruptedException e) {
+        // ignore
+      }
+    }
+    return false;
+  }
+}
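
A short sketch of the test-side lifecycle; the base directory below is just an illustrative temporary path:

  MiniZooKeeperCluster zkCluster = new MiniZooKeeperCluster();
  zkCluster.setTickTime(2000);    // optional; otherwise TICK_TIME is used
  int clientPort = zkCluster.startup(new File("/tmp/hbase-minizk-test"));
  // point client code at localhost:clientPort, run tests ...
  zkCluster.shutdown();
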
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/RegionServerTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/RegionServerTracker.java
new file mode 100644
index 0000000..0437484
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/RegionServerTracker.java
@@ -0,0 +1,101 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.master.ServerManager;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Tracks the online region servers via ZK.
+ *
+ * <p>Handling of new RSs checking in is done via RPC.  This class
+ * is only responsible for watching for expired nodes.  It handles
+ * listening for changes in the RS node list and watching each node.
+ *
+ * <p>If an RS node gets deleted, this automatically handles calling of
+ * {@link ServerManager#expireServer(org.apache.hadoop.hbase.HServerInfo)}.
+ */
+public class RegionServerTracker extends ZooKeeperListener {
+  private static final Log LOG = LogFactory.getLog(RegionServerTracker.class);
+
+  private ServerManager serverManager;
+  private Abortable abortable;
+
+  public RegionServerTracker(ZooKeeperWatcher watcher,
+      Abortable abortable, ServerManager serverManager) {
+    super(watcher);
+    this.abortable = abortable;
+    this.serverManager = serverManager;
+  }
+
+  /**
+   * Starts the tracking of online RegionServers.
+   *
+   * <p>All RSs will be tracked after this method is called.
+   *
+   * @throws KeeperException
+   */
+  public void start() throws KeeperException {
+    watcher.registerListener(this);
+    ZKUtil.watchAndGetNewChildren(watcher, watcher.rsZNode);
+  }
+
+  @Override
+  public void nodeDeleted(String path) {
+    if(path.startsWith(watcher.rsZNode)) {
+      String serverName = ZKUtil.getNodeName(path);
+      LOG.info("RegionServer ephemeral node deleted, processing expiration [" +
+          serverName + "]");
+      HServerInfo hsi = serverManager.getServerInfo(serverName);
+      if(hsi == null) {
+        LOG.info("No HServerInfo found for " + serverName);
+        return;
+      }
+      serverManager.expireServer(hsi);
+    }
+  }
+
+  @Override
+  public void nodeChildrenChanged(String path) {
+    if(path.equals(watcher.rsZNode)) {
+      try {
+        ZKUtil.watchAndGetNewChildren(watcher, watcher.rsZNode);
+      } catch (KeeperException e) {
+        abortable.abort("Unexpected zk exception getting RS nodes", e);
+      }
+    }
+  }
+
+  /**
+   * Gets the online servers.
+   * @return list of online servers from zk
+   * @throws KeeperException
+   */
+  public List<HServerAddress> getOnlineServers() throws KeeperException {
+    return ZKUtil.listChildrenAndGetAsAddresses(watcher, watcher.rsZNode);
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/RootRegionTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/RootRegionTracker.java
new file mode 100644
index 0000000..692b608
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/RootRegionTracker.java
@@ -0,0 +1,84 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.catalog.RootLocationEditor;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tracks the root region server location node in zookeeper.
+ * The root region location is set by {@link RootLocationEditor}, usually
+ * called from <code>RegionServerServices</code>.
+ * This class has a watcher on the root location and notices changes.
+ */
+public class RootRegionTracker extends ZooKeeperNodeTracker {
+  /**
+   * Creates a root region location tracker.
+   *
+   * <p>After construction, use {@link #start} to kick off tracking.
+   *
+   * @param watcher
+   * @param abortable
+   */
+  public RootRegionTracker(ZooKeeperWatcher watcher, Abortable abortable) {
+    super(watcher, watcher.rootServerZNode, abortable);
+  }
+
+  /**
+   * Checks if the root region location is available.
+   * @return true if root region location is available, false if not
+   */
+  public boolean isLocationAvailable() {
+    return super.getData() != null;
+  }
+
+  /**
+   * Gets the root region location, if available.  Null if not.  Does not block.
+   * @return server address for server hosting root region, null if none available
+   * @throws InterruptedException 
+   */
+  public HServerAddress getRootRegionLocation() throws InterruptedException {
+    return dataToHServerAddress(super.getData());
+  }
+
+  /**
+   * Gets the root region location, if available, and waits for up to the
+   * specified timeout if not immediately available.
+   * @param timeout maximum time to wait, in millis
+   * @return server address for server hosting root region, null if timed out
+   * @throws InterruptedException if interrupted while waiting
+   */
+  public HServerAddress waitRootRegionLocation(long timeout)
+  throws InterruptedException {
+    return dataToHServerAddress(super.blockUntilAvailable(timeout));
+  }
+
+  /*
+   * @param data
+   * @return Returns null if <code>data</code> is null else converts passed data
+   * to an HServerAddress instance.
+   */
+  private static HServerAddress dataToHServerAddress(final byte [] data) {
+    return data == null ? null: new HServerAddress(Bytes.toString(data));
+  }
+}
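
A small sketch of waiting on the root region location, again assuming a ZooKeeperWatcher and Abortable already exist; the one-minute timeout is arbitrary:

  RootRegionTracker rootTracker = new RootRegionTracker(watcher, abortable);
  rootTracker.start();
  HServerAddress rootLocation = rootTracker.waitRootRegionLocation(60 * 1000);
  if (rootLocation == null) {
    // Timed out: -ROOT- has not been assigned yet.
  }
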
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
new file mode 100644
index 0000000..1ac083d1
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
@@ -0,0 +1,868 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.executor.RegionTransitionData;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.zookeeper.AsyncCallback;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.KeeperException.Code;
+import org.apache.zookeeper.KeeperException.NoNodeException;
+import org.apache.zookeeper.KeeperException.NodeExistsException;
+import org.apache.zookeeper.data.Stat;
+
+/**
+ * Utility class for doing region assignment in ZooKeeper.  This class extends
+ * stuff done in {@link ZKUtil} to cover specific assignment operations.
+ * <p>
+ * Contains only static methods and constants.
+ * <p>
+ * Used by both the Master and RegionServer.
+ * <p>
+ * All valid transitions outlined below:
+ * <p>
+ * <b>MASTER</b>
+ * <ol>
+ *   <li>
+ *     Master creates an unassigned node as OFFLINE.
+ *     - Cluster startup and table enabling.
+ *   </li>
+ *   <li>
+ *     Master forces an existing unassigned node to OFFLINE.
+ *     - RegionServer failure.
+ *     - Allows transitions from all states to OFFLINE.
+ *   </li>
+ *   <li>
+ *     Master deletes an unassigned node that was in an OPENED state.
+ *     - Normal region transitions.  Besides cluster startup, no other deletions
+ *     of unassigned nodes are allowed.
+ *   </li>
+ *   <li>
+ *     Master deletes all unassigned nodes regardless of state.
+ *     - Cluster startup before any assignment happens.
+ *   </li>
+ * </ol>
+ * <p>
+ * <b>REGIONSERVER</b>
+ * <ol>
+ *   <li>
+ *     RegionServer creates an unassigned node as CLOSING.
+ *     - All region closes will do this in response to a CLOSE RPC from Master.
+ *     - A node can never be transitioned to CLOSING, only created.
+ *   </li>
+ *   <li>
+ *     RegionServer transitions an unassigned node from CLOSING to CLOSED.
+ *     - Normal region closes.  CAS operation.
+ *   </li>
+ *   <li>
+ *     RegionServer transitions an unassigned node from OFFLINE to OPENING.
+ *     - All region opens will do this in response to an OPEN RPC from the Master.
+ *     - Normal region opens.  CAS operation.
+ *   </li>
+ *   <li>
+ *     RegionServer transitions an unassigned node from OPENING to OPENED.
+ *     - Normal region opens.  CAS operation.
+ *   </li>
+ * </ol>
+ */
+public class ZKAssign {
+  private static final Log LOG = LogFactory.getLog(ZKAssign.class);
+
+  /**
+   * Gets the full path node name for the unassigned node for the specified
+   * region.
+   * @param zkw zk reference
+   * @param regionName region name
+   * @return full path node name
+   */
+  public static String getNodeName(ZooKeeperWatcher zkw, String regionName) {
+    return ZKUtil.joinZNode(zkw.assignmentZNode, regionName);
+  }
+
+  /**
+   * Gets the region name from the full path node name of an unassigned node.
+   * @param path full zk path
+   * @return region name
+   */
+  public static String getRegionName(ZooKeeperWatcher zkw, String path) {
+    return path.substring(zkw.assignmentZNode.length()+1);
+  }
+
+  // Master methods
+
+  /**
+   * Creates a new unassigned node in the OFFLINE state for the specified region.
+   *
+   * <p>Does not transition nodes from other states.  If a node already exists
+   * for this region, a {@link NodeExistsException} will be thrown.
+   *
+   * <p>Sets a watcher on the unassigned region node if the method is successful.
+   *
+   * <p>This method should only be used during cluster startup and the enabling
+   * of a table.
+   *
+   * @param zkw zk reference
+   * @param region region to be created as offline
+   * @param serverName server event originates from
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NodeExistsException if node already exists
+   */
+  public static void createNodeOffline(ZooKeeperWatcher zkw, HRegionInfo region,
+      String serverName)
+  throws KeeperException, KeeperException.NodeExistsException {
+    createNodeOffline(zkw, region, serverName, EventType.M_ZK_REGION_OFFLINE);
+  }
+
+  public static void createNodeOffline(ZooKeeperWatcher zkw, HRegionInfo region,
+      String serverName, final EventType event)
+  throws KeeperException, KeeperException.NodeExistsException {
+    LOG.debug(zkw.prefix("Creating unassigned node for " +
+      region.getEncodedName() + " in OFFLINE state"));
+    RegionTransitionData data = new RegionTransitionData(event,
+      region.getRegionName(), serverName);
+    synchronized(zkw.getNodes()) {
+      String node = getNodeName(zkw, region.getEncodedName());
+      zkw.getNodes().add(node);
+      ZKUtil.createAndWatch(zkw, node, data.getBytes());
+    }
+  }
+
+  /**
+   * Creates an unassigned node in the OFFLINE state for the specified region.
+   * <p>
+   * Runs asynchronously.  Assumes there is no pre-existing znode for this region.
+   *
+   * <p>Sets a watcher on the unassigned region node.
+   *
+   * @param zkw zk reference
+   * @param region region to be created as offline
+   * @param serverName server event originates from
+   * @param cb callback to invoke when the asynchronous create completes
+   * @param ctx context object passed through to the callback
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static void asyncCreateNodeOffline(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName,
+      final AsyncCallback.StringCallback cb, final Object ctx)
+  throws KeeperException {
+    LOG.debug(zkw.prefix("Async create of unassigned node for " +
+      region.getEncodedName() + " with OFFLINE state"));
+    RegionTransitionData data = new RegionTransitionData(
+        EventType.M_ZK_REGION_OFFLINE, region.getRegionName(), serverName);
+    synchronized(zkw.getNodes()) {
+      String node = getNodeName(zkw, region.getEncodedName());
+      zkw.getNodes().add(node);
+      ZKUtil.asyncCreate(zkw, node, data.getBytes(), cb, ctx);
+    }
+  }
+
+  /**
+   * Forces an existing unassigned node to the OFFLINE state for the specified
+   * region.
+   *
+   * <p>Does not create a new node.  If a node does not already exist for this
+   * region, a {@link NoNodeException} will be thrown.
+   *
+   * <p>Sets a watcher on the unassigned region node if the method is
+   * successful.
+   *
+   * <p>This method should only be used during recovery of regionserver failure.
+   *
+   * @param zkw zk reference
+   * @param region region to be forced as offline
+   * @param serverName server event originates from
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NoNodeException if node does not exist
+   */
+  public static void forceNodeOffline(ZooKeeperWatcher zkw, HRegionInfo region,
+      String serverName)
+  throws KeeperException, KeeperException.NoNodeException {
+    LOG.debug(zkw.prefix("Forcing existing unassigned node for " +
+      region.getEncodedName() + " to OFFLINE state"));
+    RegionTransitionData data = new RegionTransitionData(
+        EventType.M_ZK_REGION_OFFLINE, region.getRegionName(), serverName);
+    synchronized(zkw.getNodes()) {
+      String node = getNodeName(zkw, region.getEncodedName());
+      zkw.getNodes().add(node);
+      ZKUtil.setData(zkw, node, data.getBytes());
+    }
+  }
+
+
+  /**
+   * Creates or force updates an unassigned node to the OFFLINE state for the
+   * specified region.
+   * <p>
+   * Attempts to create the node but if it exists will force it to transition to
+   * an OFFLINE state.
+   *
+   * <p>Sets a watcher on the unassigned region node if the method is
+   * successful.
+   *
+   * <p>This method should be used when assigning a region.
+   *
+   * @param zkw zk reference
+   * @param region region to be created as offline
+   * @param serverName server event originates from
+   * @return true if the node was created or successfully forced to OFFLINE and
+   *         is still in that state; false if the update failed or the state
+   *         changed underneath us
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean createOrForceNodeOffline(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName)
+  throws KeeperException {
+    LOG.debug(zkw.prefix("Creating (or updating) unassigned node for " +
+      region.getEncodedName() + " with OFFLINE state"));
+    RegionTransitionData data = new RegionTransitionData(
+        EventType.M_ZK_REGION_OFFLINE, region.getRegionName(), serverName);
+    synchronized(zkw.getNodes()) {
+      String node = getNodeName(zkw, region.getEncodedName());
+      zkw.sync(node);
+      zkw.getNodes().add(node);
+      int version = ZKUtil.checkExists(zkw, node);
+      if(version == -1) {
+        ZKUtil.createAndWatch(zkw, node, data.getBytes());
+      } else {
+        if (!ZKUtil.setData(zkw, node, data.getBytes(), version)) {
+          return false;
+        } else {
+          // We successfully forced to OFFLINE, reset watch and handle if
+          // the state changed in between our set and the watch
+          RegionTransitionData curData =
+            ZKAssign.getData(zkw, region.getEncodedName());
+          if (curData.getEventType() != data.getEventType()) {
+            // state changed, need to process
+            return false;
+          }
+        }
+      }
+    }
+    return true;
+  }
+
+  /**
+   * Deletes an existing unassigned node that is in the OPENED state for the
+   * specified region.
+   *
+   * <p>If a node does not already exist for this region, a
+   * {@link NoNodeException} will be thrown.
+   *
+   * <p>No watcher is set whether this succeeds or not.
+   *
+   * <p>Returns false if the node was not in the proper state but did exist.
+   *
+   * <p>This method is used during normal region transitions when a region
+   * finishes successfully opening.  This is the Master acknowledging completion
+   * of the specified region's transition.
+   *
+   * @param zkw zk reference
+   * @param regionName opened region to be deleted from zk
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NoNodeException if node does not exist
+   */
+  public static boolean deleteOpenedNode(ZooKeeperWatcher zkw,
+      String regionName)
+  throws KeeperException, KeeperException.NoNodeException {
+    return deleteNode(zkw, regionName, EventType.RS_ZK_REGION_OPENED);
+  }
+
+  /**
+   * Deletes an existing unassigned node that is in the OFFLINE state for the
+   * specified region.
+   *
+   * <p>If a node does not already exist for this region, a
+   * {@link NoNodeException} will be thrown.
+   *
+   * <p>No watcher is set whether this succeeds or not.
+   *
+   * <p>Returns false if the node was not in the proper state but did exist.
+   *
+   * <p>This method is used during master failover when the regions on an RS
+   * that has died are all set to OFFLINE before being processed.
+   *
+   * @param zkw zk reference
+   * @param regionName closed region to be deleted from zk
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NoNodeException if node does not exist
+   */
+  public static boolean deleteOfflineNode(ZooKeeperWatcher zkw,
+      String regionName)
+  throws KeeperException, KeeperException.NoNodeException {
+    return deleteNode(zkw, regionName, EventType.M_ZK_REGION_OFFLINE);
+  }
+
+  /**
+   * Deletes an existing unassigned node that is in the CLOSED state for the
+   * specified region.
+   *
+   * <p>If a node does not already exist for this region, a
+   * {@link NoNodeException} will be thrown.
+   *
+   * <p>No watcher is set whether this succeeds or not.
+   *
+   * <p>Returns false if the node was not in the proper state but did exist.
+   *
+   * <p>This method is used during table disables when a region finishes
+   * successfully closing.  This is the Master acknowledging completion
+   * of the specified region's transition to being closed.
+   *
+   * @param zkw zk reference
+   * @param regionName closed region to be deleted from zk
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NoNodeException if node does not exist
+   */
+  public static boolean deleteClosedNode(ZooKeeperWatcher zkw,
+      String regionName)
+  throws KeeperException, KeeperException.NoNodeException {
+    return deleteNode(zkw, regionName, EventType.RS_ZK_REGION_CLOSED);
+  }
+
+  /**
+   * Deletes an existing unassigned node that is in the CLOSING state for the
+   * specified region.
+   *
+   * <p>If a node does not already exist for this region, a
+   * {@link NoNodeException} will be thrown.
+   *
+   * <p>No watcher is set whether this succeeds or not.
+   *
+   * <p>Returns false if the node was not in the proper state but did exist.
+   *
+   * <p>This method is used during table disables when a region finishes
+   * successfully closing.  This is the Master acknowledging completion
+   * of the specified region's transition to being closed.
+   *
+   * @param zkw zk reference
+   * @param region closing region to be deleted from zk
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NoNodeException if node does not exist
+   */
+  public static boolean deleteClosingNode(ZooKeeperWatcher zkw,
+      HRegionInfo region)
+  throws KeeperException, KeeperException.NoNodeException {
+    String regionName = region.getEncodedName();
+    return deleteNode(zkw, regionName, EventType.RS_ZK_REGION_CLOSING);
+  }
+
+  /**
+   * Deletes an existing unassigned node that is in the specified state for the
+   * specified region.
+   *
+   * <p>If a node does not already exist for this region, a
+   * {@link NoNodeException} will be thrown.
+   *
+   * <p>No watcher is set whether this succeeds or not.
+   *
+   * <p>Returns false if the node was not in the proper state but did exist.
+   *
+   * <p>This method is used during table disables when a region finishes
+   * successfully closing.  This is the Master acknowledging completion
+   * of the specified region's transition to being closed.
+   *
+   * @param zkw zk reference
+   * @param regionName region to be deleted from zk
+   * @param expectedState state region must be in for delete to complete
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NoNodeException if node does not exist
+   */
+  private static boolean deleteNode(ZooKeeperWatcher zkw, String regionName,
+      EventType expectedState)
+  throws KeeperException, KeeperException.NoNodeException {
+    LOG.debug(zkw.prefix("Deleting existing unassigned " +
+      "node for " + regionName + " that is in expected state " + expectedState));
+    String node = getNodeName(zkw, regionName);
+    zkw.sync(node);
+    Stat stat = new Stat();
+    byte [] bytes = ZKUtil.getDataNoWatch(zkw, node, stat);
+    if(bytes == null) {
+      throw KeeperException.create(Code.NONODE);
+    }
+    RegionTransitionData data = RegionTransitionData.fromBytes(bytes);
+    if(!data.getEventType().equals(expectedState)) {
+      LOG.warn(zkw.prefix("Attempting to delete unassigned " +
+        "node in " + expectedState +
+        " state but node is in " + data.getEventType() + " state"));
+      return false;
+    }
+    synchronized(zkw.getNodes()) {
+      // TODO: Does this go here or only if we successfully delete node?
+      zkw.getNodes().remove(node);
+      if(!ZKUtil.deleteNode(zkw, node, stat.getVersion())) {
+        LOG.warn(zkw.prefix("Attempting to delete " +
+          "unassigned node in " + expectedState +
+            " state but " +
+            "after verifying it was in OPENED state, we got a version mismatch"));
+        return false;
+      }
+      LOG.debug(zkw.prefix("Successfully deleted unassigned node for region " +
+          regionName + " in expected state " + expectedState));
+      return true;
+    }
+  }
+
+  /**
+   * Deletes all unassigned nodes regardless of their state.
+   *
+   * <p>No watchers are set.
+   *
+   * <p>This method is used by the Master during cluster startup to clear out
+   * any existing state from other cluster runs.
+   *
+   * @param zkw zk reference
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static void deleteAllNodes(ZooKeeperWatcher zkw)
+  throws KeeperException {
+    LOG.debug(zkw.prefix("Deleting any existing unassigned nodes"));
+    ZKUtil.deleteChildrenRecursively(zkw, zkw.assignmentZNode);
+  }
+
+  // RegionServer methods
+
+  /**
+   * Creates a new unassigned node in the CLOSING state for the specified
+   * region.
+   *
+   * <p>Does not transition nodes from other states.  If a node already exists
+   * for this region, a {@link NodeExistsException} will be thrown.
+   *
+   * <p>If creation is successful, returns the version number of the CLOSING
+   * node created.
+   *
+   * <p>Sets a watch on the created node.
+   *
+   * <p>This method should only be used by a RegionServer when initiating a
+   * close of a region after receiving a CLOSE RPC from the Master.
+   *
+   * @param zkw zk reference
+   * @param region region to be created as closing
+   * @param serverName server event originates from
+   * @return version of the newly created CLOSING node
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NodeExistsException if node already exists
+   */
+  public static int createNodeClosing(ZooKeeperWatcher zkw, HRegionInfo region,
+      String serverName)
+  throws KeeperException, KeeperException.NodeExistsException {
+    LOG.debug(zkw.prefix("Creating unassigned node for " +
+      region.getEncodedName() + " in a CLOSING state"));
+
+    RegionTransitionData data = new RegionTransitionData(
+        EventType.RS_ZK_REGION_CLOSING, region.getRegionName(), serverName);
+
+    synchronized (zkw.getNodes()) {
+      String node = getNodeName(zkw, region.getEncodedName());
+      zkw.getNodes().add(node);
+      return ZKUtil.createAndWatch(zkw, node, data.getBytes());
+    }
+  }
+
+  /**
+   * Transitions an existing unassigned node for the specified region which is
+   * currently in the CLOSING state to be in the CLOSED state.
+   *
+   * <p>Does not transition nodes from other states.  If for some reason the
+   * node could not be transitioned, the method returns -1.  If the transition
+   * is successful, the version of the node after transition is returned.
+   *
+   * <p>This method can fail and return -1 for three different reasons:
+   * <ul><li>Unassigned node for this region does not exist</li>
+   * <li>Unassigned node for this region is not in CLOSING state</li>
+   * <li>After verifying CLOSING state, update fails because of wrong version
+   * (someone else already transitioned the node)</li>
+   * </ul>
+   *
+   * <p>Does not set any watches.
+   *
+   * <p>This method should only be used by a RegionServer when initiating a
+   * close of a region after receiving a CLOSE RPC from the Master.
+   *
+   * @param zkw zk reference
+   * @param region region to be transitioned to closed
+   * @param serverName server event originates from
+   * @return version of node after transition, -1 if unsuccessful transition
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int transitionNodeClosed(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName, int expectedVersion)
+  throws KeeperException {
+    return transitionNode(zkw, region, serverName,
+        EventType.RS_ZK_REGION_CLOSING,
+        EventType.RS_ZK_REGION_CLOSED, expectedVersion);
+  }
+
+  /**
+   * Transitions an existing unassigned node for the specified region which is
+   * currently in the OFFLINE state to be in the OPENING state.
+   *
+   * <p>Does not transition nodes from other states.  If for some reason the
+   * node could not be transitioned, the method returns -1.  If the transition
+   * is successful, the version of the node written as OPENING is returned.
+   *
+   * <p>This method can fail and return -1 for three different reasons:
+   * <ul><li>Unassigned node for this region does not exist</li>
+   * <li>Unassigned node for this region is not in OFFLINE state</li>
+   * <li>After verifying OFFLINE state, update fails because of wrong version
+   * (someone else already transitioned the node)</li>
+   * </ul>
+   *
+   * <p>Does not set any watches.
+   *
+   * <p>This method should only be used by a RegionServer when initiating an
+   * open of a region after receiving an OPEN RPC from the Master.
+   *
+   * @param zkw zk reference
+   * @param region region to be transitioned to opening
+   * @param serverName server event originates from
+   * @return version of node after transition, -1 if unsuccessful transition
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int transitionNodeOpening(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName)
+  throws KeeperException {
+    return transitionNodeOpening(zkw, region, serverName,
+      EventType.M_ZK_REGION_OFFLINE);
+  }
+
+  public static int transitionNodeOpening(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName, final EventType beginState)
+  throws KeeperException {
+    return transitionNode(zkw, region, serverName, beginState,
+      EventType.RS_ZK_REGION_OPENING, -1);
+  }
+
+  /**
+   * Retransitions an existing unassigned node for the specified region which is
+   * currently in the OPENING state back to the OPENING state, refreshing the
+   * node (and its version) while the open is still in progress.
+   *
+   * <p>Does not transition nodes from other states.  If for some reason the
+   * node could not be transitioned, the method returns -1.  If the transition
+   * is successful, the version of the node rewritten as OPENING is returned.
+   *
+   * <p>This method can fail and return -1 for three different reasons:
+   * <ul><li>Unassigned node for this region does not exist</li>
+   * <li>Unassigned node for this region is not in OPENING state</li>
+   * <li>After verifying OPENING state, update fails because of wrong version
+   * (someone else already transitioned the node)</li>
+   * </ul>
+   *
+   * <p>Does not set any watches.
+   *
+   * <p>This method should only be used by a RegionServer when initiating an
+   * open of a region after receiving an OPEN RPC from the Master.
+   *
+   * @param zkw zk reference
+   * @param region region to be transitioned to opening
+   * @param serverName server event originates from
+   * @return version of node after transition, -1 if unsuccessful transition
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int retransitionNodeOpening(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName, int expectedVersion)
+  throws KeeperException {
+    return transitionNode(zkw, region, serverName,
+        EventType.RS_ZK_REGION_OPENING,
+        EventType.RS_ZK_REGION_OPENING, expectedVersion);
+  }
+
+  /**
+   * Transitions an existing unassigned node for the specified region which is
+   * currently in the OPENING state to be in the OPENED state.
+   *
+   * <p>Does not transition nodes from other states.  If for some reason the
+   * node could not be transitioned, the method returns -1.  If the transition
+   * is successful, the version of the node after transition is returned.
+   *
+   * <p>This method can fail and return -1 for three different reasons:
+   * <ul><li>Unassigned node for this region does not exist</li>
+   * <li>Unassigned node for this region is not in OPENING state</li>
+   * <li>After verifying OPENING state, update fails because of wrong version
+   * (this should never actually happen since an RS only does this transition
+   * following a transition to OPENING.  If two RS are conflicting, one would
+   * fail the original transition to OPENING and not this transition)</li>
+   * </ul>
+   *
+   * <p>Does not set any watches.
+   *
+   * <p>This method should only be used by a RegionServer when completing the
+   * open of a region.
+   *
+   * @param zkw zk reference
+   * @param region region to be transitioned to opened
+   * @param serverName server event originates from
+   * @return version of node after transition, -1 if unsuccessful transition
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int transitionNodeOpened(ZooKeeperWatcher zkw,
+      HRegionInfo region, String serverName, int expectedVersion)
+  throws KeeperException {
+    return transitionNode(zkw, region, serverName,
+        EventType.RS_ZK_REGION_OPENING,
+        EventType.RS_ZK_REGION_OPENED, expectedVersion);
+  }
+
+  /**
+   * Private method that actually performs unassigned node transitions.
+   *
+   * <p>Attempts to transition the unassigned node for the specified region
+   * from the expected state to the state in the specified transition data.
+   *
+   * <p>Method first reads existing data and verifies it is in the expected
+   * state.  If the node does not exist or the node is not in the expected
+   * state, the method returns -1.  If the transition is successful, the
+   * version number of the node following the transition is returned.
+   *
+   * <p>If the read state is what is expected, it attempts to write the new
+   * state and data into the node.  When doing this, it includes the expected
+   * version (determined when the existing state was verified) to ensure that
+   * only one transition is successful.  If there is a version mismatch, the
+   * method returns -1.
+   *
+   * <p>If the write is successful, no watch is set and the method returns the
+   * new version of the node.
+   *
+   * @param zkw zk reference
+   * @param region region whose unassigned node is to be transitioned
+   * @param serverName server event originates from
+   * @param beginState state the node must currently be in to do transition
+   * @param endState state to transition node to if all checks pass
+   * @param expectedVersion expected version of data before modification, or -1
+   * @return version of node after transition, -1 if unsuccessful transition
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int transitionNode(ZooKeeperWatcher zkw, HRegionInfo region,
+      String serverName, EventType beginState, EventType endState,
+      int expectedVersion)
+  throws KeeperException {
+    String encoded = region.getEncodedName();
+    if(LOG.isDebugEnabled()) {
+      LOG.debug(zkw.prefix("Attempting to transition node " +
+        HRegionInfo.prettyPrint(encoded) +
+        " from " + beginState.toString() + " to " + endState.toString()));
+    }
+
+    String node = getNodeName(zkw, encoded);
+    zkw.sync(node);
+
+    // Read existing data of the node
+    Stat stat = new Stat();
+    byte [] existingBytes =
+      ZKUtil.getDataNoWatch(zkw, node, stat);
+    RegionTransitionData existingData =
+      RegionTransitionData.fromBytes(existingBytes);
+
+    // Verify it is the expected version
+    if(expectedVersion != -1 && stat.getVersion() != expectedVersion) {
+      LOG.warn(zkw.prefix("Attempt to transition the " +
+        "unassigned node for " + encoded +
+        " from " + beginState + " to " + endState + " failed, " +
+        "the node existed but was version " + stat.getVersion() +
+        " not the expected version " + expectedVersion));
+        return -1;
+    }
+
+    // Verify it is in expected state
+    if(!existingData.getEventType().equals(beginState)) {
+      LOG.warn(zkw.prefix("Attempt to transition the " +
+        "unassigned node for " + encoded +
+        " from " + beginState + " to " + endState + " failed, " +
+        "the node existed but was in the state " + existingData.getEventType()));
+      return -1;
+    }
+
+    // Write new data, ensuring data has not changed since we last read it
+    try {
+      RegionTransitionData data = new RegionTransitionData(endState,
+          region.getRegionName(), serverName);
+      if(!ZKUtil.setData(zkw, node, data.getBytes(), stat.getVersion())) {
+        LOG.warn(zkw.prefix("Attempt to transition the " +
+        "unassigned node for " + encoded +
+        " from " + beginState + " to " + endState + " failed, " +
+        "the node existed and was in the expected state but then when " +
+        "setting data we got a version mismatch"));
+        return -1;
+      }
+      if(LOG.isDebugEnabled()) {
+        LOG.debug(zkw.prefix("Successfully transitioned node " + encoded +
+          " from " + beginState + " to " + endState));
+      }
+      return stat.getVersion() + 1;
+    } catch (KeeperException.NoNodeException nne) {
+      LOG.warn(zkw.prefix("Attempt to transition the " +
+        "unassigned node for " + encoded +
+        " from " + beginState + " to " + endState + " failed, " +
+        "the node existed and was in the expected state but then when " +
+        "setting data it no longer existed"));
+      return -1;
+    }
+  }
+
+  /**
+   * Gets the current data in the unassigned node for the specified region name
+   * or fully-qualified path.
+   *
+   * <p>Returns null if the region does not currently have a node.
+   *
+   * <p>Sets a watch on the node if the node exists.
+   *
+   * @param zkw zk reference
+   * @param pathOrRegionName fully-specified path or region name
+   * @return data for the unassigned node
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static RegionTransitionData getData(ZooKeeperWatcher zkw,
+      String pathOrRegionName)
+  throws KeeperException {
+    String node = pathOrRegionName.startsWith("/") ?
+        pathOrRegionName : getNodeName(zkw, pathOrRegionName);
+    byte [] data = ZKUtil.getDataAndWatch(zkw, node);
+    if(data == null) {
+      return null;
+    }
+    return RegionTransitionData.fromBytes(data);
+  }
+
+  /**
+   * Gets the current data in the unassigned node for the specified region name
+   * or fully-qualified path.
+   *
+   * <p>Returns null if the region does not currently have a node.
+   *
+   * <p>Does not set a watch.
+   *
+   * @param zkw zk reference
+   * @param pathOrRegionName fully-specified path or region name
+   * @param stat object to store node info into on getData call
+   * @return data for the unassigned node
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static RegionTransitionData getDataNoWatch(ZooKeeperWatcher zkw,
+      String pathOrRegionName, Stat stat)
+  throws KeeperException {
+    String node = pathOrRegionName.startsWith("/") ?
+        pathOrRegionName : getNodeName(zkw, pathOrRegionName);
+    byte [] data = ZKUtil.getDataNoWatch(zkw, node, stat);
+    if(data == null) {
+      return null;
+    }
+    return RegionTransitionData.fromBytes(data);
+  }
+
+  /**
+   * Delete the assignment node regardless of its current state.
+   * <p>
+   * Fails silently even if the node does not exist at all.
+   * @param watcher zk reference
+   * @param regionInfo region whose assignment node should be deleted
+   * @throws KeeperException
+   */
+  public static void deleteNodeFailSilent(ZooKeeperWatcher watcher,
+      HRegionInfo regionInfo)
+  throws KeeperException {
+    String node = getNodeName(watcher, regionInfo.getEncodedName());
+    ZKUtil.deleteNodeFailSilent(watcher, node);
+  }
+
+  /**
+   * Blocks until there are no nodes in regions in transition.
+   * <p>
+   * Used in testing only.
+   * @param zkw zk reference
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  public static void blockUntilNoRIT(ZooKeeperWatcher zkw)
+  throws KeeperException, InterruptedException {
+    while (ZKUtil.nodeHasChildren(zkw, zkw.assignmentZNode)) {
+      List<String> znodes =
+        ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.assignmentZNode);
+      if (znodes != null && !znodes.isEmpty()) {
+        for (String znode : znodes) {
+          LOG.debug("ZK RIT -> " + znode);
+        }
+      }
+      Thread.sleep(100);
+    }
+  }
+
+  /**
+   * Blocks until there is at least one node in regions in transition.
+   * <p>
+   * Used in testing only.
+   * @param zkw zk reference
+   * @throws KeeperException
+   * @throws InterruptedException
+   */
+  public static void blockUntilRIT(ZooKeeperWatcher zkw)
+  throws KeeperException, InterruptedException {
+    while (!ZKUtil.nodeHasChildren(zkw, zkw.assignmentZNode)) {
+      List<String> znodes =
+        ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.assignmentZNode);
+      if (znodes == null || znodes.isEmpty()) {
+        LOG.debug("No RIT in ZK");
+      }
+      Thread.sleep(100);
+    }
+  }
+
+  /**
+   * Verifies that the specified region is in the specified state in ZooKeeper.
+   * <p>
+   * Returns true if region is in transition and in the specified state in
+   * ZooKeeper.  Returns false if the region does not exist in ZK or is in
+   * a different state.
+   * <p>
+   * Method sync()s with ZK, so it will yield an up-to-date result, but it is
+   * a slow read.
+   * @param zkw
+   * @param region
+   * @param expectedState
+   * @return true if region exists and is in expected state
+   */
+  public static boolean verifyRegionState(ZooKeeperWatcher zkw,
+      HRegionInfo region, EventType expectedState)
+  throws KeeperException {
+    String encoded = region.getEncodedName();
+
+    String node = getNodeName(zkw, encoded);
+    zkw.sync(node);
+
+    // Read existing data of the node
+    byte [] existingBytes = null;
+    try {
+      existingBytes = ZKUtil.getDataAndWatch(zkw, node);
+    } catch (KeeperException.NoNodeException nne) {
+      return false;
+    } catch (KeeperException e) {
+      throw e;
+    }
+    if (existingBytes == null) return false;
+    RegionTransitionData existingData =
+      RegionTransitionData.fromBytes(existingBytes);
+    if (existingData.getEventType() == expectedState){
+      return true;
+    }
+    return false;
+  }
+}
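A minimal usage sketch (not part of this patch) of the RegionServer side of the handshake described in the ZKAssign class javadoc above; the helper name and the zkw/region/serverName values are placeholders, and error handling is elided.

static boolean sketchOpenRegion(ZooKeeperWatcher zkw, HRegionInfo region,
    String serverName) throws KeeperException {
  // CAS the unassigned node from OFFLINE to OPENING; -1 means the node was
  // missing, in the wrong state, or another writer won the race.
  int version = ZKAssign.transitionNodeOpening(zkw, region, serverName);
  if (version == -1) return false;
  // ... open the region's stores here ...
  // Report completion; passing the expected version guards against concurrent writers.
  version = ZKAssign.transitionNodeOpened(zkw, region, serverName, version);
  if (version == -1) return false;
  // The Master, watching the node, acknowledges OPENED by deleting it, e.g.
  //   ZKAssign.deleteOpenedNode(zkw, region.getEncodedName());
  return true;
}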
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java
new file mode 100644
index 0000000..24bcafe
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java
@@ -0,0 +1,252 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.Map.Entry;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.util.StringUtils;
+
+/**
+ * Utility methods for reading, parsing, and building zookeeper configuration.
+ */
+public class ZKConfig {
+  private static final Log LOG = LogFactory.getLog(ZKConfig.class);
+
+  private static final String VARIABLE_START = "${";
+  private static final int VARIABLE_START_LENGTH = VARIABLE_START.length();
+  private static final String VARIABLE_END = "}";
+  private static final int VARIABLE_END_LENGTH = VARIABLE_END.length();
+
+  private static final String ZK_CFG_PROPERTY = "hbase.zookeeper.property.";
+  private static final int ZK_CFG_PROPERTY_SIZE = ZK_CFG_PROPERTY.length();
+  private static final String ZK_CLIENT_PORT_KEY = ZK_CFG_PROPERTY
+      + "clientPort";
+
+  /**
+   * Make a Properties object holding ZooKeeper config equivalent to zoo.cfg.
+   * If there is a zoo.cfg in the classpath, simply read it in. Otherwise parse
+   * the corresponding config options from the HBase XML configs and generate
+   * the appropriate ZooKeeper properties.
+   * @param conf Configuration to read from.
+   * @return Properties holding mappings representing ZooKeeper zoo.cfg file.
+   */
+  public static Properties makeZKProps(Configuration conf) {
+    // First check if there is a zoo.cfg in the CLASSPATH. If so, simply read
+    // it and grab its configuration properties.
+    ClassLoader cl = HQuorumPeer.class.getClassLoader();
+    final InputStream inputStream =
+      cl.getResourceAsStream(HConstants.ZOOKEEPER_CONFIG_NAME);
+    if (inputStream != null) {
+      try {
+        return parseZooCfg(conf, inputStream);
+      } catch (IOException e) {
+        LOG.warn("Cannot read " + HConstants.ZOOKEEPER_CONFIG_NAME +
+                 ", loading from XML files", e);
+      }
+    }
+
+    // Otherwise, use the configuration options from HBase's XML files.
+    Properties zkProperties = new Properties();
+
+    // Directly map all of the hbase.zookeeper.property.KEY properties.
+    for (Entry<String, String> entry : conf) {
+      String key = entry.getKey();
+      if (key.startsWith(ZK_CFG_PROPERTY)) {
+        String zkKey = key.substring(ZK_CFG_PROPERTY_SIZE);
+        String value = entry.getValue();
+        // If the value has variable substitutions, we need to do a get.
+        if (value.contains(VARIABLE_START)) {
+          value = conf.get(key);
+        }
+        zkProperties.put(zkKey, value);
+      }
+    }
+
+    // If clientPort is not set, assign the default
+    if (zkProperties.getProperty(ZK_CLIENT_PORT_KEY) == null) {
+      zkProperties.put(ZK_CLIENT_PORT_KEY,
+                       HConstants.DEFAULT_ZOOKEPER_CLIENT_PORT);
+    }
+
+    // Create the server.X properties.
+    int peerPort = conf.getInt("hbase.zookeeper.peerport", 2888);
+    int leaderPort = conf.getInt("hbase.zookeeper.leaderport", 3888);
+
+    final String[] serverHosts = conf.getStrings(HConstants.ZOOKEEPER_QUORUM,
+                                                 "localhost");
+    for (int i = 0; i < serverHosts.length; ++i) {
+      String serverHost = serverHosts[i];
+      String address = serverHost + ":" + peerPort + ":" + leaderPort;
+      String key = "server." + i;
+      zkProperties.put(key, address);
+    }
+
+    return zkProperties;
+  }
+
+  /**
+   * Parse ZooKeeper's zoo.cfg, injecting HBase Configuration variables in.
+   * This method is used for testing so we can pass our own InputStream.
+   * @param conf HBaseConfiguration to use for injecting variables.
+   * @param inputStream InputStream to read from.
+   * @return Properties parsed from config stream with variables substituted.
+   * @throws IOException if anything goes wrong parsing config
+   */
+  public static Properties parseZooCfg(Configuration conf,
+      InputStream inputStream) throws IOException {
+    Properties properties = new Properties();
+    try {
+      properties.load(inputStream);
+    } catch (IOException e) {
+      final String msg = "fail to read properties from "
+        + HConstants.ZOOKEEPER_CONFIG_NAME;
+      LOG.fatal(msg);
+      throw new IOException(msg, e);
+    }
+    for (Entry<Object, Object> entry : properties.entrySet()) {
+      String value = entry.getValue().toString().trim();
+      String key = entry.getKey().toString().trim();
+      StringBuilder newValue = new StringBuilder();
+      int varStart = value.indexOf(VARIABLE_START);
+      int varEnd = 0;
+      while (varStart != -1) {
+        varEnd = value.indexOf(VARIABLE_END, varStart);
+        if (varEnd == -1) {
+          String msg = "variable at " + varStart + " has no end marker";
+          LOG.fatal(msg);
+          throw new IOException(msg);
+        }
+        String variable = value.substring(varStart + VARIABLE_START_LENGTH, varEnd);
+
+        String substituteValue = System.getProperty(variable);
+        if (substituteValue == null) {
+          substituteValue = conf.get(variable);
+        }
+        if (substituteValue == null) {
+          String msg = "variable " + variable + " not set in system property "
+                     + "or hbase configs";
+          LOG.fatal(msg);
+          throw new IOException(msg);
+        }
+
+        newValue.append(substituteValue);
+
+        varEnd += VARIABLE_END_LENGTH;
+        varStart = value.indexOf(VARIABLE_START, varEnd);
+      }
+      // Special case for 'hbase.cluster.distributed' property being 'true'
+      if (key.startsWith("server.")) {
+        if (conf.get(HConstants.CLUSTER_DISTRIBUTED).equals(HConstants.CLUSTER_IS_DISTRIBUTED)
+            && value.startsWith("localhost")) {
+          String msg = "The server in zoo.cfg cannot be set to localhost " +
+              "in a fully-distributed setup because it won't be reachable. " +
+              "See \"Getting Started\" for more information.";
+          LOG.fatal(msg);
+          throw new IOException(msg);
+        }
+      }
+      newValue.append(value.substring(varEnd));
+      properties.setProperty(key, newValue.toString());
+    }
+    return properties;
+  }
+
+  /**
+   * Return the ZK Quorum servers string given zk properties returned by
+   * makeZKProps
+   * @param properties
+   * @return Quorum servers String
+   */
+  public static String getZKQuorumServersString(Properties properties) {
+    String clientPort = null;
+    List<String> servers = new ArrayList<String>();
+
+    // The clientPort option may come after the server.X hosts, so we need to
+    // grab everything and then create the final host:port comma separated list.
+    boolean anyValid = false;
+    for (Entry<Object,Object> property : properties.entrySet()) {
+      String key = property.getKey().toString().trim();
+      String value = property.getValue().toString().trim();
+      if (key.equals("clientPort")) {
+        clientPort = value;
+      }
+      else if (key.startsWith("server.")) {
+        String host = value.substring(0, value.indexOf(':'));
+        servers.add(host);
+        try {
+          //noinspection ResultOfMethodCallIgnored
+          InetAddress.getByName(host);
+          anyValid = true;
+        } catch (UnknownHostException e) {
+          LOG.warn(StringUtils.stringifyException(e));
+        }
+      }
+    }
+
+    if (!anyValid) {
+      LOG.error("no valid quorum servers found in " + HConstants.ZOOKEEPER_CONFIG_NAME);
+      return null;
+    }
+
+    if (clientPort == null) {
+      LOG.error("no clientPort found in " + HConstants.ZOOKEEPER_CONFIG_NAME);
+      return null;
+    }
+
+    if (servers.isEmpty()) {
+      LOG.fatal("No server.X lines found in conf/zoo.cfg. HBase must have a " +
+                "ZooKeeper cluster configured for its operation.");
+      return null;
+    }
+
+    StringBuilder hostPortBuilder = new StringBuilder();
+    for (int i = 0; i < servers.size(); ++i) {
+      String host = servers.get(i);
+      if (i > 0) {
+        hostPortBuilder.append(',');
+      }
+      hostPortBuilder.append(host);
+      hostPortBuilder.append(':');
+      hostPortBuilder.append(clientPort);
+    }
+
+    return hostPortBuilder.toString();
+  }
+
+  /**
+   * Return the ZK Quorum servers string given the specified configuration.
+   * @param conf
+   * @return Quorum servers
+   */
+  public static String getZKQuorumServersString(Configuration conf) {
+    return getZKQuorumServersString(makeZKProps(conf));
+  }
+}
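A minimal usage sketch (not part of this patch), assuming no zoo.cfg on the classpath: the quorum string is built from the HBase configuration via makeZKProps and getZKQuorumServersString, yielding something like "localhost:2181" with the defaults.

Configuration conf = HBaseConfiguration.create();
Properties zkProps = ZKConfig.makeZKProps(conf);
String quorum = ZKConfig.getZKQuorumServersString(zkProps);
// Or, equivalently, in one step:
// String quorum = ZKConfig.getZKQuorumServersString(conf);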
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKServerTool.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKServerTool.java
new file mode 100644
index 0000000..500bd3c
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKServerTool.java
@@ -0,0 +1,54 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.util.Properties;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+/**
+ * Tool for reading ZooKeeper servers from HBase XML configuration and producing
+ * a line-by-line list for use by bash scripts.
+ */
+public class ZKServerTool {
+  /**
+   * Run the tool.
+   * @param args Command line arguments.
+   */
+  public static void main(String args[]) {
+    Configuration conf = HBaseConfiguration.create();
+    // Note that we do not simply grab the property
+    // HConstants.ZOOKEEPER_QUORUM from the HBaseConfiguration because the
+    // user may be using a zoo.cfg file.
+    Properties zkProps = ZKConfig.makeZKProps(conf);
+    for (Entry<Object, Object> entry : zkProps.entrySet()) {
+      String key = entry.getKey().toString().trim();
+      String value = entry.getValue().toString().trim();
+      if (key.startsWith("server.")) {
+        String[] parts = value.split(":");
+        String host = parts[0];
+        System.out.println("ZK host:" + host);
+      }
+    }
+  }
+}
\ No newline at end of file
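For illustration only (hypothetical host names), a three-server quorum makes the tool print one line per server, which a wrapper script can split on the "ZK host:" prefix:

ZK host:zk1.example.com
ZK host:zk2.example.com
ZK host:zk3.example.com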
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTable.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTable.java
new file mode 100644
index 0000000..f079d02
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTable.java
@@ -0,0 +1,351 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.master.AssignmentManager;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Helper class for table state tracking for use by {@link AssignmentManager}.
+ * Reads, caches and sets state up in zookeeper.  If there are multiple
+ * read/write clients, confusion will result.  Read-only clients other than
+ * AssignmentManager interested in learning table state can use the
+ * read-only utility methods {@link #isEnabledTable(ZooKeeperWatcher, String)}
+ * and {@link #isDisabledTable(ZooKeeperWatcher, String)}.
+ * 
+ * <p>To save on trips to the zookeeper ensemble, internally we cache table
+ * state.
+ */
+public class ZKTable {
+  // A znode will exist under the table directory if it is in any of the
+  // following states: {@link TableState#ENABLING} , {@link TableState#DISABLING},
+  // or {@link TableState#DISABLED}.  If {@link TableState#ENABLED}, there will
+  // be no entry for a table in zk.  That's how it currently works.
+
+  private static final Log LOG = LogFactory.getLog(ZKTable.class);
+  private final ZooKeeperWatcher watcher;
+
+  /**
+   * Cache of what we found in zookeeper so we don't have to go to zk ensemble
+   * for every query.  Synchronize access rather than use concurrent Map because
+   * synchronization needs to span query of zk.
+   */
+  private final Map<String, TableState> cache =
+    new HashMap<String, TableState>();
+
+  // TODO: Make it so always a table znode. Put table schema here as well as table state.
+  // Have watcher on table znode so all are notified of state or schema change.
+  /**
+   * States a Table can be in.
+   * {@link TableState#ENABLED} is not used currently; it is the absence of
+   * state in zookeeper that currently indicates an enabled table.
+   */
+  public static enum TableState {
+    ENABLED,
+    DISABLED,
+    DISABLING,
+    ENABLING
+  };
+
+  public ZKTable(final ZooKeeperWatcher zkw) throws KeeperException {
+    super();
+    this.watcher = zkw;
+    populateTableStates();
+  }
+
+  /**
+   * Populates the internal cache with the table states currently recorded in
+   * zookeeper.
+   * @throws KeeperException
+   */
+  private void populateTableStates()
+  throws KeeperException {
+    synchronized (this.cache) {
+      List<String> children =
+        ZKUtil.listChildrenNoWatch(this.watcher, this.watcher.tableZNode);
+      for (String child: children) {
+        TableState state = getTableState(this.watcher, child);
+        if (state != null) this.cache.put(child, state);
+      }
+    }
+  }
+
+  /**
+   * @param zkw
+   * @param child
+   * @return Null or {@link TableState} found in znode.
+   * @throws KeeperException
+   */
+  private static TableState getTableState(final ZooKeeperWatcher zkw,
+      final String child)
+  throws KeeperException {
+    String znode = ZKUtil.joinZNode(zkw.tableZNode, child);
+    byte [] data = ZKUtil.getData(zkw, znode);
+    if (data == null || data.length <= 0) {
+      // Null if table is enabled.
+      return null;
+    }
+    String str = Bytes.toString(data);
+    try {
+      return TableState.valueOf(str);
+    } catch (IllegalArgumentException e) {
+      throw new IllegalArgumentException(str);
+    }
+  }
+
+  /**
+   * Sets the specified table as DISABLED in zookeeper.  Fails silently if the
+   * table is already disabled in zookeeper.  Sets no watches.
+   * @param tableName
+   * @throws KeeperException unexpected zookeeper exception
+   */
+  public void setDisabledTable(String tableName)
+  throws KeeperException {
+    synchronized (this.cache) {
+      if (!isDisablingOrDisabledTable(tableName)) {
+        LOG.warn("Moving table " + tableName + " state to disabled but was " +
+          "not first in disabling state: " + this.cache.get(tableName));
+      }
+      setTableState(tableName, TableState.DISABLED);
+    }
+  }
+
+  /**
+   * Sets the specified table as DISABLING in zookeeper.  Fails silently if the
+   * table is already in the DISABLING state in zookeeper.  Sets no watches.
+   * @param tableName
+   * @throws KeeperException unexpected zookeeper exception
+   */
+  public void setDisablingTable(final String tableName)
+  throws KeeperException {
+    synchronized (this.cache) {
+      if (!isEnabledOrDisablingTable(tableName)) {
+        LOG.warn("Moving table " + tableName + " state to disabling but was " +
+          "not first in enabled state: " + this.cache.get(tableName));
+      }
+      setTableState(tableName, TableState.DISABLING);
+    }
+  }
+
+  /**
+   * Sets the specified table as ENABLING in zookeeper.  Fails silently if the
+   * table is already in the ENABLING state in zookeeper.  Sets no watches.
+   * @param tableName
+   * @throws KeeperException unexpected zookeeper exception
+   */
+  public void setEnablingTable(final String tableName)
+  throws KeeperException {
+    synchronized (this.cache) {
+      if (!isDisabledOrEnablingTable(tableName)) {
+        LOG.warn("Moving table " + tableName + " state to disabling but was " +
+          "not first in enabled state: " + this.cache.get(tableName));
+      }
+      setTableState(tableName, TableState.ENABLING);
+    }
+  }
+
+  private void setTableState(final String tableName, final TableState state)
+  throws KeeperException {
+    String znode = ZKUtil.joinZNode(this.watcher.tableZNode, tableName);
+    if (ZKUtil.checkExists(this.watcher, znode) == -1) {
+      ZKUtil.createAndFailSilent(this.watcher, znode);
+    }
+    synchronized (this.cache) {
+      ZKUtil.setData(this.watcher, znode, Bytes.toBytes(state.toString()));
+      this.cache.put(tableName, state);
+    }
+  }
+
+  public boolean isDisabledTable(final String tableName) {
+    return isTableState(tableName, TableState.DISABLED);
+  }
+
+  /**
+   * Go to zookeeper and see if state of table is {@link TableState#DISABLED}.
+   * This method does not use cache as {@link #isDisabledTable(String)} does.
+   * This method is for clients other than {@link AssignmentManager}
+   * @param zkw
+   * @param tableName
+   * @return True if table is disabled.
+   * @throws KeeperException
+   */
+  public static boolean isDisabledTable(final ZooKeeperWatcher zkw,
+      final String tableName)
+  throws KeeperException {
+    TableState state = getTableState(zkw, tableName);
+    return isTableState(TableState.DISABLED, state);
+  }
+
+  public boolean isDisablingTable(final String tableName) {
+    return isTableState(tableName, TableState.DISABLING);
+  }
+
+  public boolean isEnablingTable(final String tableName) {
+    return isTableState(tableName, TableState.ENABLING);
+  }
+
+  public boolean isEnabledTable(String tableName) {
+    synchronized (this.cache) {
+      // No entry in cache means enabled table.
+      return !this.cache.containsKey(tableName);
+    }
+  }
+
+  /**
+   * Go to zookeeper and see if state of table is {@link TableState#ENABLED}.
+   * This method does not use cache as {@link #isEnabledTable(String)} does.
+   * This method is for clients other than {@link AssignmentManager}
+   * @param zkw
+   * @param tableName
+   * @return True if table is enabled.
+   * @throws KeeperException
+   */
+  public static boolean isEnabledTable(final ZooKeeperWatcher zkw,
+      final String tableName)
+  throws KeeperException {
+    return getTableState(zkw, tableName) == null;
+  }
+
+  public boolean isDisablingOrDisabledTable(final String tableName) {
+    synchronized (this.cache) {
+      return isDisablingTable(tableName) || isDisabledTable(tableName);
+    }
+  }
+
+  /**
+   * Go to zookeeper and see if state of table is {@link TableState#DISABLING}
+   * or {@link TableState#DISABLED}.
+   * This method does not use cache as {@link #isEnabledTable(String)} does.
+   * This method is for clients other than {@link AssignmentManager}.
+   * @param zkw
+   * @param tableName
+   * @return True if table is in DISABLING or DISABLED state.
+   * @throws KeeperException
+   */
+  public static boolean isDisablingOrDisabledTable(final ZooKeeperWatcher zkw,
+      final String tableName)
+  throws KeeperException {
+    TableState state = getTableState(zkw, tableName);
+    return isTableState(TableState.DISABLING, state) ||
+      isTableState(TableState.DISABLED, state);
+  }
+
+  public boolean isEnabledOrDisablingTable(final String tableName) {
+    synchronized (this.cache) {
+      return isEnabledTable(tableName) || isDisablingTable(tableName);
+    }
+  }
+
+  public boolean isDisabledOrEnablingTable(final String tableName) {
+    synchronized (this.cache) {
+      return isDisabledTable(tableName) || isEnablingTable(tableName);
+    }
+  }
+
+  private boolean isTableState(final String tableName, final TableState state) {
+    synchronized (this.cache) {
+      TableState currentState = this.cache.get(tableName);
+      return isTableState(currentState, state);
+    }
+  }
+
+  private static boolean isTableState(final TableState expectedState,
+      final TableState currentState) {
+    return currentState != null && currentState.equals(expectedState);
+  }
+
+  /**
+   * Enables the table in zookeeper.  Fails silently if the
+   * table is not currently disabled in zookeeper.  Sets no watches.
+   * @param tableName
+   * @throws KeeperException unexpected zookeeper exception
+   */
+  public void setEnabledTable(final String tableName)
+  throws KeeperException {
+    synchronized (this.cache) {
+      if (this.cache.remove(tableName) == null) {
+        LOG.warn("Moving table " + tableName + " state to enabled but was " +
+          "already enabled");
+      }
+      ZKUtil.deleteNodeFailSilent(this.watcher,
+        ZKUtil.joinZNode(this.watcher.tableZNode, tableName));
+    }
+  }
+
+  /**
+   * Gets a list of all the tables set as disabled in zookeeper.
+   * @return Set of disabled tables, empty Set if none
+   */
+  public Set<String> getDisabledTables() {
+    Set<String> disabledTables = new HashSet<String>();
+    synchronized (this.cache) {
+      Set<String> tables = this.cache.keySet();
+      for (String table: tables) {
+        if (isDisabledTable(table)) disabledTables.add(table);
+      }
+    }
+    return disabledTables;
+  }
+
+  /**
+   * Gets a list of all the tables set as disabled in zookeeper.
+   * @return Set of disabled tables, empty Set if none
+   * @throws KeeperException 
+   */
+  public static Set<String> getDisabledTables(ZooKeeperWatcher zkw)
+  throws KeeperException {
+    Set<String> disabledTables = new HashSet<String>();
+    List<String> children =
+      ZKUtil.listChildrenNoWatch(zkw, zkw.tableZNode);
+    for (String child: children) {
+      TableState state = getTableState(zkw, child);
+      if (state == TableState.DISABLED) disabledTables.add(child);
+    }
+    return disabledTables;
+  }
+
+  /**
+   * Gets a list of all the tables set as disabled or disabling in zookeeper.
+   * @return Set of disabled or disabling tables, empty Set if none
+   * @throws KeeperException 
+   */
+  public static Set<String> getDisabledOrDisablingTables(ZooKeeperWatcher zkw)
+  throws KeeperException {
+    Set<String> disabledTables = new HashSet<String>();
+    List<String> children =
+      ZKUtil.listChildrenNoWatch(zkw, zkw.tableZNode);
+    for (String child: children) {
+      TableState state = getTableState(zkw, child);
+      if (state == TableState.DISABLED || state == TableState.DISABLING)
+        disabledTables.add(child);
+    }
+    return disabledTables;
+  }
+}
\ No newline at end of file
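A minimal usage sketch (not part of this patch) of the master-side disable flow and a read-only state check; zkw and the table name "mytable" are placeholders.

ZKTable zkTable = new ZKTable(zkw);
zkTable.setDisablingTable("mytable");   // record intent: DISABLING
// ... the master closes all of the table's regions ...
zkTable.setDisabledTable("mytable");    // all regions are closed: DISABLED
// A client other than the AssignmentManager reads state straight from zookeeper:
boolean disabled = ZKTable.isDisabledTable(zkw, "mytable");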
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableDisable.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableDisable.java
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableDisable.java
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
new file mode 100644
index 0000000..ead223f
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
@@ -0,0 +1,1115 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.PrintWriter;
+import java.net.Socket;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.executor.RegionTransitionData;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.zookeeper.AsyncCallback;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.KeeperException.NoNodeException;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.apache.zookeeper.data.Stat;
+
+/**
+ * Internal HBase utility class for ZooKeeper.
+ *
+ * <p>Contains only static methods and constants.
+ *
+ * <p>Methods all throw {@link KeeperException} if there is an unexpected
+ * zookeeper exception, so callers of these methods must handle appropriately.
+ * If ZK is required for the operation, the server will need to be aborted.
+ */
+public class ZKUtil {
+  private static final Log LOG = LogFactory.getLog(ZKUtil.class);
+
+  // TODO: Replace this with ZooKeeper constant when ZOOKEEPER-277 is resolved.
+  private static final char ZNODE_PATH_SEPARATOR = '/';
+
+  /**
+   * Creates a new connection to ZooKeeper, pulling settings and ensemble config
+   * from the specified configuration object using methods from {@link ZKConfig}.
+   *
+   * Sets the connection status monitoring watcher to the specified watcher.
+   *
+   * @param conf configuration to pull ensemble and other settings from
+   * @param watcher watcher to monitor connection changes
+   * @return connection to zookeeper
+   * @throws IOException if unable to connect to zk or config problem
+   */
+  public static ZooKeeper connect(Configuration conf, Watcher watcher)
+  throws IOException {
+    Properties properties = ZKConfig.makeZKProps(conf);
+    String ensemble = ZKConfig.getZKQuorumServersString(properties);
+    return connect(conf, ensemble, watcher);
+  }
+
+  public static ZooKeeper connect(Configuration conf, String ensemble,
+      Watcher watcher)
+  throws IOException {
+    return connect(conf, ensemble, watcher, "");
+  }
+
+  public static ZooKeeper connect(Configuration conf, String ensemble,
+      Watcher watcher, final String descriptor)
+  throws IOException {
+    if(ensemble == null) {
+      throw new IOException("Unable to determine ZooKeeper ensemble");
+    }
+    int timeout = conf.getInt("zookeeper.session.timeout", 180 * 1000);
+    LOG.debug(descriptor + " opening connection to ZooKeeper with ensemble (" +
+        ensemble + ")");
+    return new ZooKeeper(ensemble, timeout, watcher);
+  }
+
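+  // A minimal usage sketch, assuming the caller supplies a Configuration and a
+  // Watcher (typically a ZooKeeperWatcher) to receive connection events:
+  //
+  //   Configuration conf = HBaseConfiguration.create();
+  //   ZooKeeper zk = ZKUtil.connect(conf, watcher);
+  //   // ... use zk, then close it when done:
+  //   zk.close();
+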
+  //
+  // Helper methods
+  //
+
+  /**
+   * Join the prefix znode name with the suffix znode name to generate a proper
+   * full znode name.
+   *
+   * Assumes prefix does not end with slash and suffix does not begin with it.
+   *
+   * @param prefix beginning of znode name
+   * @param suffix ending of znode name
+   * @return result of properly joining prefix with suffix
+   */
+  public static String joinZNode(String prefix, String suffix) {
+    return prefix + ZNODE_PATH_SEPARATOR + suffix;
+  }
+
+  /**
+   * Returns the full path of the immediate parent of the specified node.
+   * @param node path to get parent of
+   * @return parent of path, null if passed the root node or an invalid node
+   */
+  public static String getParent(String node) {
+    int idx = node.lastIndexOf(ZNODE_PATH_SEPARATOR);
+    return idx <= 0 ? null : node.substring(0, idx);
+  }
+
+  /**
+   * Get the unique node-name for the specified regionserver.
+   *
+   * Used when a server puts up an ephemeral node for itself and needs to use
+   * a unique name.
+   *
+   * @param serverInfo server information
+   * @return unique, zookeeper-safe znode path for the server instance
+   */
+  public static String getNodeName(HServerInfo serverInfo) {
+    return serverInfo.getServerName();
+  }
+
+  /**
+   * Get the name of the current node from the specified fully-qualified path.
+   * @param path fully-qualified path
+   * @return name of the current node
+   */
+  public static String getNodeName(String path) {
+    return path.substring(path.lastIndexOf("/")+1);
+  }
+
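+  // A quick sketch of how the path helpers compose, assuming a base znode of
+  // "/hbase":
+  //
+  //   String rs = ZKUtil.joinZNode("/hbase", "rs");   // "/hbase/rs"
+  //   String parent = ZKUtil.getParent(rs);           // "/hbase"
+  //   String name = ZKUtil.getNodeName(rs);           // "rs"
+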
+  /**
+   * Get the key to the ZK ensemble for this configuration without
+   * adding a name at the end
+   * @param conf Configuration to use to build the key
+   * @return ensemble key without a name
+   */
+  public static String getZooKeeperClusterKey(Configuration conf) {
+    return getZooKeeperClusterKey(conf, null);
+  }
+
+  /**
+   * Get the key to the ZK ensemble for this configuration and append
+   * a name at the end
+   * @param conf Configuration to use to build the key
+   * @param name Name that should be appended at the end if not empty or null
+   * @return ensemble key with a name (if any)
+   */
+  public static String getZooKeeperClusterKey(Configuration conf, String name) {
+    String ensemble = conf.get(HConstants.ZOOKEEPER_QUORUM).replaceAll(
+        "[\\t\\n\\x0B\\f\\r]", "");
+    StringBuilder builder = new StringBuilder(ensemble);
+    builder.append(":");
+    builder.append(conf.get("hbase.zookeeper.property.clientPort"));
+    builder.append(":");
+    builder.append(conf.get(HConstants.ZOOKEEPER_ZNODE_PARENT));
+    if (name != null && !name.isEmpty()) {
+      builder.append(",");
+      builder.append(name);
+    }
+    return builder.toString();
+  }
+
+  /**
+   * Apply the settings in the given key to the given configuration.  This is
+   * used to communicate with remote clusters.
+   * @param conf configuration object to configure
+   * @param key string that contains the 3 required configurations
+   * @throws IOException
+   */
+  public static void applyClusterKeyToConf(Configuration conf, String key)
+      throws IOException{
+    String[] parts = transformClusterKey(key);
+    conf.set(HConstants.ZOOKEEPER_QUORUM, parts[0]);
+    conf.set("hbase.zookeeper.property.clientPort", parts[1]);
+    conf.set(HConstants.ZOOKEEPER_ZNODE_PARENT, parts[2]);
+  }
+
+  /**
+   * Separate the given key into the three configurations it should contain:
+   * hbase.zookeeper.quorum, hbase.zookeeper.property.clientPort
+   * and zookeeper.znode.parent
+   * @param key
+   * @return the three configuration values in the described order
+   * @throws IOException
+   */
+  public static String[] transformClusterKey(String key) throws IOException {
+    String[] parts = key.split(":");
+    if (parts.length != 3) {
+      throw new IOException("Cluster key invalid, the format should be:" +
+          HConstants.ZOOKEEPER_QUORUM + ":hbase.zookeeper.client.port:"
+          + HConstants.ZOOKEEPER_ZNODE_PARENT);
+    }
+    return parts;
+  }
+
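+  // A minimal sketch of the cluster-key round trip, assuming 'conf' points at
+  // the local cluster and 'otherConf' should be pointed at a remote one:
+  //
+  //   String key = ZKUtil.getZooKeeperClusterKey(conf);
+  //   // key looks like "host1,host2,host3:2181:/hbase"
+  //   String[] parts = ZKUtil.transformClusterKey(key);   // quorum, port, parent
+  //   ZKUtil.applyClusterKeyToConf(otherConf, key);
+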
+  //
+  // Existence checks and watches
+  //
+
+  /**
+   * Watch the specified znode for delete/create/change events.  The watcher is
+   * set whether or not the node exists.  If the node already exists, the method
+   * returns true.  If the node does not exist, the method returns false.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to watch
+   * @return true if znode exists, false if does not exist or error
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean watchAndCheckExists(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      Stat s = zkw.getZooKeeper().exists(znode, zkw);
+      LOG.debug(zkw.prefix("Set watcher on existing znode " + znode));
+      return s != null;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to set watcher on znode " + znode), e);
+      zkw.keeperException(e);
+      return false;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to set watcher on znode " + znode), e);
+      zkw.interruptedException(e);
+      return false;
+    }
+  }
+
+  /**
+   * Check if the specified node exists.  Sets no watches.
+   *
+   * Returns the version of the node if it exists, -1 if it does not.  Throws
+   * an exception if there is an unexpected zookeeper exception.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to check
+   * @return version of the node if it exists, -1 if does not exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int checkExists(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      Stat s = zkw.getZooKeeper().exists(znode, null);
+      return s != null ? s.getVersion() : -1;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to set watcher on znode (" + znode + ")"), e);
+      zkw.keeperException(e);
+      return -1;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to set watcher on znode (" + znode + ")"), e);
+      zkw.interruptedException(e);
+      return -1;
+    }
+  }
+
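+  // A minimal sketch, assuming 'zkw' is a ZooKeeperWatcher: check a znode and
+  // decide whether to leave a watch on it.
+  //
+  //   if (ZKUtil.watchAndCheckExists(zkw, zkw.baseZNode)) {
+  //     // node exists and a watch is now set on it
+  //   }
+  //   int version = ZKUtil.checkExists(zkw, zkw.clusterStateZNode);
+  //   boolean up = version != -1;   // -1 means the znode is not there
+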
+  //
+  // Znode listings
+  //
+
+  /**
+   * Lists the children znodes of the specified znode.  Also sets a watch on
+   * the specified znode which will capture a NodeDeleted event on the specified
+   * znode as well as NodeChildrenChanged if any children of the specified znode
+   * are created or deleted.
+   *
+   * Returns null if the specified node does not exist.  Otherwise returns a
+   * list of children of the specified node.  If the node exists but it has no
+   * children, an empty list will be returned.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to list and watch children of
+   * @return list of children of the specified node, an empty list if the node
+   *          exists but has no children, and null if the node does not exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static List<String> listChildrenAndWatchForNewChildren(
+      ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      List<String> children = zkw.getZooKeeper().getChildren(znode, zkw);
+      return children;
+    } catch(KeeperException.NoNodeException ke) {
+      LOG.debug(zkw.prefix("Unable to list children of znode " + znode + " " +
+          "because node does not exist (not an error)"));
+      return null;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to list children of znode " + znode + " "), e);
+      zkw.keeperException(e);
+      return null;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to list children of znode " + znode + " "), e);
+      zkw.interruptedException(e);
+      return null;
+    }
+  }
+
+  /**
+   * List all the children of the specified znode, setting a watch for children
+   * changes and also setting a watch on every individual child in order to get
+   * the NodeCreated and NodeDeleted events.
+   * @param zkw zookeeper reference
+   * @param znode node to get children of and watch
+   * @return list of znode names, null if the node doesn't exist
+   * @throws KeeperException
+   */
+  public static List<String> listChildrenAndWatchThem(ZooKeeperWatcher zkw, 
+      String znode) throws KeeperException {
+    List<String> children = listChildrenAndWatchForNewChildren(zkw, znode);
+    if (children == null) {
+      return null;
+    }
+    for (String child : children) {
+      watchAndCheckExists(zkw, joinZNode(znode, child));
+    }
+    return children;
+  }
+
+  /**
+   * Lists the children of the specified znode, retrieving the data of each
+   * child as a server address.
+   *
+   * Used to list the currently online regionservers and their addresses.
+   *
+   * Sets no watches at all; this method is best effort.
+   *
+   * Returns an empty list if the node has no children.  Returns null if the
+   * parent node itself does not exist.
+   *
+   * @param zkw zookeeper reference
+   * @param znode node to get children of as addresses
+   * @return list of data of children of specified znode, empty if no children,
+   *         null if parent does not exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static List<HServerAddress> listChildrenAndGetAsAddresses(
+      ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    List<String> children = listChildrenNoWatch(zkw, znode);
+    if(children == null) {
+      return null;
+    }
+    List<HServerAddress> addresses =
+      new ArrayList<HServerAddress>(children.size());
+    for(String child : children) {
+      addresses.add(getDataAsAddress(zkw, joinZNode(znode, child)));
+    }
+    return addresses;
+  }
+
+  /**
+   * Lists the children of the specified znode without setting any watches.
+   *
+   * Used, among other things, to list the currently online regionservers.
+   *
+   * Sets no watches at all; this method is best effort.
+   *
+   * Returns an empty list if the node has no children.  Returns null if the
+   * parent node itself does not exist.
+   *
+   * @param zkw zookeeper reference
+   * @param znode node to list children of
+   * @return list of child znode names, empty if the node has no children,
+   *         null if the node does not exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static List<String> listChildrenNoWatch(
+      ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    List<String> children = null;
+    try {
+      // List the children without watching
+      children = zkw.getZooKeeper().getChildren(znode, null);
+    } catch(KeeperException.NoNodeException nne) {
+      return null;
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+    }
+    return children;
+  }
+
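+  // A minimal sketch, assuming 'zkw' is a ZooKeeperWatcher: list the online
+  // regionservers with and without leaving watches behind.
+  //
+  //   List<String> servers = ZKUtil.listChildrenNoWatch(zkw, zkw.rsZNode);
+  //   if (servers == null) {
+  //     // parent znode does not exist
+  //   }
+  //   // Same listing, but also watch for children coming and going:
+  //   List<String> watched =
+  //     ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.rsZNode);
+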
+  /**
+   * Atomically add watches and read data from all unwatched unassigned nodes.
+   *
+   * <p>This works because the master is the only process deleting nodes.
+   */
+  public static List<NodeAndData> watchAndGetNewChildren(ZooKeeperWatcher zkw,
+      String baseNode)
+  throws KeeperException {
+    List<NodeAndData> newNodes = new ArrayList<NodeAndData>();
+    synchronized(zkw.getNodes()) {
+      List<String> nodes =
+        ZKUtil.listChildrenAndWatchForNewChildren(zkw, baseNode);
+      for(String node : nodes) {
+        String nodePath = ZKUtil.joinZNode(baseNode, node);
+        if(!zkw.getNodes().contains(nodePath)) {
+          byte [] data = ZKUtil.getDataAndWatch(zkw, nodePath);
+          newNodes.add(new NodeAndData(nodePath, data));
+          zkw.getNodes().add(nodePath);
+        }
+      }
+    }
+    return newNodes;
+  }
+
+  /**
+   * Simple class to hold a node path and node data.
+   */
+  public static class NodeAndData {
+    private String node;
+    private byte [] data;
+    public NodeAndData(String node, byte [] data) {
+      this.node = node;
+      this.data = data;
+    }
+    public String getNode() {
+      return node;
+    }
+    public byte [] getData() {
+      return data;
+    }
+    @Override
+    public String toString() {
+      return node + " (" + RegionTransitionData.fromBytes(data) + ")";
+    }
+  }
+
+  /**
+   * Checks if the specified znode has any children.  Sets no watches.
+   *
+   * Returns true if the node exists and has children.  Returns false if the
+   * node does not exist or if the node does not have any children.
+   *
+   * Used during master initialization to determine if the master is a
+   * failed-over-to master or the first master during initial cluster startup.
+   * If the directory for regionserver ephemeral nodes is empty, then this is
+   * a cluster startup; otherwise it is not.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to check for children of
+   * @return true if node has children, false if not or node does not exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean nodeHasChildren(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      return !zkw.getZooKeeper().getChildren(znode, null).isEmpty();
+    } catch(KeeperException.NoNodeException ke) {
+      LOG.debug(zkw.prefix("Unable to list children of znode " + znode + " " +
+      "because node does not exist (not an error)"));
+      return false;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to list children of znode " + znode), e);
+      zkw.keeperException(e);
+      return false;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to list children of znode " + znode), e);
+      zkw.interruptedException(e);
+      return false;
+    }
+  }
+
+  /**
+   * Get the number of children of the specified node.
+   *
+   * If the node does not exist or has no children, returns 0.
+   *
+   * Sets no watches at all.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to count children of
+   * @return number of children of specified node, 0 if none or parent does not
+   *         exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static int getNumberOfChildren(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      Stat stat = zkw.getZooKeeper().exists(znode, null);
+      return stat == null ? 0 : stat.getNumChildren();
+    } catch(KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to get children of node " + znode));
+      zkw.keeperException(e);
+    } catch(InterruptedException e) {
+      zkw.interruptedException(e);
+    }
+    return 0;
+  }
+
+  //
+  // Data retrieval
+  //
+
+  /**
+   * Get znode data. Does not set a watcher.
+   * @return ZNode data
+   */
+  public static byte [] getData(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      byte [] data = zkw.getZooKeeper().getData(znode, null, null);
+      logRetrievedMsg(zkw, znode, data, false);
+      return data;
+    } catch (KeeperException.NoNodeException e) {
+      LOG.debug(zkw.prefix("Unable to get data of znode " + znode + " " +
+        "because node does not exist (not an error)"));
+      return null;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to get data of znode " + znode), e);
+      zkw.keeperException(e);
+      return null;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to get data of znode " + znode), e);
+      zkw.interruptedException(e);
+      return null;
+    }
+  }
+
+  /**
+   * Get the data at the specified znode and set a watch.
+   *
+   * Returns the data and sets a watch if the node exists.  Returns null and no
+   * watch is set if the node does not exist or there is an exception.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @return data of the specified znode, or null
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static byte [] getDataAndWatch(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      byte [] data = zkw.getZooKeeper().getData(znode, zkw, null);
+      logRetrievedMsg(zkw, znode, data, true);
+      return data;
+    } catch (KeeperException.NoNodeException e) {
+      LOG.debug(zkw.prefix("Unable to get data of znode " + znode + " " +
+        "because node does not exist (not an error)"));
+      return null;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to get data of znode " + znode), e);
+      zkw.keeperException(e);
+      return null;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to get data of znode " + znode), e);
+      zkw.interruptedException(e);
+      return null;
+    }
+  }
+
+  /**
+   * Get the data at the specified znode without setting a watch.
+   *
+   * Returns the data if the node exists.  Returns null if the node does not
+   * exist.
+   *
+   * Sets the stats of the node in the passed Stat object.  Pass a null stat if
+   * not interested.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param stat node status to set if node exists
+   * @return data of the specified znode, or null if does not exist
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static byte [] getDataNoWatch(ZooKeeperWatcher zkw, String znode,
+      Stat stat)
+  throws KeeperException {
+    try {
+      byte [] data = zkw.getZooKeeper().getData(znode, null, stat);
+      logRetrievedMsg(zkw, znode, data, false);
+      return data;
+    } catch (KeeperException.NoNodeException e) {
+      LOG.debug(zkw.prefix("Unable to get data of znode " + znode + " " +
+          "because node does not exist (not necessarily an error)"));
+      return null;
+    } catch (KeeperException e) {
+      LOG.warn(zkw.prefix("Unable to get data of znode " + znode), e);
+      zkw.keeperException(e);
+      return null;
+    } catch (InterruptedException e) {
+      LOG.warn(zkw.prefix("Unable to get data of znode " + znode), e);
+      zkw.interruptedException(e);
+      return null;
+    }
+  }
+
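+  // A minimal sketch, assuming 'zkw' is a ZooKeeperWatcher: the three read
+  // variants differ only in whether a watch is left on the znode.
+  //
+  //   byte [] noWatch = ZKUtil.getData(zkw, zkw.clusterStateZNode);
+  //   byte [] watched = ZKUtil.getDataAndWatch(zkw, zkw.clusterStateZNode);
+  //   Stat stat = new Stat();
+  //   byte [] withStat = ZKUtil.getDataNoWatch(zkw, zkw.clusterStateZNode, stat);
+  //   // all three return null if the znode does not exist
+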
+  /**
+   * Get the data at the specified znode, deserialize it as an HServerAddress,
+   * and set a watch.
+   *
+   * Returns the data as a server address and sets a watch if the node exists.
+   * Returns null and no watch is set if the node does not exist or there is an
+   * exception.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @return data of the specified node as a server address, or null
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static HServerAddress getDataAsAddress(ZooKeeperWatcher zkw,
+      String znode)
+  throws KeeperException {
+    byte [] data = getDataAndWatch(zkw, znode);
+    if(data == null) {
+      return null;
+    }
+    String addrString = Bytes.toString(data);
+    LOG.debug(zkw.prefix("Read server address from znode " + znode + ": " +
+      addrString));
+    return new HServerAddress(addrString);
+  }
+
+  /**
+   * Update the data of an existing node with the expected version to have the
+   * specified data.
+   *
+   * Throws an exception if there is a version mismatch or some other problem.
+   *
+   * Sets no watches under any conditions.
+   *
+   * @param zkw zk reference
+   * @param znode
+   * @param data
+   * @param expectedVersion
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.BadVersionException if version mismatch
+   */
+  public static void updateExistingNodeData(ZooKeeperWatcher zkw, String znode,
+      byte [] data, int expectedVersion)
+  throws KeeperException {
+    try {
+      zkw.getZooKeeper().setData(znode, data, expectedVersion);
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+    }
+  }
+
+  //
+  // Data setting
+  //
+
+  /**
+   * Set the specified znode to be an ephemeral node carrying the specified
+   * server address.  Used by masters for their ephemeral node and regionservers
+   * for their ephemeral node.
+   *
+   * If the node is created successfully, a watcher is also set on the node.
+   *
+   * If the node is not created successfully because it already exists, this
+   * method will also set a watcher on the node.
+   *
+   * If there is another problem, a KeeperException will be thrown.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param address server address
+   * @return true if address set, false if not, watch set in both cases
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean setAddressAndWatch(ZooKeeperWatcher zkw,
+      String znode, HServerAddress address)
+  throws KeeperException {
+    return createEphemeralNodeAndWatch(zkw, znode,
+        Bytes.toBytes(address.toString()));
+  }
+
+  /**
+   * Sets the data of the existing znode to be the specified data.  Ensures that
+   * the current data has the specified expected version.
+   *
+   * <p>If the node does not exist, a {@link NoNodeException} will be thrown.
+   *
+   * <p>If there is a version mismatch, a
+   * {@link KeeperException.BadVersionException} will be thrown.
+   *
+   * <p>No watches are set but setting data will trigger other watchers of this
+   * node.
+   *
+   * <p>If there is another problem, a KeeperException will be thrown.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param data data to set for node
+   * @param expectedVersion version expected when setting data
+   * @return true if data set, false if the operation was interrupted
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean setData(ZooKeeperWatcher zkw, String znode,
+      byte [] data, int expectedVersion)
+  throws KeeperException, KeeperException.NoNodeException {
+    try {
+      return zkw.getZooKeeper().setData(znode, data, expectedVersion) != null;
+    } catch (InterruptedException e) {
+      zkw.interruptedException(e);
+      return false;
+    }
+  }
+
+  /**
+   * Set data into node creating node if it doesn't yet exist.
+   * Does not set watch.
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param data data to set for node
+   * @throws KeeperException 
+   */
+  public static void createSetData(final ZooKeeperWatcher zkw, final String znode,
+      final byte [] data)
+  throws KeeperException {
+    if (checkExists(zkw, znode) == -1) {
+      ZKUtil.createWithParents(zkw, znode);
+    }
+    ZKUtil.setData(zkw, znode, data);
+  }
+
+  /**
+   * Sets the data of the existing znode to be the specified data.  The node
+   * must exist but no checks are done on the existing data or version.
+   *
+   * <p>If the node does not exist, a {@link NoNodeException} will be thrown.
+   *
+   * <p>No watches are set but setting data will trigger other watchers of this
+   * node.
+   *
+   * <p>If there is another problem, a KeeperException will be thrown.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param data data to set for node
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static void setData(ZooKeeperWatcher zkw, String znode,
+      byte [] data)
+  throws KeeperException, KeeperException.NoNodeException {
+    setData(zkw, znode, data, -1);
+  }
+
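+  // A minimal sketch, assuming 'zkw' is a ZooKeeperWatcher and 'znode' names an
+  // existing node: unconditional set vs. versioned set.
+  //
+  //   ZKUtil.setData(zkw, znode, Bytes.toBytes("value"));      // any version
+  //   int v = ZKUtil.checkExists(zkw, znode);                  // current version
+  //   ZKUtil.setData(zkw, znode, Bytes.toBytes("v2"), v);      // fails on version mismatch
+  //   ZKUtil.createSetData(zkw, znode, Bytes.toBytes("v3"));   // creates node if absent
+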
+  //
+  // Node creation
+  //
+
+  /**
+   *
+   * Set the specified znode to be an ephemeral node carrying the specified
+   * data.
+   *
+   * If the node is created successfully, a watcher is also set on the node.
+   *
+   * If the node is not created successfully because it already exists, this
+   * method will also set a watcher on the node.
+   *
+   * If there is another problem, a KeeperException will be thrown.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param data data of node
+   * @return true if node created, false if not, watch set in both cases
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean createEphemeralNodeAndWatch(ZooKeeperWatcher zkw,
+      String znode, byte [] data)
+  throws KeeperException {
+    try {
+      zkw.getZooKeeper().create(znode, data, Ids.OPEN_ACL_UNSAFE,
+          CreateMode.EPHEMERAL);
+    } catch (KeeperException.NodeExistsException nee) {
+      if(!watchAndCheckExists(zkw, znode)) {
+        // It did exist but now it doesn't, try again
+        return createEphemeralNodeAndWatch(zkw, znode, data);
+      }
+      return false;
+    } catch (InterruptedException e) {
+      LOG.info("Interrupted", e);
+      Thread.currentThread().interrupt();
+    }
+    return true;
+  }
+
+  /**
+   * Creates the specified znode to be a persistent node carrying the specified
+   * data.
+   *
+   * Returns true if the node was successfully created, false if the node
+   * already existed.
+   *
+   * If the node is created successfully, a watcher is also set on the node.
+   *
+   * If the node is not created successfully because it already exists, this
+   * method will also set a watcher on the node but return false.
+   *
+   * If there is another problem, a KeeperException will be thrown.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @param data data of node
+   * @return true if node created, false if not, watch set in both cases
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static boolean createNodeIfNotExistsAndWatch(
+      ZooKeeperWatcher zkw, String znode, byte [] data)
+  throws KeeperException {
+    try {
+      zkw.getZooKeeper().create(znode, data, Ids.OPEN_ACL_UNSAFE,
+          CreateMode.PERSISTENT);
+    } catch (KeeperException.NodeExistsException nee) {
+      try {
+        zkw.getZooKeeper().exists(znode, zkw);
+      } catch (InterruptedException e) {
+        zkw.interruptedException(e);
+        return false;
+      }
+      return false;
+    } catch (InterruptedException e) {
+      zkw.interruptedException(e);
+      return false;
+    }
+    return true;
+  }
+
+  /**
+   * Creates the specified node with the specified data and watches it.
+   *
+   * <p>Throws an exception if the node already exists.
+   *
+   * <p>The node created is persistent and open access.
+   *
+   * <p>Returns the version number of the created node if successful.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to create
+   * @param data data of node to create
+   * @return version of node created
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NodeExistsException if node already exists
+   */
+  public static int createAndWatch(ZooKeeperWatcher zkw,
+      String znode, byte [] data)
+  throws KeeperException, KeeperException.NodeExistsException {
+    try {
+      zkw.getZooKeeper().create(znode, data, Ids.OPEN_ACL_UNSAFE,
+          CreateMode.PERSISTENT);
+      return zkw.getZooKeeper().exists(znode, zkw).getVersion();
+    } catch (InterruptedException e) {
+      zkw.interruptedException(e);
+      return -1;
+    }
+  }
+
+  /**
+   * Async creates the specified node with the specified data.
+   *
+   * <p>Throws an exception if the node already exists.
+   *
+   * <p>The node created is persistent and open access.
+   *
+   * @param zkw zk reference
+   * @param znode path of node to create
+   * @param data data of node to create
+   * @param cb
+   * @param ctx
+   * @throws KeeperException if unexpected zookeeper exception
+   * @throws KeeperException.NodeExistsException if node already exists
+   */
+  public static void asyncCreate(ZooKeeperWatcher zkw,
+      String znode, byte [] data, final AsyncCallback.StringCallback cb,
+      final Object ctx)
+  throws KeeperException, KeeperException.NodeExistsException {
+    zkw.getZooKeeper().create(znode, data, Ids.OPEN_ACL_UNSAFE,
+       CreateMode.PERSISTENT, cb, ctx);
+  }
+
+  /**
+   * Creates the specified node, if the node does not exist.  Does not set a
+   * watch and fails silently if the node already exists.
+   *
+   * The node created is persistent and open access.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static void createAndFailSilent(ZooKeeperWatcher zkw,
+      String znode)
+  throws KeeperException {
+    try {
+      zkw.getZooKeeper().create(znode, new byte[0], Ids.OPEN_ACL_UNSAFE,
+          CreateMode.PERSISTENT);
+    } catch(KeeperException.NodeExistsException nee) {
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+    }
+  }
+
+  /**
+   * Creates the specified node and all parent nodes required for it to exist.
+   *
+   * No watches are set and no errors are thrown if the node already exists.
+   *
+   * The nodes created are persistent and open access.
+   *
+   * @param zkw zk reference
+   * @param znode path of node
+   * @throws KeeperException if unexpected zookeeper exception
+   */
+  public static void createWithParents(ZooKeeperWatcher zkw, String znode)
+  throws KeeperException {
+    try {
+      if(znode == null) {
+        return;
+      }
+      zkw.getZooKeeper().create(znode, new byte[0], Ids.OPEN_ACL_UNSAFE,
+          CreateMode.PERSISTENT);
+    } catch(KeeperException.NodeExistsException nee) {
+      return;
+    } catch(KeeperException.NoNodeException nne) {
+      createWithParents(zkw, getParent(znode));
+      createWithParents(zkw, znode);
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+    }
+  }
+
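+  // A minimal sketch, assuming 'zkw' is a ZooKeeperWatcher: the creation
+  // helpers differ in node type (ephemeral vs. persistent) and in how an
+  // already-existing node is treated.  The "example" znode is hypothetical.
+  //
+  //   String znode = ZKUtil.joinZNode(zkw.baseZNode, "example");
+  //   ZKUtil.createWithParents(zkw, znode);           // silent if it exists
+  //   ZKUtil.createAndFailSilent(zkw, znode);         // likewise, no parents created
+  //   boolean created =
+  //     ZKUtil.createEphemeralNodeAndWatch(zkw, znode, new byte[0]);
+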
+  //
+  // Deletes
+  //
+
+  /**
+   * Delete the specified node.  Sets no watches.  Throws all exceptions.
+   */
+  public static void deleteNode(ZooKeeperWatcher zkw, String node)
+  throws KeeperException {
+    deleteNode(zkw, node, -1);
+  }
+
+  /**
+   * Delete the specified node with the specified version.  Sets no watches.
+   * Throws all exceptions.
+   */
+  public static boolean deleteNode(ZooKeeperWatcher zkw, String node,
+      int version)
+  throws KeeperException {
+    try {
+      zkw.getZooKeeper().delete(node, version);
+      return true;
+    } catch(KeeperException.BadVersionException bve) {
+      return false;
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+      return false;
+    }
+  }
+
+  /**
+   * Deletes the specified node.  Fails silent if the node does not exist.
+   * @param zkw
+   * @param node
+   * @throws KeeperException
+   */
+  public static void deleteNodeFailSilent(ZooKeeperWatcher zkw, String node)
+  throws KeeperException {
+    try {
+      zkw.getZooKeeper().delete(node, -1);
+    } catch(KeeperException.NoNodeException nne) {
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+    }
+  }
+
+  /**
+   * Delete the specified node and all of it's children.
+   *
+   * Sets no watches.  Throws all exceptions besides dealing with deletion of
+   * children.
+   */
+  public static void deleteNodeRecursively(ZooKeeperWatcher zkw, String node)
+  throws KeeperException {
+    try {
+      List<String> children = ZKUtil.listChildrenNoWatch(zkw, node);
+      if(children != null && !children.isEmpty()) {
+        for(String child : children) {
+          deleteNodeRecursively(zkw, joinZNode(node, child));
+        }
+      }
+      zkw.getZooKeeper().delete(node, -1);
+    } catch(InterruptedException ie) {
+      zkw.interruptedException(ie);
+    }
+  }
+
+  /**
+   * Delete all the children of the specified node but not the node itself.
+   *
+   * Sets no watches.  Throws all exceptions besides dealing with deletion of
+   * children.
+   */
+  public static void deleteChildrenRecursively(ZooKeeperWatcher zkw, String node)
+  throws KeeperException {
+    List<String> children = ZKUtil.listChildrenNoWatch(zkw, node);
+    if(children != null && !children.isEmpty()) {
+      for(String child : children) {
+        deleteNodeRecursively(zkw, joinZNode(node, child));
+      }
+    }
+  }
+
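+  // A minimal sketch, assuming 'zkw' is a ZooKeeperWatcher and 'znode' exists:
+  //
+  //   ZKUtil.deleteNodeFailSilent(zkw, znode);        // ok if already gone
+  //   ZKUtil.deleteChildrenRecursively(zkw, znode);   // empty the node, keep it
+  //   ZKUtil.deleteNodeRecursively(zkw, znode);       // remove node and children
+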
+  //
+  // ZooKeeper cluster information
+  //
+
+  /** @return String dump of everything in ZooKeeper. */
+  public static String dump(ZooKeeperWatcher zkw) {
+    StringBuilder sb = new StringBuilder();
+    try {
+      sb.append("HBase is rooted at ").append(zkw.baseZNode);
+      sb.append("\nMaster address: ").append(
+          getDataAsAddress(zkw, zkw.masterAddressZNode));
+      sb.append("\nRegion server holding ROOT: ").append(
+          getDataAsAddress(zkw, zkw.rootServerZNode));
+      sb.append("\nRegion servers:");
+      for (HServerAddress address : listChildrenAndGetAsAddresses(zkw,
+          zkw.rsZNode)) {
+        sb.append("\n ").append(address);
+      }
+      sb.append("\nQuorum Server Statistics:");
+      String[] servers = zkw.getQuorum().split(",");
+      for (String server : servers) {
+        sb.append("\n ").append(server);
+        try {
+          String[] stat = getServerStats(server);
+          for (String s : stat) {
+            sb.append("\n  ").append(s);
+          }
+        } catch (Exception e) {
+          sb.append("\n  ERROR: ").append(e.getMessage());
+        }
+      }
+    } catch(KeeperException ke) {
+      sb.append("\nFATAL ZooKeeper Exception!\n");
+      sb.append("\n" + ke.getMessage());
+    }
+    return sb.toString();
+  }
+
+  /**
+   * Gets the statistics from the given server. Uses a 1 minute timeout.
+   *
+   * @param server  The server to get the statistics from.
+   * @return The array of response strings.
+   * @throws IOException When the socket communication fails.
+   */
+  public static String[] getServerStats(String server)
+  throws IOException {
+    return getServerStats(server, 60 * 1000);
+  }
+
+  /**
+   * Gets the statistics from the given server.
+   *
+   * @param server  The server to get the statistics from.
+   * @param timeout  The socket timeout to use.
+   * @return The array of response strings.
+   * @throws IOException When the socket communication fails.
+   */
+  public static String[] getServerStats(String server, int timeout)
+  throws IOException {
+    String[] sp = server.split(":");
+    Socket socket = new Socket(sp[0],
+      sp.length > 1 ? Integer.parseInt(sp[1]) : 2181);
+    socket.setSoTimeout(timeout);
+    PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
+    BufferedReader in = new BufferedReader(new InputStreamReader(
+      socket.getInputStream()));
+    out.println("stat");
+    out.flush();
+    ArrayList<String> res = new ArrayList<String>();
+    while (true) {
+      String line = in.readLine();
+      if (line != null) {
+        res.add(line);
+      } else {
+        break;
+      }
+    }
+    socket.close();
+    return res.toArray(new String[res.size()]);
+  }
+
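+  // A minimal sketch: query the four-letter 'stat' command of one quorum
+  // member.  The host:port string is a hypothetical example.
+  //
+  //   String[] lines = ZKUtil.getServerStats("zkhost.example.org:2181");
+  //   for (String line : lines) {
+  //     System.out.println(line);
+  //   }
+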
+  private static void logRetrievedMsg(final ZooKeeperWatcher zkw,
+      final String znode, final byte [] data, final boolean watcherSet) {
+    if (!LOG.isDebugEnabled()) return;
+    LOG.debug(zkw.prefix("Retrieved " + ((data == null)? 0: data.length) +
+      " byte(s) of data from znode " + znode +
+      (watcherSet? " and set watcher; ": "; data=") +
+      (data == null? "null": (
+          znode.startsWith(zkw.assignmentZNode) ?
+              RegionTransitionData.fromBytes(data).toString()
+              : StringUtils.abbreviate(Bytes.toString(data), 32)))));
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperListener.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperListener.java
new file mode 100644
index 0000000..97e3af6
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperListener.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+
+/**
+ * Base class for internal listeners of ZooKeeper events.
+ *
+ * The {@link ZooKeeperWatcher} for a process will execute the appropriate
+ * methods of implementations of this class.  In order to receive events from
+ * the watcher, every listener must register itself via {@link ZooKeeperWatcher#registerListener}.
+ *
+ * Subclasses need only override those methods in which they are interested.
+ *
+ * Note that the watcher will be blocked when invoking methods in listeners so
+ * they must not be long-running.
+ */
+public class ZooKeeperListener {
+
+  // Reference to the zk watcher which also contains configuration and constants
+  protected ZooKeeperWatcher watcher;
+
+  /**
+   * Construct a ZooKeeper event listener.
+   */
+  public ZooKeeperListener(ZooKeeperWatcher watcher) {
+    this.watcher = watcher;
+  }
+
+  /**
+   * Called when a new node has been created.
+   * @param path full path of the new node
+   */
+  public void nodeCreated(String path) {
+    // no-op
+  }
+
+  /**
+   * Called when a node has been deleted
+   * @param path full path of the deleted node
+   */
+  public void nodeDeleted(String path) {
+    // no-op
+  }
+
+  /**
+   * Called when an existing node has changed data.
+   * @param path full path of the updated node
+   */
+  public void nodeDataChanged(String path) {
+    // no-op
+  }
+
+  /**
+   * Called when an existing node has a child node added or removed.
+   * @param path full path of the node whose children have changed
+   */
+  public void nodeChildrenChanged(String path) {
+    // no-op
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServerArg.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServerArg.java
new file mode 100644
index 0000000..c662a5b
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperMainServerArg.java
@@ -0,0 +1,68 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.util.Properties;
+import java.util.Map.Entry;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+/**
+ * Tool for reading a ZooKeeper server from the HBase XML configuration and
+ * producing the '-server host:port' argument to pass to ZooKeeperMain.  This
+ * program emits '-server HOST:PORT', where HOST is one of the zk ensemble
+ * members and PORT is the zk client port, or it emits the empty string if no
+ * zk servers are found.
+ */
+public class ZooKeeperMainServerArg {
+  public String parse(final Configuration c) {
+    // Note that we do not simply grab the property
+    // HConstants.ZOOKEEPER_QUORUM from the HBaseConfiguration because the
+    // user may be using a zoo.cfg file.
+    Properties zkProps = ZKConfig.makeZKProps(c);
+    String host = null;
+    String clientPort = null;
+    for (Entry<Object, Object> entry: zkProps.entrySet()) {
+      String key = entry.getKey().toString().trim();
+      String value = entry.getValue().toString().trim();
+      if (key.startsWith("server.") && host == null) {
+        String[] parts = value.split(":");
+        host = parts[0];
+      } else if (key.endsWith("clientPort")) {
+        clientPort = value;
+      }
+      if (host != null && clientPort != null) break;
+    }
+    return host != null && clientPort != null? host + ":" + clientPort: null;
+  }
+
+  /**
+   * Run the tool.
+   * @param args Command line arguments. First arg is path to zookeepers file.
+   */
+  public static void main(String args[]) {
+    Configuration conf = HBaseConfiguration.create();
+    String hostport = new ZooKeeperMainServerArg().parse(conf);
+    System.out.println((hostport == null || hostport.length() == 0)? "":
+      "-server " + hostport);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java
new file mode 100644
index 0000000..131aba3
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java
@@ -0,0 +1,182 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * Tracks the availability and value of a single ZooKeeper node.
+ *
+ * <p>Utilizes the {@link ZooKeeperListener} interface to get the necessary
+ * ZooKeeper events related to the node.
+ *
+ * <p>This is the base class used by trackers in both the Master and
+ * RegionServers.
+ */
+public abstract class ZooKeeperNodeTracker extends ZooKeeperListener {
+  /** Path of node being tracked */
+  protected final String node;
+
+  /** Data of the node being tracked */
+  private byte [] data;
+
+  /** Used to abort if a fatal error occurs */
+  protected final Abortable abortable;
+
+  private boolean stopped = false;
+
+  /**
+   * Constructs a new ZK node tracker.
+   *
+   * <p>After construction, use {@link #start} to kick off tracking.
+   *
+   * @param watcher
+   * @param node
+   * @param abortable
+   */
+  public ZooKeeperNodeTracker(ZooKeeperWatcher watcher, String node,
+      Abortable abortable) {
+    super(watcher);
+    this.node = node;
+    this.abortable = abortable;
+    this.data = null;
+  }
+
+  /**
+   * Starts the tracking of the node in ZooKeeper.
+   *
+   * <p>Use {@link #blockUntilAvailable()} to block until the node is available
+   * or {@link #getData()} to get the data of the node if it is available.
+   */
+  public synchronized void start() {
+    this.watcher.registerListener(this);
+    try {
+      if(ZKUtil.watchAndCheckExists(watcher, node)) {
+        byte [] data = ZKUtil.getDataAndWatch(watcher, node);
+        if(data != null) {
+          this.data = data;
+        } else {
+          // It existed but now does not, try again to ensure a watch is set
+          start();
+        }
+      }
+    } catch (KeeperException e) {
+      abortable.abort("Unexpected exception during initialization, aborting", e);
+    }
+  }
+
+  public synchronized void stop() {
+    this.stopped = true;
+    notifyAll();
+  }
+
+  /**
+   * Gets the data of the node, blocking until the node is available.
+   *
+   * @return data of the node
+   * @throws InterruptedException if the waiting thread is interrupted
+   */
+  public synchronized byte [] blockUntilAvailable()
+  throws InterruptedException {
+    return blockUntilAvailable(0);
+  }
+
+  /**
+   * Gets the data of the node, blocking until the node is available or the
+   * specified timeout has elapsed.
+   *
+   * @param timeout maximum time to wait for the node data to be available,
+   * in milliseconds.  Pass 0 for no timeout.
+   * @return data of the node
+   * @throws InterruptedException if the waiting thread is interrupted
+   */
+  public synchronized byte [] blockUntilAvailable(long timeout)
+  throws InterruptedException {
+    if (timeout < 0) throw new IllegalArgumentException();
+    boolean notimeout = timeout == 0;
+    long startTime = System.currentTimeMillis();
+    long remaining = timeout;
+    while (!this.stopped && (notimeout || remaining > 0) && this.data == null) {
+      if (notimeout) {
+        wait();
+        continue;
+      }
+      wait(remaining);
+      remaining = timeout - (System.currentTimeMillis() - startTime);
+    }
+    return data;
+  }
+
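+  // A minimal usage sketch, assuming 'watcher' and 'abortable' already exist
+  // and SomeTracker is a hypothetical concrete subclass of this class:
+  //
+  //   SomeTracker tracker = new SomeTracker(watcher, watcher.baseZNode, abortable);
+  //   tracker.start();
+  //   byte [] data = tracker.blockUntilAvailable(30 * 1000);   // wait up to 30s
+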
+  /**
+   * Gets the data of the node.
+   *
+   * <p>If the node is currently available, the most up-to-date known version of
+   * the data is returned.  If the node is not currently available, null is
+   * returned.
+   *
+   * @return data of the node, null if unavailable
+   */
+  public synchronized byte [] getData() {
+    return data;
+  }
+
+  public String getNode() {
+    return this.node;
+  }
+
+  @Override
+  public synchronized void nodeCreated(String path) {
+    if (!path.equals(node)) return;
+    try {
+      byte [] data = ZKUtil.getDataAndWatch(watcher, node);
+      if (data != null) {
+        this.data = data;
+        notifyAll();
+      } else {
+        nodeDeleted(path);
+      }
+    } catch(KeeperException e) {
+      abortable.abort("Unexpected exception handling nodeCreated event", e);
+    }
+  }
+
+  @Override
+  public synchronized void nodeDeleted(String path) {
+    if(path.equals(node)) {
+      try {
+        if(ZKUtil.watchAndCheckExists(watcher, node)) {
+          nodeCreated(path);
+        } else {
+          this.data = null;
+        }
+      } catch(KeeperException e) {
+        abortable.abort("Unexpected exception handling nodeDeleted event", e);
+      }
+    }
+  }
+
+  @Override
+  public synchronized void nodeDataChanged(String path) {
+    if(path.equals(node)) {
+      nodeCreated(path);
+    }
+  }
+}
diff --git a/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java
new file mode 100644
index 0000000..b90d970
--- /dev/null
+++ b/0.90/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java
@@ -0,0 +1,410 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.IOException;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CopyOnWriteArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+
+/**
+ * Acts as the single ZooKeeper Watcher.  One instance of this is instantiated
+ * for each Master, RegionServer, and client process.
+ *
+ * <p>This is the only class that implements {@link Watcher}.  Other internal
+ * classes which need to be notified of ZooKeeper events must register with
+ * the local instance of this watcher via {@link #registerListener}.
+ *
+ * <p>This class also holds and manages the connection to ZooKeeper.  Code to
+ * deal with connection related events and exceptions are handled here.
+ */
+public class ZooKeeperWatcher implements Watcher, Abortable {
+  private static final Log LOG = LogFactory.getLog(ZooKeeperWatcher.class);
+
+  // Identifier for this watcher (for logging only).  It is made of the prefix
+  // passed on construction and the zookeeper sessionid.
+  private String identifier;
+
+  // zookeeper quorum
+  private String quorum;
+
+  // zookeeper connection
+  private ZooKeeper zooKeeper;
+
+  // abortable in case of zk failure
+  private Abortable abortable;
+
+  // listeners to be notified
+  private final List<ZooKeeperListener> listeners =
+    new CopyOnWriteArrayList<ZooKeeperListener>();
+
+  // set of unassigned nodes watched
+  private Set<String> unassignedNodes = new HashSet<String>();
+
+  // node names
+
+  // base znode for this cluster
+  public String baseZNode;
+  // znode containing location of server hosting root region
+  public String rootServerZNode;
+  // znode containing ephemeral nodes of the regionservers
+  public String rsZNode;
+  // znode of currently active master
+  public String masterAddressZNode;
+  // znode containing the current cluster state
+  public String clusterStateZNode;
+  // znode used for region transitioning and assignment
+  public String assignmentZNode;
+  // znode used for table disabling/enabling
+  public String tableZNode;
+
+  private final Configuration conf;
+
+  private final Exception constructorCaller;
+
+  /**
+   * Instantiate a ZooKeeper connection and watcher.
+   * @param descriptor Descriptive string that is added to zookeeper sessionid
+   * and used as identifier for this instance.
+   * @throws IOException
+   * @throws ZooKeeperConnectionException
+   */
+  public ZooKeeperWatcher(Configuration conf, String descriptor,
+      Abortable abortable)
+  throws IOException, ZooKeeperConnectionException {
+    this.conf = conf;
+    // Capture a stack trace now.  Will print it out later if there is a
+    // problem so we can distinguish amongst the myriad ZKWs.
+    try {
+      throw new Exception("ZKW CONSTRUCTOR STACK TRACE FOR DEBUGGING");
+    } catch (Exception e) {
+      this.constructorCaller = e;
+    }
+    this.quorum = ZKConfig.getZKQuorumServersString(conf);
+    // The sessionid will be appended to the identifier later, when we
+    // handle the syncconnected event.
+    this.identifier = descriptor;
+    this.abortable = abortable;
+    setNodeNames(conf);
+    this.zooKeeper = ZKUtil.connect(conf, quorum, this, descriptor);
+    try {
+      // Create all the necessary "directories" of znodes
+      // TODO: Move this to an init method somewhere so not everyone calls it?
+
+      // The first call against zk can fail with connection loss.  Seems common.
+      // Apparently this is recoverable.  Retry a while.
+      // See http://wiki.apache.org/hadoop/ZooKeeper/ErrorHandling
+      // TODO: Generalize out in ZKUtil.
+      long wait = conf.getLong("hbase.zookeeper.recoverable.waittime", 10000);
+      long finished = System.currentTimeMillis() + wait;
+      KeeperException ke = null;
+      do {
+        try {
+          ZKUtil.createAndFailSilent(this, baseZNode);
+          ke = null;
+          break;
+        } catch (KeeperException.ConnectionLossException e) {
+          if (LOG.isDebugEnabled() && (isFinishedRetryingRecoverable(finished))) {
+            LOG.debug("Retrying zk create for another " +
+              (finished - System.currentTimeMillis()) +
+              "ms; set 'hbase.zookeeper.recoverable.waittime' to change " +
+              "wait time); " + e.getMessage());
+          }
+          ke = e;
+        }
+      } while (isFinishedRetryingRecoverable(finished));
+      // Convert connectionloss exception to ZKCE.
+      if (ke != null) throw new ZooKeeperConnectionException(ke);
+      ZKUtil.createAndFailSilent(this, assignmentZNode);
+      ZKUtil.createAndFailSilent(this, rsZNode);
+      ZKUtil.createAndFailSilent(this, tableZNode);
+    } catch (KeeperException e) {
+      LOG.error(prefix("Unexpected KeeperException creating base node"), e);
+      throw new IOException(e);
+    }
+  }
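+
+  /*
+   * A minimal usage sketch, not part of this class: it assumes
+   * HBaseConfiguration.create() for the conf and pre-existing Abortable and
+   * ZooKeeperListener instances (the variable names below are illustrative).
+   *
+   *   Configuration conf = HBaseConfiguration.create();
+   *   ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "example", anAbortable);
+   *   zkw.registerListener(aZooKeeperListener);
+   *   // ... use zkw.getZooKeeper() or the ZKUtil helpers against it ...
+   *   zkw.close();
+   */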
+
+  private boolean isFinishedRetryingRecoverable(final long finished) {
+    return System.currentTimeMillis() < finished;
+  }
+
+  @Override
+  public String toString() {
+    return this.identifier;
+  }
+
+  /**
+   * Adds this instance's identifier as a prefix to the passed <code>str</code>
+   * @param str String to amend.
+   * @return A new string with this instance's identifier as prefix: e.g.
+   * if passed 'hello world', the returned string would be
+   * '&lt;identifier&gt; hello world'.
+   */
+  public String prefix(final String str) {
+    return this.toString() + " " + str;
+  }
+
+  /**
+   * Set the local variable node names using the specified configuration.
+   */
+  private void setNodeNames(Configuration conf) {
+    baseZNode = conf.get(HConstants.ZOOKEEPER_ZNODE_PARENT,
+        HConstants.DEFAULT_ZOOKEEPER_ZNODE_PARENT);
+    rootServerZNode = ZKUtil.joinZNode(baseZNode,
+        conf.get("zookeeper.znode.rootserver", "root-region-server"));
+    rsZNode = ZKUtil.joinZNode(baseZNode,
+        conf.get("zookeeper.znode.rs", "rs"));
+    masterAddressZNode = ZKUtil.joinZNode(baseZNode,
+        conf.get("zookeeper.znode.master", "master"));
+    clusterStateZNode = ZKUtil.joinZNode(baseZNode,
+        conf.get("zookeeper.znode.state", "shutdown"));
+    assignmentZNode = ZKUtil.joinZNode(baseZNode,
+        conf.get("zookeeper.znode.unassigned", "unassigned"));
+    tableZNode = ZKUtil.joinZNode(baseZNode,
+        conf.get("zookeeper.znode.tableEnableDisable", "table"));
+  }
+
+  /**
+   * Register the specified listener to receive ZooKeeper events.
+   * @param listener
+   */
+  public void registerListener(ZooKeeperListener listener) {
+    listeners.add(listener);
+  }
+
+  /**
+   * Register the specified listener to receive ZooKeeper events and add it as
+   * the first in the list of current listeners.
+   * @param listener
+   */
+  public void registerListenerFirst(ZooKeeperListener listener) {
+    listeners.add(0, listener);
+  }
+
+  /**
+   * Get the connection to ZooKeeper.
+   * @return connection reference to zookeeper
+   */
+  public ZooKeeper getZooKeeper() {
+    return zooKeeper;
+  }
+
+  /**
+   * Get the quorum address of this instance.
+   * @return quorum string of this zookeeper connection instance
+   */
+  public String getQuorum() {
+    return quorum;
+  }
+
+  /**
+   * Method called from ZooKeeper for events and connection status.
+   *
+   * Valid events are passed along to listeners.  Connection status changes
+   * are dealt with locally.
+   */
+  @Override
+  public void process(WatchedEvent event) {
+    LOG.debug(prefix("Received ZooKeeper Event, " +
+        "type=" + event.getType() + ", " +
+        "state=" + event.getState() + ", " +
+        "path=" + event.getPath()));
+
+    switch(event.getType()) {
+
+      // If event type is NONE, this is a connection status change
+      case None: {
+        connectionEvent(event);
+        break;
+      }
+
+      // Otherwise pass along to the listeners
+
+      case NodeCreated: {
+        for(ZooKeeperListener listener : listeners) {
+          listener.nodeCreated(event.getPath());
+        }
+        break;
+      }
+
+      case NodeDeleted: {
+        for(ZooKeeperListener listener : listeners) {
+          listener.nodeDeleted(event.getPath());
+        }
+        break;
+      }
+
+      case NodeDataChanged: {
+        for(ZooKeeperListener listener : listeners) {
+          listener.nodeDataChanged(event.getPath());
+        }
+        break;
+      }
+
+      case NodeChildrenChanged: {
+        for(ZooKeeperListener listener : listeners) {
+          listener.nodeChildrenChanged(event.getPath());
+        }
+        break;
+      }
+    }
+  }
+
+  // Connection management
+
+  /**
+   * Called when there is a connection-related event via the Watcher callback.
+   *
+   * If Expired, this aborts the server or client.  Disconnected events are
+   * logged and otherwise ignored.  Since we send a
+   * KeeperException.SessionExpiredException along with the abort call, it's
+   * possible for the Abortable to catch it and try to create a new session
+   * with ZooKeeper. This is what the client does in HCM.
+   *
+   * @param event
+   */
+  private void connectionEvent(WatchedEvent event) {
+    switch(event.getState()) {
+      case SyncConnected:
+        // This callback can be invoked before this.zooKeeper is set.
+        // Wait a little while.
+        long finished = System.currentTimeMillis() +
+          this.conf.getLong("hbase.zookeeper.watcher.sync.connected.wait", 2000);
+        while (System.currentTimeMillis() < finished) {
+          Threads.sleep(1);
+          if (this.zooKeeper != null) break;
+        }
+        if (this.zooKeeper == null) {
+          LOG.error("ZK is null on connection event -- see stack trace " +
+            "for the stack trace when constructor was called on this zkw",
+            this.constructorCaller);
+          throw new NullPointerException("ZK is null");
+        }
+        // Update our identifier with the session id; otherwise this event is
+        // only logged.
+        this.identifier = this.identifier + "-0x" +
+          Long.toHexString(this.zooKeeper.getSessionId());
+        LOG.debug(this.identifier + " connected");
+        break;
+
+      // Disconnected is logged and ignored; Expired aborts the server.
+      // TODO: Any reason to handle these two differently?
+      case Disconnected:
+        LOG.debug(prefix("Received Disconnected from ZooKeeper, ignoring"));
+        break;
+
+      case Expired:
+        // prefix() already prepends the identifier; don't add it twice.
+        String msg = prefix("Received Expired from ZooKeeper, aborting");
+        // TODO: One thought is to add call to ZooKeeperListener so say,
+        // ZooKeperNodeTracker can zero out its data values.
+        if (this.abortable != null) this.abortable.abort(msg,
+            new KeeperException.SessionExpiredException());
+        break;
+    }
+  }
+
+  /**
+   * Forces a synchronization of this ZooKeeper client connection.
+   * <p>
+   * Executing this method before running other methods will ensure that the
+   * subsequent operations are up-to-date and consistent as of the time that
+   * the sync is complete.
+   * <p>
+   * This is used for compareAndSwap type operations where we need to read the
+   * data of an existing node and delete or transition that node, utilizing the
+   * previously read version and data.  We want to ensure that the version read
+   * is up-to-date from when we begin the operation.
+   */
+  public void sync(String path) {
+    this.zooKeeper.sync(path, null, null);
+  }
+
+  /**
+   * Get the set of already watched unassigned nodes.
+   * @return Set of Nodes.
+   */
+  public Set<String> getNodes() {
+    return unassignedNodes;
+  }
+
+  /**
+   * Handles KeeperExceptions in client calls.
+   *
+   * This may be temporary but for now this gives one place to deal with these.
+   *
+   * TODO: Currently this method rethrows the exception to let the caller handle it.
+   *
+   * @param ke
+   * @throws KeeperException
+   */
+  public void keeperException(KeeperException ke)
+  throws KeeperException {
+    LOG.error(prefix("Received unexpected KeeperException, re-throwing exception"), ke);
+    throw ke;
+  }
+
+  /**
+   * Handles InterruptedExceptions in client calls.
+   *
+   * This may be temporary but for now this gives one place to deal with these.
+   *
+   * TODO: Currently, this method only logs and re-sets the interrupt status.
+   *       Is this ever expected to happen?  Do we abort or can we let it run?
+   *       Maybe this should be logged as WARN?  It shouldn't happen.
+   *
+   * @param ie
+   */
+  public void interruptedException(InterruptedException ie) {
+    LOG.debug(prefix("Received InterruptedException, doing nothing here"), ie);
+    // At least preserver interrupt.
+    Thread.currentThread().interrupt();
+    // no-op
+  }
+
+  /**
+   * Close the connection to ZooKeeper.  An InterruptedException during the
+   * close is swallowed after re-setting the current thread's interrupt status.
+   */
+  public void close() {
+    try {
+      if (zooKeeper != null) {
+        zooKeeper.close();
+      }
+    } catch (InterruptedException e) {
+      // Preserve the interrupt status for callers further up the stack.
+      Thread.currentThread().interrupt();
+    }
+  }
+
+  @Override
+  public void abort(String why, Throwable e) {
+    this.abortable.abort(why, e);
+  }
+}
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/io/hfile/package.html b/0.90/src/main/javadoc/org/apache/hadoop/hbase/io/hfile/package.html
new file mode 100644
index 0000000..fa9244f
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/io/hfile/package.html
@@ -0,0 +1,25 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+Provides the HBase data+index+metadata file format (HFile).
+</body>
+</html>
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/ipc/package.html b/0.90/src/main/javadoc/org/apache/hadoop/hbase/ipc/package.html
new file mode 100644
index 0000000..0e01bdc
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/ipc/package.html
@@ -0,0 +1,24 @@
+<html>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<body>
+Tools to help define network clients and servers.
+This is the Hadoop IPC code copied locally so we can fix bugs and make hbase-specific optimizations.
+</body>
+</html>
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html b/0.90/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html
new file mode 100644
index 0000000..6f1a087
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html
@@ -0,0 +1,142 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+   Copyright 2010 The Apache Software Foundation
+
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+<h1>Multi Cluster Replication</h1>
+This package provides replication between HBase clusters.
+<p>
+
+<h2>Table Of Contents</h2>
+<ol>
+    <li><a href="#status">Status</a></li>
+    <li><a href="#requirements">Requirements</a></li>
+    <li><a href="#deployment">Deployment</a></li>
+    <li><a href="#verify">Verifying Replicated Data</a></li>
+</ol>
+
+<p>
+<a name="status">
+<h2>Status</h2>
+</a>
+<p>
+This package is experimental-quality software and is only meant to be a base
+for future development. The current implementation offers the following
+features:
+
+<ol>
+    <li>Master/Slave replication limited to 1 slave cluster. </li>
+    <li>Replication of scoped families in user tables.</li>
+    <li>Start/stop replication stream.</li>
+    <li>Supports clusters of different sizes.</li>
+    <li>Handling of network partitions longer than 10 minutes.</li>
+    <li>Ability to add/remove slave clusters at runtime.</li>
+    <li>MapReduce job to compare tables on two clusters</li>
+</ol>
+Please report bugs on the project's Jira when found.
+<p>
+<a name="requirements">
+<h2>Requirements</h2>
+</a>
+<p>
+
+Before trying out replication, make sure to review the following requirements:
+
+<ol>
+    <li>ZooKeeper should be managed by you, not by HBase, and should
+    always be available during the deployment.</li>
+    <li>All machines from both clusters should be able to reach every
+    other machine since replication goes from any region server to any
+    other one on the slave cluster. That also includes the
+    ZooKeeper clusters.</li>
+    <li>Both clusters should have the same HBase and Hadoop major revision.
+    For example, having 0.90.1 on the master and 0.90.0 on the slave is
+    correct but not 0.90.1 and 0.89.20100725.</li>
+    <li>Every table that contains families scoped for replication
+    should exist on every cluster with the exact same name, and the same
+    is true for the replicated families.</li>
+</ol>
+
+<p>
+<a name="deployment">
+<h2>Deployment</h2>
+</a>
+<p>
+
+The following steps describe how to enable replication from one cluster
+to another.
+<ol>
+    <li>Edit ${HBASE_HOME}/conf/hbase-site.xml on both clusters to add
+    the following configuration:
+        <pre>
+&lt;property&gt;
+  &lt;name&gt;hbase.replication&lt;/name&gt;
+  &lt;value&gt;true&lt;/value&gt;
+&lt;/property&gt;</pre>
+    </li>
+    <li>Run the following command in the HBase shell on the master cluster
+    while it is running:
+    <pre>add_peer</pre>
+    This will show you the help needed to set up the replication stream
+    between the two clusters (an illustrative invocation is sketched after
+    this list). If both clusters use the same ZooKeeper ensemble, you have
+    to use a different <b>zookeeper.znode.parent</b> since they can't
+    write to the same folder.
+    </li>
+</ol>
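+
+As a rough sketch only, adding the slave cluster could look like the command
+below. The authoritative argument format is whatever the <code>add_peer</code>
+help above prints for this release, and the cluster key shown is a hypothetical
+example in the usual quorum:port:znode-parent form.
+<pre>
+hbase(main):001:0> add_peer "slave-zk1,slave-zk2,slave-zk3:2181:/hbase"</pre>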
+
+You can confirm that your setup works by looking at any region server's log
+on the master cluster and looking for the following lines:
+
+<pre>
+Considering 1 rs, with ratio 0.1
+Getting 1 rs from peer cluster # 0
+Choosing peer 10.10.1.49:62020</pre>
+
+In this case it indicates that 1 region server from the slave cluster
+was chosen for replication.<br><br>
+
+Should you want to stop the replication while the clusters are running, open
+the shell on the master cluster and issue this command:
+<pre>
+hbase(main):001:0> stop_replication</pre>
+
+Replication of already queued edits will still happen after you
+issue that command, but new entries won't be replicated. To start it back,
+issue the start_replication command.
+
+<p>
+
+<a name="verify">
+<h2>Verifying Replicated Data</h2>
+</a>
+<p>
+Verifying the replicated data on two clusters is easy to do in the shell when
+looking only at a few rows, but doing a systematic comparison requires more
+computing power. This is why the VerifyReplication MapReduce job was created;
+it has to be run on the master cluster and needs to be provided with a peer id
+(the one provided when establishing a replication stream) and a table name.
+Other options let you specify a time range and specific families. This job's
+short name is "verifyrep"; pass that name to "hadoop jar" along with the hbase
+jar. A sketch of such an invocation follows.
+</p>
+
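+As a rough illustration only: the option names, the time-range values, the
+peer id "1" and the table name "TestTable" below are assumptions and
+placeholders, not values taken from this release's documentation.
+<pre>
+$ hadoop jar $HBASE_HOME/hbase-*.jar verifyrep \
+      --starttime=1265875194289 --stoptime=1265878794289 \
+      --families=f1 1 TestTable</pre>
+The job's counters (named something like GOODROWS and BADROWS) then indicate
+how many rows matched or differed between the two clusters.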
+</body>
+</html>
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/Hbase.html b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/Hbase.html
new file mode 100644
index 0000000..992d878
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/Hbase.html
@@ -0,0 +1,608 @@
+<html><head>
+<link href="style.css" rel="stylesheet" type="text/css"/>
+<title>Thrift module: Hbase</title></head><body>
+<h1>Thrift module: Hbase</h1>
+<table><tr><th>Module</th><th>Services</th><th>Data types</th><th>Constants</th></tr>
+<tr>
+<td>Hbase</td><td><a href="Hbase.html#Svc_Hbase">Hbase</a><br/>
+<ul>
+<li><a href="Hbase.html#Fn_Hbase_atomicIncrement">atomicIncrement</a></li>
+<li><a href="Hbase.html#Fn_Hbase_compact">compact</a></li>
+<li><a href="Hbase.html#Fn_Hbase_createTable">createTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAll">deleteAll</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAllRow">deleteAllRow</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAllRowTs">deleteAllRowTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAllTs">deleteAllTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteTable">deleteTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_disableTable">disableTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_enableTable">enableTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_get">get</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getColumnDescriptors">getColumnDescriptors</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRow">getRow</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRowTs">getRowTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRowWithColumns">getRowWithColumns</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRowWithColumnsTs">getRowWithColumnsTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getTableNames">getTableNames</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getTableRegions">getTableRegions</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getVer">getVer</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getVerTs">getVerTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_isTableEnabled">isTableEnabled</a></li>
+<li><a href="Hbase.html#Fn_Hbase_majorCompact">majorCompact</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRow">mutateRow</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRowTs">mutateRowTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRows">mutateRows</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRowsTs">mutateRowsTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerClose">scannerClose</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerGet">scannerGet</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerGetList">scannerGetList</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpen">scannerOpen</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenTs">scannerOpenTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenWithPrefix">scannerOpenWithPrefix</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenWithStop">scannerOpenWithStop</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenWithStopTs">scannerOpenWithStopTs</a></li>
+</ul>
+</td>
+<td><a href="Hbase.html#Struct_AlreadyExists">AlreadyExists</a><br/>
+<a href="Hbase.html#Struct_BatchMutation">BatchMutation</a><br/>
+<a href="Hbase.html#Typedef_Bytes">Bytes</a><br/>
+<a href="Hbase.html#Struct_ColumnDescriptor">ColumnDescriptor</a><br/>
+<a href="Hbase.html#Struct_IOError">IOError</a><br/>
+<a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a><br/>
+<a href="Hbase.html#Struct_Mutation">Mutation</a><br/>
+<a href="Hbase.html#Typedef_ScannerID">ScannerID</a><br/>
+<a href="Hbase.html#Struct_TCell">TCell</a><br/>
+<a href="Hbase.html#Struct_TRegionInfo">TRegionInfo</a><br/>
+<a href="Hbase.html#Struct_TRowResult">TRowResult</a><br/>
+<a href="Hbase.html#Typedef_Text">Text</a><br/>
+</td>
+<td><code></code></td>
+</tr></table>
+<hr/><h2 id="Typedefs">Type declarations</h2>
+<div class="definition"><h3 id="Typedef_Text">Typedef: Text</h3>
+<p><strong>Base type:</strong>&nbsp;<code>string</code></p>
+</div>
+<div class="definition"><h3 id="Typedef_Bytes">Typedef: Bytes</h3>
+<p><strong>Base type:</strong>&nbsp;<code>string</code></p>
+</div>
+<div class="definition"><h3 id="Typedef_ScannerID">Typedef: ScannerID</h3>
+<p><strong>Base type:</strong>&nbsp;<code>i32</code></p>
+</div>
+<hr/><h2 id="Structs">Data structures</h2>
+<div class="definition"><h3 id="Struct_TCell">Struct: TCell</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>value</td><td><code><a href="Hbase.html#Typedef_Bytes">Bytes</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>timestamp</td><td><code>i64</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>TCell - Used to transport a cell value (byte[]) and the timestamp it was
+stored with together as a result for get and getRow methods. This promotes
+the timestamp of a cell to a first-class value, making it easy to take
+note of temporal data. Cell is used all the way from HStore up to HTable.
+<br/></div><div class="definition"><h3 id="Struct_ColumnDescriptor">Struct: ColumnDescriptor</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>name</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>maxVersions</td><td><code>i32</code></td><td></td><td>yes</td><td>3</td></tr>
+<tr><td>compression</td><td><code>string</code></td><td></td><td>yes</td><td>"NONE"</td></tr>
+<tr><td>inMemory</td><td><code>bool</code></td><td></td><td>yes</td><td>0</td></tr>
+<tr><td>bloomFilterType</td><td><code>string</code></td><td></td><td>yes</td><td>"NONE"</td></tr>
+<tr><td>bloomFilterVectorSize</td><td><code>i32</code></td><td></td><td>yes</td><td>0</td></tr>
+<tr><td>bloomFilterNbHashes</td><td><code>i32</code></td><td></td><td>yes</td><td>0</td></tr>
+<tr><td>blockCacheEnabled</td><td><code>bool</code></td><td></td><td>yes</td><td>0</td></tr>
+<tr><td>timeToLive</td><td><code>i32</code></td><td></td><td>yes</td><td>-1</td></tr>
+</table><br/>An HColumnDescriptor contains information about a column family
+such as the number of versions, compression settings, etc. It is
+used as input when creating a table or adding a column.
+<br/></div><div class="definition"><h3 id="Struct_TRegionInfo">Struct: TRegionInfo</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>startKey</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>endKey</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>id</td><td><code>i64</code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>name</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>version</td><td><code>byte</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>A TRegionInfo contains information about an HTable region.
+<br/></div><div class="definition"><h3 id="Struct_Mutation">Struct: Mutation</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>isDelete</td><td><code>bool</code></td><td></td><td>yes</td><td>0</td></tr>
+<tr><td>column</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>value</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>A Mutation object is used to either update or delete a column-value.
+<br/></div><div class="definition"><h3 id="Struct_BatchMutation">Struct: BatchMutation</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>row</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>mutations</td><td><code>list&lt;<code><a href="Hbase.html#Struct_Mutation">Mutation</a></code>&gt;</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>A BatchMutation object is used to apply a number of Mutations to a single row.
+<br/></div><div class="definition"><h3 id="Struct_TRowResult">Struct: TRowResult</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>row</td><td><code><a href="Hbase.html#Typedef_Text">Text</a></code></td><td></td><td>yes</td><td></td></tr>
+<tr><td>columns</td><td><code>map&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>, <code><a href="Hbase.html#Struct_TCell">TCell</a></code>&gt;</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>Holds row name and then a map of columns to cells.
+<br/></div><div class="definition"><h3 id="Struct_IOError">Exception: IOError</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>message</td><td><code>string</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>An IOError exception signals that an error occurred communicating
+to the Hbase master or an Hbase region server.  Also used to return
+more general Hbase error conditions.
+<br/></div><div class="definition"><h3 id="Struct_IllegalArgument">Exception: IllegalArgument</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>message</td><td><code>string</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>An IllegalArgument exception indicates an illegal or invalid
+argument was passed into a procedure.
+<br/></div><div class="definition"><h3 id="Struct_AlreadyExists">Exception: AlreadyExists</h3>
+<table><tr><th>Field</th><th>Type</th><th>Description</th><th>Required</th><th>Default value</th></tr>
+<tr><td>message</td><td><code>string</code></td><td></td><td>yes</td><td></td></tr>
+</table><br/>An AlreadyExists exception signals that a table with the specified
+name already exists.
+<br/></div><hr/><h2 id="Services">Services</h2>
+<h3 id="Svc_Hbase">Service: Hbase</h3>
+<div class="definition"><h4 id="Fn_Hbase_enableTable">Function: Hbase.enableTable</h4>
+<pre><code>void</code> enableTable(<code><a href="Hbase.html#Typedef_Bytes">Bytes</a></code> tableName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Brings a table on-line (enables it)
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of the table
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_disableTable">Function: Hbase.disableTable</h4>
+<pre><code>void</code> disableTable(<code><a href="Hbase.html#Typedef_Bytes">Bytes</a></code> tableName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Disables a table (takes it off-line). If it is being served, the master
+will tell the servers to stop serving it.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of the table
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_isTableEnabled">Function: Hbase.isTableEnabled</h4>
+<pre><code>bool</code> isTableEnabled(<code><a href="Hbase.html#Typedef_Bytes">Bytes</a></code> tableName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>@return true if table is on-line
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of the table to check
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_compact">Function: Hbase.compact</h4>
+<pre><code>void</code> compact(<code><a href="Hbase.html#Typedef_Bytes">Bytes</a></code> tableNameOrRegionName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableNameOrRegionName</td><td></td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_majorCompact">Function: Hbase.majorCompact</h4>
+<pre><code>void</code> majorCompact(<code><a href="Hbase.html#Typedef_Bytes">Bytes</a></code> tableNameOrRegionName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableNameOrRegionName</td><td></td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getTableNames">Function: Hbase.getTableNames</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> getTableNames()
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>List all the userspace tables.
+<p/>
+@return returns a list of names
+<br/></div><div class="definition"><h4 id="Fn_Hbase_getColumnDescriptors">Function: Hbase.getColumnDescriptors</h4>
+<pre><code>map&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>, <code><a href="Hbase.html#Struct_ColumnDescriptor">ColumnDescriptor</a></code>&gt;</code> getColumnDescriptors(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>List all the column families associated with a table.
+<p/>
+@return list of column family descriptors
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>table name
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getTableRegions">Function: Hbase.getTableRegions</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRegionInfo">TRegionInfo</a></code>&gt;</code> getTableRegions(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>List the regions associated with a table.
+<p/>
+@return list of region descriptors
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>table name
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_createTable">Function: Hbase.createTable</h4>
+<pre><code>void</code> createTable(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                 <code>list&lt;<code><a href="Hbase.html#Struct_ColumnDescriptor">ColumnDescriptor</a></code>&gt;</code> columnFamilies)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>, <code><a href="Hbase.html#Struct_AlreadyExists">AlreadyExists</a></code>
+</pre>Create a table with the specified column families.  The name
+field for each ColumnDescriptor must be set and must end in a
+colon (:). All other fields are optional and will get default
+values if not explicitly specified.
+<p/>
+@throws IllegalArgument if an input parameter is invalid
+<p/>
+@throws AlreadyExists if the table name already exists
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table to create
+</td></tr><tr><td>columnFamilies</td><td>list of column family descriptors
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_deleteTable">Function: Hbase.deleteTable</h4>
+<pre><code>void</code> deleteTable(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Deletes a table
+<p/>
+@throws IOError if table doesn't exist on server or there was some other
+problem
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table to delete
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_get">Function: Hbase.get</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TCell">TCell</a></code>&gt;</code> get(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                <code><a href="Hbase.html#Typedef_Text">Text</a></code> column)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get a single TCell for the specified table, row, and column at the
+latest timestamp. Returns an empty list if no such value exists.
+<p/>
+@return value for specified row/column
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>column</td><td>column name
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getVer">Function: Hbase.getVer</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TCell">TCell</a></code>&gt;</code> getVer(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                   <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                   <code><a href="Hbase.html#Typedef_Text">Text</a></code> column,
+                   <code>i32</code> numVersions)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get the specified number of versions for the specified table,
+row, and column.
+<p/>
+@return list of cells for specified row/column
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>column</td><td>column name
+</td></tr><tr><td>numVersions</td><td>number of versions to retrieve
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getVerTs">Function: Hbase.getVerTs</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TCell">TCell</a></code>&gt;</code> getVerTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                     <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                     <code><a href="Hbase.html#Typedef_Text">Text</a></code> column,
+                     <code>i64</code> timestamp,
+                     <code>i32</code> numVersions)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get the specified number of versions for the specified table,
+row, and column.  Only versions less than or equal to the specified
+timestamp will be returned.
+<p/>
+@return list of cells for specified row/column
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>column</td><td>column name
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr><tr><td>numVersions</td><td>number of versions to retrieve
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getRow">Function: Hbase.getRow</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRowResult">TRowResult</a></code>&gt;</code> getRow(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                        <code><a href="Hbase.html#Typedef_Text">Text</a></code> row)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get all the data for the specified table and row at the latest
+timestamp. Returns an empty list if the row does not exist.
+<p/>
+@return TRowResult containing the row and map of columns to TCells
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getRowWithColumns">Function: Hbase.getRowWithColumns</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRowResult">TRowResult</a></code>&gt;</code> getRowWithColumns(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                                   <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                                   <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get the specified columns for the specified table and row at the latest
+timestamp. Returns an empty list if the row does not exist.
+<p/>
+@return TRowResult containing the row and map of columns to TCells
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>columns</td><td>List of columns to return, null for all columns
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getRowTs">Function: Hbase.getRowTs</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRowResult">TRowResult</a></code>&gt;</code> getRowTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                          <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                          <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get all the data for the specified table and row at the specified
+timestamp. Returns an empty list if the row does not exist.
+<p/>
+@return TRowResult containing the row and map of columns to TCells
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of the table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_getRowWithColumnsTs">Function: Hbase.getRowWithColumnsTs</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRowResult">TRowResult</a></code>&gt;</code> getRowWithColumnsTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                                     <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                                     <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns,
+                                     <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get the specified columns for the specified table and row at the specified
+timestamp. Returns an empty list if the row does not exist.
+<p/>
+@return TRowResult containing the row and map of columns to TCells
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>columns</td><td>List of columns to return, null for all columns
+</td></tr><tr><td>timestamp</td><td></td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_mutateRow">Function: Hbase.mutateRow</h4>
+<pre><code>void</code> mutateRow(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+               <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+               <code>list&lt;<code><a href="Hbase.html#Struct_Mutation">Mutation</a></code>&gt;</code> mutations)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Apply a series of mutations (updates/deletes) to a row in a
+single transaction.  If an exception is thrown, then the
+transaction is aborted.  Default current timestamp is used, and
+all entries will have an identical timestamp.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>mutations</td><td>list of mutation commands
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_mutateRowTs">Function: Hbase.mutateRowTs</h4>
+<pre><code>void</code> mutateRowTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                 <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                 <code>list&lt;<code><a href="Hbase.html#Struct_Mutation">Mutation</a></code>&gt;</code> mutations,
+                 <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Apply a series of mutations (updates/deletes) to a row in a
+single transaction.  If an exception is thrown, then the
+transaction is aborted.  The specified timestamp is used, and
+all entries will have an identical timestamp.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row key
+</td></tr><tr><td>mutations</td><td>list of mutation commands
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_mutateRows">Function: Hbase.mutateRows</h4>
+<pre><code>void</code> mutateRows(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                <code>list&lt;<code><a href="Hbase.html#Struct_BatchMutation">BatchMutation</a></code>&gt;</code> rowBatches)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Apply a series of batches (each a series of mutations on a single row)
+in a single transaction.  If an exception is thrown, then the
+transaction is aborted.  Default current timestamp is used, and
+all entries will have an identical timestamp.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>rowBatches</td><td>list of row batches
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_mutateRowsTs">Function: Hbase.mutateRowsTs</h4>
+<pre><code>void</code> mutateRowsTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                  <code>list&lt;<code><a href="Hbase.html#Struct_BatchMutation">BatchMutation</a></code>&gt;</code> rowBatches,
+                  <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Apply a series of batches (each a series of mutations on a single row)
+in a single transaction.  If an exception is thrown, then the
+transaction is aborted.  The specified timestamp is used, and
+all entries will have an identical timestamp.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>rowBatches</td><td>list of row batches
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_atomicIncrement">Function: Hbase.atomicIncrement</h4>
+<pre><code>i64</code> atomicIncrement(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                    <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                    <code><a href="Hbase.html#Typedef_Text">Text</a></code> column,
+                    <code>i64</code> value)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Atomically increment the column value specified.  Returns the next value post increment.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>row to increment
+</td></tr><tr><td>column</td><td>name of column
+</td></tr><tr><td>value</td><td>amount to increment by
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_deleteAll">Function: Hbase.deleteAll</h4>
+<pre><code>void</code> deleteAll(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+               <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+               <code><a href="Hbase.html#Typedef_Text">Text</a></code> column)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Delete all cells that match the passed row and column.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>Row to update
+</td></tr><tr><td>column</td><td>name of column whose value is to be deleted
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_deleteAllTs">Function: Hbase.deleteAllTs</h4>
+<pre><code>void</code> deleteAllTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                 <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                 <code><a href="Hbase.html#Typedef_Text">Text</a></code> column,
+                 <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Delete all cells that match the passed row and column and whose
+timestamp is equal-to or older than the passed timestamp.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>Row to update
+</td></tr><tr><td>column</td><td>name of column whose value is to be deleted
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_deleteAllRow">Function: Hbase.deleteAllRow</h4>
+<pre><code>void</code> deleteAllRow(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                  <code><a href="Hbase.html#Typedef_Text">Text</a></code> row)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Completely delete the row's cells.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>key of the row to be completely deleted.
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_deleteAllRowTs">Function: Hbase.deleteAllRowTs</h4>
+<pre><code>void</code> deleteAllRowTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                    <code><a href="Hbase.html#Typedef_Text">Text</a></code> row,
+                    <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Completely delete the row's cells marked with a timestamp
+equal-to or older than the passed timestamp.
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>row</td><td>key of the row to be completely deleted.
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerOpen">Function: Hbase.scannerOpen</h4>
+<pre><code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> scannerOpen(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                      <code><a href="Hbase.html#Typedef_Text">Text</a></code> startRow,
+                      <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get a scanner on the current table starting at the specified row and
+ending at the last row in the table.  Return the specified columns.
+<p/>
+@return scanner id to be used with other scanner procedures
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>startRow</td><td>Starting row in table to scan.
+Send "" (empty string) to start at the first row.
+</td></tr><tr><td>columns</td><td>columns to scan. If column name is a column family, all
+columns of the specified column family are returned. It's also possible
+to pass a regex in the column qualifier.
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerOpenWithStop">Function: Hbase.scannerOpenWithStop</h4>
+<pre><code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> scannerOpenWithStop(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                              <code><a href="Hbase.html#Typedef_Text">Text</a></code> startRow,
+                              <code><a href="Hbase.html#Typedef_Text">Text</a></code> stopRow,
+                              <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get a scanner on the current table starting and stopping at the
+specified rows.  Return the specified columns.
+<p/>
+@return scanner id to be used with other scanner procedures
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>startRow</td><td>Starting row in table to scan.
+Send "" (empty string) to start at the first row.
+</td></tr><tr><td>stopRow</td><td>row to stop scanning on. This row is *not* included in the
+scanner's results
+</td></tr><tr><td>columns</td><td>columns to scan. If column name is a column family, all
+columns of the specified column family are returned. It's also possible
+to pass a regex in the column qualifier.
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerOpenWithPrefix">Function: Hbase.scannerOpenWithPrefix</h4>
+<pre><code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> scannerOpenWithPrefix(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                                <code><a href="Hbase.html#Typedef_Text">Text</a></code> startAndPrefix,
+                                <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Open a scanner for a given prefix.  That is, all rows will have the specified
+prefix, and no other rows will be returned.
+<p/>
+@return scanner id to use with other scanner calls
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>startAndPrefix</td><td>the prefix (and thus start row) of the keys you want
+</td></tr><tr><td>columns</td><td>the columns you want returned
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerOpenTs">Function: Hbase.scannerOpenTs</h4>
+<pre><code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> scannerOpenTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                        <code><a href="Hbase.html#Typedef_Text">Text</a></code> startRow,
+                        <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns,
+                        <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get a scanner on the current table starting at the specified row and
+ending at the last row in the table.  Return the specified columns.
+Only values with the specified timestamp are returned.
+<p/>
+@return scanner id to be used with other scanner procedures
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>startRow</td><td>Starting row in table to scan.
+Send "" (empty string) to start at the first row.
+</td></tr><tr><td>columns</td><td>columns to scan. If column name is a column family, all
+columns of the specified column family are returned. It's also possible
+to pass a regex in the column qualifier.
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerOpenWithStopTs">Function: Hbase.scannerOpenWithStopTs</h4>
+<pre><code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> scannerOpenWithStopTs(<code><a href="Hbase.html#Typedef_Text">Text</a></code> tableName,
+                                <code><a href="Hbase.html#Typedef_Text">Text</a></code> startRow,
+                                <code><a href="Hbase.html#Typedef_Text">Text</a></code> stopRow,
+                                <code>list&lt;<code><a href="Hbase.html#Typedef_Text">Text</a></code>&gt;</code> columns,
+                                <code>i64</code> timestamp)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>
+</pre>Get a scanner on the current table starting and stopping at the
+specified rows.  Return the specified columns.  Only values with the
+specified timestamp are returned.
+<p/>
+@return scanner id to be used with other scanner procedures
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>tableName</td><td>name of table
+</td></tr><tr><td>startRow</td><td>Starting row in table to scan.
+Send "" (empty string) to start at the first row.
+</td></tr><tr><td>stopRow</td><td>row to stop scanning on. This row is *not* included in the
+scanner's results
+</td></tr><tr><td>columns</td><td>columns to scan. If column name is a column family, all
+columns of the specified column family are returned. It's also possible
+to pass a regex in the column qualifier.
+</td></tr><tr><td>timestamp</td><td>timestamp
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerGet">Function: Hbase.scannerGet</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRowResult">TRowResult</a></code>&gt;</code> scannerGet(<code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> id)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Returns the scanner's current row value and advances to the next
+row in the table.  When there are no more rows in the table, or a key
+greater-than-or-equal-to the scanner's specified stopRow is reached,
+an empty list is returned.
+<p/>
+@return a TRowResult containing the current row and a map of the columns to TCells.
+<p/>
+@throws IllegalArgument if ScannerID is invalid
+<p/>
+@throws NotFound when the scanner reaches the end
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>id</td><td>id of a scanner returned by scannerOpen
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerGetList">Function: Hbase.scannerGetList</h4>
+<pre><code>list&lt;<code><a href="Hbase.html#Struct_TRowResult">TRowResult</a></code>&gt;</code> scannerGetList(<code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> id,
+                                <code>i32</code> nbRows)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Returns, starting at the scanner's current row, nbRows worth of
+rows and advances to the next row in the table.  When there are no more
+rows in the table, or a key greater-than-or-equal-to the scanner's
+specified stopRow is reached, an empty list is returned.
+<p/>
+@return a TRowResult containing the current row and a map of the columns to TCells.
+<p/>
+@throws IllegalArgument if ScannerID is invalid
+<p/>
+@throws NotFound when the scanner reaches the end
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>id</td><td>id of a scanner returned by scannerOpen
+</td></tr><tr><td>nbRows</td><td>number of results to return
+</td></tr></table><br/>
+</div><div class="definition"><h4 id="Fn_Hbase_scannerClose">Function: Hbase.scannerClose</h4>
+<pre><code>void</code> scannerClose(<code><a href="Hbase.html#Typedef_ScannerID">ScannerID</a></code> id)
+    throws <code><a href="Hbase.html#Struct_IOError">IOError</a></code>, <code><a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a></code>
+</pre>Closes the server-state associated with an open scanner.
+<p/>
+@throws IllegalArgument if ScannerID is invalid
+<br/><br/><b> Parameters</b><br/>
+<table><tr><th>Name</th><th>Description</th></tr>
+<tr><td>id</td><td>id of a scanner returned by scannerOpen
+</td></tr></table><br/>
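+</div><div class="definition"><h4 id="Example_Scanner_Usage">Example: typical scanner usage (sketch)</h4>
+<p>The following Java fragment is an illustrative sketch (it is not generated from
+Hbase.thrift) of how the scanner functions above are normally combined: open a
+scanner, drain it in batches with scannerGetList, and always close it.  It
+assumes a connected <code>Hbase.Client</code> named <code>client</code> and
+byte[] values for the table, rows, and columns (the Text typedef is binary).</p>
+<pre>
+  int scannerId = client.scannerOpenWithStop(table, startRow, stopRow, columns);
+  try {
+    List&lt;TRowResult&gt; batch;
+    // scannerGetList returns an empty list once the stopRow (or table end) is reached.
+    while (!(batch = client.scannerGetList(scannerId, 100)).isEmpty()) {
+      for (TRowResult row : batch) {
+        // row.row is the row key; row.columns maps column names to TCells.
+      }
+    }
+  } finally {
+    // Release the server-side scanner state.
+    client.scannerClose(scannerId);
+  }
+</pre>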
+</div></body></html>
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/index.html b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/index.html
new file mode 100644
index 0000000..7d8eef1
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/index.html
@@ -0,0 +1,60 @@
+<html><head>
+<link href="style.css" rel="stylesheet" type="text/css"/>
+<title>All Thrift declarations</title></head><body>
+<h1>All Thrift declarations</h1>
+<table><tr><th>Module</th><th>Services</th><th>Data types</th><th>Constants</th></tr>
+<tr>
+<td>Hbase</td><td><a href="Hbase.html#Svc_Hbase">Hbase</a><br/>
+<ul>
+<li><a href="Hbase.html#Fn_Hbase_atomicIncrement">atomicIncrement</a></li>
+<li><a href="Hbase.html#Fn_Hbase_compact">compact</a></li>
+<li><a href="Hbase.html#Fn_Hbase_createTable">createTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAll">deleteAll</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAllRow">deleteAllRow</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAllRowTs">deleteAllRowTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteAllTs">deleteAllTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_deleteTable">deleteTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_disableTable">disableTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_enableTable">enableTable</a></li>
+<li><a href="Hbase.html#Fn_Hbase_get">get</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getColumnDescriptors">getColumnDescriptors</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRow">getRow</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRowTs">getRowTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRowWithColumns">getRowWithColumns</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getRowWithColumnsTs">getRowWithColumnsTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getTableNames">getTableNames</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getTableRegions">getTableRegions</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getVer">getVer</a></li>
+<li><a href="Hbase.html#Fn_Hbase_getVerTs">getVerTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_isTableEnabled">isTableEnabled</a></li>
+<li><a href="Hbase.html#Fn_Hbase_majorCompact">majorCompact</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRow">mutateRow</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRowTs">mutateRowTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRows">mutateRows</a></li>
+<li><a href="Hbase.html#Fn_Hbase_mutateRowsTs">mutateRowsTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerClose">scannerClose</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerGet">scannerGet</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerGetList">scannerGetList</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpen">scannerOpen</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenTs">scannerOpenTs</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenWithPrefix">scannerOpenWithPrefix</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenWithStop">scannerOpenWithStop</a></li>
+<li><a href="Hbase.html#Fn_Hbase_scannerOpenWithStopTs">scannerOpenWithStopTs</a></li>
+</ul>
+</td>
+<td><a href="Hbase.html#Struct_AlreadyExists">AlreadyExists</a><br/>
+<a href="Hbase.html#Struct_BatchMutation">BatchMutation</a><br/>
+<a href="Hbase.html#Typedef_Bytes">Bytes</a><br/>
+<a href="Hbase.html#Struct_ColumnDescriptor">ColumnDescriptor</a><br/>
+<a href="Hbase.html#Struct_IOError">IOError</a><br/>
+<a href="Hbase.html#Struct_IllegalArgument">IllegalArgument</a><br/>
+<a href="Hbase.html#Struct_Mutation">Mutation</a><br/>
+<a href="Hbase.html#Typedef_ScannerID">ScannerID</a><br/>
+<a href="Hbase.html#Struct_TCell">TCell</a><br/>
+<a href="Hbase.html#Struct_TRegionInfo">TRegionInfo</a><br/>
+<a href="Hbase.html#Struct_TRowResult">TRowResult</a><br/>
+<a href="Hbase.html#Typedef_Text">Text</a><br/>
+</td>
+<td><code></code></td>
+</tr></table>
+</body></html>
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/style.css b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/style.css
new file mode 100644
index 0000000..6dc2f22
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/doc-files/style.css
@@ -0,0 +1,10 @@
+/* Auto-generated CSS for generated Thrift docs */
+body { font-family: Tahoma, sans-serif; }
+pre { background-color: #dddddd; padding: 6px; }
+h3,h4 { padding-top: 0px; margin-top: 0px; }
+div.definition { border: 1px solid gray; margin: 10px; padding: 10px; }
+div.extends { margin: -0.5em 0 1em 5em }
+table { border: 1px solid grey; border-collapse: collapse; }
+td { border: 1px solid grey; padding: 1px 6px; vertical-align: top; }
+th { border: 1px solid black; background-color: #bbbbbb;
+     text-align: left; padding: 1px 6px; }
diff --git a/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/package.html b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/package.html
new file mode 100644
index 0000000..f41a9ab
--- /dev/null
+++ b/0.90/src/main/javadoc/org/apache/hadoop/hbase/thrift/package.html
@@ -0,0 +1,108 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
+<html>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<head />
+<body bgcolor="white">
+Provides an HBase <a href="http://incubator.apache.org/thrift/">Thrift</a>
+service.
+
+This directory contains a Thrift interface definition file for an Hbase RPC
+service and a Java server implementation.
+
+<h2><a name="whatisthrift">What is Thrift?</a></h2> 
+<p><blockquote>"Thrift is a software framework for scalable cross-language services development.
+It combines a software stack with a code generation engine to build services
+that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby,
+Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml."</blockquote>
+
+<h2><a name="description">Description</a></h2>
+
+<p><i>Important note:</i> This Thrift interface is <i>deprecated</i> and scheduled for removal in HBase 0.22.
+A new version, matching the client API introduced in HBase 0.21, can be found
+in the <code>contrib</code> directory.
+</p>
+
+<p>The {@link org.apache.hadoop.hbase.thrift.generated.Hbase.Iface HBase API} is defined in the
+file <a href="doc-files/index.html">Hbase.thrift</a> (follow the link to see the
+Thrift-generated documentation of the interface). A server-side implementation of the API is in
+{@link org.apache.hadoop.hbase.thrift.ThriftServer}. The generated interfaces,
+types, and RPC utility files reside in the
+{@link org.apache.hadoop.hbase.thrift.generated} package.
+</p>
+
+<p>To start ThriftServer, use:
+<pre>
+  ./bin/hbase-daemon.sh start thrift
+</pre>
+
+<p>To stop, use:
+<pre>
+  ./bin/hbase-daemon.sh stop thrift
+</pre>
+
+These are the command line arguments the Thrift server understands in addition to <code>start</code> and <code>stop</code>:
+<dl>
+    <dt><code>-b, --bind</code></dt>
+    <dd>Address to bind the Thrift server to. Not supported by the Nonblocking and HsHa server [default: <code>0.0.0.0</code>]</dd>
+
+    <dt><code>-p, --port</code></dt>
+    <dd>Port to bind to [default: <code>9090</code>]</dd>
+
+    <dt><code>-f, --framed</code></dt>
+    <dd>Use framed transport (implied when using one of the non-blocking servers)</dd>
+
+    <dt><code>-c, --compact</code></dt>
+    <dd>Use the compact protocol [default: binary protocol]</dd>
+
+    <dt><code>-h, --help</code></dt>
+    <dd>Displays usage information for the Thrift server</dd>
+
+    <dt><code>-threadpool</code></dt>
+    <dd>Use the TThreadPoolServer. This is the default.</dd>
+
+    <dt><code>-hsha</code></dt>
+    <dd>Use the THsHaServer. This implies the framed transport.</dd>
+
+    <dt><code>-nonblocking</code></dt>
+    <dd>Use the TNonblockingServer. This implies the framed transport.</dd>
+</dl>
+
+<p><i>Important note:</i> The <code>bind</code> option only works with the default ThreadPoolServer.
+This will be fixed in the next Thrift version. See <a href="https://issues.apache.org/jira/browse/HBASE-2155">HBASE-2155</a>
+for more details on this issue.
+
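+<p>As a minimal, illustrative sketch (the class name and host are made up, not
+part of the shipped examples), a Java client built against the Thrift 0.2.0
+runtime and the generated <code>Hbase</code> classes can connect to a running
+ThriftServer and list the tables like this:</p>
+<pre>
+  import java.util.List;
+  import org.apache.hadoop.hbase.thrift.generated.Hbase;
+  import org.apache.thrift.protocol.TBinaryProtocol;
+  import org.apache.thrift.protocol.TProtocol;
+  import org.apache.thrift.transport.TSocket;
+  import org.apache.thrift.transport.TTransport;
+
+  public class ThriftClientSketch {
+    public static void main(String[] args) throws Exception {
+      // Connect to a ThriftServer started with the defaults shown above.
+      TTransport transport = new TSocket("localhost", 9090);
+      TProtocol protocol = new TBinaryProtocol(transport);
+      Hbase.Client client = new Hbase.Client(protocol);
+      transport.open();
+      try {
+        // Text is declared as binary in Hbase.thrift, so table names arrive as byte[].
+        List&lt;byte[]&gt; tables = client.getTableNames();
+        for (byte[] name : tables) {
+          System.out.println(new String(name, "UTF-8"));
+        }
+      } finally {
+        transport.close();
+      }
+    }
+  }
+</pre>
+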
+<h3><a name="details">Details</a></h3>
+
+<p>HBase currently uses version 0.2.0 of Apache Thrift.</p>
+
+<p>The files were generated by running the commands:
+<pre>
+  thrift -strict --gen java:hashcode Hbase.thrift
+  mv gen-java/org/apache/hadoop/hbase/thrift/generated .
+  rm -rf gen-java
+</pre>
+
+<p>The 'thrift' binary is the Thrift compiler; it and the language-specific
+runtime libraries are distributed as part of the Thrift package. A version of
+the Java runtime is checked into SVN under the <code>hbase/lib</code>
+directory.</p>
+
+</body>
+</html>
diff --git a/0.90/src/main/javadoc/overview.html b/0.90/src/main/javadoc/overview.html
new file mode 100644
index 0000000..e79d715
--- /dev/null
+++ b/0.90/src/main/javadoc/overview.html
@@ -0,0 +1,57 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<html>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<head>
+    <title>HBase</title>
+</head>
+<body bgcolor="white">
+<a href="http://hbase.org">HBase</a> is a scalable, distributed database built on <a href="http://hadoop.apache.org/core">Hadoop Core</a>.
+
+<h2>Table of Contents</h2>
+<ul>
+<li>
+  <a href="#getting_started" >Getting Started</a>
+</li>
+<li><a href="#client_example">Example API Usage</a></li>
+<li><a href="#related" >Related Documentation</a></li>
+</ul>
+
+
+<h2><a name="getting_started" >Getting Started</a></h2>
+<p>See the <a href="../book.html#getting_started">Getting Started</a>
+section of the <a href="../book.html">HBase Book</a>.
+</p>
+
+<h2><a name="client_example">Example API Usage</a></h2>
+<p>For sample Java code, see <a href="org/apache/hadoop/hbase/client/package-summary.html#package_description">org.apache.hadoop.hbase.client</a> documentation.</p>
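+
+<p>As a quick, illustrative sketch (the table and column names below are made
+up; see the client package documentation for complete, maintained examples), a
+basic put and get with the Java client looks like:</p>
+<pre>
+  import org.apache.hadoop.conf.Configuration;
+  import org.apache.hadoop.hbase.HBaseConfiguration;
+  import org.apache.hadoop.hbase.client.Get;
+  import org.apache.hadoop.hbase.client.HTable;
+  import org.apache.hadoop.hbase.client.Put;
+  import org.apache.hadoop.hbase.client.Result;
+  import org.apache.hadoop.hbase.util.Bytes;
+
+  public class PutGetSketch {
+    public static void main(String[] args) throws Exception {
+      // Reads hbase-site.xml / hbase-default.xml from the classpath.
+      Configuration conf = HBaseConfiguration.create();
+      HTable table = new HTable(conf, "myTable");
+      try {
+        // Write one cell: row "row1", family "cf", qualifier "q", value "value1".
+        Put put = new Put(Bytes.toBytes("row1"));
+        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value1"));
+        table.put(put);
+
+        // Read it back.
+        Get get = new Get(Bytes.toBytes("row1"));
+        Result result = table.get(get);
+        byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
+        System.out.println(Bytes.toString(value));
+      } finally {
+        table.close();
+      }
+    }
+  }
+</pre>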
+
+<p>If your client is NOT Java, consider the Thrift or REST libraries.</p>
+
+<h2><a name="related" >Related Documentation</a></h2>
+<ul>
+  <li><a href="http://hbase.org">HBase Home Page</a> </li>
+  <li><a href="http://hbase.org/docs/current/book.html">HBase Book</a> </li>
+  <li><a href="http://wiki.apache.org/hadoop/Hbase">HBase Wiki</a> </li>
+  <li><a href="http://hadoop.apache.org/">Hadoop Home Page</a> </li>
+  </li>
+</ul>
+
+</body>
+</html>
diff --git a/0.90/src/main/resources/hbase-default.xml b/0.90/src/main/resources/hbase-default.xml
new file mode 100644
index 0000000..115936f
--- /dev/null
+++ b/0.90/src/main/resources/hbase-default.xml
@@ -0,0 +1,572 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>hbase.rootdir</name>
+    <value>file:///tmp/hbase-${user.name}/hbase</value>
+    <description>The directory shared by region servers and into
+    which HBase persists.  The URL should be 'fully-qualified'
+    to include the filesystem scheme.  For example, to specify the
+    HDFS directory '/hbase' where the HDFS instance's namenode is
+    running at namenode.example.org on port 9000, set this value to:
+    hdfs://namenode.example.org:9000/hbase.  By default HBase writes
+    into /tmp.  Change this configuration, or all data will be lost
+    on machine restart.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.port</name>
+    <value>60000</value>
+    <description>The port the HBase Master should bind to.</description>
+  </property>
+  <property>
+    <name>hbase.cluster.distributed</name>
+    <value>false</value>
+    <description>The mode the cluster will be in. Possible values are
+      false for standalone mode and true for distributed mode.  If
+      false, startup will run all HBase and ZooKeeper daemons together
+      in the one JVM.
+    </description>
+  </property>
+  <property>
+    <name>hbase.tmp.dir</name>
+    <value>/tmp/hbase-${user.name}</value>
+    <description>Temporary directory on the local filesystem.
+    Change this setting to point to a location more permanent
+    than '/tmp' (The '/tmp' directory is often cleared on
+    machine restart).
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.info.port</name>
+    <value>60010</value>
+    <description>The port for the HBase Master web UI.
+    Set to -1 if you do not want a UI instance to run.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.info.bindAddress</name>
+    <value>0.0.0.0</value>
+    <description>The bind address for the HBase Master web UI
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.write.buffer</name>
+    <value>2097152</value>
+    <description>Default size of the HTable client write buffer in bytes.
+    A bigger buffer takes more memory -- on both the client and server
+    side since server instantiates the passed write buffer to process
+    it -- but a larger buffer size reduces the number of RPCs made.
+    For an estimate of the server-side memory used, evaluate
+    hbase.client.write.buffer * hbase.regionserver.handler.count
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.port</name>
+    <value>60020</value>
+    <description>The port the HBase RegionServer binds to.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port</name>
+    <value>60030</value>
+    <description>The port for the HBase RegionServer web UI.
+    Set to -1 if you do not want the RegionServer UI to run.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port.auto</name>
+    <value>false</value>
+    <description>Whether or not the Master or RegionServer
+    UI should search for a port to bind to. Enables automatic port
+    search if hbase.regionserver.info.port is already in use.
+    Useful for testing, turned off by default.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.bindAddress</name>
+    <value>0.0.0.0</value>
+    <description>The address for the HBase RegionServer web UI
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.class</name>
+    <value>org.apache.hadoop.hbase.ipc.HRegionInterface</value>
+    <description>The RegionServer interface to use.
+    Used by the client when opening a proxy to a remote region server.
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.pause</name>
+    <value>1000</value>
+    <description>General client pause value.  Used mostly as value to wait
+    before running a retry of a failed get, region lookup, etc.</description>
+  </property>
+  <property>
+    <name>hbase.client.retries.number</name>
+    <value>10</value>
+    <description>Maximum retries.  Used as maximum for all retryable
+    operations such as fetching of the root region from root region
+    server, getting a cell's value, starting a row update, etc.
+    Default: 10.
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.scanner.caching</name>
+    <value>1</value>
+    <description>Number of rows that will be fetched when calling next
+    on a scanner if it is not served from (local, client) memory. Higher
+    caching values will enable faster scanners but will eat up more memory,
+    and some calls of next may take longer when the cache is empty.
+    Do not set this value such that the time between invocations is greater
+    than the scanner timeout; i.e. hbase.regionserver.lease.period
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.keyvalue.maxsize</name>
+    <value>10485760</value>
+    <description>Specifies the combined maximum allowed size of a KeyValue
+    instance. This is to set an upper boundary for a single entry saved in a
+    storage file. Since such entries cannot be split, this helps avoid regions
+    that can no longer be split because a single entry is too large. It seems wise
+    to set this to a fraction of the maximum region size. Setting it to zero
+    or less disables the check.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.lease.period</name>
+    <value>60000</value>
+    <description>HRegion server lease period in milliseconds. Default is
+    60 seconds. Clients must report in within this period or they are
+    considered dead.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.handler.count</name>
+    <value>10</value>
+    <description>Count of RPC Server instances spun up on RegionServers.
+    The same property is used by the Master for the count of master handlers.
+    Default is 10.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.msginterval</name>
+    <value>3000</value>
+    <description>Interval between messages from the RegionServer to Master
+    in milliseconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.flushlogentries</name>
+    <value>1</value>
+    <description>Sync the HLog to HDFS when it has accumulated this many
+    entries. Default 1. Value is checked on every HLog.hflush
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.optionallogflushinterval</name>
+    <value>1000</value>
+    <description>Sync the HLog to the HDFS after this interval if it has not
+    accumulated enough entries to trigger a sync. Default 1 second. Units:
+    milliseconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.regionSplitLimit</name>
+    <value>2147483647</value>
+    <description>Limit for the number of regions after which no more region
+    splitting should take place. This is not a hard limit for the number of
+    regions but acts as a guideline for the regionserver to stop splitting after
+    a certain limit. Default is set to MAX_INT; i.e. do not block splitting.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.logroll.period</name>
+    <value>3600000</value>
+    <description>Period at which we will roll the commit log regardless
+    of how many edits it has.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.hlog.reader.impl</name>
+    <value>org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader</value>
+    <description>The HLog file reader implementation.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.hlog.writer.impl</name>
+    <value>org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter</value>
+    <description>The HLog file writer implementation.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.thread.splitcompactcheckfrequency</name>
+    <value>20000</value>
+    <description>How often a region server runs the split/compaction check.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.nbreservationblocks</name>
+    <value>4</value>
+    <description>The number of reservoir blocks of memory released on
+    OOME so we can clean up properly before server shutdown.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.dns.interface</name>
+    <value>default</value>
+    <description>The name of the Network Interface from which a ZooKeeper server
+      should report its IP address.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.dns.nameserver</name>
+    <value>default</value>
+    <description>The host name or IP address of the name server (DNS)
+      which a ZooKeeper server should use to determine the host name used by the
+      master for communication and display purposes.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.dns.interface</name>
+    <value>default</value>
+    <description>The name of the Network Interface from which a region server
+      should report its IP address.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.dns.nameserver</name>
+    <value>default</value>
+    <description>The host name or IP address of the name server (DNS)
+      which a region server should use to determine the host name used by the
+      master for communication and display purposes.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.dns.interface</name>
+    <value>default</value>
+    <description>The name of the Network Interface from which a master
+      should report its IP address.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.dns.nameserver</name>
+    <value>default</value>
+    <description>The host name or IP address of the name server (DNS)
+      which a master should use to determine the host name used
+      for communication and display purposes.
+    </description>
+  </property>
+  <property>
+    <name>hbase.balancer.period</name>
+    <value>300000</value>
+    <description>Period at which the region balancer runs in the Master.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.logcleaner.ttl</name>
+    <value>600000</value>
+    <description>Maximum time an HLog can stay in the .oldlogdir directory,
+    after which it will be cleaned by a Master thread.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.logcleaner.plugins</name>
+    <value>org.apache.hadoop.hbase.master.TimeToLiveLogCleaner</value>
+    <description>A comma-separated list of LogCleanerDelegate implementations invoked by
+    the LogsCleaner service. These WAL/HLog cleaners are called in order,
+    so put the HLog cleaner that prunes the most HLog files in front. To
+    implement your own LogCleanerDelegate, just put it in HBase's classpath
+    and add the fully qualified class name here. Always add the above
+    default log cleaners in the list.
+    </description>
+  </property>  
+  <property>
+    <name>hbase.regionserver.global.memstore.upperLimit</name>
+    <value>0.4</value>
+    <description>Maximum size of all memstores in a region server before new
+      updates are blocked and flushes are forced. Defaults to 40% of heap
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.global.memstore.lowerLimit</name>
+    <value>0.35</value>
+    <description>When memstores are being forced to flush to make room in
+      memory, keep flushing until we hit this mark. Defaults to 35% of heap.
+      Setting this value equal to hbase.regionserver.global.memstore.upperLimit causes
+      the minimum possible flushing to occur when updates are blocked due to
+      memstore limiting.
+    </description>
+  </property>
+  <property>
+    <name>hbase.server.thread.wakefrequency</name>
+    <value>10000</value>
+    <description>Time to sleep in between searches for work (in milliseconds).
+    Used as sleep interval by service threads such as log roller.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.flush.size</name>
+    <value>67108864</value>
+    <description>
+    Memstore will be flushed to disk if size of the memstore
+    exceeds this number of bytes.  Value is checked by a thread that runs
+    every hbase.server.thread.wakefrequency.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.preclose.flush.size</name>
+    <value>5242880</value>
+    <description>
+      If the memstores in a region are this size or larger when we go
+      to close, run a "pre-flush" to clear out memstores before we put up
+      the region closed flag and take the region offline.  On close,
+      a flush is run under the close flag to empty memory.  During
+      this time the region is offline and we are not taking on any writes.
+      If the memstore content is large, this flush could take a long time to
+      complete.  The preflush is meant to clean out the bulk of the memstore
+      before putting up the close flag and taking the region offline so the
+      flush that runs under the close flag has little to do.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.block.multiplier</name>
+    <value>2</value>
+    <description>
+    Block updates if the memstore reaches hbase.hregion.memstore.block.multiplier
+    times hbase.hregion.memstore.flush.size bytes.  Useful for preventing
+    runaway memstore growth during spikes in update traffic.  Without an
+    upper-bound, memstore fills such that when it flushes the
+    resultant flush files take a long time to compact or split, or
+    worse, we OOME.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.max.filesize</name>
+    <value>268435456</value>
+    <description>
+    Maximum HStoreFile size. If any one of a column family's HStoreFiles has
+    grown to exceed this value, the hosting HRegion is split in two.
+    Default: 256M.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hstore.compactionThreshold</name>
+    <value>3</value>
+    <description>
+    If more than this number of HStoreFiles in any one HStore
+    (one HStoreFile is written per flush of memstore) then a compaction
+    is run to rewrite all HStoreFiles as one.  Larger numbers
+    put off compaction but when it runs, it takes longer to complete.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hstore.blockingStoreFiles</name>
+    <value>7</value>
+    <description>
+    If more than this number of StoreFiles in any one Store
+    (one StoreFile is written per flush of MemStore) then updates are
+    blocked for this HRegion until a compaction is completed, or
+    until hbase.hstore.blockingWaitTime has been exceeded.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hstore.blockingWaitTime</name>
+    <value>90000</value>
+    <description>
+    The time an HRegion will block updates for after hitting the StoreFile
+    limit defined by hbase.hstore.blockingStoreFiles.
+    After this time has elapsed, the HRegion will stop blocking updates even
+    if a compaction has not been completed.  Default: 90 seconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hstore.compaction.max</name>
+    <value>10</value>
+    <description>Max number of HStoreFiles to compact per 'minor' compaction.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.majorcompaction</name>
+    <value>86400000</value>
+    <description>The time (in milliseconds) between 'major' compactions of all
+    HStoreFiles in a region.  Default: 1 day.
+    Set to 0 to disable automated major compactions.
+    </description>
+  </property>
+  <property>
+    <name>hbase.mapreduce.hfileoutputformat.blocksize</name>
+    <value>65536</value>
+    <description>The mapreduce HFileOutputFormat writes storefiles/hfiles.
+    This is the minimum hfile blocksize to emit.  Usually in hbase, when writing
+    hfiles, the blocksize is taken from the table schema (HColumnDescriptor),
+    but in the mapreduce outputformat context we don't have access to the
+    schema, so the blocksize is read from the Configuration.  The smaller you make
+    the blocksize, the bigger your index and the less you fetch on a
+    random-access.  Set the blocksize down if you have small cells and want
+    faster random-access of individual cells.
+    </description>
+  </property>
+  <property>
+      <name>hfile.block.cache.size</name>
+      <value>0.2</value>
+      <description>
+          Percentage of maximum heap (-Xmx setting) to allocate to block cache
+          used by HFile/StoreFile. Default of 0.2 means allocate 20%.
+          Set to 0 to disable.
+      </description>
+  </property>
+  <property>
+    <name>hbase.hash.type</name>
+    <value>murmur</value>
+    <description>The hashing algorithm for use in HashFunction. Two values are
+    supported now: murmur (MurmurHash) and jenkins (JenkinsHash).
+    Used by bloom filters.
+    </description>
+  </property>
+  <property>
+    <name>zookeeper.session.timeout</name>
+    <value>180000</value>
+    <description>ZooKeeper session timeout.
+      HBase passes this to the zk quorum as suggested maximum time for a
+      session.  See http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions
+      "The client sends a requested timeout, the server responds with the
+      timeout that it can give the client. "
+      In milliseconds.
+    </description>
+  </property>
+  <property>
+    <name>zookeeper.znode.parent</name>
+    <value>/hbase</value>
+    <description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
+      files that are configured with a relative path will go under this node.
+      By default, all of HBase's ZooKeeper file paths are configured with a
+      relative path, so they will all go under this directory unless changed.
+    </description>
+  </property>
+  <property>
+    <name>zookeeper.znode.rootserver</name>
+    <value>root-region-server</value>
+    <description>Path to ZNode holding root region location. This is written by
+      the master and read by clients and region servers. If a relative path is
+      given, the parent folder will be ${zookeeper.znode.parent}. By default,
+      this means the root location is stored at /hbase/root-region-server.
+    </description>
+  </property>
+
+  <!--
+  The following three properties are used together to create the list of
+  host:peer_port:leader_port quorum servers for ZooKeeper.
+  -->
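+  <!--
+  An illustrative example (hypothetical hostnames, not part of the defaults):
+  with hbase.zookeeper.quorum set to
+  "host1.example.com,host2.example.com,host3.example.com" and the default peer
+  and leader ports below, each quorum member is described as
+  host1.example.com:2888:3888, host2.example.com:2888:3888, and so on, while
+  clients connect on hbase.zookeeper.property.clientPort (2181).
+  -->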
+  <property>
+    <name>hbase.zookeeper.quorum</name>
+    <value>localhost</value>
+    <description>Comma separated list of servers in the ZooKeeper Quorum.
+    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
+    By default this is set to localhost for local and pseudo-distributed modes
+    of operation. For a fully-distributed setup, this should be set to a full
+    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
+    this is the list of servers which we will start/stop ZooKeeper on.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.peerport</name>
+    <value>2888</value>
+    <description>Port used by ZooKeeper peers to talk to each other.
+    See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
+    for more information.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.leaderport</name>
+    <value>3888</value>
+    <description>Port used by ZooKeeper for leader election.
+    See http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperStarted.html#sc_RunningReplicatedZooKeeper
+    for more information.
+    </description>
+  </property>
+  <!-- End of properties used to generate ZooKeeper host:port quorum list. -->
+
+  <!--
+  Beginning of properties that are directly mapped from ZooKeeper's zoo.cfg.
+  All properties with an "hbase.zookeeper.property." prefix are converted for
+  ZooKeeper's configuration. Hence, if you want to add an option from zoo.cfg,
+  e.g.  "initLimit=10" you would append the following to your configuration:
+    <property>
+      <name>hbase.zookeeper.property.initLimit</name>
+      <value>10</value>
+    </property>
+  -->
+  <property>
+    <name>hbase.zookeeper.property.initLimit</name>
+    <value>10</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The number of ticks that the initial synchronization phase can take.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.syncLimit</name>
+    <value>5</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The number of ticks that can pass between sending a request and getting an
+    acknowledgment.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.dataDir</name>
+    <value>${hbase.tmp.dir}/zookeeper</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The directory where the snapshot is stored.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.clientPort</name>
+    <value>2181</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The port at which the clients will connect.
+    </description>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.maxClientCnxns</name>
+    <value>30</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    Limit on number of concurrent connections (at the socket level) that a
+    single client, identified by IP address, may make to a single member of
+    the ZooKeeper ensemble. Set high to avoid zk connection issues when running
+    standalone and pseudo-distributed.
+    </description>
+  </property>
+  <!-- End of properties that are directly mapped from ZooKeeper's zoo.cfg -->
+  <property>
+    <name>hbase.rest.port</name>
+    <value>8080</value>
+    <description>The port for the HBase REST server.</description>
+  </property>
+  <property>
+    <name>hbase.rest.readonly</name>
+    <value>false</value>
+    <description>
+    Defines the mode the REST server will be started in. Possible values are:
+    false: All HTTP methods are permitted - GET/PUT/POST/DELETE.
+    true: Only the GET method is permitted.
+    </description>
+  </property>
+</configuration>
diff --git a/0.90/src/main/resources/hbase-webapps/master/index.html b/0.90/src/main/resources/hbase-webapps/master/index.html
new file mode 100644
index 0000000..6d301ab
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/master/index.html
@@ -0,0 +1 @@
+<meta HTTP-EQUIV="REFRESH" content="0;url=master.jsp"/>
diff --git a/0.90/src/main/resources/hbase-webapps/master/master.jsp b/0.90/src/main/resources/hbase-webapps/master/master.jsp
new file mode 100644
index 0000000..ed38ff2
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/master/master.jsp
@@ -0,0 +1,160 @@
+<%@ page contentType="text/html;charset=UTF-8"
+  import="java.util.*"
+  import="org.apache.hadoop.conf.Configuration"
+  import="org.apache.hadoop.util.StringUtils"
+  import="org.apache.hadoop.hbase.util.Bytes"
+  import="org.apache.hadoop.hbase.util.JvmVersion"
+  import="org.apache.hadoop.hbase.util.FSUtils"
+  import="org.apache.hadoop.hbase.master.HMaster"
+  import="org.apache.hadoop.hbase.HConstants"
+  import="org.apache.hadoop.hbase.client.HBaseAdmin"
+  import="org.apache.hadoop.hbase.HServerInfo"
+  import="org.apache.hadoop.hbase.HServerAddress"
+  import="org.apache.hadoop.hbase.HTableDescriptor" %><%
+  HMaster master = (HMaster)getServletContext().getAttribute(HMaster.MASTER);
+  Configuration conf = master.getConfiguration();
+  HServerAddress rootLocation = master.getCatalogTracker().getRootLocation();
+  boolean metaOnline = master.getCatalogTracker().getMetaLocation() != null;
+  Map<String, HServerInfo> serverToServerInfos =
+    master.getServerManager().getOnlineServers();
+  int interval = conf.getInt("hbase.regionserver.msginterval", 1000)/1000;
+  if (interval == 0) {
+      interval = 1;
+  }
+  boolean showFragmentation = conf.getBoolean("hbase.master.ui.fragmentation.enabled", false);
+  Map<String, Integer> frags = null;
+  if (showFragmentation) {
+      frags = FSUtils.getTableFragmentation(master);
+  }
+%><?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
+  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+<title>HBase Master: <%= master.getMasterAddress().getHostname()%>:<%= master.getMasterAddress().getPort() %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Master: <%=master.getMasterAddress().getHostname()%>:<%=master.getMasterAddress().getPort()%></h1>
+<p id="links_menu"><a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+
+<!-- Various warnings that cluster admins should be aware of -->
+<% if (JvmVersion.isBadJvmVersion()) { %>
+  <div class="warning">
+  Your current JVM version <%= System.getProperty("java.version") %> is known to be
+  unstable with HBase. Please see the
+  <a href="http://wiki.apache.org/hadoop/Hbase/Troubleshooting#A18">HBase wiki</a>
+  for details.
+  </div>
+<% } %>
+<% if (!FSUtils.isAppendSupported(conf) && FSUtils.isHDFS(conf)) { %>
+  <div class="warning">
+  You are currently running the HMaster without HDFS append support enabled.
+  This may result in data loss.
+  Please see the <a href="http://wiki.apache.org/hadoop/Hbase/HdfsSyncSupport">HBase wiki</a>
+  for details.
+  </div>
+<% } %>
+
+<hr id="head_rule" />
+
+<h2>Master Attributes</h2>
+<table>
+<tr><th>Attribute Name</th><th>Value</th><th>Description</th></tr>
+<tr><td>HBase Version</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getVersion() %>, r<%= org.apache.hadoop.hbase.util.VersionInfo.getRevision() %></td><td>HBase version and svn revision</td></tr>
+<tr><td>HBase Compiled</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getDate() %>, <%= org.apache.hadoop.hbase.util.VersionInfo.getUser() %></td><td>When HBase version was compiled and by whom</td></tr>
+<tr><td>Hadoop Version</td><td><%= org.apache.hadoop.util.VersionInfo.getVersion() %>, r<%= org.apache.hadoop.util.VersionInfo.getRevision() %></td><td>Hadoop version and svn revision</td></tr>
+<tr><td>Hadoop Compiled</td><td><%= org.apache.hadoop.util.VersionInfo.getDate() %>, <%= org.apache.hadoop.util.VersionInfo.getUser() %></td><td>When Hadoop version was compiled and by whom</td></tr>
+<tr><td>HBase Root Directory</td><td><%= FSUtils.getRootDir(master.getConfiguration()).toString() %></td><td>Location of HBase home directory</td></tr>
+<tr><td>Load average</td><td><%= StringUtils.limitDecimalTo2(master.getServerManager().getAverageLoad()) %></td><td>Average number of regions per regionserver. Naive computation.</td></tr>
+<%  if (showFragmentation) { %>
+        <tr><td>Fragmentation</td><td><%= frags.get("-TOTAL-") != null ? frags.get("-TOTAL-").intValue() + "%" : "n/a" %></td><td>Overall fragmentation of all tables, including .META. and -ROOT-.</td></tr>
+<%  } %>
+<tr><td>Zookeeper Quorum</td><td><%= master.getZooKeeperWatcher().getQuorum() %></td><td>Addresses of all registered ZK servers. For more, see <a href="/zk.jsp">zk dump</a>.</td></tr>
+</table>
+
+<h2>Catalog Tables</h2>
+<% 
+  if (rootLocation != null) { %>
+<table>
+<tr>
+    <th>Table</th>
+<%  if (showFragmentation) { %>
+        <th title="Fragmentation - Will be 0% after a major compaction and fluctuate during normal usage.">Frag.</th>
+<%  } %>
+    <th>Description</th>
+</tr>
+<tr>
+    <td><a href="table.jsp?name=<%= Bytes.toString(HConstants.ROOT_TABLE_NAME) %>"><%= Bytes.toString(HConstants.ROOT_TABLE_NAME) %></a></td>
+<%  if (showFragmentation) { %>
+        <td align="center"><%= frags.get("-ROOT-") != null ? frags.get("-ROOT-").intValue() + "%" : "n/a" %></td>
+<%  } %>
+    <td>The -ROOT- table holds references to all .META. regions.</td>
+</tr>
+<%
+    if (metaOnline) { %>
+<tr>
+    <td><a href="table.jsp?name=<%= Bytes.toString(HConstants.META_TABLE_NAME) %>"><%= Bytes.toString(HConstants.META_TABLE_NAME) %></a></td>
+<%  if (showFragmentation) { %>
+        <td align="center"><%= frags.get(".META.") != null ? frags.get(".META.").intValue() + "%" : "n/a" %></td>
+<%  } %>
+    <td>The .META. table holds references to all User Table regions</td>
+</tr>
+  
+<%  } %>
+</table>
+<%} %>
+
+<h2>User Tables</h2>
+<% HTableDescriptor[] tables = new HBaseAdmin(conf).listTables(); 
+   if(tables != null && tables.length > 0) { %>
+<table>
+<tr>
+    <th>Table</th>
+<%  if (showFragmentation) { %>
+        <th title="Fragmentation - Will be 0% after a major compaction and fluctuate during normal usage.">Frag.</th>
+<%  } %>
+    <th>Description</th>
+</tr>
+<%   for(HTableDescriptor htDesc : tables ) { %>
+<tr>
+    <td><a href=table.jsp?name=<%= htDesc.getNameAsString() %>><%= htDesc.getNameAsString() %></a> </td>
+<%  if (showFragmentation) { %>
+        <td align="center"><%= frags.get(htDesc.getNameAsString()) != null ? frags.get(htDesc.getNameAsString()).intValue() + "%" : "n/a" %></td>
+<%  } %>
+    <td><%= htDesc.toString() %></td>
+</tr>
+<%   }  %>
+
+<p> <%= tables.length %> table(s) in set.</p>
+</table>
+<% } %>
+
+<h2>Region Servers</h2>
+<% if (serverToServerInfos != null && serverToServerInfos.size() > 0) { %>
+<%   int totalRegions = 0;
+     int totalRequests = 0; 
+%>
+
+<table>
+<tr><th rowspan="<%= serverToServerInfos.size() + 1%>"></th><th>Address</th><th>Start Code</th><th>Load</th></tr>
+<%   String[] serverNames = serverToServerInfos.keySet().toArray(new String[serverToServerInfos.size()]);
+     Arrays.sort(serverNames);
+     for (String serverName: serverNames) {
+       HServerInfo hsi = serverToServerInfos.get(serverName);
+       String hostname = hsi.getServerAddress().getHostname() + ":" + hsi.getInfoPort();
+       String url = "http://" + hostname + "/";
+       totalRegions += hsi.getLoad().getNumberOfRegions();
+       totalRequests += hsi.getLoad().getNumberOfRequests() / interval;
+       long startCode = hsi.getStartCode();
+%>
+<tr><td><a href="<%= url %>"><%= hostname %></a></td><td><%= startCode %></td><td><%= hsi.getLoad().toString(interval) %></td></tr>
+<%   } %>
+<tr><th>Total: </th><td>servers: <%= serverToServerInfos.size() %></td><td>&nbsp;</td><td>requests=<%= totalRequests %>, regions=<%= totalRegions %></td></tr>
+</table>
+
+<p>Load is requests per second and count of regions loaded</p>
+<% } %>
+</body>
+</html>
diff --git a/0.90/src/main/resources/hbase-webapps/master/table.jsp b/0.90/src/main/resources/hbase-webapps/master/table.jsp
new file mode 100644
index 0000000..b433e20
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/master/table.jsp
@@ -0,0 +1,225 @@
+<%@ page contentType="text/html;charset=UTF-8"
+  import="java.util.Map"
+  import="org.apache.hadoop.io.Writable"
+  import="org.apache.hadoop.conf.Configuration"
+  import="org.apache.hadoop.hbase.client.HTable"
+  import="org.apache.hadoop.hbase.client.HBaseAdmin"
+  import="org.apache.hadoop.hbase.HRegionInfo"
+  import="org.apache.hadoop.hbase.HServerAddress"
+  import="org.apache.hadoop.hbase.HServerInfo"
+  import="org.apache.hadoop.hbase.io.ImmutableBytesWritable"
+  import="org.apache.hadoop.hbase.master.HMaster" 
+  import="org.apache.hadoop.hbase.util.Bytes"
+  import="org.apache.hadoop.hbase.util.FSUtils"
+  import="java.util.Map"
+  import="org.apache.hadoop.hbase.HConstants"%><%
+  HMaster master = (HMaster)getServletContext().getAttribute(HMaster.MASTER);
+  Configuration conf = master.getConfiguration();
+  HBaseAdmin hbadmin = new HBaseAdmin(conf);
+  String tableName = request.getParameter("name");
+  HTable table = new HTable(conf, tableName);
+  String tableHeader = "<h2>Table Regions</h2><table><tr><th>Name</th><th>Region Server</th><th>Start Key</th><th>End Key</th></tr>";
+  HServerAddress rl = master.getCatalogTracker().getRootLocation();
+  boolean showFragmentation = conf.getBoolean("hbase.master.ui.fragmentation.enabled", false);
+  Map<String, Integer> frags = null;
+  if (showFragmentation) {
+      frags = FSUtils.getTableFragmentation(master);
+  }
+%>
+
+<?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
+  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
+<html xmlns="http://www.w3.org/1999/xhtml">
+
+<%
+  String action = request.getParameter("action");
+  String key = request.getParameter("key");
+  if ( action != null ) {
+%>
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Table action request accepted</h1>
+<p><hr><p>
+<%
+  if (action.equals("split")) {
+    if (key != null && key.length() > 0) {
+      hbadmin.split(key);
+    } else {
+      hbadmin.split(tableName);
+    }
+    
+    %> Split request accepted. <%
+  } else if (action.equals("compact")) {
+    if (key != null && key.length() > 0) {
+      hbadmin.compact(key);
+    } else {
+      hbadmin.compact(tableName);
+    }
+    %> Compact request accepted. <%
+  }
+%>
+<p>Reload.
+</body>
+<%
+} else {
+%>
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+<title>Table: <%= tableName %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Table: <%= tableName %></h1>
+<p id="links_menu"><a href="/master.jsp">Master</a>, <a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+<%
+  if(tableName.equals(Bytes.toString(HConstants.ROOT_TABLE_NAME))) {
+%>
+<%= tableHeader %>
+<%
+  int infoPort = master.getServerManager().getHServerInfo(rl).getInfoPort();
+  String url = "http://" + rl.getHostname() + ":" + infoPort + "/";
+%>
+<tr>
+  <td><%= tableName %></td>
+  <td><a href="<%= url %>"><%= rl.getHostname() %>:<%= rl.getPort() %></a></td>
+  <td>-</td>
+  <td></td>
+  <td>-</td>
+</tr>
+</table>
+<%
+  } else if(tableName.equals(Bytes.toString(HConstants.META_TABLE_NAME))) {
+%>
+<%= tableHeader %>
+<%
+  // NOTE: Presumes one meta region only.
+  HRegionInfo meta = HRegionInfo.FIRST_META_REGIONINFO;
+  HServerAddress metaLocation = master.getCatalogTracker().getMetaLocation();
+  for (int i = 0; i <= 1; i++) {
+    int infoPort = master.getServerManager().getHServerInfo(metaLocation).getInfoPort();
+    String url = "http://" + metaLocation.getHostname() + ":" + infoPort + "/";
+%>
+<tr>
+  <td><%= meta.getRegionNameAsString() %></td>
+    <td><a href="<%= url %>"><%= metaLocation.getHostname().toString() + ":" + infoPort %></a></td>
+    <td>-</td><td><%= Bytes.toString(meta.getStartKey()) %></td><td><%= Bytes.toString(meta.getEndKey()) %></td>
+</tr>
+<%  } %>
+</table>
+<%} else {
+  try { %>
+<h2>Table Attributes</h2>
+<table>
+  <tr>
+      <th>Attribute Name</th>
+      <th>Value</th>
+      <th>Description</th></tr>
+  <tr>
+      <td>Enabled</td>
+      <td><%= hbadmin.isTableEnabled(table.getTableName()) %></td>
+      <td>Is the table enabled</td>
+  </tr>
+<%  if (showFragmentation) { %>
+  <tr>
+      <td>Fragmentation</td>
+      <td><%= frags.get(tableName) != null ? frags.get(tableName).intValue() + "%" : "n/a" %></td>
+      <td>How fragmented is the table. After a major compaction it is 0%.</td>
+  </tr>
+<%  } %>
+</table>
+<%
+  Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
+  if(regions != null && regions.size() > 0) { %>
+<%=     tableHeader %>
+<%
+  for(Map.Entry<HRegionInfo, HServerAddress> hriEntry : regions.entrySet()) {
+    HRegionInfo regionInfo = hriEntry.getKey();
+    HServerAddress addr = hriEntry.getValue();
+
+    int infoPort = 0;
+    String urlRegionServer = null;
+
+    if (addr != null) {
+      HServerInfo info = master.getServerManager().getHServerInfo(addr);
+      if (info != null) {
+        infoPort = info.getInfoPort();
+        urlRegionServer =
+            "http://" + addr.getHostname().toString() + ":" + infoPort + "/";
+      }
+    }
+%>
+<tr>
+  <td><%= Bytes.toStringBinary(regionInfo.getRegionName())%></td>
+  <%
+  if (urlRegionServer != null) {
+  %>
+  <td>
+    <a href="<%= urlRegionServer %>"><%= addr.getHostname().toString() + ":" + infoPort %></a>
+  </td>
+  <%
+  } else {
+  %>
+  <td class="undeployed-region">not deployed</td>
+  <%
+  }
+  %>
+  <td><%= Bytes.toStringBinary(regionInfo.getStartKey())%></td>
+  <td><%= Bytes.toStringBinary(regionInfo.getEndKey())%></td>
+</tr>
+<% } %>
+</table>
+<% }
+} catch(Exception ex) {
+  ex.printStackTrace(System.err);
+}
+} // end else
+%>
+
+<p><hr><p>
+Actions:
+<p>
+<center>
+<table style="border-style: none" width="90%">
+<tr>
+  <form method="get">
+  <input type="hidden" name="action" value="compact">
+  <input type="hidden" name="name" value="<%= tableName %>">
+  <td style="border-style: none; text-align: center">
+      <input style="font-size: 12pt; width: 10em" type="submit" value="Compact"></td>
+  <td style="border-style: none" width="5%">&nbsp;</td>
+  <td style="border-style: none">Region Key (optional):<input type="text" name="key" size="40"></td>
+  <td style="border-style: none">This action will force a compaction of all
+  regions of the table, or, if a key is supplied, only the region containing the
+  given key.</td>
+  </form>
+</tr>
+<tr><td style="border-style: none" colspan="4">&nbsp;</td></tr>
+<tr>
+  <form method="get">
+  <input type="hidden" name="action" value="split">
+  <input type="hidden" name="name" value="<%= tableName %>">
+  <td style="border-style: none; text-align: center">
+      <input style="font-size: 12pt; width: 10em" type="submit" value="Split"></td>
+  <td style="border-style: none" width="5%">&nbsp;</td>
+  <td style="border-style: none">Region Key (optional):<input type="text" name="key" size="40"></td>
+  <td style="border-style: none">This action will force a split of all eligible
+  regions of the table, or, if a key is supplied, only the region containing the
+  given key. An eligible region is one that does not contain any references to
+  other regions. Split requests for non-eligible regions will be ignored.</td>
+  </form>
+</tr>
+</table>
+</center>
+<p>
+
+<%
+}
+%>
+
+</body>
+</html>
diff --git a/0.90/src/main/resources/hbase-webapps/master/zk.jsp b/0.90/src/main/resources/hbase-webapps/master/zk.jsp
new file mode 100644
index 0000000..e7d3269
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/master/zk.jsp
@@ -0,0 +1,37 @@
+<%@ page contentType="text/html;charset=UTF-8"
+  import="java.io.IOException"
+  import="org.apache.hadoop.conf.Configuration"
+  import="org.apache.hadoop.hbase.client.HBaseAdmin"
+  import="org.apache.hadoop.hbase.client.HConnection"
+  import="org.apache.hadoop.hbase.HRegionInfo"
+  import="org.apache.hadoop.hbase.zookeeper.ZKUtil"
+  import="org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher"
+  import="org.apache.hadoop.hbase.HBaseConfiguration"
+  import="org.apache.hadoop.hbase.master.HMaster" 
+  import="org.apache.hadoop.hbase.HConstants"%><%
+  HMaster master = (HMaster)getServletContext().getAttribute(HMaster.MASTER);
+  Configuration conf = master.getConfiguration();
+  HBaseAdmin hbadmin = new HBaseAdmin(conf);
+  HConnection connection = hbadmin.getConnection();
+  ZooKeeperWatcher watcher = connection.getZooKeeperWatcher();
+%>
+
+<?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
+  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+<title>ZooKeeper Dump</title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+<body>
+<a id="logo" href="http://hbase.org"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">ZooKeeper Dump</h1>
+<p id="links_menu"><a href="/master.jsp">Master</a>, <a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+<pre>
+<%= ZKUtil.dump(watcher) %>
+</pre>
+
+</body>
+</html>
diff --git a/0.90/src/main/resources/hbase-webapps/regionserver/index.html b/0.90/src/main/resources/hbase-webapps/regionserver/index.html
new file mode 100644
index 0000000..bdd3c6a
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/regionserver/index.html
@@ -0,0 +1 @@
+<meta HTTP-EQUIV="REFRESH" content="0;url=regionserver.jsp"/>
diff --git a/0.90/src/main/resources/hbase-webapps/regionserver/regionserver.jsp b/0.90/src/main/resources/hbase-webapps/regionserver/regionserver.jsp
new file mode 100644
index 0000000..68d4e42
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/regionserver/regionserver.jsp
@@ -0,0 +1,79 @@
+<%@ page contentType="text/html;charset=UTF-8"
+  import="java.util.*"
+  import="java.io.IOException"
+  import="org.apache.hadoop.io.Text"
+  import="org.apache.hadoop.hbase.regionserver.HRegionServer"
+  import="org.apache.hadoop.hbase.regionserver.HRegion"
+  import="org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics"
+  import="org.apache.hadoop.hbase.util.Bytes"
+  import="org.apache.hadoop.hbase.HConstants"
+  import="org.apache.hadoop.hbase.HServerInfo"
+  import="org.apache.hadoop.hbase.HServerLoad"
+  import="org.apache.hadoop.hbase.HRegionInfo" %><%
+  HRegionServer regionServer = (HRegionServer)getServletContext().getAttribute(HRegionServer.REGIONSERVER);
+  HServerInfo serverInfo = null;
+  try {
+    serverInfo = regionServer.getHServerInfo();
+  } catch (IOException e) {
+    e.printStackTrace();
+  }
+  RegionServerMetrics metrics = regionServer.getMetrics();
+  List<HRegionInfo> onlineRegions = regionServer.getOnlineRegions();
+  int interval = regionServer.getConfiguration().getInt("hbase.regionserver.msginterval", 3000)/1000;
+
+%><?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
+  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
+<title>HBase Region Server: <%= serverInfo.getServerAddress().getHostname() %>:<%= serverInfo.getServerAddress().getPort() %></title>
+<link rel="stylesheet" type="text/css" href="/static/hbase.css" />
+</head>
+
+<body>
+<a id="logo" href="http://wiki.apache.org/lucene-hadoop/Hbase"><img src="/static/hbase_logo_med.gif" alt="HBase Logo" title="HBase Logo" /></a>
+<h1 id="page_title">Region Server: <%= serverInfo.getServerAddress().getHostname() %>:<%= serverInfo.getServerAddress().getPort() %></h1>
+<p id="links_menu"><a href="/logs/">Local logs</a>, <a href="/stacks">Thread Dump</a>, <a href="/logLevel">Log Level</a></p>
+<hr id="head_rule" />
+
+<h2>Region Server Attributes</h2>
+<table>
+<tr><th>Attribute Name</th><th>Value</th><th>Description</th></tr>
+<tr><td>HBase Version</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getVersion() %>, r<%= org.apache.hadoop.hbase.util.VersionInfo.getRevision() %></td><td>HBase version and svn revision</td></tr>
+<tr><td>HBase Compiled</td><td><%= org.apache.hadoop.hbase.util.VersionInfo.getDate() %>, <%= org.apache.hadoop.hbase.util.VersionInfo.getUser() %></td><td>When HBase version was compiled and by whom</td></tr>
+<tr><td>Metrics</td><td><%= metrics.toString() %></td><td>RegionServer Metrics; file and heap sizes are in megabytes</td></tr>
+<tr><td>Zookeeper Quorum</td><td><%= regionServer.getZooKeeper().getQuorum() %></td><td>Addresses of all registered ZK servers</td></tr>
+</table>
+
+<h2>Online Regions</h2>
+<% if (onlineRegions != null && onlineRegions.size() > 0) { %>
+<table>
+<tr><th>Region Name</th><th>Start Key</th><th>End Key</th><th>Metrics</th></tr>
+<%   
+  Collections.sort(onlineRegions);
+  for (HRegionInfo r: onlineRegions) { 
+        HServerLoad.RegionLoad load = regionServer.createRegionLoad(r.getEncodedName());
+ %>
+<tr><td><%= r.getRegionNameAsString() %></td>
+    <td><%= Bytes.toStringBinary(r.getStartKey()) %></td><td><%= Bytes.toStringBinary(r.getEndKey()) %></td>
+    <td><%= load.toString() %></td>
+    </tr>
+<%   } %>
+</table>
+<p>Region names are made of the containing table's name, a comma,
+the start key, a comma, and a randomly generated region id.  To illustrate,
+the region named
+<em>domains,apache.org,5464829424211263407</em> belongs to the table
+<em>domains</em>, has an id of <em>5464829424211263407</em>, and the first key
+in the region is <em>apache.org</em>.  The <em>-ROOT-</em>
+and <em>.META.</em> 'tables' are internal system tables (or 'catalog' tables in db-speak).
+The -ROOT- table keeps a list of all regions in the .META. table.  The .META. table
+keeps a list of all regions in the system.  The empty key is used to denote
+table start and table end.  A region with an empty start key is the first region in a table.
+If a region has both an empty start key and an empty end key, it is the only region in the table.  See
+<a href="http://hbase.org">HBase Home</a> for further explanation.</p>
+<% } else { %>
+<p>Not serving regions</p>
+<% } %>
+</body>
+</html>
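
Editor's note: the paragraph in regionserver.jsp above describes the region-name layout (table name, start key, region id, comma separated). Purely as an illustration of that layout, and assuming the start key itself contains no commas, the printable name can be split into its parts; real code should prefer HRegionInfo accessors (e.g. getStartKey(), getRegionId()) over string parsing:

    public class RegionNameExample {
      public static void main(String[] args) {
        // Example name taken from the page text above.
        String regionName = "domains,apache.org,5464829424211263407";
        // Assumes the start key contains no commas; illustration only.
        String[] parts = regionName.split(",");
        System.out.println("table:     " + parts[0]);
        System.out.println("start key: " + parts[1]);
        System.out.println("region id: " + parts[2]);
      }
    }
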
diff --git a/0.90/src/main/resources/hbase-webapps/static/hbase.css b/0.90/src/main/resources/hbase-webapps/static/hbase.css
new file mode 100644
index 0000000..1163fda
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/static/hbase.css
@@ -0,0 +1,19 @@
+h1, h2, h3 { color: DarkSlateBlue }
+table { border: thin solid DodgerBlue }
+tr { border: thin solid DodgerBlue }
+td { border: thin solid DodgerBlue }
+th { border: thin solid DodgerBlue }
+#logo {float: left;}
+#logo img {border: none;}
+#page_title {padding-top: 27px;}
+
+div.warning {
+  border: 1px solid #666;
+  background-color: #fcc;
+  font-size: 110%;
+  font-weight: bold;
+}
+
+td.undeployed-region {
+  background-color: #faa;
+}
diff --git a/0.90/src/main/resources/hbase-webapps/static/hbase_logo_med.gif b/0.90/src/main/resources/hbase-webapps/static/hbase_logo_med.gif
new file mode 100644
index 0000000..36d3e3c
--- /dev/null
+++ b/0.90/src/main/resources/hbase-webapps/static/hbase_logo_med.gif
Binary files differ
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/mapred/RowCounter_Counters.properties b/0.90/src/main/resources/org/apache/hadoop/hbase/mapred/RowCounter_Counters.properties
new file mode 100644
index 0000000..02a978e
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/mapred/RowCounter_Counters.properties
@@ -0,0 +1,6 @@
+
+# ResourceBundle properties file for RowCounter MR job
+
+CounterGroupName=         RowCounter
+
+ROWS.name=                Rows
\ No newline at end of file
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/mapreduce/RowCounter_Counters.properties b/0.90/src/main/resources/org/apache/hadoop/hbase/mapreduce/RowCounter_Counters.properties
new file mode 100644
index 0000000..5f4e2c5
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/mapreduce/RowCounter_Counters.properties
@@ -0,0 +1,6 @@
+
+# ResourceBundle properties file for RowCounter MR job
+
+CounterGroupName=         RowCounter
+
+ROWS.name=                Rows
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/XMLSchema.xsd b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/XMLSchema.xsd
new file mode 100644
index 0000000..fcaf810
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/XMLSchema.xsd
@@ -0,0 +1,152 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<schema targetNamespace="ModelSchema" elementFormDefault="qualified" xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="ModelSchema">
+
+    <element name="Version" type="tns:Version"></element>
+    
+    <complexType name="Version">
+      <attribute name="REST" type="string"></attribute>
+      <attribute name="JVM" type="string"></attribute>
+      <attribute name="OS" type="string"></attribute>
+      <attribute name="Server" type="string"></attribute>
+      <attribute name="Jersey" type="string"></attribute>
+    </complexType>
+
+    <element name="TableList" type="tns:TableList"></element>
+    
+    <complexType name="TableList">
+        <sequence>
+            <element name="table" type="tns:Table" maxOccurs="unbounded" minOccurs="1"></element>
+        </sequence>
+    </complexType>
+
+    <complexType name="Table">
+        <sequence>
+            <element name="name" type="string"></element>
+        </sequence>
+    </complexType>
+
+    <element name="TableInfo" type="tns:TableInfo"></element>
+    
+    <complexType name="TableInfo">
+        <sequence>
+            <element name="region" type="tns:TableRegion" maxOccurs="unbounded" minOccurs="1"></element>
+        </sequence>
+        <attribute name="name" type="string"></attribute>
+    </complexType>
+
+    <complexType name="TableRegion">
+        <attribute name="name" type="string"></attribute>
+        <attribute name="id" type="int"></attribute>
+        <attribute name="startKey" type="base64Binary"></attribute>
+        <attribute name="endKey" type="base64Binary"></attribute>
+        <attribute name="location" type="string"></attribute>
+    </complexType>
+
+    <element name="TableSchema" type="tns:TableSchema"></element>
+    
+    <complexType name="TableSchema">
+        <sequence>
+            <element name="column" type="tns:ColumnSchema" maxOccurs="unbounded" minOccurs="1"></element>
+        </sequence>
+        <attribute name="name" type="string"></attribute>
+        <anyAttribute></anyAttribute>
+    </complexType>
+
+    <complexType name="ColumnSchema">
+        <attribute name="name" type="string"></attribute>
+        <anyAttribute></anyAttribute>
+    </complexType>
+
+    <element name="CellSet" type="tns:CellSet"></element>
+
+    <complexType name="CellSet">
+        <sequence>
+            <element name="row" type="tns:Row" maxOccurs="unbounded" minOccurs="1"></element>
+        </sequence>
+    </complexType>
+
+    <element name="Row" type="tns:Row"></element>
+
+    <complexType name="Row">
+        <sequence>
+            <element name="key" type="base64Binary"></element>
+            <element name="cell" type="tns:Cell" maxOccurs="unbounded" minOccurs="1"></element>
+        </sequence>
+    </complexType>
+ 
+    <element name="Cell" type="tns:Cell"></element>
+
+    <complexType name="Cell">
+        <sequence>
+            <element name="value" maxOccurs="1" minOccurs="1">
+                <simpleType><restriction base="base64Binary">
+                </restriction></simpleType>
+            </element>
+        </sequence>
+        <attribute name="column" type="base64Binary" />
+        <attribute name="timestamp" type="int" />
+    </complexType>
+
+    <element name="Scanner" type="tns:Scanner"></element>
+    
+    <complexType name="Scanner">
+        <sequence>
+            <element name="column" type="base64Binary" minOccurs="0" maxOccurs="unbounded"></element>
+            <element name="filter" type="string" minOccurs="0" maxOccurs="1"></element>
+        </sequence>
+        <attribute name="startRow" type="base64Binary"></attribute>
+        <attribute name="endRow" type="base64Binary"></attribute>
+        <attribute name="batch" type="int"></attribute>
+        <attribute name="startTime" type="int"></attribute>
+        <attribute name="endTime" type="int"></attribute>
+    </complexType>
+
+    <element name="StorageClusterVersion" type="tns:StorageClusterVersion" />
+
+    <complexType name="StorageClusterVersion">
+        <attribute name="version" type="string"></attribute>
+    </complexType>
+
+    <element name="StorageClusterStatus"
+        type="tns:StorageClusterStatus">
+    </element>
+    
+    <complexType name="StorageClusterStatus">
+        <sequence>
+            <element name="liveNode" type="tns:Node"
+                maxOccurs="unbounded" minOccurs="0">
+            </element>
+            <element name="deadNode" type="string" maxOccurs="unbounded"
+                minOccurs="0">
+            </element>
+        </sequence>
+        <attribute name="regions" type="int"></attribute>
+        <attribute name="requests" type="int"></attribute>
+        <attribute name="averageLoad" type="float"></attribute>
+    </complexType>
+
+    <complexType name="Node">
+        <sequence>
+            <element name="region" type="tns:Region"
+                maxOccurs="unbounded" minOccurs="0">
+            </element>
+        </sequence>
+        <attribute name="name" type="string"></attribute>
+        <attribute name="startCode" type="int"></attribute>
+        <attribute name="requests" type="int"></attribute>
+        <attribute name="heapSizeMB" type="int"></attribute>
+        <attribute name="maxHeapSizeMB" type="int"></attribute>
+    </complexType>
+
+    <complexType name="Region">
+        <attribute name="name" type="base64Binary"></attribute>
+        <attribute name="stores" type="int"></attribute>
+        <attribute name="storefiles" type="int"></attribute>
+        <attribute name="storefileSizeMB" type="int"></attribute>
+        <attribute name="memstoreSizeMB" type="int"></attribute>
+        <attribute name="storefileIndexSizeMB" type="int"></attribute>
+    </complexType>
+
+</schema>
\ No newline at end of file
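
Editor's note: the schema above describes the XML representations served by the REST gateway (the former Stargate contrib, folded into core per HBASE-2542). A minimal sketch of fetching one row as a CellSet document over plain HTTP, assuming a gateway on localhost:8080 and an existing table "mytable" with row "myrow" (both hypothetical names):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestXmlExample {
      public static void main(String[] args) throws Exception {
        // Assumed endpoint and resource path: adjust host, port, table and row.
        URL url = new URL("http://localhost:8080/mytable/myrow");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Ask for the XML form validated by XMLSchema.xsd (CellSet/Row/Cell).
        conn.setRequestProperty("Accept", "text/xml");
        BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"));
        for (String line; (line = in.readLine()) != null; ) {
          // Cell values arrive base64-encoded, per the schema's base64Binary types.
          System.out.println(line);
        }
        in.close();
        conn.disconnect();
      }
    }
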
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/CellMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/CellMessage.proto
new file mode 100644
index 0000000..a7bfe83
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/CellMessage.proto
@@ -0,0 +1,26 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message Cell {
+  optional bytes row = 1;       // unused if Cell is in a CellSet
+  optional bytes column = 2;
+  optional int64 timestamp = 3;
+  optional bytes data = 4;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/CellSetMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/CellSetMessage.proto
new file mode 100644
index 0000000..dfdf125
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/CellSetMessage.proto
@@ -0,0 +1,29 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import "CellMessage.proto";
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message CellSet {
+  message Row {
+    required bytes key = 1;
+    repeated Cell values = 2;
+  }
+  repeated Row rows = 1;
+}
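
Editor's note: clients that prefer the protobuf encoding can decode the same row data with the generated Java classes (assuming the default outer class name derived from each .proto file name, e.g. CellSetMessage). A sketch that walks a decoded CellSet, assuming the bytes come from an application/x-protobuf response already in hand:

    import org.apache.hadoop.hbase.rest.protobuf.generated.CellMessage.Cell;
    import org.apache.hadoop.hbase.rest.protobuf.generated.CellSetMessage.CellSet;

    public class CellSetDecodeExample {
      // 'responseBytes' is assumed to be the body of a protobuf response
      // from the REST gateway for a row or scanner resource.
      public static void dump(byte[] responseBytes) throws Exception {
        CellSet cellSet = CellSet.parseFrom(responseBytes);
        for (CellSet.Row row : cellSet.getRowsList()) {
          System.out.println("row: " + row.getKey().toStringUtf8());
          for (Cell cell : row.getValuesList()) {
            System.out.println("  " + cell.getColumn().toStringUtf8()
                + " @ " + cell.getTimestamp()
                + " = " + cell.getData().toStringUtf8());
          }
        }
      }
    }
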
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ColumnSchemaMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ColumnSchemaMessage.proto
new file mode 100644
index 0000000..0a9a9af
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ColumnSchemaMessage.proto
@@ -0,0 +1,32 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message ColumnSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  // optional helpful encodings of commonly used attributes
+  optional int32 ttl = 3;
+  optional int32 maxVersions = 4;
+  optional string compression = 5;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ScannerMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ScannerMessage.proto
new file mode 100644
index 0000000..6ef3191
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ScannerMessage.proto
@@ -0,0 +1,30 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message Scanner {
+  optional bytes startRow = 1;
+  optional bytes endRow = 2;
+  repeated bytes columns = 3;
+  optional int32 batch = 4;
+  optional int64 startTime = 5;
+  optional int64 endTime = 6;
+  optional int32 maxVersions = 7;
+  optional string filter = 8;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/StorageClusterStatusMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/StorageClusterStatusMessage.proto
new file mode 100644
index 0000000..2b032f7
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/StorageClusterStatusMessage.proto
@@ -0,0 +1,45 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message StorageClusterStatus {
+  message Region {
+    required bytes name = 1;
+    optional int32 stores = 2;
+    optional int32 storefiles = 3;
+    optional int32 storefileSizeMB = 4;
+    optional int32 memstoreSizeMB = 5;
+    optional int32 storefileIndexSizeMB = 6;
+  }
+  message Node {
+    required string name = 1;    // name:port
+    optional int64 startCode = 2;
+    optional int32 requests = 3;
+    optional int32 heapSizeMB = 4;
+    optional int32 maxHeapSizeMB = 5;
+    repeated Region regions = 6;
+  }
+  // node status
+  repeated Node liveNodes = 1;
+  repeated string deadNodes = 2;
+  // summary statistics
+  optional int32 regions = 3; 
+  optional int32 requests = 4; 
+  optional double averageLoad = 5;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableInfoMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableInfoMessage.proto
new file mode 100644
index 0000000..5dd9120
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableInfoMessage.proto
@@ -0,0 +1,31 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message TableInfo {
+  required string name = 1;
+  message Region {
+    required string name = 1;
+    optional bytes startKey = 2;
+    optional bytes endKey = 3;
+    optional int64 id = 4;
+    optional string location = 5;
+  }
+  repeated Region regions = 2;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableListMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableListMessage.proto
new file mode 100644
index 0000000..2ce4d25
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableListMessage.proto
@@ -0,0 +1,23 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message TableList {
+  repeated string name = 1;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableSchemaMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableSchemaMessage.proto
new file mode 100644
index 0000000..d817722
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/TableSchemaMessage.proto
@@ -0,0 +1,34 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import "ColumnSchemaMessage.proto";
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message TableSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }  
+  repeated Attribute attrs = 2;
+  repeated ColumnSchema columns = 3;
+  // optional helpful encodings of commonly used attributes
+  optional bool inMemory = 4;
+  optional bool readOnly = 5;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/VersionMessage.proto b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/VersionMessage.proto
new file mode 100644
index 0000000..2404a2e
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/VersionMessage.proto
@@ -0,0 +1,27 @@
+// Copyright 2010 The Apache Software Foundation
+//
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package org.apache.hadoop.hbase.rest.protobuf.generated;
+
+message Version {
+  optional string restVersion = 1;
+  optional string jvmVersion = 2;
+  optional string osVersion = 3;
+  optional string serverVersion = 4;
+  optional string jerseyVersion = 5;
+}
diff --git a/0.90/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift b/0.90/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
new file mode 100644
index 0000000..5f948dd
--- /dev/null
+++ b/0.90/src/main/resources/org/apache/hadoop/hbase/thrift/Hbase.thrift
@@ -0,0 +1,691 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+// ----------------------------------------------------------------
+// Hbase.thrift
+//
+// This is a Thrift interface definition file for the Hbase service.
+// Target language libraries for C++, Java, Ruby, PHP, (and more) are
+// generated by running this file through the Thrift compiler with the
+// appropriate flags. The Thrift compiler binary and runtime
+// libraries for various languages are available
+// from the Apache Incubator (http://incubator.apache.org/thrift/)
+//
+// See the package.html file for information on the version of Thrift
+// used to generate the *.java files checked into the Hbase project.
+// ----------------------------------------------------------------
+
+namespace java org.apache.hadoop.hbase.thrift.generated
+namespace cpp  apache.hadoop.hbase.thrift
+namespace rb Apache.Hadoop.Hbase.Thrift
+namespace py hbase
+namespace perl Hbase
+
+//
+// Types
+//
+
+// NOTE: all variables with the Text type are assumed to be correctly
+// formatted UTF-8 strings.  This is a programming language and locale
+// dependent property that the client application is responsible for
+// maintaining.  If strings with an invalid encoding are sent, an
+// IOError will be thrown.
+
+typedef binary Text
+typedef binary Bytes
+typedef i32    ScannerID
+
+/**
+ * TCell - Used to transport a cell value (byte[]) and the timestamp it was 
+ * stored with together as a result for get and getRow methods. This promotes
+ * the timestamp of a cell to a first-class value, making it easy to take 
+ * note of temporal data. Cell is used all the way from HStore up to HTable.
+ */
+struct TCell {
+  1:Bytes value,
+  2:i64 timestamp
+}
+
+/**
+ * An HColumnDescriptor contains information about a column family
+ * such as the number of versions, compression settings, etc. It is
+ * used as input when creating a table or adding a column.
+ */
+struct ColumnDescriptor {
+  1:Text name,
+  2:i32 maxVersions = 3,
+  3:string compression = "NONE",
+  4:bool inMemory = 0,
+  5:string bloomFilterType = "NONE",
+  6:i32 bloomFilterVectorSize = 0,
+  7:i32 bloomFilterNbHashes = 0,
+  8:bool blockCacheEnabled = 0,
+  9:i32 timeToLive = -1
+}
+
+/**
+ * A TRegionInfo contains information about an HTable region.
+ */
+struct TRegionInfo {
+  1:Text startKey,
+  2:Text endKey,
+  3:i64 id,
+  4:Text name,
+  5:byte version 
+}
+
+/**
+ * A Mutation object is used to either update or delete a column-value.
+ */
+struct Mutation {
+  1:bool isDelete = 0,
+  2:Text column,
+  3:Text value
+}
+
+
+/**
+ * A BatchMutation object is used to apply a number of Mutations to a single row.
+ */
+struct BatchMutation {
+  1:Text row,
+  2:list<Mutation> mutations
+}
+
+
+/**
+ * Holds row name and then a map of columns to cells. 
+ */
+struct TRowResult {
+  1:Text row,
+  2:map<Text, TCell> columns
+}
+
+//
+// Exceptions
+//
+/**
+ * An IOError exception signals that an error occurred communicating
+ * to the Hbase master or an Hbase region server.  Also used to return
+ * more general Hbase error conditions.
+ */
+exception IOError {
+  1:string message
+}
+
+/**
+ * An IllegalArgument exception indicates an illegal or invalid
+ * argument was passed into a procedure.
+ */
+exception IllegalArgument {
+  1:string message
+}
+
+/**
+ * An AlreadyExists exception signals that a table with the specified
+ * name already exists.
+ */
+exception AlreadyExists {
+  1:string message
+}
+
+//
+// Service 
+//
+
+service Hbase {
+  /**
+   * Brings a table on-line (enables it)
+   */
+  void enableTable(
+    /** name of the table */
+    1:Bytes tableName
+  ) throws (1:IOError io)
+    
+  /**
+   * Disables a table (takes it off-line).  If it is being served, the master
+   * will tell the servers to stop serving it.
+   */
+  void disableTable(
+    /** name of the table */
+    1:Bytes tableName
+  ) throws (1:IOError io)
+
+  /**
+   * @return true if table is on-line
+   */
+  bool isTableEnabled(
+    /** name of the table to check */
+    1:Bytes tableName
+  ) throws (1:IOError io)
+    
+  void compact(1:Bytes tableNameOrRegionName)
+    throws (1:IOError io)
+  
+  void majorCompact(1:Bytes tableNameOrRegionName)
+    throws (1:IOError io)
+    
+  /**
+   * List all the userspace tables.
+   *
+   * @return returns a list of names
+   */
+  list<Text> getTableNames()
+    throws (1:IOError io)
+
+  /**
+   * List all the column families associated with a table.
+   *
+   * @return list of column family descriptors
+   */
+  map<Text,ColumnDescriptor> getColumnDescriptors (
+    /** table name */
+    1:Text tableName
+  ) throws (1:IOError io)
+
+  /**
+   * List the regions associated with a table.
+   *
+   * @return list of region descriptors
+   */
+  list<TRegionInfo> getTableRegions(
+    /** table name */
+    1:Text tableName)
+    throws (1:IOError io)
+
+  /**
+   * Create a table with the specified column families.  The name
+   * field for each ColumnDescriptor must be set and must end in a
+   * colon (:). All other fields are optional and will get default
+   * values if not explicitly specified.
+   *
+   * @throws IllegalArgument if an input parameter is invalid
+   *
+   * @throws AlreadyExists if the table name already exists
+   */
+  void createTable(
+    /** name of table to create */
+    1:Text tableName,
+
+    /** list of column family descriptors */
+    2:list<ColumnDescriptor> columnFamilies
+  ) throws (1:IOError io, 2:IllegalArgument ia, 3:AlreadyExists exist)
+
+  /**
+   * Deletes a table
+   *
+   * @throws IOError if table doesn't exist on server or there was some other
+   * problem
+   */
+  void deleteTable(
+    /** name of table to delete */
+    1:Text tableName
+  ) throws (1:IOError io)
+
+  /** 
+   * Get a single TCell for the specified table, row, and column at the
+   * latest timestamp. Returns an empty list if no such value exists.
+   *
+   * @return value for specified row/column
+   */
+  list<TCell> get(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** column name */
+    3:Text column
+  ) throws (1:IOError io)
+
+  /** 
+   * Get the specified number of versions for the specified table,
+   * row, and column.
+   *
+   * @return list of cells for specified row/column
+   */
+  list<TCell> getVer(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** column name */
+    3:Text column,
+
+    /** number of versions to retrieve */
+    4:i32 numVersions
+  ) throws (1:IOError io)
+
+  /** 
+   * Get the specified number of versions for the specified table,
+   * row, and column.  Only versions less than or equal to the specified
+   * timestamp will be returned.
+   *
+   * @return list of cells for specified row/column
+   */
+  list<TCell> getVerTs(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** column name */
+    3:Text column,
+
+    /** timestamp */
+    4:i64 timestamp,
+
+    /** number of versions to retrieve */
+    5:i32 numVersions
+  ) throws (1:IOError io)
+
+  /** 
+   * Get all the data for the specified table and row at the latest
+   * timestamp. Returns an empty list if the row does not exist.
+   * 
+   * @return TRowResult containing the row and map of columns to TCells
+   */
+  list<TRowResult> getRow(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row
+  ) throws (1:IOError io)
+
+  /** 
+   * Get the specified columns for the specified table and row at the latest
+   * timestamp. Returns an empty list if the row does not exist.
+   * 
+   * @return TRowResult containing the row and map of columns to TCells
+   */
+  list<TRowResult> getRowWithColumns(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** List of columns to return, null for all columns */
+    3:list<Text> columns
+  ) throws (1:IOError io)
+
+  /** 
+   * Get all the data for the specified table and row at the specified
+   * timestamp. Returns an empty list if the row does not exist.
+   * 
+   * @return TRowResult containing the row and map of columns to TCells
+   */
+  list<TRowResult> getRowTs(
+    /** name of the table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** timestamp */
+    3:i64 timestamp
+  ) throws (1:IOError io)
+    
+  /** 
+   * Get the specified columns for the specified table and row at the specified
+   * timestamp. Returns an empty list if the row does not exist.
+   * 
+   * @return TRowResult containing the row and map of columns to TCells
+   */
+  list<TRowResult> getRowWithColumnsTs(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** List of columns to return, null for all columns */
+    3:list<Text> columns,
+    4:i64 timestamp
+  ) throws (1:IOError io)
+
+  /** 
+   * Apply a series of mutations (updates/deletes) to a row in a
+   * single transaction.  If an exception is thrown, then the
+   * transaction is aborted.  Default current timestamp is used, and
+   * all entries will have an identical timestamp.
+   */
+  void mutateRow(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** list of mutation commands */
+    3:list<Mutation> mutations
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+
+  /** 
+   * Apply a series of mutations (updates/deletes) to a row in a
+   * single transaction.  If an exception is thrown, then the
+   * transaction is aborted.  The specified timestamp is used, and
+   * all entries will have an identical timestamp.
+   */
+  void mutateRowTs(
+    /** name of table */
+    1:Text tableName,
+
+    /** row key */
+    2:Text row,
+
+    /** list of mutation commands */
+    3:list<Mutation> mutations,
+
+    /** timestamp */
+    4:i64 timestamp
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+
+  /** 
+   * Apply a series of batches (each a series of mutations on a single row)
+   * in a single transaction.  If an exception is thrown, then the
+   * transaction is aborted.  Default current timestamp is used, and
+   * all entries will have an identical timestamp.
+   */
+  void mutateRows(
+    /** name of table */
+    1:Text tableName,
+
+    /** list of row batches */
+    2:list<BatchMutation> rowBatches
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+
+  /** 
+   * Apply a series of batches (each a series of mutations on a single row)
+   * in a single transaction.  If an exception is thrown, then the
+   * transaction is aborted.  The specified timestamp is used, and
+   * all entries will have an identical timestamp.
+   */
+  void mutateRowsTs(
+    /** name of table */
+    1:Text tableName,
+
+    /** list of row batches */
+    2:list<BatchMutation> rowBatches,
+
+    /** timestamp */
+    3:i64 timestamp
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+
+  /**
+   * Atomically increment the column value specified.  Returns the next value post increment.
+   */
+  i64 atomicIncrement(
+    /** name of table */
+    1:Text tableName,
+
+    /** row to increment */
+    2:Text row,
+
+    /** name of column */
+    3:Text column,
+
+    /** amount to increment by */
+    4:i64 value
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+    
+  /** 
+   * Delete all cells that match the passed row and column.
+   */
+  void deleteAll(
+    /** name of table */
+    1:Text tableName,
+
+    /** Row to update */
+    2:Text row,
+
+    /** name of column whose value is to be deleted */
+    3:Text column
+  ) throws (1:IOError io)
+
+  /** 
+   * Delete all cells that match the passed row and column and whose
+   * timestamp is equal-to or older than the passed timestamp.
+   */
+  void deleteAllTs(
+    /** name of table */
+    1:Text tableName,
+
+    /** Row to update */
+    2:Text row,
+
+    /** name of column whose value is to be deleted */
+    3:Text column,
+
+    /** timestamp */
+    4:i64 timestamp
+  ) throws (1:IOError io)
+
+  /**
+   * Completely delete the row's cells.
+   */
+  void deleteAllRow(
+    /** name of table */
+    1:Text tableName,
+
+    /** key of the row to be completely deleted. */
+    2:Text row
+  ) throws (1:IOError io)
+
+  /**
+   * Completely delete the row's cells marked with a timestamp
+   * equal-to or older than the passed timestamp.
+   */
+  void deleteAllRowTs(
+    /** name of table */
+    1:Text tableName,
+
+    /** key of the row to be completely deleted. */
+    2:Text row,
+
+    /** timestamp */
+    3:i64 timestamp
+  ) throws (1:IOError io)
+
+  /** 
+   * Get a scanner on the current table starting at the specified row and
+   * ending at the last row in the table.  Return the specified columns.
+   *
+   * @return scanner id to be used with other scanner procedures
+   */
+  ScannerID scannerOpen(
+    /** name of table */
+    1:Text tableName,
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    2:Text startRow,
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    3:list<Text> columns
+  ) throws (1:IOError io)
+
+  /** 
+   * Get a scanner on the current table starting at the specified start row
+   * and stopping just before the specified stop row.  Return the
+   * specified columns.
+   *
+   * @return scanner id to be used with other scanner procedures
+   */
+  ScannerID scannerOpenWithStop(
+    /** name of table */
+    1:Text tableName,
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    2:Text startRow,
+
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    3:Text stopRow,
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    4:list<Text> columns
+  ) throws (1:IOError io)
+
+  /**
+   * Open a scanner for a given prefix.  That is, all rows returned will have
+   * the specified prefix; no other rows will be returned.
+   *
+   * @return scanner id to use with other scanner calls
+   */
+  ScannerID scannerOpenWithPrefix(
+    /** name of table */
+    1:Text tableName,
+
+    /** the prefix (and thus start row) of the keys you want */
+    2:Text startAndPrefix,
+
+    /** the columns you want returned */
+    3:list<Text> columns
+  ) throws (1:IOError io)
+
+  /** 
+   * Get a scanner on the current table starting at the specified row and
+   * ending at the last row in the table.  Return the specified columns.
+   * Only values with the specified timestamp are returned.
+   *
+   * @return scanner id to be used with other scanner procedures
+   */
+  ScannerID scannerOpenTs(
+    /** name of table */
+    1:Text tableName,
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    2:Text startRow,
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    3:list<Text> columns,
+
+    /** timestamp */
+    4:i64 timestamp
+  ) throws (1:IOError io)
+
+  /** 
+   * Get a scanner on the current table starting at the specified start row
+   * and stopping just before the specified stop row.  Return the
+   * specified columns.  Only values with the specified timestamp are
+   * returned.
+   *
+   * @return scanner id to be used with other scanner procedures
+   */
+  ScannerID scannerOpenWithStopTs(
+    /** name of table */
+    1:Text tableName,
+
+    /**
+     * Starting row in table to scan.
+     * Send "" (empty string) to start at the first row.
+     */
+    2:Text startRow,
+
+    /**
+     * row to stop scanning on. This row is *not* included in the
+     * scanner's results
+     */
+    3:Text stopRow,
+
+    /**
+     * columns to scan. If column name is a column family, all
+     * columns of the specified column family are returned. It's also possible
+     * to pass a regex in the column qualifier.
+     */
+    4:list<Text> columns,
+
+    /** timestamp */
+    5:i64 timestamp
+  ) throws (1:IOError io)
+
+  /**
+   * Returns the scanner's current row value and advances to the next
+   * row in the table.  When there are no more rows in the table, or a key
+   * greater-than-or-equal-to the scanner's specified stopRow is reached,
+   * an empty list is returned.
+   *
+   * @return a TRowResult containing the current row and a map of the columns to TCells.
+   *
+   * @throws IllegalArgument if ScannerID is invalid
+   *
+   * @throws NotFound when the scanner reaches the end
+   */
+  list<TRowResult> scannerGet(
+    /** id of a scanner returned by scannerOpen */
+    1:ScannerID id
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+
+  /**
+   * Returns up to nbRows rows starting at the scanner's current row and
+   * advances the scanner past the rows returned.  When there are no more
+   * rows in the table, or a key greater-than-or-equal-to the scanner's
+   * specified stopRow is reached, an empty list is returned.
+   *
+   * @return a TRowResult containing the current row and a map of the columns to TCells.
+   *
+   * @throws IllegalArgument if ScannerID is invalid
+   *
+   * @throws NotFound when the scanner reaches the end
+   */
+  list<TRowResult> scannerGetList(
+    /** id of a scanner returned by scannerOpen */
+    1:ScannerID id,
+
+    /** number of results to return */
+    2:i32 nbRows
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+
+  /**
+   * Closes the server-state associated with an open scanner.
+   *
+   * @throws IllegalArgument if ScannerID is invalid
+   */
+  void scannerClose(
+    /** id of a scanner returned by scannerOpen */
+    1:ScannerID id
+  ) throws (1:IOError io, 2:IllegalArgument ia)
+}
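
Editor's note: the IDL above is what the Thrift compiler turns into the org.apache.hadoop.hbase.thrift.generated bindings. A minimal Java client sketch, assuming the ThriftServer is listening on localhost:9090 and that the Thrift 0.2 compiler used here (per HBASE-1360) maps the binary typedefs (Text/Bytes) to byte[]:

    import java.util.List;
    import org.apache.hadoop.hbase.thrift.generated.Hbase;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.protocol.TProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class ThriftClientExample {
      public static void main(String[] args) throws Exception {
        // Assumed host/port of a running HBase ThriftServer.
        TTransport transport = new TSocket("localhost", 9090);
        transport.open();
        TProtocol protocol = new TBinaryProtocol(transport);
        Hbase.Client client = new Hbase.Client(protocol);

        // getTableNames() from the service definition above: lists userspace tables.
        List<byte[]> tables = client.getTableNames();
        for (byte[] name : tables) {
          System.out.println(Bytes.toString(name));
        }
        transport.close();
      }
    }
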
diff --git a/0.90/src/main/ruby/hbase.rb b/0.90/src/main/ruby/hbase.rb
new file mode 100644
index 0000000..a62a53b
--- /dev/null
+++ b/0.90/src/main/ruby/hbase.rb
@@ -0,0 +1,71 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# HBase ruby classes.
+# Has wrapper classes for org.apache.hadoop.hbase.client.HBaseAdmin
+# and for org.apache.hadoop.hbase.client.HTable.  Classes take
+# Formatters on construction and outputs any results using
+# Formatter methods.  These classes are only really for use by
+# the hirb.rb HBase Shell script; they don't make much sense elsewhere.
+# For example, the exists method on Admin class prints to the formatter
+# whether the table exists and returns nil regardless.
+include Java
+
+include_class('java.lang.Integer') {|package,name| "J#{name}" }
+include_class('java.lang.Long') {|package,name| "J#{name}" }
+include_class('java.lang.Boolean') {|package,name| "J#{name}" }
+
+module HBaseConstants
+  COLUMN = "COLUMN"
+  COLUMNS = "COLUMNS"
+  TIMESTAMP = "TIMESTAMP"
+  NAME = org.apache.hadoop.hbase.HConstants::NAME
+  VERSIONS = org.apache.hadoop.hbase.HConstants::VERSIONS
+  IN_MEMORY = org.apache.hadoop.hbase.HConstants::IN_MEMORY
+  STOPROW = "STOPROW"
+  STARTROW = "STARTROW"
+  ENDROW = STOPROW
+  LIMIT = "LIMIT"
+  METHOD = "METHOD"
+  MAXLENGTH = "MAXLENGTH"
+  CACHE_BLOCKS = "CACHE_BLOCKS"
+  REPLICATION_SCOPE = "REPLICATION_SCOPE"
+  INTERVAL = 'INTERVAL'
+  CACHE = 'CACHE'
+
+  # Load constants from hbase java API
+  def self.promote_constants(constants)
+    # The constants to import are all in uppercase
+    constants.each do |c|
+      next if c =~ /DEFAULT_.*/ || c != c.upcase
+      next if eval("defined?(#{c})")
+      eval("#{c} = '#{c}'")
+    end
+  end
+
+  promote_constants(org.apache.hadoop.hbase.HColumnDescriptor.constants)
+  promote_constants(org.apache.hadoop.hbase.HTableDescriptor.constants)
+end
+
+# Include classes definition
+require 'hbase/hbase'
+require 'hbase/admin'
+require 'hbase/table'
+require 'hbase/replication_admin'
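
Editor's note: the shell wrapper that follows (hbase/admin.rb) drives org.apache.hadoop.hbase.client.HBaseAdmin directly; its create() builds an HTableDescriptor, adds HColumnDescriptors, and calls createTable. For reference, a sketch of the Java-level equivalent of a simple shell "create 't1', 'f1'" (table and family names here are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class CreateTableExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        // Build the table descriptor and one column family, as admin.rb's
        // create() does before calling createTable.
        HTableDescriptor htd = new HTableDescriptor("t1");
        htd.addFamily(new HColumnDescriptor("f1"));
        admin.createTable(htd);
        System.out.println("created: " + admin.tableExists("t1"));
      }
    }
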
diff --git a/0.90/src/main/ruby/hbase/admin.rb b/0.90/src/main/ruby/hbase/admin.rb
new file mode 100644
index 0000000..d923551
--- /dev/null
+++ b/0.90/src/main/ruby/hbase/admin.rb
@@ -0,0 +1,402 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include Java
+
+# Wrapper for org.apache.hadoop.hbase.client.HBaseAdmin
+
+module Hbase
+  class Admin
+    include HBaseConstants
+
+    def initialize(configuration, formatter)
+      @admin = org.apache.hadoop.hbase.client.HBaseAdmin.new(configuration)
+      connection = @admin.getConnection()
+      @zk_wrapper = connection.getZooKeeperWatcher()
+      zk = @zk_wrapper.getZooKeeper()
+      @zk_main = org.apache.zookeeper.ZooKeeperMain.new(zk)
+      @formatter = formatter
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Returns a list of tables in hbase
+    def list
+      @admin.listTables.map { |t| t.getNameAsString }
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Requests a table or region flush
+    def flush(table_or_region_name)
+      @admin.flush(table_or_region_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Requests a table or region compaction
+    def compact(table_or_region_name)
+      @admin.compact(table_or_region_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Requests a table or region major compaction
+    def major_compact(table_or_region_name)
+      @admin.majorCompact(table_or_region_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Requests a table or region split
+    def split(table_or_region_name)
+      @admin.split(table_or_region_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Requests a cluster balance
+    # Returns true if balancer ran
+    def balancer()
+      @admin.balancer()
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Enable/disable balancer
+    # Returns previous balancer switch setting.
+    def balance_switch(enableDisable)
+      @admin.balanceSwitch(java.lang.Boolean::valueOf(enableDisable))
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Enables a table
+    def enable(table_name)
+      tableExists(table_name)
+      return if enabled?(table_name)
+      @admin.enableTable(table_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Disables a table
+    def disable(table_name)
+      tableExists(table_name)
+      return if disabled?(table_name)
+      @admin.disableTable(table_name)
+    end
+
+    #---------------------------------------------------------------------------------------------
+    # Throw exception if table doesn't exist
+    def tableExists(table_name)
+      raise ArgumentError, "Table #{table_name} does not exist.'" unless exists?(table_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Is table disabled?
+    def disabled?(table_name)
+      @admin.isTableDisabled(table_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Drops a table
+    def drop(table_name)
+      tableExists(table_name)
+      raise ArgumentError, "Table #{table_name} is enabled. Disable it first.'" if enabled?(table_name)
+
+      @admin.deleteTable(table_name)
+      flush(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
+      major_compact(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Returns ZooKeeper status dump
+    def zk_dump
+      org.apache.hadoop.hbase.zookeeper.ZKUtil::dump(@zk_wrapper)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Creates a table
+    def create(table_name, *args)
+      # Fail if table name is not a string
+      raise(ArgumentError, "Table name must be of type String") unless table_name.kind_of?(String)
+
+      # Flatten params array
+      args = args.flatten.compact
+
+      # Fail if no column families defined
+      raise(ArgumentError, "Table must have at least one column family") if args.empty?
+
+      # Start defining the table
+      htd = org.apache.hadoop.hbase.HTableDescriptor.new(table_name)
+
+      # All args are columns, add them to the table definition
+      # TODO: add table options support
+      args.each do |arg|
+        unless arg.kind_of?(String) || arg.kind_of?(Hash)
+          raise(ArgumentError, "#{arg.class} of #{arg.inspect} is not of Hash or String type")
+        end
+
+        # Add column to the table
+        descriptor = hcd(arg, htd)
+        if arg[COMPRESSION_COMPACT]
+          descriptor.setValue(COMPRESSION_COMPACT, arg[COMPRESSION_COMPACT])
+        end
+        htd.addFamily(descriptor)
+      end
+
+      # Perform the create table call
+      @admin.createTable(htd)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Closes a region
+    def close_region(region_name, server = nil)
+      @admin.closeRegion(region_name, server ? [server].to_java : nil)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Assign a region
+    def assign(region_name, force)
+      @admin.assign(org.apache.hadoop.hbase.util.Bytes.toBytes(region_name), java.lang.Boolean::valueOf(force))
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Unassign a region
+    def unassign(region_name, force)
+      @admin.unassign(org.apache.hadoop.hbase.util.Bytes.toBytes(region_name), java.lang.Boolean::valueOf(force))
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Move a region
+    def move(encoded_region_name, server = nil)
+      @admin.move(org.apache.hadoop.hbase.util.Bytes.toBytes(encoded_region_name), server ? org.apache.hadoop.hbase.util.Bytes.toBytes(server): nil)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Returns table's structure description
+    def describe(table_name)
+      tables = @admin.listTables.to_a
+      tables << org.apache.hadoop.hbase.HTableDescriptor::META_TABLEDESC
+      tables << org.apache.hadoop.hbase.HTableDescriptor::ROOT_TABLEDESC
+
+      tables.each do |t|
+        # Found the table
+        return t.to_s if t.getNameAsString == table_name
+      end
+
+      raise(ArgumentError, "Failed to find table named #{table_name}")
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Truncates table (deletes all records by recreating the table)
+    def truncate(table_name)
+      h_table = org.apache.hadoop.hbase.client.HTable.new(table_name)
+      table_description = h_table.getTableDescriptor()
+      yield 'Disabling table...' if block_given?
+      disable(table_name)
+
+      yield 'Dropping table...' if block_given?
+      drop(table_name)
+
+      yield 'Creating table...' if block_given?
+      @admin.createTable(table_description)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Change table structure or table options
+    def alter(table_name, *args)
+      # Table name should be a string
+      raise(ArgumentError, "Table name must be of type String") unless table_name.kind_of?(String)
+
+      # Table should exist
+      raise(ArgumentError, "Can't find a table: #{table_name}") unless exists?(table_name)
+
+      # Table should be disabled
+      raise(ArgumentError, "Table #{table_name} is enabled. Disable it first before altering.") if enabled?(table_name)
+
+      # There should be at least one argument
+      raise(ArgumentError, "There should be at least one argument but the table name") if args.empty?
+
+      # Get table descriptor
+      htd = @admin.getTableDescriptor(table_name.to_java_bytes)
+
+      # Process all args
+      args.each do |arg|
+        # Normalize args to support column name only alter specs
+        arg = { NAME => arg } if arg.kind_of?(String)
+
+        # Normalize args to support shortcut delete syntax
+        arg = { METHOD => 'delete', NAME => arg['delete'] } if arg['delete']
+
+        # No method parameter, try to use the args as a column definition
+        unless method = arg.delete(METHOD)
+          descriptor = hcd(arg, htd)
+          if arg[COMPRESSION_COMPACT]
+            descriptor.setValue(COMPRESSION_COMPACT, arg[COMPRESSION_COMPACT])
+          end
+          column_name = descriptor.getNameAsString
+
+          # If the column family already exists, alter it. Otherwise create it.
+          if htd.hasFamily(column_name.to_java_bytes)
+            @admin.modifyColumn(table_name, column_name, descriptor)
+          else
+            @admin.addColumn(table_name, descriptor)
+          end
+          next
+        end
+
+        # Delete column family
+        if method == "delete"
+          raise(ArgumentError, "NAME parameter missing for delete method") unless arg[NAME]
+          @admin.deleteColumn(table_name, arg[NAME])
+          next
+        end
+
+        # Change table attributes
+        if method == "table_att"
+          htd.setMaxFileSize(JLong.valueOf(arg[MAX_FILESIZE])) if arg[MAX_FILESIZE]
+          htd.setReadOnly(JBoolean.valueOf(arg[READONLY])) if arg[READONLY]
+          htd.setMemStoreFlushSize(JLong.valueOf(arg[MEMSTORE_FLUSHSIZE])) if arg[MEMSTORE_FLUSHSIZE]
+          htd.setDeferredLogFlush(JBoolean.valueOf(arg[DEFERRED_LOG_FLUSH])) if arg[DEFERRED_LOG_FLUSH]
+          @admin.modifyTable(table_name.to_java_bytes, htd)
+          next
+        end
+
+        # Unknown method
+        raise ArgumentError, "Unknown method: #{method}"
+      end
+    end
+
+    def status(format)
+      status = @admin.getClusterStatus()
+      if format == "detailed"
+        puts("version %s" % [ status.getHBaseVersion() ])
+        # Print regions in transition first because that list is usually empty
+        puts("%d regionsInTransition" % status.getRegionsInTransition().size())
+        for k, v in status.getRegionsInTransition()
+          puts("    %s" % [v])
+        end
+        puts("%d live servers" % [ status.getServers() ])
+        for server in status.getServerInfo()
+          puts("    %s:%d %d" % \
+            [ server.getServerAddress().getHostname(),  \
+              server.getServerAddress().getPort(), server.getStartCode() ])
+          puts("        %s" % [ server.getLoad().toString() ])
+          for region in server.getLoad().getRegionsLoad()
+            puts("        %s" % [ region.getNameAsString() ])
+            puts("            %s" % [ region.toString() ])
+          end
+        end
+        puts("%d dead servers" % [ status.getDeadServers() ])
+        for server in status.getDeadServerNames()
+          puts("    %s" % [ server ])
+        end
+      elsif format == "simple"
+        load = 0
+        regions = 0
+        puts("%d live servers" % [ status.getServers() ])
+        for server in status.getServerInfo()
+          puts("    %s:%d %d" % \
+            [ server.getServerAddress().getHostname(),  \
+              server.getServerAddress().getPort(), server.getStartCode() ])
+          puts("        %s" % [ server.getLoad().toString() ])
+          load += server.getLoad().getNumberOfRequests()
+          regions += server.getLoad().getNumberOfRegions()
+        end
+        puts("%d dead servers" % [ status.getDeadServers() ])
+        for server in status.getDeadServerNames()
+          puts("    %s" % [ server ])
+        end
+        puts("Aggregate load: %d, regions: %d" % [ load , regions ] )
+      else
+        puts "#{status.getServers} servers, #{status.getDeadServers} dead, #{'%.4f' % status.getAverageLoad} average load"
+      end
+    end
+
+    #----------------------------------------------------------------------------------------------
+    #
+    # Helper methods
+    #
+
+    # Does table exist?
+    def exists?(table_name)
+      @admin.tableExists(table_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Is table enabled
+    def enabled?(table_name)
+      @admin.isTableEnabled(table_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Return a new HColumnDescriptor made of passed args
+    def hcd(arg, htd)
+      # String arg, single parameter constructor
+      return org.apache.hadoop.hbase.HColumnDescriptor.new(arg) if arg.kind_of?(String)
+
+      raise(ArgumentError, "Column family #{arg} must have a name") unless name = arg[NAME]
+      
+      family = htd.getFamily(name.to_java_bytes)
+      # create it if it's a new family
+      family ||= org.apache.hadoop.hbase.HColumnDescriptor.new(name.to_java_bytes)
+
+      family.setBlockCacheEnabled(JBoolean.valueOf(arg[org.apache.hadoop.hbase.HColumnDescriptor::BLOCKCACHE])) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::BLOCKCACHE)
+      family.setScope(JInteger.valueOf(arg[REPLICATION_SCOPE])) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::REPLICATION_SCOPE)
+      family.setInMemory(JBoolean.valueOf(arg[IN_MEMORY])) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY)
+      family.setTimeToLive(JInteger.valueOf(arg[org.apache.hadoop.hbase.HColumnDescriptor::TTL])) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::TTL)
+      family.setBlocksize(JInteger.valueOf(arg[org.apache.hadoop.hbase.HColumnDescriptor::BLOCKSIZE])) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::BLOCKSIZE)
+      family.setMaxVersions(JInteger.valueOf(arg[VERSIONS])) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::VERSIONS)
+      if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::BLOOMFILTER)
+        bloomtype = arg[org.apache.hadoop.hbase.HColumnDescriptor::BLOOMFILTER].upcase
+        unless org.apache.hadoop.hbase.regionserver.StoreFile::BloomType.constants.include?(bloomtype)      
+          raise(ArgumentError, "BloomFilter type #{bloomtype} is not supported. Use one of " + org.apache.hadoop.hbase.regionserver.StoreFile::BloomType.constants.join(" ")) 
+        else 
+          family.setBloomFilterType(org.apache.hadoop.hbase.regionserver.StoreFile::BloomType.valueOf(bloomtype))
+        end
+      end
+      if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::COMPRESSION)
+        compression = arg[org.apache.hadoop.hbase.HColumnDescriptor::COMPRESSION].upcase
+        unless org.apache.hadoop.hbase.io.hfile.Compression::Algorithm.constants.include?(compression)      
+          raise(ArgumentError, "Compression #{compression} is not supported. Use one of " + org.apache.hadoop.hbase.io.hfile.Compression::Algorithm.constants.join(" ")) 
+        else 
+          family.setCompressionType(org.apache.hadoop.hbase.io.hfile.Compression::Algorithm.valueOf(compression))
+        end
+      end
+      return family
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Enables/disables a region by name
+    def online(region_name, on_off)
+      # Open meta table
+      meta = org.apache.hadoop.hbase.client.HTable.new(org.apache.hadoop.hbase.HConstants::META_TABLE_NAME)
+
+      # Read region info
+      # FIXME: fail gracefully if can't find the region
+      region_bytes = org.apache.hadoop.hbase.util.Bytes.toBytes(region_name)
+      g = org.apache.hadoop.hbase.client.Get.new(region_bytes)
+      g.addColumn(org.apache.hadoop.hbase.HConstants::CATALOG_FAMILY, org.apache.hadoop.hbase.HConstants::REGIONINFO_QUALIFIER)
+      hri_bytes = meta.get(g).value
+
+      # Change region status
+      hri = org.apache.hadoop.hbase.util.Writables.getWritable(hri_bytes, org.apache.hadoop.hbase.HRegionInfo.new)
+      hri.setOffline(on_off)
+
+      # Write it back
+      put = org.apache.hadoop.hbase.client.Put.new(region_bytes)
+      put.add(org.apache.hadoop.hbase.HConstants::CATALOG_FAMILY, org.apache.hadoop.hbase.HConstants::REGIONINFO_QUALIFIER, org.apache.hadoop.hbase.util.Writables.getBytes(hri))
+      meta.put(put)
+    end
+  end
+end
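
The alter method above accepts several argument shapes (a bare column family
name, a full column spec dictionary, or the shortcut 'delete' form) and
normalizes them before touching the table descriptor. A minimal standalone
sketch of that normalization in plain Ruby, assuming NAME and METHOD are the
string constants "NAME" and "METHOD" as in HBaseConstants:

    NAME   = "NAME"
    METHOD = "METHOD"

    def normalize_alter_arg(arg)
      # A bare string is treated as a column family name
      arg = { NAME => arg } if arg.kind_of?(String)
      # Shortcut delete syntax: 'delete' => 'f1'
      arg = { METHOD => 'delete', NAME => arg['delete'] } if arg['delete']
      arg
    end

    p normalize_alter_arg('f1')               # => {"NAME"=>"f1"}
    p normalize_alter_arg('delete' => 'f1')   # => {"METHOD"=>"delete", "NAME"=>"f1"}
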
diff --git a/0.90/src/main/ruby/hbase/hbase.rb b/0.90/src/main/ruby/hbase/hbase.rb
new file mode 100644
index 0000000..beb2450
--- /dev/null
+++ b/0.90/src/main/ruby/hbase/hbase.rb
@@ -0,0 +1,56 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include Java
+
+require 'hbase/admin'
+require 'hbase/table'
+
+module Hbase
+  class Hbase
+    attr_accessor :configuration
+
+    def initialize(config = nil)
+      # Create configuration
+      if config
+        self.configuration = config
+      else
+        self.configuration = org.apache.hadoop.hbase.HBaseConfiguration.create
+        # Lower the retry counts in hbase and ipc so a human at the shell isn't left waiting through the full default number of retries.
+        configuration.setInt("hbase.client.retries.number", 7)
+        configuration.setInt("ipc.client.connect.max.retries", 3)
+      end
+    end
+
+    def admin(formatter)
+      ::Hbase::Admin.new(configuration, formatter)
+    end
+
+    # Create new one each time
+    def table(table, formatter)
+      ::Hbase::Table.new(configuration, table, formatter)
+    end
+
+    def replication_admin(formatter)
+      ::Hbase::RepAdmin.new(configuration, formatter)
+    end
+
+  end
+end
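
A hedged sketch of how this entry point is wired up from JRuby. The require
paths and the Shell::Formatter::Console class name are assumptions taken from
the surrounding shell scripts (see shell/formatter.rb, not shown here), and the
HBase jars must already be on the classpath:

    require 'hbase'             # the top-level hbase.rb pulls in HBaseConstants and these classes
    require 'shell/formatter'

    hbase     = Hbase::Hbase.new                # builds HBaseConfiguration.create internally
    formatter = Shell::Formatter::Console.new   # assumed console formatter class
    admin     = hbase.admin(formatter)          # Hbase::Admin wrapper
    t1        = hbase.table('t1', formatter)    # a new Hbase::Table per call
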
diff --git a/0.90/src/main/ruby/hbase/replication_admin.rb b/0.90/src/main/ruby/hbase/replication_admin.rb
new file mode 100644
index 0000000..35e37af
--- /dev/null
+++ b/0.90/src/main/ruby/hbase/replication_admin.rb
@@ -0,0 +1,70 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include Java
+
+# Wrapper for org.apache.hadoop.hbase.client.replication.ReplicationAdmin
+
+module Hbase
+  class RepAdmin
+    include HBaseConstants
+
+    def initialize(configuration, formatter)
+      @replication_admin = org.apache.hadoop.hbase.client.replication.ReplicationAdmin.new(configuration)
+      @formatter = formatter
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Add a new peer cluster to replicate to
+    def add_peer(id, cluster_key)
+      @replication_admin.addPeer(id, cluster_key)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Remove a peer cluster and stop replicating to it
+    def remove_peer(id)
+      @replication_admin.removePeer(id)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Restart the replication stream to the specified peer
+    def enable_peer(id)
+      @replication_admin.enablePeer(id)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Stop the replication stream to the specified peer
+    def disable_peer(id)
+      @replication_admin.disablePeer(id)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Restart replication globally; the state of the individual peer streams is unknown
+    def start_replication
+      @replication_admin.setReplicating(true)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Kill switch for replication; globally stops all replication
+    def stop_replication
+      @replication_admin.setReplicating(false)
+    end
+  end
+end
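
A hedged usage sketch building on the Hbase#replication_admin factory from
hbase/hbase.rb above; the peer id and cluster key are illustrative values only,
and hbase.replication must be enabled on the cluster:

    rep = hbase.replication_admin(formatter)      # Hbase::RepAdmin wrapper
    rep.add_peer('1', 'zk1,zk2,zk3:2181:/hbase')  # register a slave cluster
    rep.disable_peer('1')                         # stop shipping edits to it
    rep.enable_peer('1')                          # resume
    rep.remove_peer('1')                          # forget the peer entirely
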
diff --git a/0.90/src/main/ruby/hbase/table.rb b/0.90/src/main/ruby/hbase/table.rb
new file mode 100644
index 0000000..c8e0076
--- /dev/null
+++ b/0.90/src/main/ruby/hbase/table.rb
@@ -0,0 +1,320 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+include Java
+
+# Wrapper for org.apache.hadoop.hbase.client.HTable
+
+module Hbase
+  class Table
+    include HBaseConstants
+
+    def initialize(configuration, table_name, formatter)
+      @table = org.apache.hadoop.hbase.client.HTable.new(configuration, table_name)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Put a cell 'value' at specified table/row/column
+    def put(row, column, value, timestamp = nil)
+      p = org.apache.hadoop.hbase.client.Put.new(row.to_s.to_java_bytes)
+      family, qualifier = parse_column_name(column)
+      if timestamp
+        p.add(family, qualifier, timestamp, value.to_s.to_java_bytes)
+      else
+        p.add(family, qualifier, value.to_s.to_java_bytes)
+      end
+      @table.put(p)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Delete a cell
+    def delete(row, column, timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)
+      deleteall(row, column, timestamp)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Delete a row
+    def deleteall(row, column = nil, timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)
+      d = org.apache.hadoop.hbase.client.Delete.new(row.to_s.to_java_bytes, timestamp, nil)
+      if column
+        family, qualifier = parse_column_name(column)
+        d.deleteColumns(family, qualifier, timestamp)
+      end
+      @table.delete(d)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Increment a counter atomically
+    def incr(row, column, value = nil)
+      value ||= 1
+      family, qualifier = parse_column_name(column)
+      @table.incrementColumnValue(row.to_s.to_java_bytes, family, qualifier, value)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Count rows in a table
+    def count(interval = 1000, caching_rows = 10)
+      # We can safely set scanner caching with the first key only filter
+      scan = org.apache.hadoop.hbase.client.Scan.new
+      scan.cache_blocks = false
+      scan.caching = caching_rows
+      scan.setFilter(org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter.new)
+
+      # Run the scanner
+      scanner = @table.getScanner(scan)
+      count = 0
+      iter = scanner.iterator
+
+      # Iterate results
+      while iter.hasNext
+        row = iter.next
+        count += 1
+        next unless (block_given? && count % interval == 0)
+        # Allow command modules to visualize counting process
+        yield(count, String.from_java_bytes(row.getRow))
+      end
+
+      # Return the counter
+      return count
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Get from table
+    def get(row, *args)
+      get = org.apache.hadoop.hbase.client.Get.new(row.to_s.to_java_bytes)
+      maxlength = -1
+
+      # Normalize args
+      args = args.first if args.first.kind_of?(Hash)
+      if args.kind_of?(String) || args.kind_of?(Array)
+        columns = [ args ].flatten.compact
+        args = { COLUMNS => columns }
+      end
+
+      #
+      # Parse arguments
+      #
+      unless args.kind_of?(Hash)
+        raise ArgumentError, "Failed parse of of #{args.inspect}, #{args.class}"
+      end
+
+      # Get maxlength parameter if passed
+      maxlength = args.delete(MAXLENGTH) if args[MAXLENGTH]
+
+      unless args.empty?
+        columns = args[COLUMN] || args[COLUMNS]
+        if columns
+          # Normalize types, convert string to an array of strings
+          columns = [ columns ] if columns.is_a?(String)
+
+          # At this point it is either an array or some unsupported stuff
+          unless columns.kind_of?(Array)
+            raise ArgumentError, "Failed parse column argument type #{args.inspect}, #{args.class}"
+          end
+
+          # Get each column name and add it to the filter
+          columns.each do |column|
+            family, qualifier = parse_column_name(column.to_s)
+            if qualifier
+              get.addColumn(family, qualifier)
+            else
+              get.addFamily(family)
+            end
+          end
+
+          # Additional params
+          get.setMaxVersions(args[VERSIONS] || 1)
+          get.setTimeStamp(args[TIMESTAMP]) if args[TIMESTAMP]
+        else
+          # May have passed TIMESTAMP and row only; wants all columns from ts.
+          unless ts = args[TIMESTAMP]
+            raise ArgumentError, "Failed parse of #{args.inspect}, #{args.class}"
+          end
+
+          # Set the timestamp
+          get.setTimeStamp(ts.to_i)
+        end
+      end
+
+      # Call hbase for the results
+      result = @table.get(get)
+      return nil if result.isEmpty
+
+      # Collect the results.  The Result is a list of KeyValues.
+      res = {}
+      result.list.each do |kv|
+        family = String.from_java_bytes(kv.getFamily)
+        qualifier = org.apache.hadoop.hbase.util.Bytes::toStringBinary(kv.getQualifier)
+
+        column = "#{family}:#{qualifier}"
+        value = to_string(column, kv, maxlength)
+
+        if block_given?
+          yield(column, value)
+        else
+          res[column] = value
+        end
+      end
+
+      # If block given, we've yielded all the results, otherwise just return them
+      return ((block_given?) ? nil : res)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Fetches and decodes a counter value from hbase
+    def get_counter(row, column)
+      family, qualifier = parse_column_name(column.to_s)
+      # Format get request
+      get = org.apache.hadoop.hbase.client.Get.new(row.to_s.to_java_bytes)
+      get.addColumn(family, qualifier)
+      get.setMaxVersions(1)
+
+      # Call hbase
+      result = @table.get(get)
+      return nil if result.isEmpty
+
+      # Fetch cell value
+      cell = result.list.first
+      org.apache.hadoop.hbase.util.Bytes::toLong(cell.getValue)
+    end
+
+    #----------------------------------------------------------------------------------------------
+    # Scans the whole table or a range of keys and returns rows matching specific criteria
+    def scan(args = {})
+      unless args.kind_of?(Hash)
+        raise ArgumentError, "Arguments should be a hash. Failed to parse #{args.inspect}, #{args.class}"
+      end
+
+      limit = args.delete("LIMIT") || -1
+      maxlength = args.delete("MAXLENGTH") || -1
+
+      if args.any?
+        filter = args["FILTER"]
+        startrow = args["STARTROW"] || ''
+        stoprow = args["STOPROW"]
+        timestamp = args["TIMESTAMP"]
+        columns = args["COLUMNS"] || args["COLUMN"] || get_all_columns
+        cache = args["CACHE_BLOCKS"] || true
+        versions = args["VERSIONS"] || 1
+
+        # Normalize column names
+        columns = [columns] if columns.class == String
+        unless columns.kind_of?(Array)
+          raise ArgumentError.new("COLUMNS must be specified as a String or an Array")
+        end
+
+        scan = if stoprow
+          org.apache.hadoop.hbase.client.Scan.new(startrow.to_java_bytes, stoprow.to_java_bytes)
+        else
+          org.apache.hadoop.hbase.client.Scan.new(startrow.to_java_bytes)
+        end
+
+        columns.each { |c| scan.addColumns(c) }
+        scan.setFilter(filter) if filter
+        scan.setTimeStamp(timestamp) if timestamp
+        scan.setCacheBlocks(cache)
+        scan.setMaxVersions(versions) if versions > 1
+      else
+        scan = org.apache.hadoop.hbase.client.Scan.new
+      end
+
+      # Start the scanner
+      scanner = @table.getScanner(scan)
+      count = 0
+      res = {}
+      iter = scanner.iterator
+
+      # Iterate results
+      while iter.hasNext
+        if limit > 0 && count >= limit
+          break
+        end
+
+        row = iter.next
+        key = org.apache.hadoop.hbase.util.Bytes::toStringBinary(row.getRow)
+
+        row.list.each do |kv|
+          family = String.from_java_bytes(kv.getFamily)
+          qualifier = org.apache.hadoop.hbase.util.Bytes::toStringBinary(kv.getQualifier)
+
+          column = "#{family}:#{qualifier}"
+          cell = to_string(column, kv, maxlength)
+
+          if block_given?
+            yield(key, "column=#{column}, #{cell}")
+          else
+            res[key] ||= {}
+            res[key][column] = cell
+          end
+        end
+
+        # One more row processed
+        count += 1
+      end
+
+      return ((block_given?) ? count : res)
+    end
+
+    #----------------------------------------------------------------------------------------
+    # Helper methods
+
+    # Returns a list of column names in the table
+    def get_all_columns
+      @table.table_descriptor.getFamilies.map do |family|
+        "#{family.getNameAsString}:"
+      end
+    end
+
+    # Checks if current table is one of the 'meta' tables
+    def is_meta_table?
+      tn = @table.table_name
+      org.apache.hadoop.hbase.util.Bytes.equals(tn, org.apache.hadoop.hbase.HConstants::META_TABLE_NAME) || org.apache.hadoop.hbase.util.Bytes.equals(tn, org.apache.hadoop.hbase.HConstants::ROOT_TABLE_NAME)
+    end
+
+    # Returns the family and, when present, the qualifier for a column name
+    def parse_column_name(column)
+      split = org.apache.hadoop.hbase.KeyValue.parseColumn(column.to_java_bytes)
+      return split[0], (split.length > 1) ? split[1] : nil
+    end
+
+    # Make a String of the passed kv
+    # Intercept cells whose format we know such as the info:regioninfo in .META.
+    def to_string(column, kv, maxlength = -1)
+      if is_meta_table?
+        if column == 'info:regioninfo' or column == 'info:splitA' or column == 'info:splitB'
+          hri = org.apache.hadoop.hbase.util.Writables.getHRegionInfoOrNull(kv.getValue)
+          return "timestamp=%d, value=%s" % [kv.getTimestamp, hri.toString]
+        end
+        if column == 'info:serverstartcode'
+          if kv.getValue.length > 0
+            str_val = org.apache.hadoop.hbase.util.Bytes.toLong(kv.getValue)
+          else
+            str_val = org.apache.hadoop.hbase.util.Bytes.toStringBinary(kv.getValue)
+          end
+          return "timestamp=%d, value=%s" % [kv.getTimestamp, str_val]
+        end
+      end
+
+      val = "timestamp=#{kv.getTimestamp}, value=#{org.apache.hadoop.hbase.util.Bytes::toStringBinary(kv.getValue)}"
+      (maxlength != -1) ? val[0, maxlength] : val
+    end
+
+  end
+end
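
A hedged sketch of driving the Table wrapper directly from JRuby, outside the
shell commands. It assumes the shell's Ruby load path is set up (so the
HBaseConstants used by get/scan are defined), a running cluster, and an
existing table 't1' with a column family 'f1':

    conf = org.apache.hadoop.hbase.HBaseConfiguration.create
    t1   = Hbase::Table.new(conf, 't1', nil)     # the formatter argument is unused here

    t1.put('r1', 'f1:c1', 'v1')                  # parse_column_name splits 'f1:c1'
    t1.get('r1', 'f1:c1')                        # => {"f1:c1"=>"timestamp=..., value=v1"}
    t1.scan('COLUMNS' => ['f1:'], 'LIMIT' => 10) do |row, cell|
      puts "#{row} #{cell}"                      # row key plus "column=..., timestamp=..., value=..."
    end
    t1.deleteall('r1')                           # remove the row again
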
diff --git a/0.90/src/main/ruby/irb/hirb.rb b/0.90/src/main/ruby/irb/hirb.rb
new file mode 100644
index 0000000..454c933
--- /dev/null
+++ b/0.90/src/main/ruby/irb/hirb.rb
@@ -0,0 +1,52 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module IRB
+  # Subclass of IRB's Irb class so we can intercept methods
+  class HIRB < Irb
+    def initialize
+      # This is ugly.  Our 'help' method above provokes the following message
+      # on irb construction: 'irb: warn: can't alias help from irb_help.'
+      # Below, we reset the output so it's pointed at /dev/null during irb
+      # construction just so this message does not come out after we emit
+      # the banner.  Other attempts at playing with the hash of methods
+      # down in IRB didn't seem to work. I think the worst thing that can
+      # happen is the shell exiting because of failed IRB construction with
+      # no error (though we're not blanking STDERR)
+      begin
+        f = File.open("/dev/null", "w")
+        $stdout = f
+        super
+      ensure
+        f.close()
+        $stdout = STDOUT
+      end
+    end
+
+    def output_value
+      # Suppress output if last_value is 'nil'; otherwise, when the user
+      # types help, an ugly 'nil' is printed after all the output.
+      if @context.last_value != nil
+        super
+      end
+    end
+  end
+end
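
A hedged sketch of how a launcher script (such as bin/hirb.rb, not shown here)
can embed this subclass; the real script does more setup, this is just the
standard IRB embedding pattern:

    require 'irb'
    require 'irb/hirb'

    IRB.setup(nil)                          # populate IRB.conf before constructing an Irb
    hirb = IRB::HIRB.new                    # the banner-friendly subclass defined above
    IRB.conf[:MAIN_CONTEXT] = hirb.context
    catch(:IRB_EXIT) { hirb.eval_input }    # run the read-eval-print loop
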
diff --git a/0.90/src/main/ruby/shell.rb b/0.90/src/main/ruby/shell.rb
new file mode 100644
index 0000000..9027202
--- /dev/null
+++ b/0.90/src/main/ruby/shell.rb
@@ -0,0 +1,278 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Shell commands module
+module Shell
+  @@commands = {}
+  def self.commands
+    @@commands
+  end
+
+  @@command_groups = {}
+  def self.command_groups
+    @@command_groups
+  end
+
+  def self.load_command(name, group)
+    return if commands[name]
+
+    # Register command in the group
+    raise ArgumentError, "Unknown group: #{group}" unless command_groups[group]
+    command_groups[group][:commands] << name
+
+    # Load command
+    begin
+      require "shell/commands/#{name}"
+      klass_name = name.to_s.gsub(/(?:^|_)(.)/) { $1.upcase } # camelize
+      commands[name] = eval("Commands::#{klass_name}")
+    rescue => e
+      raise "Can't load hbase shell command: #{name}. Error: #{e}\n#{e.backtrace.join("\n")}"
+    end
+  end
+
+  def self.load_command_group(group, opts)
+    raise ArgumentError, "No :commands for group #{group}" unless opts[:commands]
+
+    command_groups[group] = {
+      :commands => [],
+      :command_names => opts[:commands],
+      :full_name => opts[:full_name] || group,
+      :comment => opts[:comment]
+    }
+
+    opts[:commands].each do |command|
+      load_command(command, group)
+    end
+  end
+
+  #----------------------------------------------------------------------
+  class Shell
+    attr_accessor :hbase
+    attr_accessor :formatter
+
+    @debug = false
+    attr_accessor :debug
+
+    def initialize(hbase, formatter)
+      self.hbase = hbase
+      self.formatter = formatter
+    end
+
+    def hbase_admin
+      @hbase_admin ||= hbase.admin(formatter)
+    end
+
+    def hbase_table(name)
+      hbase.table(name, formatter)
+    end
+
+    def hbase_replication_admin
+      @hbase_replication_admin ||= hbase.replication_admin(formatter)
+    end
+
+    def export_commands(where)
+      ::Shell.commands.keys.each do |cmd|
+        where.send :instance_eval, <<-EOF
+          def #{cmd}(*args)
+            @shell.command('#{cmd}', *args)
+            puts
+          end
+        EOF
+      end
+    end
+
+    def command_instance(command)
+      ::Shell.commands[command.to_s].new(self)
+    end
+
+    def command(command, *args)
+      command_instance(command).command_safe(self.debug, *args)
+    end
+
+    def print_banner
+      puts "HBase Shell; enter 'help<RETURN>' for list of supported commands."
+      puts 'Type "exit<RETURN>" to leave the HBase Shell'
+      print 'Version '
+      command('version')
+      puts
+    end
+
+    def help_multi_command(command)
+      puts "Command: #{command}"
+      puts command_instance(command).help
+      puts
+      return nil
+    end
+
+    def help_command(command)
+      puts command_instance(command).help
+      return nil
+    end
+
+    def help_group(group_name)
+      group = ::Shell.command_groups[group_name.to_s]
+      group[:commands].sort.each { |cmd| help_multi_command(cmd) }
+      if group[:comment]
+        puts '-' * 80
+        puts
+        puts group[:comment]
+        puts
+      end
+      return nil
+    end
+
+    def help(command = nil)
+      if command
+        return help_command(command) if ::Shell.commands[command.to_s]
+        return help_group(command) if ::Shell.command_groups[command.to_s]
+        puts "ERROR: Invalid command or command group name: #{command}"
+        puts
+      end
+
+      puts help_header
+      puts
+      puts 'COMMAND GROUPS:'
+      ::Shell.command_groups.each do |name, group|
+        puts "  Group name: " + name
+        puts "  Commands: " + group[:command_names].sort.join(', ')
+        puts
+      end
+      unless command
+        puts 'SHELL USAGE:'
+        help_footer
+      end
+      return nil
+    end
+
+    def help_header
+      return "HBase Shell, version #{org.apache.hadoop.hbase.util.VersionInfo.getVersion()}, " +
+             "r#{org.apache.hadoop.hbase.util.VersionInfo.getRevision()}, " +
+             "#{org.apache.hadoop.hbase.util.VersionInfo.getDate()}" + "\n" +
+        "Type 'help \"COMMAND\"', (e.g. 'help \"get\"' -- the quotes are necessary) for help on a specific command.\n" +
+        "Commands are grouped. Type 'help \"COMMAND_GROUP\"', (e.g. 'help \"general\"') for help on a command group."
+    end
+
+    def help_footer
+      puts <<-HERE
+Quote all names in HBase Shell such as table and column names.  Commas delimit
+command parameters.  Type <RETURN> after entering a command to run it.
+Dictionaries of configuration used in the creation and alteration of tables are
+Ruby Hashes. They look like this:
+
+  {'key1' => 'value1', 'key2' => 'value2', ...}
+
+and are opened and closed with curly braces.  Key/values are delimited by the
+'=>' character combination.  Usually keys are predefined constants such as
+NAME, VERSIONS, COMPRESSION, etc.  Constants do not need to be quoted.  Type
+'Object.constants' to see a (messy) list of all constants in the environment.
+
+If you are using binary keys or values and need to enter them in the shell, use
+double-quote'd hexadecimal representation. For example:
+
+  hbase> get 't1', "key\\x03\\x3f\\xcd"
+  hbase> get 't1', "key\\003\\023\\011"
+  hbase> put 't1', "test\\xef\\xff", 'f1:', "\\x01\\x33\\x40"
+
+The HBase shell is the (J)Ruby IRB with the above HBase-specific commands added.
+For more on the HBase Shell, see http://hbase.apache.org/docs/current/book.html
+      HERE
+    end
+  end
+end
+
+# Load commands base class
+require 'shell/commands'
+
+# Load all commands
+Shell.load_command_group(
+  'general',
+  :full_name => 'GENERAL HBASE SHELL COMMANDS',
+  :commands => %w[
+    status
+    version
+  ]
+)
+
+Shell.load_command_group(
+  'ddl',
+  :full_name => 'TABLES MANAGEMENT COMMANDS',
+  :commands => %w[
+    alter
+    create
+    describe
+    disable
+    is_disabled
+    drop
+    enable
+    is_enabled
+    exists
+    list
+  ]
+)
+
+Shell.load_command_group(
+  'dml',
+  :full_name => 'DATA MANIPULATION COMMANDS',
+  :commands => %w[
+    count
+    delete
+    deleteall
+    get
+    get_counter
+    incr
+    put
+    scan
+    truncate
+  ]
+)
+
+Shell.load_command_group(
+  'tools',
+  :full_name => 'HBASE SURGERY TOOLS',
+  :comment => "WARNING: Above commands are for 'experts'-only as misuse can damage an install",
+  :commands => %w[
+    assign
+    balancer
+    balance_switch
+    close_region
+    compact
+    flush
+    major_compact
+    move
+    split
+    unassign
+    zk_dump
+  ]
+)
+
+Shell.load_command_group(
+  'replication',
+  :full_name => 'CLUSTER REPLICATION TOOLS',
+  :comment => "In order to use these tools, hbase.replication must be true. enabling/disabling is currently unsupported",
+  :commands => %w[
+    add_peer
+    remove_peer
+    enable_peer
+    disable_peer
+    start_replication
+    stop_replication
+  ]
+)
+
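
A hedged sketch of how the launcher ties these pieces together: a Shell::Shell
instance dispatches commands by name and can export them as top-level methods
on the IRB workspace. The hbase and formatter objects are assumed to come from
hbase/hbase.rb and shell/formatter.rb:

    @shell = Shell::Shell.new(hbase, formatter)
    @shell.print_banner
    @shell.command('status', 'simple')   # same effect as typing: status 'simple'
    @shell.export_commands(self)         # defines status(), version(), create(), ... here
    status 'simple'                      # now callable as a bare method
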
diff --git a/0.90/src/main/ruby/shell/commands.rb b/0.90/src/main/ruby/shell/commands.rb
new file mode 100644
index 0000000..a352c2e
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands.rb
@@ -0,0 +1,81 @@
+#
+# Copyright 2009 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Command
+      attr_accessor :shell
+
+      def initialize(shell)
+        self.shell = shell
+      end
+
+      def command_safe(debug, *args)
+        translate_hbase_exceptions(*args) { command(*args) }
+      rescue => e
+        puts
+        puts "ERROR: #{e}"
+        puts "Backtrace: #{e.backtrace.join("\n           ")}" if debug
+        puts
+        puts "Here is some help for this command:"
+        puts help
+        puts
+      ensure
+        return nil
+      end
+
+      def admin
+        shell.hbase_admin
+      end
+
+      def table(name)
+        shell.hbase_table(name)
+      end
+
+      def replication_admin
+        shell.hbase_replication_admin
+      end
+
+      #----------------------------------------------------------------------
+
+      def formatter
+        shell.formatter
+      end
+
+      def format_simple_command
+        now = Time.now
+        yield
+        formatter.header
+        formatter.footer(now)
+      end
+
+      def translate_hbase_exceptions(*args)
+        yield
+      rescue org.apache.hadoop.hbase.TableNotFoundException
+        raise "Unknown table #{args.first}!"
+      rescue org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException
+        valid_cols = table(args.first).get_all_columns.map { |c| c + '*' }
+        raise "Unknown column family! Valid column names: #{valid_cols.join(", ")}"
+      rescue org.apache.hadoop.hbase.TableExistsException
+        raise "Table already exists: #{args.first}!"
+      end
+    end
+  end
+end
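
Every file under shell/commands/ follows the same pattern: a subclass of the
Command base class above with a help text and a command method that uses the
admin/table/formatter helpers. A hedged sketch of a hypothetical 'whoami'
command (the name and file path are illustrative only; it would be registered
by adding 'whoami' to a group's :commands list in shell.rb, which then
requires 'shell/commands/whoami'):

    # shell/commands/whoami.rb (hypothetical)
    module Shell
      module Commands
        class Whoami < Command
          def help
            return <<-EOF
    Print the OS user the shell process is running as.
    EOF
          end

          def command
            format_simple_command do
              formatter.row([ java.lang.System.getProperty('user.name') ])
            end
          end
        end
      end
    end
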
diff --git a/0.90/src/main/ruby/shell/commands/add_peer.rb b/0.90/src/main/ruby/shell/commands/add_peer.rb
new file mode 100644
index 0000000..7669fb7
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/add_peer.rb
@@ -0,0 +1,44 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class AddPeer< Command
+      def help
+        return <<-EOF
+Add a peer cluster to replicate to. The id must be a short and
+the cluster key is composed like this:
+hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
+This gives a full path for HBase to connect to another cluster.
+Examples:
+
+  hbase> add_peer '1', "server1.cie.com:2181:/hbase"
+  hbase> add_peer '2', "zk1,zk2,zk3:2182:/hbase-prod"
+EOF
+      end
+
+      def command(id, cluster_key)
+        format_simple_command do
+          replication_admin.add_peer(id, cluster_key)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/alter.rb b/0.90/src/main/ruby/shell/commands/alter.rb
new file mode 100644
index 0000000..1dd43ad
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/alter.rb
@@ -0,0 +1,64 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Alter < Command
+      def help
+        return <<-EOF
+Alter column family schema;  pass table name and a dictionary
+specifying new column family schema. Dictionaries are described
+on the main help command output. Dictionary must include name
+of column family to alter. For example,
+
+To change or add the 'f1' column family in table 't1' from defaults
+to instead keep a maximum of 5 cell VERSIONS, do:
+
+  hbase> alter 't1', NAME => 'f1', VERSIONS => 5
+
+To delete the 'f1' column family in table 't1', do:
+
+  hbase> alter 't1', NAME => 'f1', METHOD => 'delete'
+
+or a shorter version:
+
+  hbase> alter 't1', 'delete' => 'f1'
+
+You can also change table-scope attributes like MAX_FILESIZE,
+MEMSTORE_FLUSHSIZE, READONLY, and DEFERRED_LOG_FLUSH.
+
+For example, to change the max size of a family to 128MB, do:
+
+  hbase> alter 't1', METHOD => 'table_att', MAX_FILESIZE => '134217728'
+
+There could be more than one alteration in one command:
+
+  hbase> alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}
+EOF
+      end
+
+      def command(table, *args)
+        format_simple_command do
+          admin.alter(table, *args)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/assign.rb b/0.90/src/main/ruby/shell/commands/assign.rb
new file mode 100644
index 0000000..2fe4a7f
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/assign.rb
@@ -0,0 +1,39 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Assign < Command
+      def help
+        return <<-EOF
+Assign a region.  Pass 'true' to force the assignment. If the region is
+already assigned, this command will reassign it anyway. Use with caution;
+for experts only.
+EOF
+      end
+
+      def command(region_name, force = 'false')
+        format_simple_command do
+          admin.assign(region_name, force)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/balance_switch.rb b/0.90/src/main/ruby/shell/commands/balance_switch.rb
new file mode 100644
index 0000000..0eac765
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/balance_switch.rb
@@ -0,0 +1,43 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class BalanceSwitch < Command
+      def help
+        return <<-EOF
+Enable/Disable balancer. Returns previous balancer state.
+Examples:
+
+  hbase> balance_switch true
+  hbase> balance_switch false
+EOF
+      end
+
+      def command(enableDisable)
+        format_simple_command do
+          formatter.row([
+            admin.balance_switch(enableDisable)? "true" : "false"
+          ])
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/balancer.rb b/0.90/src/main/ruby/shell/commands/balancer.rb
new file mode 100644
index 0000000..013980c
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/balancer.rb
@@ -0,0 +1,40 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Balancer < Command
+      def help
+        return <<-EOF
+Trigger the cluster balancer. Returns true if the balancer ran; otherwise
+false (the balancer will not run if any regions are in transition).
+EOF
+      end
+
+      def command()
+        format_simple_command do
+          formatter.row([
+            admin.balancer()? "true": "false"
+          ])
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/close_region.rb b/0.90/src/main/ruby/shell/commands/close_region.rb
new file mode 100644
index 0000000..412c0fb
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/close_region.rb
@@ -0,0 +1,45 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class CloseRegion < Command
+      def help
+        return <<-EOF
+Close a single region, optionally specifying the hosting regionserver.
+Connects to that regionserver and runs the close there.  The close is done
+without the master's involvement (it will not know of the close).  Once
+closed, the region stays closed.  Use assign to reopen/reassign it, or
+unassign/move to place the region elsewhere on the cluster. Use with
+caution.  For experts only.  Examples:
+
+  hbase> close_region 'REGIONNAME'
+  hbase> close_region 'REGIONNAME', 'REGIONSERVER_IP:PORT'
+EOF
+      end
+
+      def command(region_name, server = nil)
+        format_simple_command do
+          admin.close_region(region_name, server)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/compact.rb b/0.90/src/main/ruby/shell/commands/compact.rb
new file mode 100644
index 0000000..d8f71de
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/compact.rb
@@ -0,0 +1,38 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Compact < Command
+      def help
+        return <<-EOF
+Compact all regions in the passed table, or pass a region name
+to compact an individual region.
+EOF
+      end
+
+      def command(table_or_region_name)
+        format_simple_command do
+          admin.compact(table_or_region_name)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/count.rb b/0.90/src/main/ruby/shell/commands/count.rb
new file mode 100644
index 0000000..6596441
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/count.rb
@@ -0,0 +1,61 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Count < Command
+      def help
+        return <<-EOF
+Count the number of rows in a table. This operation may take a LONG
+time (Run '$HADOOP_HOME/bin/hadoop jar hbase.jar rowcount' to run a
+counting mapreduce job). Current count is shown every 1000 rows by
+default. Count interval may be optionally specified. Scan caching
+is enabled on count scans by default. Default cache size is 10 rows.
+If your rows are small in size, you may want to increase this
+parameter. Examples:
+
+ hbase> count 't1'
+ hbase> count 't1', INTERVAL => 100000
+ hbase> count 't1', CACHE => 1000
+ hbase> count 't1', INTERVAL => 10, CACHE => 1000
+EOF
+      end
+
+      def command(table, params = {})
+        # If the second parameter is an integer, then it is the old command syntax
+        params = { 'INTERVAL' => params } if params.kind_of?(Fixnum)
+
+        # Merge params with defaults
+        params = {
+          'INTERVAL' => 1000,
+          'CACHE' => 10
+        }.merge(params)
+
+        # Call the counter method
+        now = Time.now
+        formatter.header
+        count = table(table).count(params['INTERVAL'].to_i, params['CACHE'].to_i) do |cnt, row|
+          formatter.row([ "Current count: #{cnt}, row: #{row}" ])
+        end
+        formatter.footer(now, count)
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/create.rb b/0.90/src/main/ruby/shell/commands/create.rb
new file mode 100644
index 0000000..cfe5d3f
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/create.rb
@@ -0,0 +1,46 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Create < Command
+      def help
+        return <<-EOF
+Create table; pass table name, a dictionary of specifications per
+column family, and optionally a dictionary of table configuration.
+Dictionaries are described below in the GENERAL NOTES section.
+Examples:
+
+  hbase> create 't1', {NAME => 'f1', VERSIONS => 5}
+  hbase> create 't1', {NAME => 'f1'}, {NAME => 'f2'}, {NAME => 'f3'}
+  hbase> # The above in shorthand would be the following:
+  hbase> create 't1', 'f1', 'f2', 'f3'
+  hbase> create 't1', {NAME => 'f1', VERSIONS => 1, TTL => 2592000, BLOCKCACHE => true}
+EOF
+      end
+
+      def command(table, *args)
+        format_simple_command do
+          admin.create(table, *args)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/delete.rb b/0.90/src/main/ruby/shell/commands/delete.rb
new file mode 100644
index 0000000..12bc405
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/delete.rb
@@ -0,0 +1,43 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Delete < Command
+      def help
+        return <<-EOF
+Put a delete cell value at specified table/row/column and optionally
+timestamp coordinates.  Deletes must match the deleted cell's
+coordinates exactly.  When scanning, a delete cell suppresses older
+versions. To delete a cell from 't1' at row 'r1' under column 'c1'
+marked with the time 'ts1', do:
+
+  hbase> delete 't1', 'r1', 'c1', ts1
+EOF
+      end
+
+      def command(table, row, column, timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)
+        format_simple_command do
+          table(table).delete(row, column, timestamp)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/deleteall.rb b/0.90/src/main/ruby/shell/commands/deleteall.rb
new file mode 100644
index 0000000..5731b60
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/deleteall.rb
@@ -0,0 +1,42 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Deleteall < Command
+      def help
+        return <<-EOF
+Delete all cells in a given row; pass a table name, row, and optionally
+a column and timestamp. Examples:
+
+  hbase> deleteall 't1', 'r1'
+  hbase> deleteall 't1', 'r1', 'c1'
+  hbase> deleteall 't1', 'r1', 'c1', ts1
+EOF
+      end
+
+      def command(table, row, column = nil, timestamp = org.apache.hadoop.hbase.HConstants::LATEST_TIMESTAMP)
+        format_simple_command do
+          table(table).deleteall(row, column, timestamp)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/describe.rb b/0.90/src/main/ruby/shell/commands/describe.rb
new file mode 100644
index 0000000..0f35507
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/describe.rb
@@ -0,0 +1,42 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Describe < Command
+      def help
+        return <<-EOF
+Describe the named table. For example:
+  hbase> describe 't1'
+EOF
+      end
+
+      def command(table)
+        now = Time.now
+
+        desc = admin.describe(table)
+
+        formatter.header([ "DESCRIPTION", "ENABLED" ], [ 64 ])
+        formatter.row([ desc, admin.enabled?(table).to_s ], true, [ 64 ])
+        formatter.footer(now)
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/disable.rb b/0.90/src/main/ruby/shell/commands/disable.rb
new file mode 100644
index 0000000..34c5f9c
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/disable.rb
@@ -0,0 +1,37 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Disable < Command
+      def help
+        return <<-EOF
+Start disable of named table: e.g. "hbase> disable 't1'"
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          admin.disable(table)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/disable_peer.rb b/0.90/src/main/ruby/shell/commands/disable_peer.rb
new file mode 100644
index 0000000..ad1ebbd
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/disable_peer.rb
@@ -0,0 +1,44 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class DisablePeer < Command
+      def help
+        return <<-EOF
+Stops the replication stream to the specified cluster, but still
+keeps track of new edits to replicate.
+
+CURRENTLY UNSUPPORTED
+
+Examples:
+
+  hbase> disable_peer '1'
+EOF
+      end
+
+      def command(id)
+        format_simple_command do
+          replication_admin.disable_peer(id)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/drop.rb b/0.90/src/main/ruby/shell/commands/drop.rb
new file mode 100644
index 0000000..181b835
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/drop.rb
@@ -0,0 +1,40 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Drop < Command
+      def help
+        return <<-EOF
+Drop the named table. The table must first be disabled. If the table
+has more than one region, run a major compaction on .META.:
+
+  hbase> major_compact ".META."
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          admin.drop(table)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/enable.rb b/0.90/src/main/ruby/shell/commands/enable.rb
new file mode 100644
index 0000000..a0dc340
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/enable.rb
@@ -0,0 +1,37 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Enable < Command
+      def help
+        return <<-EOF
+Start enable of named table: e.g. "hbase> enable 't1'"
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          admin.enable(table)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/enable_peer.rb b/0.90/src/main/ruby/shell/commands/enable_peer.rb
new file mode 100644
index 0000000..099f3fd
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/enable_peer.rb
@@ -0,0 +1,44 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class EnablePeer < Command
+      def help
+        return <<-EOF
+Restarts the replication to the specified peer cluster,
+continuing from where it was disabled.
+
+CURRENTLY UNSUPPORTED
+
+Examples:
+
+  hbase> enable_peer '1'
+EOF
+      end
+
+      def command(id)
+        format_simple_command do
+          replication_admin.enable_peer(id)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/exists.rb b/0.90/src/main/ruby/shell/commands/exists.rb
new file mode 100644
index 0000000..f35f197
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/exists.rb
@@ -0,0 +1,39 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Exists < Command
+      def help
+        return <<-EOF
+Does the named table exist? e.g. "hbase> exists 't1'"
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          formatter.row([
+            "Table #{table} " + (admin.exists?(table.to_s) ? "does exist" : "does not exist")
+          ])
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/flush.rb b/0.90/src/main/ruby/shell/commands/flush.rb
new file mode 100644
index 0000000..ba59766
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/flush.rb
@@ -0,0 +1,41 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Flush < Command
+      def help
+        return <<-EOF
+Flush all regions in the passed table or pass a region row to
+flush an individual region.  For example:
+
+  hbase> flush 'TABLENAME'
+  hbase> flush 'REGIONNAME'
+EOF
+      end
+
+      def command(table_or_region_name)
+        format_simple_command do
+          admin.flush(table_or_region_name)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/get.rb b/0.90/src/main/ruby/shell/commands/get.rb
new file mode 100644
index 0000000..f42062c
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/get.rb
@@ -0,0 +1,52 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Get < Command
+      def help
+        return <<-EOF
+Get row or cell contents; pass table name, row, and optionally
+a dictionary of column(s), timestamp and versions. Examples:
+
+  hbase> get 't1', 'r1'
+  hbase> get 't1', 'r1', {COLUMN => 'c1'}
+  hbase> get 't1', 'r1', {COLUMN => ['c1', 'c2', 'c3']}
+  hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1}
+  hbase> get 't1', 'r1', {COLUMN => 'c1', TIMESTAMP => ts1, VERSIONS => 4}
+  hbase> get 't1', 'r1', 'c1'
+  hbase> get 't1', 'r1', 'c1', 'c2'
+  hbase> get 't1', 'r1', ['c1', 'c2']
+EOF
+      end
+
+      def command(table, row, *args)
+        now = Time.now
+        formatter.header(["COLUMN", "CELL"])
+
+        table(table).get(row, *args) do |column, value|
+          formatter.row([ column, value ])
+        end
+
+        formatter.footer(now)
+      end
+    end
+  end
+end
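The COLUMN, TIMESTAMP and VERSIONS keys map, roughly, onto setters of the Java client's Get object. A hedged JRuby sketch under the same assumptions as the delete sketch above (0.90 client classes, column in 'family:qualifier' form):

  conf   = org.apache.hadoop.hbase.HBaseConfiguration.create
  htable = org.apache.hadoop.hbase.client.HTable.new(conf, 't1')
  bytes  = org.apache.hadoop.hbase.util.Bytes
  g = org.apache.hadoop.hbase.client.Get.new(bytes.toBytes('r1'))
  g.addColumn(bytes.toBytes('cf'), bytes.toBytes('q'))   # COLUMN   => 'cf:q'
  g.setMaxVersions(4)                                    # VERSIONS => 4
  result = htable.get(g)
  cell = result.getValue(bytes.toBytes('cf'), bytes.toBytes('q'))
  puts bytes.toString(cell) unless cell.nil?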
diff --git a/0.90/src/main/ruby/shell/commands/get_counter.rb b/0.90/src/main/ruby/shell/commands/get_counter.rb
new file mode 100644
index 0000000..3cbe226
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/get_counter.rb
@@ -0,0 +1,43 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class GetCounter < Command
+      def help
+        return <<-EOF
+Return a counter cell value at specified table/row/column coordinates.
+A counter cell should be managed with the atomic increment function of
+HBase and the data should be binary encoded. Example:
+
+  hbase> get_counter 't1', 'r1', 'c1'
+EOF
+      end
+
+      def command(table, row, column, value = nil)
+        if cnt = table(table).get_counter(row, column)
+          puts "COUNTER VALUE = #{cnt}"
+        else
+          puts "No counter found at specified coordinates"
+        end
+      end
+    end
+  end
+end
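The 'binary encoded' wording above means the counter is stored as an 8-byte long, which is why get_counter decodes the value instead of printing the raw bytes. A small JRuby sketch of just that decoding step, reusing the conf/htable/bytes setup from the get sketch earlier (Bytes.toLong is assumed from the 0.90 util API, not taken from this patch):

  g = org.apache.hadoop.hbase.client.Get.new(bytes.toBytes('r1'))
  g.addColumn(bytes.toBytes('cf'), bytes.toBytes('q'))
  cell = htable.get(g).getValue(bytes.toBytes('cf'), bytes.toBytes('q'))
  # Bytes.toLong turns the stored 8-byte value back into a readable long.
  puts cell.nil? ? 'No counter found at specified coordinates' : "COUNTER VALUE = #{bytes.toLong(cell)}"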
diff --git a/0.90/src/main/ruby/shell/commands/incr.rb b/0.90/src/main/ruby/shell/commands/incr.rb
new file mode 100644
index 0000000..38a2fc5
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/incr.rb
@@ -0,0 +1,42 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Incr < Command
+      def help
+        return <<-EOF
+Increments a cell 'value' at specified table/row/column coordinates.
+To increment a cell value in table 't1' at row 'r1' under column
+'c1' by 1 (the default if no amount is given) or by 10, do:
+
+  hbase> incr 't1', 'r1', 'c1'
+  hbase> incr 't1', 'r1', 'c1', 1
+  hbase> incr 't1', 'r1', 'c1', 10
+EOF
+      end
+
+      def command(table, row, column, value = nil)
+        cnt = table(table).incr(row, column, value)
+        puts "COUNTER VALUE = #{cnt}"
+      end
+    end
+  end
+end
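An increment is a single atomic operation carried out server side, not a client-side read-modify-write. A minimal JRuby sketch using what I believe is the 0.90 client call for this, HTable#incrementColumnValue (again an assumption, not part of this patch):

  conf   = org.apache.hadoop.hbase.HBaseConfiguration.create
  htable = org.apache.hadoop.hbase.client.HTable.new(conf, 't1')
  bytes  = org.apache.hadoop.hbase.util.Bytes
  # Atomically add 10 to the counter at row 'r1', column 'cf:q' and return the new total.
  new_value = htable.incrementColumnValue(bytes.toBytes('r1'),
                                          bytes.toBytes('cf'),
                                          bytes.toBytes('q'), 10)
  puts "COUNTER VALUE = #{new_value}"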
diff --git a/0.90/src/main/ruby/shell/commands/is_disabled.rb b/0.90/src/main/ruby/shell/commands/is_disabled.rb
new file mode 100644
index 0000000..9d3c7ee
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/is_disabled.rb
@@ -0,0 +1,39 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class IsDisabled < Command
+      def help
+        return <<-EOF
+Is the named table disabled? e.g. "hbase> is_disabled 't1'"
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          formatter.row([
+            admin.disabled?(table)? "true" : "false"
+          ])
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/is_enabled.rb b/0.90/src/main/ruby/shell/commands/is_enabled.rb
new file mode 100644
index 0000000..96b2b15
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/is_enabled.rb
@@ -0,0 +1,39 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class IsEnabled < Command
+      def help
+        return <<-EOF
+Is the named table enabled? e.g. "hbase> is_enabled 't1'"
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          formatter.row([
+            admin.enabled?(table)? "true" : "false"
+          ])
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/list.rb b/0.90/src/main/ruby/shell/commands/list.rb
new file mode 100644
index 0000000..592fb5e
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/list.rb
@@ -0,0 +1,48 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class List < Command
+      def help
+        return <<-EOF
+List all tables in hbase. An optional regular expression parameter can
+be used to filter the output. Examples:
+
+  hbase> list
+  hbase> list 'abc.*'
+EOF
+      end
+
+      def command(regex = ".*")
+        now = Time.now
+        formatter.header([ "TABLE" ])
+
+        regex = /#{regex}/ unless regex.is_a?(Regexp)
+        list = admin.list.grep(regex)
+        list.each do |table|
+          formatter.row([ table ])
+        end
+
+        formatter.footer(now, list.size)
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/major_compact.rb b/0.90/src/main/ruby/shell/commands/major_compact.rb
new file mode 100644
index 0000000..56b081e
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/major_compact.rb
@@ -0,0 +1,38 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class MajorCompact < Command
+      def help
+        return <<-EOF
+Run major compaction on the passed table, or pass a region row
+to major compact an individual region.
+EOF
+      end
+
+      def command(table_or_region_name)
+        format_simple_command do
+          admin.major_compact(table_or_region_name)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/move.rb b/0.90/src/main/ruby/shell/commands/move.rb
new file mode 100644
index 0000000..0e3db8f
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/move.rb
@@ -0,0 +1,48 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Move < Command
+      def help
+        return <<-EOF
+Move a region.  Optionally specify a target regionserver; otherwise one is
+chosen at random.  NOTE: you pass the encoded region name, not the full
+region name, so this command is a little different from the others.  The
+encoded region name is the hash suffix on region names: e.g. if the region
+name were TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.
+then the encoded region name portion is 527db22f95c8a9e0116f0cc13c680396.
+A server name is its host, port plus startcode. For example:
+host187.example.com,60020,1289493121758
+Examples:
+
+  hbase> move 'ENCODED_REGIONNAME'
+  hbase> move 'ENCODED_REGIONNAME', 'SERVER_NAME'
+EOF
+      end
+
+      def command(encoded_region_name, server_name = nil)
+        format_simple_command do
+          admin.move(encoded_region_name, server_name)
+        end
+      end
+    end
+  end
+end
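Since only the hash suffix is wanted, the encoded name can be pulled out of a full region name with plain string handling; the snippet below is illustrative only and uses the example region name from the help text:

  region_name = 'TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.'
  encoded = region_name.split('.')[-1]   # trailing empty strings are dropped by split
  # encoded == '527db22f95c8a9e0116f0cc13c680396', the value to pass to move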
diff --git a/0.90/src/main/ruby/shell/commands/put.rb b/0.90/src/main/ruby/shell/commands/put.rb
new file mode 100644
index 0000000..dde0433
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/put.rb
@@ -0,0 +1,41 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Put < Command
+      def help
+        return <<-EOF
+Put a cell 'value' at specified table/row/column and optionally
+timestamp coordinates.  To put a cell value into table 't1' at
+row 'r1' under column 'c1' marked with the time 'ts1', do:
+
+  hbase> put 't1', 'r1', 'c1', 'value', ts1
+EOF
+      end
+
+      def command(table, row, column, value, timestamp = nil)
+        format_simple_command do
+          table(table).put(row, column, value, timestamp)
+        end
+      end
+    end
+  end
+end
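The corresponding Java-client form of a put, under the same 0.90-client assumptions as the earlier sketches (Put#add taking family, qualifier, timestamp and value is assumed, not taken from this patch):

  conf   = org.apache.hadoop.hbase.HBaseConfiguration.create
  htable = org.apache.hadoop.hbase.client.HTable.new(conf, 't1')
  bytes  = org.apache.hadoop.hbase.util.Bytes
  p = org.apache.hadoop.hbase.client.Put.new(bytes.toBytes('r1'))
  # family, qualifier, timestamp in ms, value
  p.add(bytes.toBytes('cf'), bytes.toBytes('q'), 1289497600452, bytes.toBytes('value'))
  htable.put(p)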
diff --git a/0.90/src/main/ruby/shell/commands/remove_peer.rb b/0.90/src/main/ruby/shell/commands/remove_peer.rb
new file mode 100644
index 0000000..034434a
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/remove_peer.rb
@@ -0,0 +1,40 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class RemovePeer < Command
+      def help
+        return <<-EOF
+Stops the specified replication stream and deletes all the meta
+information kept about it. Examples:
+
+  hbase> remove_peer '1'
+EOF
+      end
+
+      def command(id)
+        format_simple_command do
+          replication_admin.remove_peer(id)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/scan.rb b/0.90/src/main/ruby/shell/commands/scan.rb
new file mode 100644
index 0000000..4d722af
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/scan.rb
@@ -0,0 +1,57 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Scan < Command
+      def help
+        return <<-EOF
+Scan a table; pass table name and optionally a dictionary of scanner
+specifications.  Scanner specifications may include one or more of
+the following: LIMIT, STARTROW, STOPROW, TIMESTAMP, or COLUMNS.  If
+no columns are specified, all columns will be scanned.  To scan all
+members of a column family, leave the qualifier empty as in
+'col_family:'.  Examples:
+
+  hbase> scan '.META.'
+  hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
+  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
+
+For experts, there is an additional option -- CACHE_BLOCKS -- which
+switches block caching for the scanner on (true) or off (false).  By
+default it is enabled.  Examples:
+
+  hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
+EOF
+      end
+
+      def command(table, args = {})
+        now = Time.now
+        formatter.header(["ROW", "COLUMN+CELL"])
+
+        count = table(table).scan(args) do |row, cells|
+          formatter.row([ row, cells ])
+        end
+
+        formatter.footer(now, count)
+      end
+    end
+  end
+end
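The scanner specification keys correspond roughly to setters on the Java client's Scan object; LIMIT itself is enforced by the shell's table wrapper rather than by Scan. A hedged JRuby sketch under the same 0.90-client assumptions as above:

  conf   = org.apache.hadoop.hbase.HBaseConfiguration.create
  htable = org.apache.hadoop.hbase.client.HTable.new(conf, 't1')
  bytes  = org.apache.hadoop.hbase.util.Bytes
  scan = org.apache.hadoop.hbase.client.Scan.new
  scan.setStartRow(bytes.toBytes('xyz'))    # STARTROW => 'xyz'
  scan.addFamily(bytes.toBytes('c1'))       # COLUMNS => 'c1:' (whole family)
  scan.setCacheBlocks(false)                # CACHE_BLOCKS => false
  scanner = htable.getScanner(scan)
  count = 0
  scanner.each { |result| count += 1 }      # each Result is one row
  scanner.close
  puts "#{count} row(s)"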
diff --git a/0.90/src/main/ruby/shell/commands/shutdown.rb b/0.90/src/main/ruby/shell/commands/shutdown.rb
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/shutdown.rb
diff --git a/0.90/src/main/ruby/shell/commands/split.rb b/0.90/src/main/ruby/shell/commands/split.rb
new file mode 100644
index 0000000..c4de875
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/split.rb
@@ -0,0 +1,37 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Split < Command
+      def help
+        return <<-EOF
+Split a table, or pass a region row to split an individual region.
+EOF
+      end
+
+      def command(table_or_region_name)
+        format_simple_command do
+          admin.split(table_or_region_name)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/start_replication.rb b/0.90/src/main/ruby/shell/commands/start_replication.rb
new file mode 100644
index 0000000..5d1cd1b
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/start_replication.rb
@@ -0,0 +1,43 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class StartReplication < Command
+      def help
+        return <<-EOF
+Restarts all the replication features. The state in which each
+stream starts is undetermined.
+WARNING:
+start/stop replication is only meant to be used in critical load situations.
+Examples:
+
+  hbase> start_replication
+EOF
+      end
+
+      def command
+        format_simple_command do
+          replication_admin.start_replication
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/status.rb b/0.90/src/main/ruby/shell/commands/status.rb
new file mode 100644
index 0000000..4b22acb
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/status.rb
@@ -0,0 +1,41 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Status < Command
+      def help
+        return <<-EOF
+Show cluster status. Can be 'summary', 'simple', or 'detailed'. The
+default is 'summary'. Examples:
+
+  hbase> status
+  hbase> status 'simple'
+  hbase> status 'summary'
+  hbase> status 'detailed'
+EOF
+      end
+
+      def command(format = 'summary')
+        admin.status(format)
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/stop_replication.rb b/0.90/src/main/ruby/shell/commands/stop_replication.rb
new file mode 100644
index 0000000..f5074d7
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/stop_replication.rb
@@ -0,0 +1,43 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class StopReplication < Command
+      def help
+        return <<-EOF
+Stops all the replication features. The state in which each
+stream stops is undetermined.
+WARNING:
+start/stop replication is only meant to be used in critical load situations.
+Examples:
+
+  hbase> stop_replication
+EOF
+      end
+
+      def command
+        format_simple_command do
+          replication_admin.stop_replication
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/truncate.rb b/0.90/src/main/ruby/shell/commands/truncate.rb
new file mode 100644
index 0000000..a24e167b
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/truncate.rb
@@ -0,0 +1,39 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Truncate < Command
+      def help
+        return <<-EOF
+Disables, drops and recreates the specified table.
+EOF
+      end
+
+      def command(table)
+        format_simple_command do
+          puts "Truncating '#{table}' table (it may take a while):"
+          admin.truncate(table) { |log| puts " - #{log}" }
+        end
+      end
+
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/unassign.rb b/0.90/src/main/ruby/shell/commands/unassign.rb
new file mode 100644
index 0000000..0095dd6
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/unassign.rb
@@ -0,0 +1,43 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Unassign < Command
+      def help
+        return <<-EOF
+Unassign a region. Unassign will close the region in its current location and
+then reopen it.  Pass 'true' to force the unassignment ('force' will clear
+all in-memory state in the master before the reassign).  Use with caution.  For
+expert use only.  Examples:
+
+  hbase> unassign 'REGIONNAME'
+  hbase> unassign 'REGIONNAME', true
+EOF
+      end
+
+      def command(region_name, force = 'false')
+        format_simple_command do
+          admin.unassign(region_name, force)
+        end
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/version.rb b/0.90/src/main/ruby/shell/commands/version.rb
new file mode 100644
index 0000000..372b0dc
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/version.rb
@@ -0,0 +1,38 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class Version < Command
+      def help
+        return <<-EOF
+Output this HBase version
+EOF
+      end
+
+      def command
+        # Output version.
+        puts "#{org.apache.hadoop.hbase.util.VersionInfo.getVersion()}, " +
+             "r#{org.apache.hadoop.hbase.util.VersionInfo.getRevision()}, " +
+             "#{org.apache.hadoop.hbase.util.VersionInfo.getDate()}"
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/commands/zk_dump.rb b/0.90/src/main/ruby/shell/commands/zk_dump.rb
new file mode 100644
index 0000000..bb23962
--- /dev/null
+++ b/0.90/src/main/ruby/shell/commands/zk_dump.rb
@@ -0,0 +1,35 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+module Shell
+  module Commands
+    class ZkDump < Command
+      def help
+        return <<-EOF
+Dump status of HBase cluster as seen by ZooKeeper.
+EOF
+      end
+
+      def command
+        puts admin.zk_dump
+      end
+    end
+  end
+end
diff --git a/0.90/src/main/ruby/shell/formatter.rb b/0.90/src/main/ruby/shell/formatter.rb
new file mode 100644
index 0000000..90ab45b
--- /dev/null
+++ b/0.90/src/main/ruby/shell/formatter.rb
@@ -0,0 +1,153 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# Results formatter
+module Shell
+  module Formatter
+    # Base abstract class for results formatting.
+    class Base
+      attr_reader :row_count
+
+      def is_valid_io?(obj)
+        obj.instance_of?(IO) || obj == Kernel
+      end
+
+      def refresh_width()
+        @max_width = Java::jline.Terminal.getTerminal().getTerminalWidth() if $stdout.tty?
+        # the above doesn't work in some terminals (such as a shell running within emacs)
+        @max_width = 100 if @max_width.to_i.zero?
+      end
+
+      # Takes an output stream and a print width.
+      def initialize(opts = {})
+        options = {
+          :output_stream => Kernel,
+        }.merge(opts)
+
+        @out = options[:output_stream]
+        refresh_width
+        @row_count = 0
+
+        # raise an error if the stream is not valid
+        raise(TypeError, "Type #{@out.class} of parameter #{@out} is not IO") unless is_valid_io?(@out)
+      end
+
+      def header(args = [], widths = [])
+        refresh_width
+        row(args, false, widths) if args.length > 0
+        @row_count = 0
+      end
+
+      # Output a row.
+      # Inset is whether or not to offset row by a space.
+      def row(args = [], inset = true, widths = [])
+        # Print out nothing
+        return if !args || args.empty?
+
+        # Print a string
+        if args.is_a?(String)
+          output(@max_width, args)
+          @out.puts
+          return
+        end
+
+        # TODO: Look at the type.  Is it RowResult?
+        if args.length == 1
+          splits = split(@max_width, dump(args[0]))
+          for l in splits
+            output(@max_width, l)
+            @out.puts
+          end
+        elsif args.length == 2
+          col1width = (not widths or widths.length == 0) ? @max_width / 4 : @max_width * widths[0] / 100
+          col2width = (not widths or widths.length < 2) ? @max_width - col1width - 2 : @max_width * widths[1] / 100 - 2
+          splits1 = split(col1width, dump(args[0]))
+          splits2 = split(col2width, dump(args[1]))
+          biggest = (splits2.length > splits1.length)? splits2.length: splits1.length
+          index = 0
+          while index < biggest
+            # Inset by one space if inset is set.
+            @out.print(" ") if inset
+            output(col1width, splits1[index])
+            # Add extra space so second column lines up w/ second column output
+            @out.print(" ") unless inset
+            @out.print(" ")
+            output(col2width, splits2[index])
+            index += 1
+            @out.puts
+          end
+        else
+          # Print a space to set off multi-column rows
+          print ' '
+          first = true
+          for e in args
+            @out.print " " unless first
+            first = false
+            @out.print e
+          end
+          puts
+        end
+        @row_count += 1
+      end
+
+      def split(width, str)
+        result = []
+        index = 0
+        while index < str.length do
+          result << str.slice(index, width)
+          index += width
+        end
+        result
+      end
+
+      def dump(str)
+        # Return nil for Fixnum values; return all other values (strings) as-is.
+        return if str.instance_of?(Fixnum)
+        return str
+      end
+
+      def output(width, str)
+        # Make up a spec for printf
+        spec = "%%-%ds" % width
+        @out.printf(spec, str)
+      end
+
+      def footer(start_time = nil, row_count = nil)
+        return unless start_time
+        row_count ||= @row_count
+        # Only output elapsed time and row count if startTime passed
+        @out.puts("%d row(s) in %.4f seconds" % [row_count, Time.now - start_time])
+      end
+    end
+
+
+    class Console < Base
+    end
+
+    class XHTMLFormatter < Base
+      # http://www.germane-software.com/software/rexml/doc/classes/REXML/Document.html
+      # http://www.crummy.com/writing/RubyCookbook/test_results/75942.html
+    end
+
+    class JSON < Base
+    end
+  end
+end
+
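A quick usage sketch of the formatter on its own, based directly on the class above: Console writes to Kernel by default, header resets the row count, and footer reports the rows printed since then along with the elapsed time.

  formatter = Shell::Formatter::Console.new
  start = Time.now
  formatter.header([ 'ROW', 'COLUMN+CELL' ])
  formatter.row([ 'r1', 'column=cf:q, timestamp=1289497600452, value=v1' ])
  formatter.footer(start)   # prints something like '1 row(s) in 0.0012 seconds'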
diff --git a/0.90/src/main/xslt/configuration_to_docbook_section.xsl b/0.90/src/main/xslt/configuration_to_docbook_section.xsl
new file mode 100644
index 0000000..e637279
--- /dev/null
+++ b/0.90/src/main/xslt/configuration_to_docbook_section.xsl
@@ -0,0 +1,66 @@
+<?xml version="1.0"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+<xsl:output method="xml"/>
+<xsl:template match="configuration">
+<!--
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+This stylesheet is used to make an HTML version of hbase-default.xml.
+-->
+<section xml:id="hbase_default_configurations"
+version="5.0" xmlns="http://docbook.org/ns/docbook"
+      xmlns:xlink="http://www.w3.org/1999/xlink"
+      xmlns:xi="http://www.w3.org/2001/XInclude"
+      xmlns:svg="http://www.w3.org/2000/svg"
+      xmlns:m="http://www.w3.org/1998/Math/MathML"
+      xmlns:html="http://www.w3.org/1999/xhtml"
+      xmlns:db="http://docbook.org/ns/docbook">
+<title>HBase Default Configuration</title>
+<para>
+</para>
+
+<glossary xmlns='http://docbook.org/ns/docbook' xml:id="hbase.default.configuration">
+<title>HBase Default Configuration</title>
+<para>
+This documentation is generated using the default hbase configuration file,
+<filename>hbase-default.xml</filename>, as source.
+</para>
+
+<xsl:for-each select="property">
+<glossentry>
+  <xsl:attribute name="id">
+    <xsl:value-of select="name" />
+  </xsl:attribute>
+  <glossterm>
+    <varname><xsl:value-of select="name"/></varname>
+  </glossterm>
+  <glossdef>
+  <para><xsl:value-of select="description"/></para>
+  <para>Default: <varname><xsl:value-of select="value"/></varname></para>
+  </glossdef>
+</glossentry>
+</xsl:for-each>
+
+</glossary>
+
+</section>
+</xsl:template>
+</xsl:stylesheet>
diff --git a/0.90/src/saveVersion.sh b/0.90/src/saveVersion.sh
new file mode 100755
index 0000000..baae4e2
--- /dev/null
+++ b/0.90/src/saveVersion.sh
@@ -0,0 +1,47 @@
+#!/bin/sh
+
+# This file is used to generate the annotation of package info that
+# records the user, url, revision and timestamp.
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+unset LANG
+unset LC_CTYPE
+version=$1
+outputDirectory=$2
+user=`whoami`
+date=`date`
+cwd=`pwd`
+if [ -d .svn ]; then
+  revision=`svn info | sed -n -e 's/Last Changed Rev: \(.*\)/\1/p'`
+  url=`svn info | sed -n -e 's/URL: \(.*\)/\1/p'`
+elif [ -d .git ]; then
+  revision=`git log -1 --pretty=format:"%H"`
+  hostname=`hostname`
+  url="git://${hostname}${cwd}"
+else
+  revision="Unknown"
+  url="file://$cwd"
+fi
+mkdir -p "$outputDirectory/org/apache/hadoop/hbase"
+cat >"$outputDirectory/org/apache/hadoop/hbase/package-info.java" <<EOF
+/*
+ * Generated by src/saveVersion.sh
+ */
+@VersionAnnotation(version="$version", revision="$revision",
+                         user="$user", date="$date", url="$url")
+package org.apache.hadoop.hbase;
+EOF
diff --git a/0.90/src/site/resources/css/freebsd_docbook.css b/0.90/src/site/resources/css/freebsd_docbook.css
new file mode 100644
index 0000000..3d40fa7
--- /dev/null
+++ b/0.90/src/site/resources/css/freebsd_docbook.css
@@ -0,0 +1,208 @@
+/*
+ * Copyright (c) 2001, 2003, 2010 The FreeBSD Documentation Project
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ * $FreeBSD: doc/share/misc/docbook.css,v 1.15 2010/03/20 04:15:01 hrs Exp $
+ */
+
+BODY ADDRESS {
+	line-height: 1.3;
+	margin: .6em 0;
+}
+
+BODY BLOCKQUOTE {
+	margin-top: .75em;
+	line-height: 1.5;
+	margin-bottom: .75em;
+}
+
+HTML BODY {
+	margin: 1em 8% 1em 10%;
+	line-height: 1.2;
+}
+
+.LEGALNOTICE {
+	font-size: small;
+	font-variant: small-caps;
+}
+
+BODY DIV {
+	margin: 0;
+}
+
+DL {
+	margin: .8em 0;
+	line-height: 1.2;
+}
+
+BODY FORM {
+	margin: .6em 0;
+}
+
+H1, H2, H3, H4, H5, H6,
+DIV.EXAMPLE P B,
+.QUESTION,
+DIV.TABLE P B,
+DIV.PROCEDURE P B {
+	color: #990000;
+}
+
+BODY H1, BODY H2, BODY H3, BODY H4, BODY H5, BODY H6 {
+	line-height: 1.3;
+	margin-left: 0;
+}
+
+BODY H1, BODY H2 {
+	margin: .8em 0 0 -4%;
+}
+
+BODY H3, BODY H4 {
+	margin: .8em 0 0 -3%;
+}
+
+BODY H5 {
+	margin: .8em 0 0 -2%;
+}
+
+BODY H6 {
+	margin: .8em 0 0 -1%;
+}
+
+BODY HR {
+	margin: .6em;
+	border-width: 0 0 1px 0;
+	border-style: solid;
+	border-color: #cecece;
+}
+
+BODY IMG.NAVHEADER {
+	margin: 0 0 0 -4%;
+}
+
+OL {
+	margin: 0 0 0 5%;
+	line-height: 1.2;
+}
+
+BODY PRE {
+	margin: .75em 0;
+	line-height: 1.0;
+	font-family: monospace;
+}
+
+BODY TD, BODY TH {
+	line-height: 1.2;
+}
+
+UL, BODY DIR, BODY MENU {
+	margin: 0 0 0 5%;
+	line-height: 1.2;
+}
+
+HTML {
+	margin: 0; 
+	padding: 0;
+}
+
+BODY P B.APPLICATION {
+	color: #000000;
+}
+
+.FILENAME {
+	color: #007a00;
+}
+
+.GUIMENU, .GUIMENUITEM, .GUISUBMENU,
+.GUILABEL, .INTERFACE,
+.SHORTCUT, .SHORTCUT .KEYCAP {
+	font-weight: bold;
+}
+
+.GUIBUTTON {
+	background-color: #CFCFCF;
+	padding: 2px;
+}
+
+.ACCEL {
+	background-color: #F0F0F0;
+	text-decoration: underline;
+} 
+
+.SCREEN {
+	padding: 1ex;
+}
+
+.PROGRAMLISTING {
+	padding: 1ex;
+	background-color: #eee;
+	border: 1px solid #ccc;
+}
+
+@media screen {  /* hide from IE3 */
+	a[href]:hover { background: #ffa }
+}
+
+BLOCKQUOTE.NOTE {
+	color: #222;
+	background: #eee;
+	border: 1px solid #ccc;
+	padding: 0.4em 0.4em;
+	width: 85%;
+}
+
+BLOCKQUOTE.TIP {
+	color: #004F00;
+	background: #d8ecd6;
+	border: 1px solid green;
+	padding: 0.2em 2em;
+	width: 85%;
+}
+
+BLOCKQUOTE.IMPORTANT {
+	font-style:italic;
+	border: 1px solid #a00;
+	border-left: 12px solid #c00;
+	padding: 0.1em 1em;
+}
+
+BLOCKQUOTE.WARNING {
+	color: #9F1313;
+	background: #f8e8e8;
+	border: 1px solid #e59595;
+	padding: 0.2em 2em;
+	width: 85%;
+}
+
+.EXAMPLE {
+	background: #fefde6;
+	border: 1px solid #f1bb16;
+	margin: 1em 0;
+	padding: 0.2em 2em;
+	width: 90%;
+}
+
+.INFORMALTABLE TABLE.CALSTABLE TR TD {
+        padding-left: 1em;
+        padding-right: 1em;
+}
diff --git a/0.90/src/site/resources/css/site.css b/0.90/src/site/resources/css/site.css
new file mode 100644
index 0000000..a88f052
--- /dev/null
+++ b/0.90/src/site/resources/css/site.css
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+a.externalLink, a.externalLink:link, a.externalLink:visited, a.externalLink:active, a.externalLink:hover {
+  background: none;
+  padding-right: 0;
+}
+
+/*
+body ul {
+  list-style-type: square;
+}
+*/
+
+#downloadbox {
+  float: right;
+  margin: 0 10px 20px 20px;
+  padding: 5px;
+  border: 1px solid #999;
+  background-color: #eee;
+}
+
+#downloadbox h5 {
+  color: #000;
+  margin: 0;
+  border-bottom: 1px solid #aaaaaa;
+  font-size: smaller;
+  padding: 0;
+}
+
+#downloadbox p {
+  margin-top: 1em;
+  margin-bottom: 0;
+}
+
+#downloadbox ul {
+  margin-top: 0;
+  margin-bottom: 1em;
+  list-style-type: disc;
+}
+
+#downloadbox li {
+  font-size: smaller;
+}
+
+/*
+h4 {
+  padding: 0;
+  border: none;
+  color: #000;
+  margin: 0;
+  font-size: larger;
+  font-weight: bold;
+}
+*/
+
+#banner {
+  background: none;
+}
+
+#banner img {
+  margin: 10px;
+}
+
+.frontpagebox {
+  float: left;
+  text-align: center;
+  width: 15em;
+  margin-left: 0.5em;
+  margin-right: 0.5em;
+  margin-top: 2em;
+}
+
+.headline {
+  font-size: 120%;
+  font-weight: bold;
+  padding-top: 1px;
+  padding-bottom: 5px;
+  background-image: url(../images/breadcrumbs.jpg);
+  background-repeat: repeat-x;
+}
+
+/*
+#leftColumn {
+  display: none !important
+}
+
+#bodyColumn {
+  margin-left: 1.5em;
+}
+*/
+
+
diff --git a/0.90/src/site/resources/images/architecture.gif b/0.90/src/site/resources/images/architecture.gif
new file mode 100644
index 0000000..8d84a23
--- /dev/null
+++ b/0.90/src/site/resources/images/architecture.gif
Binary files differ
diff --git a/0.90/src/site/resources/images/asf_logo_wide.png b/0.90/src/site/resources/images/asf_logo_wide.png
new file mode 100644
index 0000000..c584eba
--- /dev/null
+++ b/0.90/src/site/resources/images/asf_logo_wide.png
Binary files differ
diff --git a/0.90/src/site/resources/images/favicon.ico b/0.90/src/site/resources/images/favicon.ico
new file mode 100644
index 0000000..161bcf7
--- /dev/null
+++ b/0.90/src/site/resources/images/favicon.ico
Binary files differ
diff --git a/0.90/src/site/resources/images/hadoop-logo.jpg b/0.90/src/site/resources/images/hadoop-logo.jpg
new file mode 100644
index 0000000..809525d
--- /dev/null
+++ b/0.90/src/site/resources/images/hadoop-logo.jpg
Binary files differ
diff --git a/0.90/src/site/resources/images/hbase_logo_med.gif b/0.90/src/site/resources/images/hbase_logo_med.gif
new file mode 100644
index 0000000..36d3e3c
--- /dev/null
+++ b/0.90/src/site/resources/images/hbase_logo_med.gif
Binary files differ
diff --git a/0.90/src/site/resources/images/hbase_small.gif b/0.90/src/site/resources/images/hbase_small.gif
new file mode 100644
index 0000000..3275765
--- /dev/null
+++ b/0.90/src/site/resources/images/hbase_small.gif
Binary files differ
diff --git a/0.90/src/site/resources/images/replication_overview.png b/0.90/src/site/resources/images/replication_overview.png
new file mode 100644
index 0000000..47d7b4c
--- /dev/null
+++ b/0.90/src/site/resources/images/replication_overview.png
Binary files differ
diff --git a/0.90/src/site/site.vm b/0.90/src/site/site.vm
new file mode 100644
index 0000000..05a4578
--- /dev/null
+++ b/0.90/src/site/site.vm
@@ -0,0 +1,512 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<!-- Generated by Apache Maven Doxia at $dateFormat.format( $currentDate ) -->
+#macro ( link $href $name $target $img $position $alt $border $width $height )
+  #set ( $linkTitle = ' title="' + $name + '"' )
+  #if( $target )
+    #set ( $linkTarget = ' target="' + $target + '"' )
+  #else
+    #set ( $linkTarget = "" )
+  #end
+  #if ( ( $href.toLowerCase().startsWith("http") || $href.toLowerCase().startsWith("https") ) )
+    #set ( $linkClass = ' class="externalLink"' )
+  #else
+    #set ( $linkClass = "" )
+  #end
+  #if ( $img )
+    #if ( $position == "left" )
+      <a href="$href"$linkClass$linkTarget$linkTitle>#image($img $alt $border $width $height)$name</a>
+    #else
+      <a href="$href"$linkClass$linkTarget$linkTitle>$name #image($img $alt $border $width $height)</a>
+    #end
+  #else
+    <a href="$href"$linkClass$linkTarget$linkTitle>$name</a>
+  #end
+#end
+##
+#macro ( image $img $alt $border $width $height )
+  #if( $img )
+    #if ( ! ( $img.toLowerCase().startsWith("http") || $img.toLowerCase().startsWith("https") ) )
+      #set ( $imgSrc = $PathTool.calculateLink( $img, $relativePath ) )
+      #set ( $imgSrc = $imgSrc.replaceAll( "\\", "/" ) )
+      #set ( $imgSrc = ' src="' + $imgSrc + '"' )
+    #else
+      #set ( $imgSrc = ' src="' + $img + '"' )
+    #end
+    #if( $alt )
+      #set ( $imgAlt = ' alt="' + $alt + '"' )
+    #else
+      #set ( $imgAlt = ' alt=""' )
+    #end
+    #if( $border )
+      #set ( $imgBorder = ' border="' + $border + '"' )
+    #else
+      #set ( $imgBorder = "" )
+    #end
+    #if( $width )
+      #set ( $imgWidth = ' width="' + $width + '"' )
+    #else
+      #set ( $imgWidth = "" )
+    #end
+    #if( $height )
+      #set ( $imgHeight = ' height="' + $height + '"' )
+    #else
+      #set ( $imgHeight = "" )
+    #end
+    <img class="imageLink"$imgSrc$imgAlt$imgBorder$imgWidth$imgHeight/>
+  #end
+#end
+#macro ( banner $banner $id )
+  #if ( $banner )
+    #if( $banner.href )
+      <a href="$banner.href" id="$id"#if( $banner.alt ) title="$banner.alt"#end>
+    #else
+        <div id="$id">
+    #end
+##
+    #if( $banner.src )
+        #set ( $src = $banner.src )
+        #if ( ! ( $src.toLowerCase().startsWith("http") || $src.toLowerCase().startsWith("https") ) )
+            #set ( $src = $PathTool.calculateLink( $src, $relativePath ) )
+            #set ( $src = $src.replaceAll( "\\", "/" ) )
+        #end
+        #if ( $banner.alt )
+            #set ( $alt = $banner.alt )
+        #else
+            #set ( $alt = $banner.name )
+        #end
+        <img src="$src" alt="$alt" />
+    #else
+        $banner.name
+    #end
+##
+    #if( $banner.href )
+        </a>
+    #else
+        </div>
+    #end
+  #end
+#end
+##
+#macro ( links $links )
+  #set ( $counter = 0 )
+  #foreach( $item in $links )
+    #set ( $counter = $counter + 1 )
+    #set ( $currentItemHref = $PathTool.calculateLink( $item.href, $relativePath ) )
+    #set ( $currentItemHref = $currentItemHref.replaceAll( "\\", "/" ) )
+    #link( $currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height )
+    #if ( $links.size() > $counter )
+      |
+    #end
+  #end
+#end
+##
+#macro ( breadcrumbs $breadcrumbs )
+  #set ( $counter = 0 )
+  #foreach( $item in $breadcrumbs )
+    #set ( $counter = $counter + 1 )
+    #set ( $currentItemHref = $PathTool.calculateLink( $item.href, $relativePath ) )
+    #set ( $currentItemHref = $currentItemHref.replaceAll( "\\", "/" ) )
+##
+    #if ( $currentItemHref == $alignedFileName || $currentItemHref == "" )
+      $item.name
+    #else
+      #link( $currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height )
+    #end
+    #if ( $breadcrumbs.size() > $counter )
+      &gt;
+    #end
+  #end
+#end
+##
+#macro ( displayTree $display $item )
+  #if ( $item && $item.items && $item.items.size() > 0 )
+    #foreach( $subitem in $item.items )
+      #set ( $subitemHref = $PathTool.calculateLink( $subitem.href, $relativePath ) )
+      #set ( $subitemHref = $subitemHref.replaceAll( "\\", "/" ) )
+      #if ( $alignedFileName == $subitemHref )
+        #set ( $display = true )
+      #end
+##
+      #displayTree( $display $subitem )
+    #end
+  #end
+#end
+##
+#macro ( menuItem $item )
+  #set ( $collapse = "none" )
+  #set ( $currentItemHref = $PathTool.calculateLink( $item.href, $relativePath ) )
+  #set ( $currentItemHref = $currentItemHref.replaceAll( "\\", "/" ) )
+##
+  #if ( $item && $item.items && $item.items.size() > 0 )
+    #if ( $item.collapse == false )
+      #set ( $collapse = "expanded" )
+    #else
+      ## By default collapsed
+      #set ( $collapse = "collapsed" )
+    #end
+##
+    #set ( $display = false )
+    #displayTree( $display $item )
+##
+    #if ( $alignedFileName == $currentItemHref || $display )
+      #set ( $collapse = "expanded" )
+    #end
+  #end
+  <li class="$collapse">
+  #if ( $item.img )
+    #if ( $item.position == "left" )
+      #if ( $alignedFileName == $currentItemHref )
+        <strong>#image($item.img $item.alt $item.border $item.width $item.height) $item.name</strong>
+      #else
+        #link($currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height)
+      #end
+    #else
+      #if ( $alignedFileName == $currentItemHref )
+        <strong>$item.name #image($item.img $item.alt $item.border $item.width $item.height)</strong>
+      #else
+        #link($currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height)
+      #end
+    #end
+  #else
+    #if ( $alignedFileName == $currentItemHref )
+      <strong>$item.name</strong>
+    #else
+      #link( $currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height )
+    #end
+  #end
+  #if ( $item && $item.items && $item.items.size() > 0 )
+    #if ( $collapse == "expanded" )
+      <ul>
+        #foreach( $subitem in $item.items )
+          #menuItem( $subitem )
+        #end
+      </ul>
+    #end
+  #end
+  </li>
+#end
+##
+#macro ( mainMenu $menus )
+  #foreach( $menu in $menus )
+    #if ( $menu.name )
+      #if ( $menu.img )
+        #if( $menu.position )
+          #set ( $position = $menu.position )
+        #else
+          #set ( $position = "left" )
+        #end
+##
+        #if ( ! ( $menu.img.toLowerCase().startsWith("http") || $menu.img.toLowerCase().startsWith("https") ) )
+          #set ( $src = $PathTool.calculateLink( $menu.img, $relativePath ) )
+          #set ( $src = $src.replaceAll( "\\", "/" ) )
+          #set ( $src = ' src="' + $src + '"' )
+        #else
+          #set ( $src = ' src="' + $menu.img + '"' )
+        #end
+##
+        #if( $menu.alt )
+          #set ( $alt = ' alt="' + $menu.alt + '"' )
+        #else
+          #set ( $alt = ' alt="' + $menu.name + '"' )
+        #end
+##
+        #if( $menu.border )
+          #set ( $border = ' border="' + $menu.border + '"' )
+        #else
+          #set ( $border = ' border="0"' )
+        #end
+##
+        #if( $menu.width )
+          #set ( $width = ' width="' + $menu.width + '"' )
+        #else
+          #set ( $width = "" )
+        #end
+        #if( $menu.height )
+          #set ( $height = ' height="' + $menu.height + '"' )
+        #else
+          #set ( $height = "" )
+        #end
+##
+        #set ( $img = '<img class="imageLink"' + $src + $alt + $border + $width + $height + "/>" )
+##
+        #if ( $position == "left" )
+        <h5>$img $menu.name</h5>
+        #else
+        <h5>$menu.name $img</h5>
+        #end
+      #else
+       <h5>$menu.name</h5>
+      #end
+    #end
+    #if ( $menu.items && $menu.items.size() > 0 )
+    <ul>
+      #foreach( $item in $menu.items )
+        #menuItem( $item )
+      #end
+    </ul>
+    #end
+  #end
+#end
+##
+#macro ( copyright )
+  #if ( $project )
+    #if ( ${project.organization} && ${project.organization.name} )
+      #set ( $period = "" )
+    #else
+      #set ( $period = "." )
+   #end
+##
+   #set ( $currentYear = ${currentDate.year} + 1900 )
+##
+    #if ( ${project.inceptionYear} && ( ${project.inceptionYear} != ${currentYear.toString()} ) )
+      ${project.inceptionYear}-${currentYear}${period}
+    #else
+      ${currentYear}${period}
+    #end
+##
+    #if ( ${project.organization} )
+      #if ( ${project.organization.name} && ${project.organization.url} )
+          <a href="$project.organization.url">${project.organization.name}</a>.
+      #elseif ( ${project.organization.name} )
+        ${project.organization.name}.
+      #end
+    #end
+  #end
+#end
+##
+#macro ( publishDate $position $publishDate $version )
+  #if ( $publishDate && $publishDate.format )
+    #set ( $format = $publishDate.format )
+  #else
+    #set ( $format = "yyyy-MM-dd" )
+  #end
+##
+  $dateFormat.applyPattern( $format )
+##
+  #set ( $dateToday = $dateFormat.format( $currentDate ) )
+##
+  #if ( $publishDate && $publishDate.position )
+    #set ( $datePosition = $publishDate.position )
+  #else
+    #set ( $datePosition = "left" )
+  #end
+##
+  #if ( $version )
+    #if ( $version.position )
+      #set ( $versionPosition = $version.position )
+    #else
+      #set ( $versionPosition = "left" )
+    #end
+  #else
+    #set ( $version = "" )
+    #set ( $versionPosition = "left" )
+  #end
+##
+  #set ( $breadcrumbs = $decoration.body.breadcrumbs )
+  #set ( $links = $decoration.body.links )
+
+  #if ( $datePosition.equalsIgnoreCase( "right" ) && $links && $links.size() > 0 )
+    #set ( $prefix = "&nbsp;|" )
+  #else
+    #set ( $prefix = "" )
+  #end
+##
+  #if ( $datePosition.equalsIgnoreCase( $position ) )
+    #if ( ( $datePosition.equalsIgnoreCase( "right" ) ) || ( $datePosition.equalsIgnoreCase( "bottom" ) ) )
+      $prefix <span id="publishDate">$i18n.getString( "site-renderer", $locale, "template.lastpublished" ): $dateToday</span>
+      #if ( $versionPosition.equalsIgnoreCase( $position ) )
+        &nbsp;| <span id="projectVersion">$i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version}</span>
+      #end
+    #elseif ( ( $datePosition.equalsIgnoreCase( "navigation-bottom" ) ) || ( $datePosition.equalsIgnoreCase( "navigation-top" ) ) )
+      <div id="lastPublished">
+        <span id="publishDate">$i18n.getString( "site-renderer", $locale, "template.lastpublished" ): $dateToday</span>
+        #if ( $versionPosition.equalsIgnoreCase( $position ) )
+          &nbsp;| <span id="projectVersion">$i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version}</span>
+        #end
+      </div>
+    #elseif ( $datePosition.equalsIgnoreCase("left") )
+      <div class="xleft">
+        <span id="publishDate">$i18n.getString( "site-renderer", $locale, "template.lastpublished" ): $dateToday</span>
+        #if ( $versionPosition.equalsIgnoreCase( $position ) )
+          &nbsp;| <span id="projectVersion">$i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version}</span>
+        #end
+        #if ( $breadcrumbs && $breadcrumbs.size() > 0 )
+          | #breadcrumbs( $breadcrumbs )
+        #end
+      </div>
+    #end
+  #elseif ( $versionPosition.equalsIgnoreCase( $position ) )
+    #if ( ( $versionPosition.equalsIgnoreCase( "right" ) ) || ( $versionPosition.equalsIgnoreCase( "bottom" ) ) )
+      $prefix <span id="projectVersion">$i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version}</span>
+    #elseif ( ( $versionPosition.equalsIgnoreCase( "navigation-bottom" ) ) || ( $versionPosition.equalsIgnoreCase( "navigation-top" ) ) )
+      <div id="lastPublished">
+        <span id="projectVersion">$i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version}</span>
+      </div>
+    #elseif ( $versionPosition.equalsIgnoreCase("left") )
+      <div class="xleft">
+        <span id="projectVersion">$i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version}</span>
+        #if ( $breadcrumbs && $breadcrumbs.size() > 0 )
+          | #breadcrumbs( $breadcrumbs )
+        #end
+      </div>
+    #end
+  #elseif ( $position.equalsIgnoreCase( "left" ) )
+    #if ( $breadcrumbs && $breadcrumbs.size() > 0 )
+      <div class="xleft">
+        #breadcrumbs( $breadcrumbs )
+      </div>
+    #end
+  #end
+#end
+##
+#macro ( poweredByLogo $poweredBy )
+  #if( $poweredBy )
+    #foreach ($item in $poweredBy)
+      #if( $item.href )
+        #set ( $href = $PathTool.calculateLink( $item.href, $relativePath ) )
+        #set ( $href = $href.replaceAll( "\\", "/" ) )
+      #else
+        #set ( $href="http://maven.apache.org/" )
+      #end
+##
+      #if( $item.name )
+        #set ( $name = $item.name )
+      #else
+        #set ( $name = $i18n.getString( "site-renderer", $locale, "template.builtby" )  )
+        #set ( $name = "${name} Maven"  )
+      #end
+##
+      #if( $item.img )
+        #set ( $img = $item.img )
+      #else
+        #set ( $img = "images/logos/maven-feather.png" )
+      #end
+##
+      #if ( ! ( $img.toLowerCase().startsWith("http") || $img.toLowerCase().startsWith("https") ) )
+        #set ( $img = $PathTool.calculateLink( $img, $relativePath ) )
+        #set ( $img = $img.replaceAll( "\\", "/" ) )
+      #end
+##
+      #if( $item.alt )
+        #set ( $alt = ' alt="' + $item.alt + '"' )
+      #else
+        #set ( $alt = ' alt="' + $name + '"' )
+      #end
+##
+      #if( $item.border )
+        #set ( $border = ' border="' + $item.border + '"' )
+      #else
+        #set ( $border = "" )
+      #end
+##
+      #if( $item.width )
+        #set ( $width = ' width="' + $item.width + '"' )
+      #else
+        #set ( $width = "" )
+      #end
+      #if( $item.height )
+        #set ( $height = ' height="' + $item.height + '"' )
+      #else
+        #set ( $height = "" )
+      #end
+##
+      <a href="$href" title="$name" class="poweredBy">
+        <img class="poweredBy" $alt src="$img" $border $width $height />
+      </a>
+    #end
+    #if( $poweredBy.isEmpty() )
+      <a href="http://maven.apache.org/" title="$i18n.getString( "site-renderer", $locale, "template.builtby" ) Maven" class="poweredBy">
+        <img class="poweredBy" alt="$i18n.getString( "site-renderer", $locale, "template.builtby" ) Maven" src="$relativePath/images/logos/maven-feather.png" />
+      </a>
+    #end
+  #else
+    <a href="http://maven.apache.org/" title="$i18n.getString( "site-renderer", $locale, "template.builtby" ) Maven" class="poweredBy">
+      <img class="poweredBy" alt="$i18n.getString( "site-renderer", $locale, "template.builtby" ) Maven" src="$relativePath/images/logos/maven-feather.png" />
+    </a>
+  #end
+#end
+##
+<html xmlns="http://www.w3.org/1999/xhtml"#if ( $locale ) xml:lang="$locale.language" lang="$locale.language"#end>
+  <head>
+    <meta http-equiv="Content-Type" content="text/html; charset=${outputEncoding}" />
+    <title>$title</title>
+    <style type="text/css" media="all">
+      @import url("$relativePath/css/maven-base.css");
+      @import url("$relativePath/css/maven-theme.css");
+      @import url("$relativePath/css/site.css");
+    </style>
+    <link rel="stylesheet" href="$relativePath/css/print.css" type="text/css" media="print" />
+#foreach( $author in $authors )
+      <meta name="author" content="$author" />
+#end
+#if ( $dateCreation )
+    <meta name="Date-Creation-yyyymmdd" content="$dateCreation" />
+#end
+#if ( $dateRevision )
+    <meta name="Date-Revision-yyyymmdd" content="$dateRevision" />
+#end
+#if ( $locale )
+    <meta http-equiv="Content-Language" content="$locale.language" />
+#end
+    #if ( $decoration.body.head )
+      #foreach( $item in $decoration.body.head.getChildren() )
+        ## Workaround for DOXIA-150 due to a non-desired behaviour in p-u
+        ## @see org.codehaus.plexus.util.xml.Xpp3Dom#toString()
+        ## @see org.codehaus.plexus.util.xml.Xpp3Dom#toUnescapedString()
+        #set ( $documentHeader = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" )
+        #set ( $documentHeader = $documentHeader.replaceAll( "\\", "" ) )
+        #if ( $item.name == "script" )
+          $StringUtils.replace( $item.toUnescapedString(), $documentHeader, "" )
+        #else
+          $StringUtils.replace( $item.toString(), $documentHeader, "" )
+        #end
+      #end
+    #end
+    ## $headContent
+  </head>
+  <body class="composite">
+    <div id="banner">
+      #banner( $decoration.bannerLeft "bannerLeft" )
+      #banner( $decoration.bannerRight "bannerRight" )
+      <div class="clear">
+        <hr/>
+      </div>
+    </div>
+    <div id="breadcrumbs">
+      #publishDate( "left" $decoration.publishDate $decoration.version )
+      <div class="xright" style="padding-left: 8px; margin-top: -4px;">
+        <form method="GET" action="http://search-hadoop.com/">
+          <input type="text" style="width: 192px; height: 15px; font-size: inherit; border: 1px solid darkgray" name="q" value="Search wiki, mailing lists, sources & more" onfocus="this.value=''"/>
+          <input type="hidden" name="fc_project" value="HBase"/>
+          <button style="height: 20px; width: 60px;">Search</button>
+        </form>
+      </div>
+      <div class="xright">#links( $decoration.body.links )#publishDate( "right" $decoration.publishDate $decoration.version )</div>
+      <div class="clear">
+        <hr/>
+      </div>
+    </div>
+    <div id="leftColumn">
+      <div id="navcolumn">
+       #publishDate( "navigation-top" $decoration.publishDate $decoration.version )
+       #mainMenu( $decoration.body.menus )
+       #poweredByLogo( $decoration.poweredBy )
+       #publishDate( "navigation-bottom" $decoration.publishDate $decoration.version )
+      </div>
+    </div>
+    <div id="bodyColumn">
+      <div id="contentBox">
+        $bodyContent
+      </div>
+    </div>
+    <div class="clear">
+      <hr/>
+    </div>
+    <div id="footer">
+      <div class="xright">Copyright &#169;#copyright()All Rights Reserved.#publishDate( "bottom" $decoration.publishDate $decoration.version )</div>
+      <div class="clear">
+        <hr/>
+      </div>
+    </div>
+  </body>
+</html>
diff --git a/0.90/src/site/site.xml b/0.90/src/site/site.xml
new file mode 100644
index 0000000..f8fb915
--- /dev/null
+++ b/0.90/src/site/site.xml
@@ -0,0 +1,48 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+
+<project xmlns="http://maven.apache.org/DECORATION/1.0.0"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/DECORATION/1.0.0 http://maven.apache.org/xsd/decoration-1.0.0.xsd">
+  <bannerLeft>
+    <name>HBase</name>
+    <src>images/hbase_logo_med.gif</src>
+    <href>http://hbase.apache.org/</href>
+  </bannerLeft>
+  <bannerRight>
+      <src>images/asf_logo_wide.png</src>
+    <href>http://www.apache.org/</href>
+  </bannerRight>
+  <version position="right" />
+  <publishDate position="right" />
+  <body>
+    <menu name="HBase Project">
+      <item name="Overview" href="index.html"/>
+      <item name="License" href="license.html" />
+      <item name="Downloads" href="http://www.apache.org/dyn/closer.cgi/hbase/" />
+      <item name="Release Notes" href="https://issues.apache.org/jira/browse/HBASE?report=com.atlassian.jira.plugin.system.project:changelog-panel" />
+      <item name="Issue Tracking" href="issue-tracking.html" />
+      <item name="Mailing Lists" href="mail-lists.html" />
+      <item name="Source Repository" href="source-repository.html" />
+      <item name="Team" href="team-list.html" />
+    </menu>
+    <menu name="Documentation">
+      <item name="Getting Started: Quick" href="quickstart.html" />
+      <item name="Getting Started: Detailed" href="notsoquick.html" />
+      <item name="API" href="apidocs/index.html" />
+      <item name="X-Ref" href="xref/index.html" />
+      <item name="Book"      href="book.html" />
+      <item name="FAQ" href="faq.html" />
+      <item name="Wiki" href="http://wiki.apache.org/hadoop/Hbase" />
+      <item name="ACID Semantics" href="acid-semantics.html" />
+      <item name="Bulk Loads" href="bulk-loads.html" />
+      <item name="Metrics"      href="metrics.html" />
+      <item name="HBase on Windows"      href="cygwin.html" />
+      <item name="Cluster replication"      href="replication.html" />
+      <item name="Pseudo-Dist. Extras"      href="pseudo-distributed.html" />
+    </menu>
+  </body>
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-stylus-skin</artifactId>
+  </skin>
+</project>
diff --git a/0.90/src/site/xdoc/acid-semantics.xml b/0.90/src/site/xdoc/acid-semantics.xml
new file mode 100644
index 0000000..a7243cb
--- /dev/null
+++ b/0.90/src/site/xdoc/acid-semantics.xml
@@ -0,0 +1,217 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title> 
+      HBase ACID Properties
+    </title>
+  </properties>
+
+  <body>
+    <section name="About this Document">
+      <p>HBase is not an ACID compliant database. However, it does guarantee certain specific
+      properties.</p>
+      <p>This specification enumerates the ACID properties of HBase.</p>
+    </section>
+    <section name="Definitions">
+      <p>For the sake of common vocabulary, we define the following terms:</p>
+      <dl>
+        <dt>Atomicity</dt>
+        <dd>an operation is atomic if it either completes entirely or not at all</dd>
+
+        <dt>Consistency</dt>
+        <dd>
+          all actions cause the table to transition from one valid state directly to another
+          (e.g. a row will not disappear during an update, etc.)
+        </dd>
+
+        <dt>Isolation</dt>
+        <dd>
+          an operation is isolated if it appears to complete independently of any other concurrent transaction
+        </dd>
+
+        <dt>Durability</dt>
+        <dd>any update that reports &quot;successful&quot; to the client will not be lost</dd>
+
+        <dt>Visibility</dt>
+        <dd>an update is considered visible if any subsequent read will see the update as having been committed</dd>
+      </dl>
+      <p>
+        The terms <em>must</em> and <em>may</em> are used as specified by RFC 2119.
+        In short, the word &quot;must&quot; implies that, if some case exists where the statement
+        is not true, it is a bug. The word &quot;may&quot; implies that, even if the guarantee
+        is provided in a current release, users should not rely on it.
+      </p>
+    </section>
+    <section name="APIs to consider">
+      <ul>
+        <li>Read APIs
+        <ul>
+          <li>get</li>
+          <li>scan</li>
+        </ul>
+        </li>
+        <li>Write APIs
+        <ul>
+          <li>put</li>
+          <li>batch put</li>
+          <li>delete</li>
+        </ul>
+        </li>
+        <li>Combination (read-modify-write) APIs
+        <ul>
+          <li>incrementColumnValue</li>
+          <li>checkAndPut</li>
+        </ul>
+        </li>
+      </ul>
+    </section>
+
+    <section name="Guarantees Provided">
+
+      <section name="Atomicity">
+
+        <ol>
+          <li>All mutations are atomic within a row. Any put will either wholly succeed or wholly fail.</li>
+          <ol>
+            <li>An operation that returns a &quot;success&quot; code has completely succeeded.</li>
+            <li>An operation that returns a &quot;failure&quot; code has completely failed.</li>
+            <li>An operation that times out may have succeeded and may have failed. However,
+            it will not have partially succeeded or failed.</li>
+          </ol>
+          <li> This is true even if the mutation crosses multiple column families within a row.</li>
+          <li> APIs that mutate several rows will <em>not</em> be atomic across the multiple rows.
+          For example, a multiput that operates on rows 'a', 'b', and 'c' may return having
+          mutated some but not all of the rows. In such cases, these APIs will return a list
+          of success codes, each of which may have succeeded, failed, or timed out as described above.</li>
+          <li> The checkAndPut API executes atomically, like the compareAndSet (CAS) operation
+          found in many hardware architectures; a short example follows this list.</li>
+          <li> Mutations are seen to happen in a well-defined order for each row, with no
+          interleaving. For example, if one writer issues the mutation &quot;a=1,b=1,c=1&quot; and
+          another writer issues the mutation &quot;a=2,b=2,c=2&quot;, the row must either
+          be &quot;a=1,b=1,c=1&quot; or &quot;a=2,b=2,c=2&quot; and must <em>not</em> be something
+          like &quot;a=1,b=2,c=1&quot;.</li>
+          <ol>
+            <li>Please note that this is not true <em>across rows</em> for multirow batch mutations.</li>
+          </ol>
+        </ol>
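+        <p>
+          As a short illustration of the checkAndPut guarantee, the sketch below applies a
+          Put only if a cell still holds an expected value; the check and the write form a
+          single atomic step. The table, row, column and values are illustrative only.
+        </p>
+        <code><pre>
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class CheckAndPutExample {
+  public static void main(String[] args) throws Exception {
+    HTable table = new HTable(HBaseConfiguration.create(), "mytable");
+    Put put = new Put(Bytes.toBytes("row1"));
+    put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("2"));
+    // succeeds only if cf:col of row1 still equals "1"; no other writer can
+    // slip in between the check and the write
+    boolean applied = table.checkAndPut(Bytes.toBytes("row1"),
+        Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("1"), put);
+    System.out.println("applied: " + applied);
+  }
+}
+</pre></code>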
+      </section>
+      <section name="Consistency and Isolation">
+        <ol>
+          <li>All rows returned via any access API will consist of a complete row that existed at
+          some point in the table's history.</li>
+          <li>This is true across column families - i.e. a get of a full row that runs concurrently
+          with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time
+          between mutation i and i+1, for some i between 1 and 5.</li>
+          <li>The state of a row will only move forward through the history of edits to it.</li>
+        </ol>
+
+        <section name="Consistency of Scans">
+        <p>
+          A scan is <strong>not</strong> a consistent view of a table. Scans do
+          <strong>not</strong> exhibit <em>snapshot isolation</em>.
+        </p>
+        <p>
+          Rather, scans have the following properties:
+        </p>
+
+        <ol>
+          <li>
+            Any row returned by the scan will be a consistent view (i.e. that version
+            of the complete row existed at some point in time)
+          </li>
+          <li>
+            A scan will always reflect a view of the data <em>at least as new as</em>
+            the beginning of the scan. This satisfies the visibility guarantees
+          enumerated below.</li>
+          <ol>
+            <li>For example, if client A writes data X and then communicates via a side
+            channel to client B, any scans started by client B will contain data at least
+            as new as X.</li>
+            <li>A scan <em>must</em> reflect all mutations committed prior to the construction
+            of the scanner, and <em>may</em> reflect some mutations committed subsequent to the
+            construction of the scanner.</li>
+            <li>Scans must include <em>all</em> data written prior to the scan (except in
+            the case where data is subsequently mutated, in which case it <em>may</em> reflect
+            the mutation)</li>
+          </ol>
+        </ol>
+        <p>
+          Those familiar with relational databases will recognize this isolation level as &quot;read committed&quot;.
+        </p>
+        <p>
+          Please note that the guarantees listed above regarding scanner consistency
+          are referring to &quot;transaction commit time&quot;, not the &quot;timestamp&quot;
+          field of each cell. That is to say, a scanner started at time <em>t</em> may see edits
+          with a timestamp value greater than <em>t</em>, if those edits were committed with a
+          &quot;forward dated&quot; timestamp before the scanner was constructed.
+        </p>
+        </section>
+      </section>
+      <section name="Visibility">
+        <ol>
+          <li> When a client receives a &quot;success&quot; response for any mutation, that
+          mutation is immediately visible to both that client and any client with whom it
+          later communicates through side channels.</li>
+          <li> A row must never exhibit so-called &quot;time-travel&quot; properties. That
+          is to say, if a series of mutations moves a row sequentially through a series of
+          states, any sequence of concurrent reads will return a subsequence of those states.</li>
+          <ol>
+            <li>For example, if a row's cells are mutated using the &quot;incrementColumnValue&quot;
+            API, a client must never see the value of any cell decrease.</li>
+            <li>This is true regardless of which read API is used to read back the mutation.</li>
+          </ol>
+          <li> Any version of a cell that has been returned to a read operation is guaranteed to
+          be durably stored.</li>
+        </ol>
+
+      </section>
+      <section name="Durability">
+        <ol>
+          <li> All visible data is also durable data. That is to say, a read will never return
+          data that has not been made durable on disk.[1]</li>
+          <li> Any operation that returns a &quot;success&quot; code (e.g. does not throw an exception)
+          will be made durable.</li>
+          <li> Any operation that returns a &quot;failure&quot; code will not be made durable
+          (subject to the Atomicity guarantees above).</li>
+          <li> All reasonable failure scenarios will not affect any of the guarantees of this document.</li>
+
+        </ol>
+      </section>
+      <section name="Tunability">
+        <p>All of the above guarantees must be possible within HBase. For users who would like to trade
+        off some guarantees for performance, HBase may offer several tuning options. For example:</p>
+        <ul>
+          <li>Visibility may be tuned on a per-read basis to allow stale reads or time travel.</li>
+          <li>Durability may be tuned to only flush data to disk on a periodic basis.</li>
+        </ul>
+      </section>
+    </section>
+    <section name="Footnotes">
+
+      <p>[1] In the context of HBase, &quot;durably on disk&quot; implies an hflush() call on the transaction
+      log. This does not actually imply an fsync() to magnetic media, but rather just that the data has been
+      written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is
+      possible that the edits are not truly durable.</p>
+    </section>
+
+  </body>
+</document>
diff --git a/0.90/src/site/xdoc/bulk-loads.xml b/0.90/src/site/xdoc/bulk-loads.xml
new file mode 100644
index 0000000..8ecd005
--- /dev/null
+++ b/0.90/src/site/xdoc/bulk-loads.xml
@@ -0,0 +1,137 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title> 
+      Bulk Loads in HBase
+    </title>
+  </properties>
+  <body>
+    <section name="Overview">
+      <p>
+        HBase includes several methods of loading data into tables.
+        The most straightforward method is to either use the TableOutputFormat
+        class from a MapReduce job, or use the normal client APIs; however,
+        these are not always the most efficient methods.
+      </p>
+      <p>
+        This document describes HBase's bulk load functionality. The bulk load
+        feature uses a MapReduce job to output table data in HBase's internal
+        data format, and then directly loads the data files into a running
+        cluster.
+      </p>
+    </section>
+    <section name="Bulk Load Architecture">
+      <p>
+        The HBase bulk load process consists of two main steps.
+      </p>
+      <section name="Preparing data via a MapReduce job">
+        <p>
+          The first step of a bulk load is to generate HBase data files from
+          a MapReduce job using HFileOutputFormat. This output format writes
+          out data in HBase's internal storage format so that it can later be
+          loaded very efficiently into the cluster.
+        </p>
+        <p>
+          To function efficiently, HFileOutputFormat must be configured
+          such that each output HFile fits within a single region. To
+          do this, jobs use Hadoop's TotalOrderPartitioner class to partition the
+          map output into disjoint ranges of the key space, corresponding to the
+          key ranges of the regions in the table.
+        </p>
+        <p>
+          HFileOutputFormat includes a convenience function, <code>configureIncrementalLoad()</code>,
+          which automatically sets up a TotalOrderPartitioner based on the current
+          region boundaries of a table.
+        </p>
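+        <p>
+          A minimal sketch of such a preparation job is shown below. The mapper, the
+          <code>mytable</code> table name, the <code>cf:col</code> column and the input/output
+          paths are placeholders for illustration, not part of the HBase API.
+        </p>
+        <code><pre>
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+
+public class BulkLoadPrepare {
+
+  // hypothetical mapper: each input line holds a row key and a value separated by a tab
+  static class TsvMapper
+      extends Mapper&lt;LongWritable, Text, ImmutableBytesWritable, Put&gt; {
+    protected void map(LongWritable offset, Text line, Context ctx)
+        throws IOException, InterruptedException {
+      String[] parts = line.toString().split("\t", 2);
+      byte[] row = Bytes.toBytes(parts[0]);
+      Put put = new Put(row);
+      put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(parts[1]));
+      ctx.write(new ImmutableBytesWritable(row), put);
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    Job job = new Job(conf, "bulk-load-prepare");
+    job.setJarByClass(BulkLoadPrepare.class);
+    job.setMapperClass(TsvMapper.class);
+    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
+    job.setMapOutputValueClass(Put.class);
+    FileInputFormat.addInputPath(job, new Path(args[0]));
+    FileOutputFormat.setOutputPath(job, new Path(args[1]));
+    // wires in the reducer, TotalOrderPartitioner and HFileOutputFormat based on
+    // the table's current region boundaries
+    HFileOutputFormat.configureIncrementalLoad(job, new HTable(conf, "mytable"));
+    System.exit(job.waitForCompletion(true) ? 0 : 1);
+  }
+}
+</pre></code>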
+      </section>
+      <section name="Completing the data load">
+        <p>
+          After the data has been prepared using <code>HFileOutputFormat</code>, it
+          is loaded into the cluster using a command line tool. This command line tool
+          iterates through the prepared data files, and for each one determines the
+          region the file belongs to. It then contacts the appropriate Region Server,
+          which adopts the HFile, moving it into its storage directory and making
+          the data available to clients.
+        </p>
+        <p>
+          If the region boundaries have changed during the course of bulk load
+          preparation, or between the preparation and completion steps, the bulk
+          load command line utility will automatically split the data files into
+          pieces corresponding to the new boundaries. This process is not
+          optimally efficient, so users should take care to minimize the delay between
+          preparing a bulk load and importing it into the cluster, especially
+          if other clients are simultaneously loading data through other means.
+        </p>
+      </section>
+    </section>
+    <section name="Preparing a bulk load using the importtsv tool">
+      <p>
+        HBase ships with a command line tool called <code>importtsv</code>. This tool
+        is available by running <code>hadoop jar /path/to/hbase-VERSION.jar importtsv</code>.
+        Running this tool with no arguments prints brief usage information:
+      </p>
+      <code><pre>
+Usage: importtsv -Dimporttsv.columns=a,b,c &lt;tablename&gt; &lt;inputdir&gt;
+
+Imports the given input directory of TSV data into the specified table.
+
+The column names of the TSV data must be specified using the -Dimporttsv.columns
+option. This option takes the form of comma-separated column names, where each
+column name is either a simple column family, or a columnfamily:qualifier. The special
+column name HBASE_ROW_KEY is used to designate that this column should be used
+as the row key for each imported record. You must specify exactly one column
+to be the row key.
+
+In order to prepare data for a bulk data load, pass the option:
+  -Dimporttsv.bulk.output=/path/for/output
+
+Other options that may be specified with -D include:
+  -Dimporttsv.skip.bad.lines=false - fail if encountering an invalid line
+</pre></code>
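+      <p>
+        For example, the following invocation (the column mapping, table name and paths
+        are illustrative) prepares HFiles under <code>/user/todd/myoutput</code> instead of
+        writing directly to the table:
+      </p>
+      <code>$ hadoop jar hbase-VERSION.jar importtsv -Dimporttsv.columns=HBASE_ROW_KEY,d:c1 -Dimporttsv.bulk.output=/user/todd/myoutput mytable /user/todd/input</code>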
+    </section>
+    <section name="Importing the prepared data using the completebulkload tool">
+      <p>
+        After a data import has been prepared using the <code>importtsv</code> tool, the
+        <code>completebulkload</code> tool is used to import the data into the running cluster.
+      </p>
+      <p>
+        The <code>completebulkload</code> tool simply takes the same output path where
+        <code>importtsv</code> put its results, and the table name. For example:
+      </p>
+      <code>$ hadoop jar hbase-VERSION.jar completebulkload /user/todd/myoutput mytable</code>
+      <p>
+        This tool will run quickly, after which point the new data will be visible in
+        the cluster.
+      </p>
+    </section>
+    <section name="Advanced Usage">
+      <p>
+        Although the <code>importtsv</code> tool is useful in many cases, advanced users may
+        want to generate data programmatically, or import data from other formats. To get
+        started doing so, dig into <code>ImportTsv.java</code> and check the JavaDoc for
+        HFileOutputFormat.
+      </p>
+      <p>
+        The import step of the bulk load can also be done programmatically. See the
+        <code>LoadIncrementalHFiles</code> class for more information.
+      </p>
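+      <p>
+        As a rough sketch of the programmatic route (consult the class's JavaDoc for the
+        exact constructor and exceptions), the completion step can be driven from Java as
+        follows; the table name and output path are placeholders:
+      </p>
+      <code><pre>
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
+
+public class CompleteBulkLoad {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    // hands each prepared HFile under the output directory to its owning region server
+    new LoadIncrementalHFiles(conf).doBulkLoad(
+        new Path("/user/todd/myoutput"), new HTable(conf, "mytable"));
+  }
+}
+</pre></code>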
+    </section>
+  </body>
+</document>
diff --git a/0.90/src/site/xdoc/cygwin.xml b/0.90/src/site/xdoc/cygwin.xml
new file mode 100644
index 0000000..2bdce12
--- /dev/null
+++ b/0.90/src/site/xdoc/cygwin.xml
@@ -0,0 +1,242 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Installing HBase on Windows using Cygwin</title>
+  </properties>
+
+<body>
+<section name="Introduction">
+<p><a title="HBase project" href="http://hadoop.apache.org/hbase" target="_blank">HBase</a> is a distributed, column-oriented store, modeled after Google's <a title="Google's BigTable" href="http://labs.google.com/papers/bigtable.html" target="_blank">BigTable</a>. HBase is built on top of <a title="Hadoop project" href="http://hadoop.apache.org">Hadoop</a> for its <a title="Hadoop MapReduce project" href="http://hadoop.apache.org/mapreduce" target="_blank">MapReduce </a>and <a title="Hadoop DFS project" href="http://hadoop.apache.org/hdfs">distributed file system</a> implementation. All these projects are open-source and part of the <a title="The Apache Software Foundation" href="http://www.apache.org/" target="_blank">Apache Software Foundation</a>.</p>
+
+<p style="text-align: justify; ">As being distributed, large scale platforms, the Hadoop and HBase projects mainly focus on <em><strong>*nix</strong></em><strong> environments</strong> for production installations. However, being developed in <strong>Java</strong>, both projects are fully <strong>portable</strong> across platforms and, hence, also to the <strong>Windows operating system</strong>. For ease of development the projects rely on <a title="Cygwin site" href="http://www.cygwin.com/" target="_blank">Cygwin</a> to have a *nix-like environment on Windows to run the shell scripts.</p>
+</section>
+<section name="Purpose">
+<p style="text-align: justify; ">This document explains the <strong>intricacies of running HBase on Windows using Cygwin</strong> as an all-in-one single-node installation for testing and development. The HBase <a title="HBase Overview" href="http://hadoop.apache.org/hbase/docs/current/api/overview-summary.html#overview_description" target="_blank">Overview</a> and <a title="HBase QuickStart" href="http://hadoop.apache.org/common/docs/current/quickstart.html" target="_blank">QuickStart</a> guides on the other hand go a long way in explaning how to setup <a title="HBase project" href="http://hadoop.apache.org/hbase" target="_blank">HBase</a> in more complex deployment scenario's.</p>
+</section>
+
+<section name="Installation">
+<p style="text-align: justify; ">For running HBase on Windows, 3 technologies are required: <strong>Java, Cygwin and SSH</strong>. The following paragraphs detail the installation of each of the aforementioned technologies.</p>
+<section name="Java">
+<p style="text-align: justify; ">HBase depends on the <a title="Java Platform, Standard Edition, 6 Release" href="http://java.sun.com/javase/6/" target="_blank">Java Platform, Standard Edition, 6 Release</a>. So the target system has to be provided with at least the Java Runtime Environment (JRE); however if the system will also be used for development, the Jave Development Kit (JDK) is preferred. You can download the latest versions for both from <a title="Java SE Downloads" href="http://java.sun.com/javase/downloads/index.jsp" target="_blank">Sun's download page</a>. Installation is a simple GUI wizard that guides you through the process.</p>
+</section>
+<section name="Cygwin">
+<p style="text-align: justify; ">Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that a whole bunch of the most common *nix tools are supplied. Combined, the DLL with the tools form a very *nix-alike environment on Windows.</p>
+
+<p style="text-align: justify; ">For installation, Cygwin provides the <a title="Cygwin Setup Utility" href="http://cygwin.com/setup.exe" target="_blank"><strong><code>setup.exe</code> utility</strong></a> that tracks the versions of all installed components on the target system and provides the mechanism for <strong>installing</strong> or <strong>updating </strong>everything from the mirror sites of Cygwin.</p>
+
+<p style="text-align: justify; ">To support installation, the <code>setup.exe</code> utility uses 2 directories on the target system. The <strong>Root</strong> directory for Cygwin (defaults to <code>C:\cygwin)</code> which will become <code>/</code> within the eventual Cygwin installation; and the <strong>Local Package </strong>directory (e.g. <code>C:\cygsetup</code> that is the cache where <code>setup.exe</code> stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.</p>
+
+<p style="text-align: justify; ">Perform following steps to install Cygwin, which are elaboratly detailed in the <a title="Setting Up Cygwin" href="http://cygwin.com/cygwin-ug-net/setup-net.html" target="_self">2nd chapter</a> of the <a title="Cygwin User's Guide" href="http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html" target="_blank">Cygwin User's Guide</a>:</p>
+
+<ol style="text-align: justify; ">
+	<li>Make sure you have <code>Administrator</code> privileges on the target system.</li>
+	<li>Choose and create your <strong>Root</strong> and <strong>Local Package</strong> directories. A good suggestion is to use the <code>C:\cygwin\root</code> and <code>C:\cygwin\setup</code> folders.</li>
+	<li>Download the <code>setup.exe</code> utility and save it to the <strong>Local Package</strong> directory.</li>
+	<li>Run the <code>setup.exe</code> utility,
+<ol>
+	<li>Choose  the <code>Install from Internet</code> option,</li>
+	<li>Choose your <strong>Root</strong> and <strong>Local Package</strong> folders</li>
+	<li>and select an appropriate mirror.</li>
+	<li>Don't select any additional packages yet, as we only want to install Cygwin for now.</li>
+	<li>Wait for download and install</li>
+	<li>Finish the installation</li>
+</ol>
+</li>
+	<li>Optionally, you can now also add a shortcut to your Start menu pointing to the <code>setup.exe</code> utility in the <strong>Local Package </strong>folder.</li>
+	<li>Add <code>CYGWIN_HOME</code> system-wide environment variable that points to your <strong>Root </strong>directory.</li>
+	<li>Add <code>%CYGWIN_HOME%\bin</code> to the end of your <code>PATH</code> environment variable.</li>
+	<li>Reboot the system after making changes to the environment variables; otherwise the OS will not be able to find the Cygwin utilities.</li>
+	<li>Test your installation by running your freshly created shortcuts or the <code>Cygwin.bat</code> command in the <strong>Root</strong> folder. You should end up in a terminal window that is running a <a title="Bash Reference Manual" href="http://www.gnu.org/software/bash/manual/bashref.html" target="_blank">Bash shell</a>. Test the shell by issuing the following commands:
+<ol>
+	<li><code>cd /</code> should take you to the <strong>Root</strong> directory in Cygwin;</li>
+	<li>the <code>ls</code> command should list all files and folders in the current directory.</li>
+	<li>Use the <code>exit</code> command to end the terminal.</li>
+</ol>
+</li>
+	<li>When needed, to <strong>uninstall</strong> Cygwin you can simply delete the <strong>Root</strong> and <strong>Local Package</strong> directory, and the <strong>shortcuts</strong> that were created during installation.</li>
+</ol>
+</section>
+<section name="SSH">
+<p style="text-align: justify; ">HBase (and Hadoop) rely on <a title="Secure Shell" href="http://nl.wikipedia.org/wiki/Secure_Shell" target="_blank"><strong>SSH</strong></a> for interprocess/-node <strong>communication</strong> and launching<strong> remote commands</strong>. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as <strong>Windows services</strong>!</p>
+
+<ol style="text-align: justify; ">
+	<li>Rerun the <code><strong>setup.exe</strong></code><strong> utility</strong>.</li>
+	<li>Leave all parameters as is, skipping through the wizard using the <code>Next</code> button until the <code>Select Packages</code> panel is shown.</li>
+	<li>Maximize the window and click the <code>View</code> button to toggle to the list view, which is ordered alphabetically on <code>Package</code>, making it easier to find the packages we'll need.</li>
+	<li>Select the following packages by clicking the status word (normally <code>Skip</code>) so it's marked for installation. Use the <code>Next </code>button to download and install the packages.
+<ol>
+	<li>OpenSSH</li>
+	<li>tcp_wrappers</li>
+	<li>diffutils</li>
+	<li>zlib</li>
+</ol>
+</li>
+	<li>Wait for the install to complete and finish the installation.</li>
+</ol>
+</section>
+<section name="HBase">
+<p style="text-align: justify; ">Download the <strong>latest release </strong>of HBase from the <a title="HBase Releases" href="http://hadoop.apache.org/hbase/releases.html" target="_blank">website</a>. As the HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final <strong>installation</strong> directory. Notice that HBase has to be installed in Cygwin and a good directory suggestion is to use <code>/usr/local/</code> (or [<code><strong>Root</strong> directory]\usr\local</code> in Windows slang). You should end up with a <code>/usr/local/hbase-<em>&lt;version&gt;</em></code> installation in Cygwin.</p>
+
+This finishes the installation. We now go on to the configuration.
+</section>
+</section>
+<section name="Configuration">
+<p style="text-align: justify; ">There are 3 parts left to configure: <strong>Java, SSH and HBase</strong> itself. Following paragraphs explain eacht topic in detail.</p>
+<section name="Java">
+<p style="text-align: justify; ">One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contains spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using <strong>symbolic links</strong>.</p>
+
+<ol style="text-align: justify; ">
+	<li style="text-align: justify; ">Create a link in <code>/usr/local</code> to the Java home directory by using the following command and substituting the name of your chosen Java environment:
+<pre>ln -s /cygdrive/c/Program\ Files/Java/<em>&lt;jre name&gt; </em>/usr/local/<em>&lt;jre name&gt;</em></pre>
+</li>
+	<li>Test your Java installation by changing directory to your Java folder with <code>cd /usr/local/<em>&lt;jre name&gt;</em></code> and issuing the command <code>./bin/java -version</code>. This should output the version of the chosen JRE.</li>
+</ol>
+</section>
+<section name="SSH">
+<p style="text-align: justify; ">Configuring <strong>SSH </strong>is quite elaborate, but primarily a question of launching it by default as a<strong> Windows service</strong>.</p>
+
+<ol style="text-align: justify; ">
+	<li style="text-align: justify; ">On Windows Vista and above make sure you run the Cygwin shell with <strong>elevated privileges</strong>, by right-clicking on the shortcut an using <code>Run as Administrator</code>.</li>
+	<li style="text-align: justify; ">First of all, we have to make sure the <strong>rights on some crucial files</strong> are correct. Use the commands underneath. You can verify all rights by using the <code>LS -L</code> command on the different files. Also, notice the auto-completion feature in the shell using <code>&lt;TAB&gt;</code> is extremely handy in these situations.
+<ol>
+	<li><code>chmod +r /etc/passwd</code> to make the passwords file readable for all</li>
+	<li><code>chmod u+w /etc/passwd</code> to make the passwords file writable for the owner</li>
+	<li><code>chmod +r /etc/group</code> to make the groups file readable for all</li>
+</ol>
+<ol>
+	<li><code>chmod u+w /etc/group</code> to make the groups file writable for the owner</li>
+</ol>
+<ol>
+	<li><code>chmod 755 /var</code> to make the var folder writable to owner and readable and executable to all</li>
+</ol>
+</li>
+	<li>Edit the <strong>/etc/hosts.allow</strong> file using your favorite editor (why not VI in the shell!) and make sure the following two lines are in there before the <code>PARANOID</code> line:
+<ol>
+	<li><code>ALL : localhost 127.0.0.1/32 : allow</code></li>
+	<li><code>ALL : [::1]/128 : allow</code></li>
+</ol>
+</li>
+	<li>Next we have to <strong>configure SSH</strong> by using the script <code>ssh-host-config</code>
+<ol>
+	<li>If this script asks to overwrite an existing <code>/etc/ssh_config</code>, answer <code>yes</code>.</li>
+	<li>If this script asks to overwrite an existing <code>/etc/sshd_config</code>, answer <code>yes</code>.</li>
+	<li>If this script asks to use privilege separation, answer <code>yes</code>.</li>
+	<li>If this script asks to install <code>sshd</code> as a service, answer <code>yes</code>. Make sure you started your shell as Administrator!</li>
+	<li>If this script asks for the CYGWIN value, just press <code>&lt;enter&gt;</code> as the default is <code>ntsec</code>.</li>
+	<li>If this script asks to create the <code>sshd</code> account, answer <code>yes</code>.</li>
+	<li>If this script asks to use a different user name as service account, answer <code>no</code> as the default will suffice.</li>
+	<li>If this script asks to create the <code>cyg_server</code> account, answer <code>yes</code>. Enter a password for the account.</li>
+</ol>
+</li>
+	<li><strong>Start the SSH service</strong> using <code>net start sshd</code> or <code>cygrunsrv  --start  sshd</code>. Notice that <code>cygrunsrv</code> is the utility that makes the process run as a Windows service. Confirm that you see a message stating that <code>the CYGWIN sshd service  was started successfully.</code></li>
+	<li>Harmonize the Windows and Cygwin<strong> user accounts</strong> by using the commands:
+<ol>
+	<li><code>mkpasswd -cl &gt; /etc/passwd</code></li>
+	<li><code>mkgroup --local &gt; /etc/group</code></li>
+</ol>
+</li>
+	<li><strong>Test </strong>the installation of SSH:
+<ol>
+	<li>Open a new Cygwin terminal</li>
+	<li>Use the command <code>whoami</code> to verify your userID</li>
+	<li>Issue an <code>ssh localhost</code> to connect to the system itself
+<ol>
+	<li>Answer <code>yes</code> when presented with the server's fingerprint</li>
+	<li>Issue your password when prompted</li>
+	<li>test a few commands in the remote session</li>
+	<li>The <code>exit</code> command should take you back to your first shell in Cygwin</li>
+</ol>
+</li>
+	<li>Issuing <code>exit</code> again should terminate the Cygwin shell.</li>
+</ol>
+</li>
+</ol>
+</section>
+<section name="HBase">
+<p style="text-align: justify; ">If all previous configurations are working properly, we just need some tinkering with the <strong>HBase config</strong> files so that everything resolves properly on Windows/Cygwin. All files and paths referenced here start from the HBase <code>[<strong>installation</strong> directory]</code> as the working directory.</p>
+<ol>
+	<li>HBase uses the <code>./conf/<strong>hbase-env.sh</strong></code> to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, and change them to fit your environment. They should read something like:
+<ol>
+	<li><code>export JAVA_HOME=/usr/local/<em>&lt;jre name&gt;</em></code></li>
+	<li><code>export HBASE_IDENT_STRING=$HOSTNAME</code> as this most likely does not include spaces.</li>
+</ol>
+</li>
+	<li>HBase uses the ./conf/<code><strong>hbase-default.xml</strong></code> file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-style, hence relative to the root <code>/</code>. However, every parameter that is consumed by the Windows processes themselves needs to be a Windows setting, hence <code>C:\</code>-style. Change the following properties in the configuration file, adjusting paths where necessary to conform to your own installation (a short Java sketch after this list shows the resulting client-side settings):
+<ol>
+	<li><code>hbase.rootdir</code> must read e.g. <code>file:///C:/cygwin/root/tmp/hbase/data</code></li>
+	<li><code>hbase.tmp.dir</code> must read <code>C:/cygwin/root/tmp/hbase/tmp</code></li>
+	<li><code>hbase.zookeeper.quorum</code> must read <code>127.0.0.1</code> because for some reason <code>localhost</code> doesn't seem to resolve properly on Cygwin.</li>
+</ol>
+</li>
+	<li>Make sure the configured <code>hbase.rootdir</code> and <code>hbase.tmp.dir</code> <strong>directories exist</strong> and have the proper<strong> rights</strong> set up e.g. by issuing a <code>chmod 777</code> on them.</li>
+</ol>
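+<p>As a quick sanity check, the snippet below is a minimal Java sketch (assuming the stock HBase 0.90 client API; the class name is just an example) that prints the settings a JVM client actually sees once the configuration files are on its classpath. The values printed should match the Windows-style paths configured above.</p>
+<pre>import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+public class CygwinConfigCheck {
+  public static void main(String[] args) {
+    // Loads hbase-default.xml and hbase-site.xml from the classpath.
+    Configuration conf = HBaseConfiguration.create();
+    // Expected: the Windows-style values configured above, e.g.
+    // file:///C:/cygwin/root/tmp/hbase/data for hbase.rootdir.
+    System.out.println("hbase.rootdir          = " + conf.get("hbase.rootdir"));
+    System.out.println("hbase.tmp.dir          = " + conf.get("hbase.tmp.dir"));
+    System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
+  }
+}</pre>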
+</section>
+</section>
+<section>
+<title>Testing</title>
+<p>
+This should conclude the installation and configuration of HBase on Windows using Cygwin. So it's time <strong>to test it</strong>.
+<ol>
+	<li>Start a Cygwin<strong> terminal</strong>, if you haven't already.</li>
+	<li>Change directory to the HBase <strong>installation</strong> using <code>cd /usr/local/hbase-<em>&lt;version&gt;</em></code>, preferably using auto-completion.</li>
+	<li><strong>Start HBase</strong> using the command <code>./bin/start-hbase.sh</code>
+<ol>
+	<li>When prompted to accept the SSH fingerprint, answer <code>yes</code>.</li>
+	<li>When prompted, provide your password, possibly multiple times.</li>
+	<li>When the command completes, the HBase server should have started.</li>
+	<li>However, to be absolutely certain, check the logs in the <code>./logs</code> directory for any exceptions.</li>
+</ol>
+</li>
+	<li>Next we <strong>start the HBase shell</strong> using the command <code>./bin/hbase shell</code></li>
+	<li>We run some simple <strong>test commands</strong>
+<ol>
+	<li>Create a simple table using command <code>create 'test', 'data'</code></li>
+	<li>Verify the table exists using the command <code>list</code></li>
+	<li>Insert data into the table using e.g.
+<pre>put 'test', 'row1', 'data:1', 'value1'
+put 'test', 'row2', 'data:2', 'value2'
+put 'test', 'row3', 'data:3', 'value3'</pre>
+</li>
+	<li>List all rows in the table using the command <code>scan 'test'</code>, which should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema!</li>
+	<li>Finally, get rid of the table by issuing <code>disable 'test'</code> followed by <code>drop 'test'</code>, and verify with <code>list</code>, which should now return an empty listing. (A rough Java-client equivalent of this whole session is sketched below.)</li>
+</ol>
+</li>
+	<li><strong>Leave the shell</strong> by <code>exit</code></li>
+	<li>To <strong>stop the HBase server</strong> issue the <code>./bin/stop-hbase.sh</code> command and wait for it to complete! Killing the process might corrupt your data on disk.</li>
+	<li>In case of <strong>problems</strong>,
+<ol>
+	<li>verify the HBase logs in the <code>./logs</code> directory.</li>
+	<li>Try to fix the problem</li>
+	<li>Get help on the forums or IRC (<code>#hbase@freenode.net</code>). People are very active and keen to help out!</li>
+	<li>Stop, restart and retest the server.</li>
+</ol>
+</li>
+</ol>
+</p>
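+<p>For reference, the shell session above can also be driven from Java. The following is a rough sketch, assuming the stock HBase 0.90 client API (<code>HBaseAdmin</code>, <code>HTable</code>, <code>Put</code>, <code>Scan</code>); the class name is just an example.</p>
+<pre>import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class CygwinSmokeTest {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    HBaseAdmin admin = new HBaseAdmin(conf);
+
+    // create 'test', 'data'
+    HTableDescriptor desc = new HTableDescriptor("test");
+    desc.addFamily(new HColumnDescriptor("data"));
+    admin.createTable(desc);
+
+    // put 'test', 'row1', 'data:1', 'value1'  (repeat for row2/row3)
+    HTable table = new HTable(conf, "test");
+    Put put = new Put(Bytes.toBytes("row1"));
+    put.add(Bytes.toBytes("data"), Bytes.toBytes("1"), Bytes.toBytes("value1"));
+    table.put(put);
+
+    // scan 'test'
+    ResultScanner scanner = table.getScanner(new Scan());
+    for (Result row : scanner) {
+      System.out.println(row);
+    }
+    scanner.close();
+
+    // disable 'test' followed by drop 'test'
+    admin.disableTable("test");
+    admin.deleteTable("test");
+  }
+}</pre>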
+</section>
+
+<section name="Conclusion">
+<p>
+Now your <strong>HBase </strong>server is running, so <strong>start coding</strong> and build that next killer app on this distinctive, scalable datastore!
+</p>
+</section>
+</body>
+</document>
diff --git a/0.90/src/site/xdoc/index.xml b/0.90/src/site/xdoc/index.xml
new file mode 100644
index 0000000..d4316c4
--- /dev/null
+++ b/0.90/src/site/xdoc/index.xml
@@ -0,0 +1,60 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>HBase Home</title>
+  </properties>
+  
+  <body>
+    <section name="This is Apache HBase">
+    <p>
+    HBase is the <a href="http://hadoop.apache.org">Hadoop</a> database. Use it when you need random, realtime read/write access to your Big Data.
+    This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
+<div style="float: right;">
+    <img src="http://hbase.apache.org/images/hadoop-logo.jpg" style="border-style;none" />
+</div>
+HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's <a href="http://labs.google.com/papers/bigtable.html">Bigtable: A Distributed Storage System for Structured Data</a> by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop.
+HBase includes:
+<ul>
+    <li>Convenient base classes for backing Hadoop MapReduce jobs with HBase tables, including Cascading, Hive and Pig source and sink modules
+</li>
+    <li>Query predicate push down via server side scan and get filters
+</li>
+    <li>Optimizations for real time queries
+</li>
+    <li>A Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options
+</li>
+    <li>Extensible JRuby-based (JIRB) shell
+</li>
+    <li>Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
+</li>
+</ul>
+</p>
+
+        </section>
+        <section name="News">
+      <p>November 19th, <a href="http://huguk.org/">Hadoop HUG in London</a> is all about HBase</p>
+      <p>November 15-19th, <a href="http://www.devoxx.com/display/Devoxx2K10/Home">Devoxx</a> features HBase Training and multiple HBase presentations</p>
+      <p>October 12th, HBase-related presentations by core contributors and users at <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/">Hadoop World 2010</a></p>
+      <p>October 11th, <a href="http://www.meetup.com/hbaseusergroup/calendar/14606174/">HUG-NYC: HBase User Group NYC Edition</a> (Night before Hadoop World)</p>
+      <p><small><a href="old_news.html">Old News</a></small></p>
+    </section>
+  </body>
+  
+</document>
diff --git a/0.90/src/site/xdoc/metrics.xml b/0.90/src/site/xdoc/metrics.xml
new file mode 100644
index 0000000..89e0db7
--- /dev/null
+++ b/0.90/src/site/xdoc/metrics.xml
@@ -0,0 +1,143 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title> 
+      HBase Metrics
+    </title>
+  </properties>
+
+  <body>
+    <section name="Introduction">
+      <p>
+      HBase emits Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      </p>
+      </section>
+      <section>
+        <title>HOWTO</title>
+      <p>First read up on Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      If you are using Ganglia, the <a href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
+      wiki page is a useful read.</p>
+      <p>To have HBase emit metrics, edit <code>$HBASE_HOME/conf/hadoop-metrics.properties</code>
+      and enable metric 'contexts' per plugin.  As of this writing, Hadoop supports
+      <strong>file</strong> and <strong>ganglia</strong> plugins.
+      Yes, the HBase metrics file is named hadoop-metrics rather than
+      <em>hbase-metrics</em> because, currently at least, the Hadoop metrics system has the
+      properties filename hardcoded. Per metrics <em>context</em>,
+      comment out the NullContext and enable one or more plugins instead.
+      </p>
+      <p>
+      If you enable the <em>hbase</em> context, on regionservers you'll see the total requests since the last
+      metric emission, the count of regions and storefiles, as well as the memstore size.
+      On the master, you'll see a count of the cluster's requests.
+      </p>
+      <p>
+      Enabling the <em>rpc</em> context is good if you are interested in seeing
+      metrics on each hbase rpc method invocation (counts and time taken).
+      </p>
+      <p>
+      The <em>jvm</em> context is
+      useful for long-term stats on running HBase JVMs -- memory used, thread counts, etc.
+      As of this writing, if more than one JVM is running and emitting metrics, at least
+      in Ganglia, the stats are aggregated rather than reported per instance.
+      </p>
+    </section>
+
+    <section name="Using with JMX">
+      <p>
+      In addition to the standard output contexts supported by the Hadoop 
+      metrics package, you can also export HBase metrics via Java Management 
+      Extensions (JMX).  This will allow viewing HBase stats in JConsole or 
+      any other JMX client.
+      </p>
+      <section name="Enable HBase stats collection">
+      <p>
+      To enable JMX support in HBase, first edit 
+      <code>$HBASE_HOME/conf/hadoop-metrics.properties</code> to support 
+      metrics refreshing. (If you've already configured 
+      <code>hadoop-metrics.properties</code> for another output context, 
+      you can skip this step).
+      </p>
+      <source>
+# Configuration of the "hbase" context for null
+hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+hbase.period=60
+
+# Configuration of the "jvm" context for null
+jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+jvm.period=60
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+rpc.period=60
+      </source>
+      </section>
+      <section name="Setup JMX remote access">
+      <p>
+      For remote access, you will need to configure JMX remote passwords 
+      and access profiles.  Create the files:
+      </p>
+      <dl>
+        <dt><code>$HBASE_HOME/conf/jmxremote.passwd</code> (set permissions 
+        to 600)</dt>
+        <dd>
+        <source>
+monitorRole monitorpass
+controlRole controlpass
+        </source>
+        </dd>
+        
+        <dt><code>$HBASE_HOME/conf/jmxremote.access</code></dt>
+        <dd>
+        <source>
+monitorRole readonly
+controlRole readwrite
+        </source>
+        </dd>
+      </dl>
+      </section>
+      <section name="Configure JMX in HBase startup">
+      <p>
+      Finally, edit the <code>$HBASE_HOME/conf/hbase-env.sh</code>
+      script to add JMX support: 
+      </p>
+      <dl>
+        <dt><code>$HBASE_HOME/conf/hbase-env.sh</code></dt>
+        <dd>
+        <p>Add the lines:</p>
+        <source>
+HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.access.file=$HBASE_HOME/conf/jmxremote.access"
+
+export HBASE_MASTER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10101"
+export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10102"
+        </source>
+        </dd>
+      </dl>
+      <p>
+      After restarting the processes you want to monitor, you should now be 
+      able to run JConsole (included with the JDK since JDK 5.0) to view 
+      the statistics via JMX.  HBase MBeans are exported under the 
+      <strong><code>hadoop</code></strong> domain in JMX.
+      </p>
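+      <p>
+      As an illustration only (not part of HBase itself), the standard 
+      <code>javax.management</code> API can be used to connect to these 
+      endpoints programmatically, using the port and the read-only 
+      credentials configured above; the class name is just an example:
+      </p>
+      <source>
+import java.util.HashMap;
+import javax.management.MBeanServerConnection;
+import javax.management.remote.JMXConnector;
+import javax.management.remote.JMXConnectorFactory;
+import javax.management.remote.JMXServiceURL;
+
+public class HBaseJmxPing {
+  public static void main(String[] args) throws Exception {
+    // 10101 is the master port set in hbase-env.sh above; use 10102 for a regionserver.
+    JMXServiceURL url = new JMXServiceURL(
+        "service:jmx:rmi:///jndi/rmi://localhost:10101/jmxrmi");
+    HashMap env = new HashMap();
+    // Credentials from conf/jmxremote.passwd (monitorRole is read-only).
+    env.put(JMXConnector.CREDENTIALS, new String[] {"monitorRole", "monitorpass"});
+    JMXConnector connector = JMXConnectorFactory.connect(url, env);
+    MBeanServerConnection mbsc = connector.getMBeanServerConnection();
+    System.out.println("MBeans visible: " + mbsc.getMBeanCount());
+    connector.close();
+  }
+}
+      </source>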
+      </section>
+    </section>
+  </body>
+</document>
diff --git a/0.90/src/site/xdoc/old_news.xml b/0.90/src/site/xdoc/old_news.xml
new file mode 100644
index 0000000..435a147
--- /dev/null
+++ b/0.90/src/site/xdoc/old_news.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title> 
+      Old News
+    </title>
+  </properties>
+  <body>
+  <section name="Old News">
+      <p>June 30th, <a href="http://www.meetup.com/hbaseusergroup/calendar/13562846/">HBase Contributor Workshop</a> (Day after Hadoop Summit)</p>
+      <p>May 10th, 2010: HBase graduates from Hadoop sub-project to Apache Top Level Project </p>
+      <p>Signup for <a href="http://www.meetup.com/hbaseusergroup/calendar/12689490/">HBase User Group Meeting, HUG10</a> hosted by Trend Micro, April 19th, 2010</p>
+
+      <p><a href="http://www.meetup.com/hbaseusergroup/calendar/12689351/">HBase User Group Meeting, HUG9</a> hosted by Mozilla, March 10th, 2010</p>
+      <p>Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/12241393/">HBase User Group Meeting, HUG8</a>, January 27th, 2010 at StumbleUpon in SF</p>
+      <p>September 8th, 2009: HBase 0.20.0 is faster, stronger, slimmer, and sweeter tasting than any previous HBase release.  Get it off the <a href="releases.html">Releases</a> page.</p>
+      <p><a href="http://dev.us.apachecon.com/c/acus2009/">ApacheCon</a> in Oakland: November 2-6th, 2009: 
+      The Apache Foundation will be celebrating its 10th anniversary in beautiful Oakland by the Bay. Lots of good talks and meetups including an HBase presentation by a couple of the lads.</p>
+      <p>HBase at Hadoop World in NYC: October 2nd, 2009: A few of us will be talking on Practical HBase out east at <a href="http://www.cloudera.com/hadoop-world-nyc">Hadoop World: NYC</a>.</p>
+      <p>HUG7 and HBase Hackathon: August 7th-9th, 2009 at StumbleUpon in SF: Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/10950511/">HBase User Group Meeting, HUG7</a> or for the <a href="http://www.meetup.com/hackathon/calendar/10951718/">Hackathon</a> or for both (all are welcome!).</p>
+      <p>June, 2009 -- HBase at HadoopSummit2009 and at NOSQL: See the <a href="http://wiki.apache.org/hadoop/HBase/HBasePresentations">presentations</a></p>
+      <p>March 3rd, 2009 -- HUG6: <a href="http://www.meetup.com/hbaseusergroup/calendar/9764004/">HBase User Group 6</a></p>
+      <p>January 30th, 2009 -- LA Hbackathon:<a href="http://www.meetup.com/hbasela/calendar/9450876/">HBase January Hackathon Los Angeles</a> at <a href="http://streamy.com" >Streamy</a> in Manhattan Beach</p>
+  </section>
+  </body>
+</document>
diff --git a/0.90/src/site/xdoc/pseudo-distributed.xml b/0.90/src/site/xdoc/pseudo-distributed.xml
new file mode 100644
index 0000000..1b3e5e6
--- /dev/null
+++ b/0.90/src/site/xdoc/pseudo-distributed.xml
@@ -0,0 +1,77 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title> 
+Running HBase in pseudo-distributed mode
+    </title>
+  </properties>
+
+  <body>
+      <p>This document augments what is described in the HBase 'Getting Started' in the 
+ <a href="http://hbase.apache.org/docs/current/api/overview-summary.html#distributed">Distributed Operation: Pseudo- and Fully-distributed modes</a> section.
+ In particular it describes scripts that allow you to start extra masters and regionservers when running in pseudo-distributed mode; a small Java connectivity check is also sketched at the end of this page.
+ </p>
+
+ <ol><li>Copy the suggested pseudo-distributed configuration file (feel free to take a peek to understand what it does)
+             <source>% cp conf/hbase-site.xml{.psuedo-distributed.template,}</source>
+    </li>
+    <li>(Optional) Start up <a href="http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html#PseudoDistributed">Pseudo-distributed HDFS</a>.
+             <ol><li>If you do, go to conf/hbase-site.xml.  Uncomment the 'hbase.rootdir' property.
+                 </li>
+               <li>Additionally, if you want to test HBase with high data durability enabled, also uncomment the 'dfs.support.append' property.
+               </li>
+       </ol>
+   </li>
+<li>Start up the initial HBase cluster
+                   <source>% bin/start-hbase.sh</source>
+                   <ol>    <li>To start up one or more extra backup masters on the same server, run
+                       <source>% bin/local-master-backup.sh start 1</source>
+                       Here the '1' means use ports 60001 &amp; 60011, and this backup master's logfile will be at <i>logs/hbase-${USER}-1-master-${HOSTNAME}.log</i>.
+                       To start up multiple backup masters, run <source>% bin/local-master-backup.sh start 2 3</source> You can start up to 9 backup masters (10 in total).
+ </li>
+ <li>To start up more regionservers
+     <source>% bin/local-regionservers.sh start 1</source>
+     where '1' means use ports 60201 &amp; 60301 and its logfile will be at <i>logs/hbase-${USER}-1-regionserver-${HOSTNAME}.log</i>.
+     To add 4 more regionservers in addition to the one you just started, run <source>% bin/local-regionservers.sh start 2 3 4 5</source>
+     Up to 99 extra regionservers are supported (100 in total).
+                    </li>
+                </ol>
+</li>
+<li>To stop the cluster
+    <ol>
+        <li>Assuming you want to stop master backup # 1, run
+            <source>% cat /tmp/hbase-${USER}-1-master.pid |xargs kill -9</source>
+            Note that bin/local-master-backup.sh stop 1 will try to stop the cluster along with the master.
+                        </li>
+                        <li>To stop an individual regionserver, run
+                            <source>% bin/local-regionservers.sh stop 1
+                            </source>
+                        </li>
+                    </ol>
+</li>
+</ol>
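+<p>As a quick connectivity check (a sketch assuming the stock 0.90 client API; the class name is just an example), you can verify from Java that the cluster you just started is reachable before adding extra masters or regionservers:</p>
+<source>
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+public class ClusterCheck {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    // Throws an exception if no master is reachable.
+    HBaseAdmin.checkHBaseAvailable(conf);
+    System.out.println("HBase is up");
+  }
+}
+</source>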
+</body>
+
+</document>
+
diff --git a/0.90/src/site/xdoc/replication.xml b/0.90/src/site/xdoc/replication.xml
new file mode 100644
index 0000000..8fc5fcc
--- /dev/null
+++ b/0.90/src/site/xdoc/replication.xml
@@ -0,0 +1,407 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Copyright 2010 The Apache Software Foundation
+
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      HBase Replication
+    </title>
+  </properties>
+  <body>
+    <section name="Overview">
+      <p>
+        HBase replication is a way to copy data between HBase deployments. It
+        can serve as a disaster recovery solution and can help provide
+        higher availability at the HBase layer. It can also serve more practically;
+        for example, as a way to easily copy edits from a web-facing cluster to a "MapReduce"
+        cluster which will process old and new data and ship back the results
+        automatically.
+      </p>
+      <p>
+        The basic architecture pattern used for HBase replication is (HBase cluster) master-push.
+        This makes it much easier to keep track of what is currently being replicated,
+        since each region server has its own write-ahead log (aka WAL or HLog), much like
+        other well-known solutions such as MySQL master/slave replication, where
+        there is only one binlog to keep track of. One master cluster can
+        replicate to any number of slave clusters, and each region server will
+        participate in replicating its own stream of edits.
+      </p>
+      <p>
+        The replication is done asynchronously, meaning that the clusters can
+        be geographically distant, the links between them can be offline for
+        some time, and rows inserted on the master cluster won’t be
+        available at the same time on the slave clusters (eventual consistency).
+      </p>
+      <p>
+        The replication format used in this design is conceptually the same as
+        <a href="http://dev.mysql.com/doc/refman/5.1/en/replication-formats.html">
+        MySQL’s statement-based replication </a>. Instead of SQL statements, whole
+        WALEdits (consisting of multiple cell inserts coming from the clients'
+        Put and Delete) are replicated in order to maintain atomicity.
+      </p>
+      <p>
+        The HLogs from each region server are the basis of HBase replication,
+        and must be kept in HDFS as long as they are needed to replicate data
+        to any slave cluster. Each RS reads from the oldest log it needs to
+        replicate and keeps the current position inside ZooKeeper to simplify
+        failure recovery. That position can be different for every slave 
+        cluster, as can the queue of HLogs to process.
+      </p>
+      <p>
+        The clusters participating in replication can be of asymmetric sizes
+        and the master cluster will do its “best effort” to balance the stream
+        of replication on the slave clusters by relying on randomization.
+      </p>
+      <img src="images/replication_overview.png"/>
+    </section>
+    <section name="Life of a log edit">
+      <p>
+        The following sections describe the life of a single edit going from a
+        client that communicates with a master cluster all the way to a single
+        slave cluster.
+      </p>
+      <section name="Normal processing">
+        <p>
+          The client uses an HBase API that sends a Put, Delete or ICV to a region
+          server. The key values are transformed into a WALEdit by the region
+          server, and the WALEdit is inspected by the replication code which, for each
+          family that is scoped for replication, adds the scope to the edit. The edit
+          is appended to the current WAL and is then applied to the region's MemStore.
+        </p>
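+        <p>
+          For reference, a column family is made eligible for replication by
+          setting its replication scope. A minimal sketch, assuming the 0.90
+          client API (where HColumnDescriptor exposes setScope and HConstants
+          defines REPLICATION_SCOPE_GLOBAL); the class, table and family names
+          here are just examples:
+        </p>
+        <pre>
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+
+public class CreateReplicatedTable {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    HTableDescriptor table = new HTableDescriptor("replicated_table");
+    HColumnDescriptor family = new HColumnDescriptor("d");
+    // Scope GLOBAL marks this family's edits for shipping to slave clusters;
+    // the default scope keeps edits local to this cluster.
+    family.setScope(HConstants.REPLICATION_SCOPE_GLOBAL);
+    table.addFamily(family);
+    admin.createTable(table);
+  }
+}
+        </pre>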
+        <p>
+          In a separate thread, the edit is read from the log (as part of a batch)
+          and only the KVs that are replicable are kept (that is, that they are part
+          of a family scoped GLOBAL in the family's schema and non-catalog so not
+          .META. or -ROOT-). When the buffer is filled, or the reader hits the
+          end of the file, the buffer is sent to a random region server on the
+          slave cluster.
+        </p>
+        <p>
+          Synchronously, the region server that receives the edits reads them
+          sequentially and separates each of them into buffers, one per table.
+          Once all edits are read, each buffer is flushed using the normal HBase
+          client (HTables managed by a HTablePool). This is done in order to
+          leverage parallel insertion (MultiPut).
+        </p>
+        <p>
+          Back in the master cluster's region server, the offset for the current
+          WAL that's being replicated is registered in ZooKeeper.
+        </p>
+      </section>
+      <section name="Non-responding slave clusters">
+        <p>
+          The edit is inserted in the same way.
+        </p>
+        <p>
+          In the separate thread, the region server reads, filters and buffers
+          the log edits the same way as during normal processing. The slave
+          region server that's contacted doesn't answer the RPC, so the master
+          region server will sleep and retry up to a configured number of times.
+          If the slave RS still isn't available, the master cluster RS will select a
+          new subset of RS to replicate to and will retry sending the buffer of
+          edits.
+        </p>
+        <p>
+          In the meantime, the WALs will be rolled and stored in a queue in
+          ZooKeeper. Logs that are archived by their region server (archiving is
+          basically moving a log from the region server's logs directory to a
+          central logs archive directory) will update their paths in the in-memory
+          queue of the replicating thread.
+        </p>
+        <p>
+          When the slave cluster is finally available, the buffer will be applied
+          the same way as during normal processing. The master cluster RS will then
+          replicate the backlog of logs.
+        </p>
+      </section>
+    </section>
+    <section name="Internals">
+      <p>
+        This section describes in depth how each of replication's internal
+        features operate.
+      </p>
+      <section name="Choosing region servers to replicate to">
+        <p>
+          When a master cluster RS initiates a replication source to a slave cluster,
+          it first connects to the slave's ZooKeeper ensemble using the provided
+          cluster key (that key is composed of the value of hbase.zookeeper.quorum,
+          zookeeper.znode.parent and hbase.zookeeper.property.clientPort). It
+          then scans the "rs" directory to discover all the available sinks
+          (region servers that are accepting incoming streams of edits to replicate)
+          and will randomly choose a subset of them using a configured
+          ratio (which has a default value of 10%). For example, if a slave
+          cluster has 150 machines, 15 will be chosen as potential recipients of
+          edits that this master cluster RS will be sending. Since this is done by all
+          master cluster RSs, the probability that all slave RSs are used is very high,
+          and this method works for clusters of any size. For example, a master cluster
+          of 10 machines replicating to a slave cluster of 5 machines with a ratio
+          of 10% means that the master cluster RSs will choose one machine each
+          at random, thus the chance of overlapping and full usage of the slave
+          cluster is higher.
+        </p>
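+        <p>
+          The selection itself amounts to shuffling the sink list and keeping a
+          fraction of it. The snippet below is illustrative only (it is not the
+          actual replication source code); the class, method and variable names
+          are made up for the example:
+        </p>
+        <pre>
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+public class SinkSelection {
+  // Keep ceil(size * ratio) region servers chosen at random, e.g. 150 * 0.1 = 15.
+  static List chooseSinks(List slaveRegionServers, float ratio) {
+    List shuffled = new ArrayList(slaveRegionServers);
+    Collections.shuffle(shuffled);
+    int keep = (int) Math.ceil(shuffled.size() * ratio);
+    return shuffled.subList(0, keep);
+  }
+}
+        </pre>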
+      </section>
+      <section name="Keeping track of logs">
+        <p>
+          Every master cluster RS has its own znode in the replication znodes hierarchy.
+          It contains one znode per peer cluster (if 5 slave clusters, 5 znodes
+          are created), and each of these contain a queue
+          of HLogs to process. Each of these queues will track the HLogs created
+          by that RS, but they can differ in size. For example, if one slave
+          cluster becomes unavailable for some time then the HLogs should not be deleted,
+          thus they need to stay in the queue (while the others are processed).
+          See the section named "Region server failover" for an example.
+        </p>
+        <p>
+          When a source is instantiated, it contains the current HLog that the
+          region server is writing to. During log rolling, the new file is added
+          to the queue of each slave cluster's znode just before it's made available.
+          This ensures that all the sources are aware that a new log exists
+          before HLog is able to append edits into it, but this operation is
+          now more expensive.
+          The queue items are discarded when the replication thread cannot read
+          more entries from a file (because it reached the end of the last block)
+          and there are other files in the queue.
+          This means that if a source is up-to-date and replicates from the log
+          that the region server writes to, reading up to the "end" of the
+          current file won't delete the item in the queue.
+        </p>
+        <p>
+          When a log is archived (because it's not used anymore or because there
+          are too many of them, per hbase.regionserver.maxlogs, typically because the insertion
+          rate is faster than region flushing), it will notify the source threads that the path
+          for that log changed. If a particular source was already done with
+          it, it will just ignore the message. If it's in the queue, the path
+          will be updated in memory. If the log is currently being replicated,
+          the change will be done atomically so that the reader doesn't try to
+          open the file when it's already moved. Also, moving a file is a NameNode
+          operation so, if the reader is currently reading the log, it won't
+          generate any exception.
+        </p>
+      </section>
+      <section name="Reading, filtering and sending edits">
+        <p>
+          By default, a source will try to read from a log file and ship log
+          entries as fast as possible to a sink. This is first limited by the
+          filtering of log entries; only KeyValues that are scoped GLOBAL and
+          that don't belong to catalog tables will be retained. A second limit
+          is imposed on the total size of the list of edits to replicate per slave,
+          which by default is 64MB. This means that a master cluster RS with 3 slaves
+          will use at most 192MB to store data to replicate. This doesn't account
+          for the filtered data that hasn't yet been garbage collected.
+        </p>
+        <p>
+          Once the maximum size of edits has been buffered or the reader hits the end
+          of the log file, the source thread will stop reading and will choose
+          at random a sink to replicate to (from the list that was generated by
+          keeping only a subset of slave RSs). It will directly issue a RPC to
+          the chosen machine and will wait for the method to return. If it's
+          successful, the source will determine if the current file is emptied
+          or if it should continue to read from it. If the former, it will delete
+          the znode in the queue. If the latter, it will register the new offset
+          in the log's znode. If the RPC threw an exception, the source will retry
+          up to 10 times before trying to find a different sink.
+        </p>
+      </section>
+      <section name="Cleaning logs">
+        <p>
+          If replication isn't enabled, the master's logs cleaning thread will
+          delete old logs using a configured TTL. This doesn't work well with
+          replication since archived logs past their TTL may still be in a
+          queue. Thus, the default behavior is augmented so that if a log is
+          past its TTL, the cleaning thread will look up every queue until it
+          finds the log (while caching the ones it finds). If it's not found,
+          the log will be deleted. The next time it has to look for a log,
+          it will first use its cache.
+        </p>
+      </section>
+      <section name="Region server failover">
+        <p>
+          As long as region servers don't fail, keeping track of the logs in ZK
+          doesn't add any value. Unfortunately, they do fail, so since ZooKeeper
+          is highly available we can count on it and its semantics to help us
+          manage the transfer of the queues.
+        </p>
+        <p>
+          All the master cluster RSs keep a watcher on every other one of them to be
+          notified when one dies (just like the master does). When it happens,
+          they all race to create a znode called "lock" inside the dead RS' znode
+          that contains its queues. The one that creates it successfully will
+          proceed by transferring all the queues to its own znode (one by one
+          since ZK doesn't support the rename operation) and will delete all the
+          old ones when it's done. The recovered queues' znodes will be named
+          with the id of the slave cluster appended with the name of the dead
+          server. 
+        </p>
+        <p>
+          Once that is done, the master cluster RS will create one new source thread per
+          copied queue, and each of them will follow the read/filter/ship pattern.
+          The main difference is that those queues will never have new data since
+          they don't belong to their new region server, which means that when
+          the reader hits the end of the last log, the queue's znode will be
+          deleted and the master cluster RS will close that replication source.
+        </p>
+        <p>
+          For example, consider a master cluster with 3 region servers that's
+          replicating to a single slave with id '2'. The following hierarchy
+          represents what the znodes layout could be at some point in time. We
+          can see the RSs' znodes all contain a "peers" znode that contains a
+          single queue. The znode names in the queues represent the actual file
+          names on HDFS in the form "address,port.timestamp".
+        </p>
+        <pre>
+/hbase/replication/rs/
+                      1.1.1.1,60020,123456780/
+                        peers/
+                              2/
+                                1.1.1.1,60020.1234  (Contains a position)
+                                1.1.1.1,60020.1265
+                      1.1.1.2,60020,123456790/
+                        peers/
+                              2/
+                                1.1.1.2,60020.1214  (Contains a position)
+                                1.1.1.2,60020.1248
+                                1.1.1.2,60020.1312
+                      1.1.1.3,60020,123456630/
+                        peers/
+                              2/
+                                1.1.1.3,60020.1280  (Contains a position)
+
+        </pre>
+        <p>
+          Now let's say that 1.1.1.2 loses its ZK session. The survivors will race
+          to create a lock, and for some reason 1.1.1.3 wins. It will then start
+          transferring all the queues to its local peers znode by appending the
+          name of the dead server. Right before 1.1.1.3 is able to clean up the
+          old znodes, the layout will look like the following:
+        </p>
+        <pre>
+/hbase/replication/rs/
+                      1.1.1.1,60020,123456780/
+                        peers/
+                              2/
+                                1.1.1.1,60020.1234  (Contains a position)
+                                1.1.1.1,60020.1265
+                      1.1.1.2,60020,123456790/
+                        lock
+                        peers/
+                              2/
+                                1.1.1.2,60020.1214  (Contains a position)
+                                1.1.1.2,60020.1248
+                                1.1.1.2,60020.1312
+                      1.1.1.3,60020,123456630/
+                        peers/
+                              2/
+                                1.1.1.3,60020.1280  (Contains a position)
+
+                              2-1.1.1.2,60020,123456790/
+                                1.1.1.2,60020.1214  (Contains a position)
+                                1.1.1.2,60020.1248
+                                1.1.1.2,60020.1312
+        </pre>
+        <p>
+          Some time later, but before 1.1.1.3 is able to finish replicating the
+          last HLog from 1.1.1.2, let's say that it dies too (also some new logs
+          were created in the normal queues). The last RS will then try to lock
+          1.1.1.3's znode and will begin transferring all the queues. The new
+          layout will be:
+        </p>
+        <pre>
+/hbase/replication/rs/
+                      1.1.1.1,60020,123456780/
+                        peers/
+                              2/
+                                1.1.1.1,60020.1378  (Contains a position)
+
+                              2-1.1.1.3,60020,123456630/
+                                1.1.1.3,60020.1325  (Contains a position)
+                                1.1.1.3,60020.1401
+
+                              2-1.1.1.2,60020,123456790-1.1.1.3,60020,123456630/
+                                1.1.1.2,60020.1312  (Contains a position)
+                      1.1.1.3,60020,123456630/
+                        lock
+                        peers/
+                              2/
+                                1.1.1.3,60020.1325  (Contains a position)
+                                1.1.1.3,60020.1401
+
+                              2-1.1.1.2,60020,123456790/
+                                1.1.1.2,60020.1312  (Contains a position)
+        </pre>
+      </section>
+    </section>
+    <section name="FAQ">
+      <section name="Why do all clusters need to be in the same timezone?">
+        <p>
+          Suppose an edit to cell X happens in an EST cluster, then 2 minutes
+          later a new edit to the same cell happens in a PST cluster, with
+          both clusters set up in master-master replication. Because the PST
+          clock reads three hours earlier, the second edit carries an older
+          timestamp, so the first edit will always hide it even though the
+          second edit actually happened later.
+        </p>
+      </section>
+      <section name="GLOBAL means replicate? Any provision to replicate only to cluster X and not to cluster Y? or is that for later?">
+        <p>
+          Yes, this is for much later.
+        </p>
+      </section>
+      <section name="You need a bulk edit shipper? Something that allows you transfer 64MB of edits in one go?">
+        <p>
+          You can use the HBase-provided utility called CopyTable from the package
+          org.apache.hadoop.hbase.mapreduce in order to have a distcp-like tool to
+          bulk copy data.
+        </p>
+      </section>
+      <section name="Is it a mistake that WALEdit doesn't carry Put and Delete objects, that we have to reinstantiate not only when replicating but when replaying edits also?">
+        <p>
+          Yes, this behavior would help a lot but it's not currently available
+          in HBase (BatchUpdate had that, but it was lost in the new API).
+        </p>
+      </section>
+    </section>
+    <section name="Known bugs/missing features">
+      <p>
+        Here's a list of all the jiras that relate to major issues or missing
+        features in the replication implementation.
+      </p>
+      <ol>
+        <li>
+            HBASE-2611, basically if a region server dies while recovering the
+            queues of another dead RS, we will miss the data from the queues
+            that weren't copied.
+        </li>
+        <li>
+            HBASE-2196, a master cluster can only support a single slave; some
+            refactoring is needed to support more.
+        </li>
+        <li>
+            HBASE-2195, edits are applied regardless of their home cluster; they should
+            carry that data so it can be checked.
+        </li>
+        <li>
+            HBASE-3130, the master cluster needs to be restarted if its region
+            servers lose their session with a slave cluster.
+        </li>
+      </ol>
+    </section>
+  </body>
+</document>
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/BROKE_TODO_FIX_TestAcidGuarantees.java b/0.90/src/test/java/org/apache/hadoop/hbase/BROKE_TODO_FIX_TestAcidGuarantees.java
new file mode 100644
index 0000000..6741acc
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/BROKE_TODO_FIX_TestAcidGuarantees.java
@@ -0,0 +1,330 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.TestContext;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.RepeatingTestThread;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import com.google.common.collect.Lists;
+
+/**
+ * Test case that uses multiple threads to read and write multifamily rows
+ * into a table, verifying that reads never see partially-complete writes.
+ * 
+ * This can run as a junit test, or with a main() function which runs against
+ * a real cluster (eg for testing with failures, region movement, etc)
+ */
+public class BROKE_TODO_FIX_TestAcidGuarantees {
+  protected static final Log LOG = LogFactory.getLog(BROKE_TODO_FIX_TestAcidGuarantees.class);
+  public static final byte [] TABLE_NAME = Bytes.toBytes("TestAcidGuarantees");
+  public static final byte [] FAMILY_A = Bytes.toBytes("A");
+  public static final byte [] FAMILY_B = Bytes.toBytes("B");
+  public static final byte [] FAMILY_C = Bytes.toBytes("C");
+  public static final byte [] QUALIFIER_NAME = Bytes.toBytes("data");
+
+  public static final byte[][] FAMILIES = new byte[][] {
+    FAMILY_A, FAMILY_B, FAMILY_C };
+
+  private HBaseTestingUtility util;
+
+  public static int NUM_COLS_TO_CHECK = 50;
+
+  private void createTableIfMissing()
+    throws IOException {
+    try {
+      util.createTable(TABLE_NAME, FAMILIES);
+    } catch (TableExistsException tee) {
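+      // Table already exists from a previous run; reuse it.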
+    }
+  }
+
+  public BROKE_TODO_FIX_TestAcidGuarantees() {
+    // Set small flush size for minicluster so we exercise reseeking scanners
+    Configuration conf = HBaseConfiguration.create();
+    conf.set("hbase.hregion.memstore.flush.size", String.valueOf(128*1024));
+    util = new HBaseTestingUtility(conf);
+  }
+  
+  /**
+   * Thread that does random full-row writes into a table.
+   */
+  public static class AtomicityWriter extends RepeatingTestThread {
+    Random rand = new Random();
+    byte data[] = new byte[10];
+    byte targetRows[][];
+    byte targetFamilies[][];
+    HTable table;
+    AtomicLong numWritten = new AtomicLong();
+    
+    public AtomicityWriter(TestContext ctx, byte targetRows[][],
+                           byte targetFamilies[][]) throws IOException {
+      super(ctx);
+      this.targetRows = targetRows;
+      this.targetFamilies = targetFamilies;
+      table = new HTable(ctx.getConf(), TABLE_NAME);
+    }
+    public void doAnAction() throws Exception {
+      // Pick a random row to write into
+      byte[] targetRow = targetRows[rand.nextInt(targetRows.length)];
+      Put p = new Put(targetRow); 
+      rand.nextBytes(data);
+
+      for (byte[] family : targetFamilies) {
+        for (int i = 0; i < NUM_COLS_TO_CHECK; i++) {
+          byte qualifier[] = Bytes.toBytes("col" + i);
+          p.add(family, qualifier, data);
+        }
+      }
+      table.put(p);
+      numWritten.getAndIncrement();
+    }
+  }
+  
+  /**
+   * Thread that does single-row reads in a table, looking for partially
+   * completed rows.
+   */
+  public static class AtomicGetReader extends RepeatingTestThread {
+    byte targetRow[];
+    byte targetFamilies[][];
+    HTable table;
+    int numVerified = 0;
+    AtomicLong numRead = new AtomicLong();
+
+    public AtomicGetReader(TestContext ctx, byte targetRow[],
+                           byte targetFamilies[][]) throws IOException {
+      super(ctx);
+      this.targetRow = targetRow;
+      this.targetFamilies = targetFamilies;
+      table = new HTable(ctx.getConf(), TABLE_NAME);
+    }
+
+    public void doAnAction() throws Exception {
+      Get g = new Get(targetRow);
+      Result res = table.get(g);
+      byte[] gotValue = null;
+      if (res.getRow() == null) {
+        // Trying to verify but we didn't find the row - the writing
+        // thread probably just hasn't started writing yet, so we can
+        // ignore this action
+        return;
+      }
+      
+      for (byte[] family : targetFamilies) {
+        for (int i = 0; i < NUM_COLS_TO_CHECK; i++) {
+          byte qualifier[] = Bytes.toBytes("col" + i);
+          byte thisValue[] = res.getValue(family, qualifier);
+          if (gotValue != null && !Bytes.equals(gotValue, thisValue)) {
+            gotFailure(gotValue, res);
+          }
+          numVerified++;
+          gotValue = thisValue;
+        }
+      }
+      numRead.getAndIncrement();
+    }
+
+    private void gotFailure(byte[] expected, Result res) {
+      StringBuilder msg = new StringBuilder();
+      msg.append("Failed after ").append(numVerified).append("!");
+      msg.append("Expected=").append(Bytes.toStringBinary(expected));
+      msg.append("Got:\n");
+      for (KeyValue kv : res.list()) {
+        msg.append(kv.toString());
+        msg.append(" val= ");
+        msg.append(Bytes.toStringBinary(kv.getValue()));
+        msg.append("\n");
+      }
+      throw new RuntimeException(msg.toString());
+    }
+  }
+  
+  /**
+   * Thread that does full scans of the table looking for any partially completed
+   * rows.
+   */
+  public static class AtomicScanReader extends RepeatingTestThread {
+    byte targetFamilies[][];
+    HTable table;
+    AtomicLong numScans = new AtomicLong();
+    AtomicLong numRowsScanned = new AtomicLong();
+
+    public AtomicScanReader(TestContext ctx,
+                           byte targetFamilies[][]) throws IOException {
+      super(ctx);
+      this.targetFamilies = targetFamilies;
+      table = new HTable(ctx.getConf(), TABLE_NAME);
+    }
+
+    public void doAnAction() throws Exception {
+      Scan s = new Scan();
+      for (byte[] family : targetFamilies) {
+        s.addFamily(family);
+      }
+      ResultScanner scanner = table.getScanner(s);
+      
+      for (Result res : scanner) {
+        byte[] gotValue = null;
+  
+        for (byte[] family : targetFamilies) {
+          for (int i = 0; i < NUM_COLS_TO_CHECK; i++) {
+            byte qualifier[] = Bytes.toBytes("col" + i);
+            byte thisValue[] = res.getValue(family, qualifier);
+            if (gotValue != null && !Bytes.equals(gotValue, thisValue)) {
+              gotFailure(gotValue, res);
+            }
+            gotValue = thisValue;
+          }
+        }
+        numRowsScanned.getAndIncrement();
+      }
+      numScans.getAndIncrement();
+    }
+
+    private void gotFailure(byte[] expected, Result res) {
+      StringBuilder msg = new StringBuilder();
+      msg.append("Failed after ").append(numRowsScanned).append("!");
+      msg.append("Expected=").append(Bytes.toStringBinary(expected));
+      msg.append("Got:\n");
+      for (KeyValue kv : res.list()) {
+        msg.append(kv.toString());
+        msg.append(" val= ");
+        msg.append(Bytes.toStringBinary(kv.getValue()));
+        msg.append("\n");
+      }
+      throw new RuntimeException(msg.toString());
+    }
+  }
+
+
+  public void runTestAtomicity(long millisToRun,
+      int numWriters,
+      int numGetters,
+      int numScanners,
+      int numUniqueRows) throws Exception {
+    createTableIfMissing();
+    TestContext ctx = new TestContext(util.getConfiguration());
+    
+    byte rows[][] = new byte[numUniqueRows][];
+    for (int i = 0; i < numUniqueRows; i++) {
+      rows[i] = Bytes.toBytes("test_row_" + i);
+    }
+    
+    List<AtomicityWriter> writers = Lists.newArrayList();
+    for (int i = 0; i < numWriters; i++) {
+      AtomicityWriter writer = new AtomicityWriter(
+          ctx, rows, FAMILIES);
+      writers.add(writer);
+      ctx.addThread(writer);
+    }
+
+    List<AtomicGetReader> getters = Lists.newArrayList();
+    for (int i = 0; i < numGetters; i++) {
+      AtomicGetReader getter = new AtomicGetReader(
+          ctx, rows[i % numUniqueRows], FAMILIES);
+      getters.add(getter);
+      ctx.addThread(getter);
+    }
+    
+    List<AtomicScanReader> scanners = Lists.newArrayList();
+    for (int i = 0; i < numScanners; i++) {
+      AtomicScanReader scanner = new AtomicScanReader(ctx, FAMILIES);
+      scanners.add(scanner);
+      ctx.addThread(scanner);
+    }
+    
+    ctx.startThreads();
+    ctx.waitFor(millisToRun);
+    ctx.stop();
+    
+    LOG.info("Finished test. Writers:");
+    for (AtomicityWriter writer : writers) {
+      LOG.info("  wrote " + writer.numWritten.get());
+    }
+    LOG.info("Readers:");
+    for (AtomicGetReader reader : getters) {
+      LOG.info("  read " + reader.numRead.get());
+    }
+    LOG.info("Scanners:");
+    for (AtomicScanReader scanner : scanners) {
+      LOG.info("  scanned " + scanner.numScans.get());
+      LOG.info("  verified " + scanner.numRowsScanned.get() + " rows");
+    }
+  }
+
+  @Test
+  public void testGetAtomicity() throws Exception {
+    util.startMiniCluster(1);
+    try {
+      runTestAtomicity(20000, 5, 5, 0, 3);
+    } finally {
+      util.shutdownMiniCluster();
+    }    
+  }
+
+  @Test
+  @Ignore("Currently not passing - see HBASE-2670")
+  public void testScanAtomicity() throws Exception {
+    util.startMiniCluster(1);
+    try {
+      runTestAtomicity(20000, 5, 0, 5, 3);
+    } finally {
+      util.shutdownMiniCluster();
+    }    
+  }
+
+  @Test
+  @Ignore("Currently not passing - see HBASE-2670")
+  public void testMixedAtomicity() throws Exception {
+    util.startMiniCluster(1);
+    try {
+      runTestAtomicity(20000, 5, 2, 2, 3);
+    } finally {
+      util.shutdownMiniCluster();
+    }    
+  }
+
+  public static void main(String args[]) throws Exception {
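+    // Runs against an already-running cluster (see class javadoc); no minicluster is started here.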
+    Configuration c = HBaseConfiguration.create();
+    BROKE_TODO_FIX_TestAcidGuarantees test = new BROKE_TODO_FIX_TestAcidGuarantees();
+    test.setConf(c);
+    test.runTestAtomicity(5*60*1000, 5, 2, 2, 3);
+  }
+
+  private void setConf(Configuration c) {
+    util = new HBaseTestingUtility(c);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/EmptyWatcher.java b/0.90/src/test/java/org/apache/hadoop/hbase/EmptyWatcher.java
new file mode 100644
index 0000000..cf27ff0
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/EmptyWatcher.java
@@ -0,0 +1,33 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.WatchedEvent;
+
+/**
+ * Class used as an empty watcher for the tests
+ */
+public class EmptyWatcher implements Watcher {
+  public static EmptyWatcher instance = new EmptyWatcher();
+  private EmptyWatcher() {}
+
+  public void process(WatchedEvent event) {}
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/HBaseClusterTestCase.java b/0.90/src/test/java/org/apache/hadoop/hbase/HBaseClusterTestCase.java
new file mode 100644
index 0000000..c18cddb
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/HBaseClusterTestCase.java
@@ -0,0 +1,233 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.PrintWriter;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.util.ReflectionUtils;
+
+/**
+ * Abstract base class for HBase cluster junit tests.  Spins up an hbase
+ * cluster in setup and tears it down again in tearDown.
+ * @deprecated Use junit4 and {@link HBaseTestingUtility}
+ */
+public abstract class HBaseClusterTestCase extends HBaseTestCase {
+  private static final Log LOG = LogFactory.getLog(HBaseClusterTestCase.class);
+  public MiniHBaseCluster cluster;
+  protected MiniDFSCluster dfsCluster;
+  protected MiniZooKeeperCluster zooKeeperCluster;
+  protected int regionServers;
+  protected boolean startDfs;
+  private boolean openMetaTable = true;
+
+  /** default constructor */
+  public HBaseClusterTestCase() {
+    this(1);
+  }
+
+  /**
+   * Start a MiniHBaseCluster with regionServers region servers in-process to
+   * start with. Also, start a MiniDfsCluster before starting the hbase cluster.
+   * The configuration used will be edited so that this works correctly.
+   * @param regionServers number of region servers to start.
+   */
+  public HBaseClusterTestCase(int regionServers) {
+    this(regionServers, true);
+  }
+
+  /**
+   * Start a MiniHBaseCluster with regionServers region servers in-process to
+   * start with. Optionally, startDfs indicates if a MiniDFSCluster should be
+   * started. If startDfs is false, the assumption is that an external DFS is
+   * configured in hbase-site.xml and is already started, or you have started a
+   * MiniDFSCluster on your own and edited the configuration in memory. (You
+   * can modify the config used by overriding the preHBaseClusterSetup method.)
+   * @param regionServers number of region servers to start.
+   * @param startDfs set to true if MiniDFS should be started
+   */
+  public HBaseClusterTestCase(int regionServers, boolean startDfs) {
+    super();
+    this.startDfs = startDfs;
+    this.regionServers = regionServers;
+  }
+
+  protected void setOpenMetaTable(boolean val) {
+    openMetaTable = val;
+  }
+
+  /**
+   * Subclass hook.
+   *
+   * Run after dfs is ready but before hbase cluster is started up.
+   */
+  protected void preHBaseClusterSetup() throws Exception {
+    // continue
+  }
+
+  /**
+   * Actually start the MiniHBase instance.
+   */
+  protected void hBaseClusterSetup() throws Exception {
+    File testDir = new File(getUnitTestdir(getName()).toString());
+    if (testDir.exists()) testDir.delete();
+
+    // Note that this is done before we create the MiniHBaseCluster because we
+    // need to edit the config to add the ZooKeeper servers.
+    this.zooKeeperCluster = new MiniZooKeeperCluster();
+    int clientPort = this.zooKeeperCluster.startup(testDir);
+    conf.set("hbase.zookeeper.property.clientPort", Integer.toString(clientPort));
+    Configuration c = new Configuration(this.conf);
+    // start the mini cluster
+    this.cluster = new MiniHBaseCluster(c, regionServers);
+    if (openMetaTable) {
+      // opening the META table ensures that cluster is running
+      new HTable(c, HConstants.META_TABLE_NAME);
+    }
+  }
+
+  /**
+   * Run after hbase cluster is started up.
+   */
+  protected void postHBaseClusterSetup() throws Exception {
+    // continue
+  }
+
+  @Override
+  protected void setUp() throws Exception {
+    try {
+      if (this.startDfs) {
+        // This spews a bunch of warnings about missing scheme. TODO: fix.
+        this.dfsCluster = new MiniDFSCluster(0, this.conf, 2, true, true, true,
+          null, null, null, null);
+
+        // mangle the conf so that the fs parameter points to the minidfs we
+        // just started up
+        FileSystem filesystem = dfsCluster.getFileSystem();
+        conf.set("fs.defaultFS", filesystem.getUri().toString());
+        Path parentdir = filesystem.getHomeDirectory();
+        conf.set(HConstants.HBASE_DIR, parentdir.toString());
+        filesystem.mkdirs(parentdir);
+        FSUtils.setVersion(filesystem, parentdir);
+      }
+
+      // do the super setup now. if we had done it first, then we would have
+      // gotten our conf all mangled and a local fs started up.
+      super.setUp();
+
+      // run the pre-cluster setup
+      preHBaseClusterSetup();
+
+      // start the instance
+      hBaseClusterSetup();
+
+      // run post-cluster setup
+      postHBaseClusterSetup();
+    } catch (Exception e) {
+      LOG.error("Exception in setup!", e);
+      if (cluster != null) {
+        cluster.shutdown();
+      }
+      if (zooKeeperCluster != null) {
+        zooKeeperCluster.shutdown();
+      }
+      if (dfsCluster != null) {
+        shutdownDfs(dfsCluster);
+      }
+      throw e;
+    }
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    if (!openMetaTable) {
+      // open the META table now to ensure cluster is running before shutdown.
+      new HTable(conf, HConstants.META_TABLE_NAME);
+    }
+    super.tearDown();
+    try {
+      HConnectionManager.deleteConnection(conf, true);
+      if (this.cluster != null) {
+        try {
+          this.cluster.shutdown();
+        } catch (Exception e) {
+          LOG.warn("Closing mini dfs", e);
+        }
+        try {
+          this.zooKeeperCluster.shutdown();
+        } catch (IOException e) {
+          LOG.warn("Shutting down ZooKeeper cluster", e);
+        }
+      }
+      if (startDfs) {
+        shutdownDfs(dfsCluster);
+      }
+    } catch (Exception e) {
+      LOG.error(e);
+    }
+    // ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+    //  "Temporary end-of-test thread dump debugging HADOOP-2040: " + getName());
+  }
+
+
+  /**
+   * Use this utility method to debug why the cluster won't go down.  It
+   * periodically prints a thread dump.  The method returns when all cluster
+   * regionserver and master threads are no longer alive.
+   */
+  public void threadDumpingJoin() {
+    if (this.cluster.getRegionServerThreads() != null) {
+      for(Thread t: this.cluster.getRegionServerThreads()) {
+        threadDumpingJoin(t);
+      }
+    }
+    threadDumpingJoin(this.cluster.getMaster());
+  }
+
+  protected void threadDumpingJoin(final Thread t) {
+    if (t == null) {
+      return;
+    }
+    long startTime = System.currentTimeMillis();
+    while (t.isAlive()) {
+      try {
+        Thread.sleep(1000);
+      } catch (InterruptedException e) {
+        LOG.info("Continuing...", e);
+      }
+      if (System.currentTimeMillis() - startTime > 60000) {
+        startTime = System.currentTimeMillis();
+        ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+            "Automatic Stack Trace every 60 seconds waiting on " +
+            t.getName());
+      }
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java b/0.90/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
new file mode 100644
index 0000000..32ccbea
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
@@ -0,0 +1,691 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NavigableMap;
+
+import junit.framework.AssertionFailedError;
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+/**
+ * Abstract HBase test class.  Initializes a few things that can come in handy
+ * like an HBaseConfiguration and filesystem.
+ * @deprecated Write junit4 unit tests using {@link HBaseTestingUtility}
+ */
+public abstract class HBaseTestCase extends TestCase {
+  private static final Log LOG = LogFactory.getLog(HBaseTestCase.class);
+
+  /** configuration parameter name for test directory */
+  public static final String TEST_DIRECTORY_KEY = "test.build.data";
+
+  protected final static byte [] fam1 = Bytes.toBytes("colfamily1");
+  protected final static byte [] fam2 = Bytes.toBytes("colfamily2");
+  protected final static byte [] fam3 = Bytes.toBytes("colfamily3");
+  protected static final byte [][] COLUMNS = {fam1, fam2, fam3};
+
+  private boolean localfs = false;
+  protected Path testDir = null;
+  protected FileSystem fs = null;
+  protected HRegion root = null;
+  protected HRegion meta = null;
+  protected static final char FIRST_CHAR = 'a';
+  protected static final char LAST_CHAR = 'z';
+  protected static final String PUNCTUATION = "~`@#$%^&*()-_+=:;',.<>/?[]{}|";
+  protected static final byte [] START_KEY_BYTES = {FIRST_CHAR, FIRST_CHAR, FIRST_CHAR};
+  protected String START_KEY;
+  protected static final int MAXVERSIONS = 3;
+
+  static {
+    initialize();
+  }
+
+  public volatile Configuration conf;
+
+  /** constructor */
+  public HBaseTestCase() {
+    super();
+    init();
+  }
+
+  /**
+   * @param name
+   */
+  public HBaseTestCase(String name) {
+    super(name);
+    init();
+  }
+
+  private void init() {
+    conf = HBaseConfiguration.create();
+    try {
+      START_KEY = new String(START_KEY_BYTES, HConstants.UTF8_ENCODING);
+    } catch (UnsupportedEncodingException e) {
+      LOG.fatal("error during initialization", e);
+      fail();
+    }
+  }
+
+  /**
+   * Note that this method must be called after the mini hdfs cluster has
+   * started or we end up with a local file system.
+   */
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+    localfs =
+      (conf.get("fs.defaultFS", "file:///").compareTo("file:///") == 0);
+
+    if (fs == null) {
+      this.fs = FileSystem.get(conf);
+    }
+    try {
+      if (localfs) {
+        this.testDir = getUnitTestdir(getName());
+        if (fs.exists(testDir)) {
+          fs.delete(testDir, true);
+        }
+      } else {
+        this.testDir =
+          this.fs.makeQualified(new Path(conf.get(HConstants.HBASE_DIR)));
+      }
+    } catch (Exception e) {
+      LOG.fatal("error during setup", e);
+      throw e;
+    }
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    try {
+      if (localfs) {
+        if (this.fs.exists(testDir)) {
+          this.fs.delete(testDir, true);
+        }
+      }
+    } catch (Exception e) {
+      LOG.fatal("error during tear down", e);
+    }
+    super.tearDown();
+  }
+
+  protected Path getUnitTestdir(String testName) {
+    return new Path(
+        conf.get(TEST_DIRECTORY_KEY, "target/test/data"), testName);
+  }
+
+  protected HRegion createNewHRegion(HTableDescriptor desc, byte [] startKey,
+      byte [] endKey)
+  throws IOException {
+    FileSystem filesystem = FileSystem.get(conf);
+    Path rootdir = filesystem.makeQualified(
+        new Path(conf.get(HConstants.HBASE_DIR)));
+    filesystem.mkdirs(rootdir);
+
+    return HRegion.createHRegion(new HRegionInfo(desc, startKey, endKey),
+        rootdir, conf);
+  }
+
+  protected HRegion openClosedRegion(final HRegion closedRegion)
+  throws IOException {
+    HRegion r = new HRegion(closedRegion.getTableDir(), closedRegion.getLog(),
+        closedRegion.getFilesystem(), closedRegion.getConf(),
+        closedRegion.getRegionInfo(), null);
+    r.initialize();
+    return r;
+  }
+
+  /**
+   * Create a table of name <code>name</code> with {@link #COLUMNS} for
+   * families.
+   * @param name Name to give table.
+   * @return Table descriptor.
+   */
+  protected HTableDescriptor createTableDescriptor(final String name) {
+    return createTableDescriptor(name, MAXVERSIONS);
+  }
+
+  /**
+   * Create a table of name <code>name</code> with {@link #COLUMNS} for
+   * families.
+   * @param name Name to give table.
+   * @param versions How many versions to allow per column.
+   * @return Table descriptor.
+   */
+  protected HTableDescriptor createTableDescriptor(final String name,
+      final int versions) {
+    HTableDescriptor htd = new HTableDescriptor(name);
+    htd.addFamily(new HColumnDescriptor(fam1, versions,
+      HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+      Integer.MAX_VALUE, HConstants.FOREVER, 
+      HColumnDescriptor.DEFAULT_BLOOMFILTER,
+      HConstants.REPLICATION_SCOPE_LOCAL));
+    htd.addFamily(new HColumnDescriptor(fam2, versions,
+        HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+        Integer.MAX_VALUE, HConstants.FOREVER,
+        HColumnDescriptor.DEFAULT_BLOOMFILTER,
+        HConstants.REPLICATION_SCOPE_LOCAL));
+    htd.addFamily(new HColumnDescriptor(fam3, versions,
+        HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+        Integer.MAX_VALUE,  HConstants.FOREVER,
+        HColumnDescriptor.DEFAULT_BLOOMFILTER,
+        HConstants.REPLICATION_SCOPE_LOCAL));
+    return htd;
+  }
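+
+  // Illustrative sketch only (not part of the original test API): a typical
+  // caller pairs the descriptor above with a fresh region, e.g.
+  //   HTableDescriptor htd = createTableDescriptor("testtable");
+  //   HRegion region = createNewHRegion(htd, null, null);
+  // where "testtable" is a made-up name.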
+
+  /**
+   * Add content to region <code>r</code> on the passed column family
+   * <code>columnFamily</code>.
+   * Adds data of the form 'aaa', 'aab', etc where key and value are the same.
+   * @param r
+   * @param columnFamily
+   * @throws IOException
+   * @return count of what we added.
+   */
+  protected static long addContent(final HRegion r, final byte [] columnFamily)
+  throws IOException {
+    byte [] startKey = r.getRegionInfo().getStartKey();
+    byte [] endKey = r.getRegionInfo().getEndKey();
+    byte [] startKeyBytes = startKey;
+    if (startKeyBytes == null || startKeyBytes.length == 0) {
+      startKeyBytes = START_KEY_BYTES;
+    }
+    return addContent(new HRegionIncommon(r), Bytes.toString(columnFamily), null,
+      startKeyBytes, endKey, -1);
+  }
+
+  /**
+   * Add content via the passed <code>updater</code> on the passed column
+   * family <code>columnFamily</code>.
+   * Adds data of the form 'aaa', 'aab', etc where key and value are the same.
+   * @param updater  An instance of {@link Incommon}.
+   * @param columnFamily
+   * @throws IOException
+   * @return count of what we added.
+   */
+  protected static long addContent(final Incommon updater,
+                                   final String columnFamily) throws IOException {
+    return addContent(updater, columnFamily, START_KEY_BYTES, null);
+  }
+
+  protected static long addContent(final Incommon updater, final String family,
+                                   final String column) throws IOException {
+    return addContent(updater, family, column, START_KEY_BYTES, null);
+  }
+
+  /**
+   * Add content via the passed <code>updater</code> on the passed column
+   * family <code>columnFamily</code>.
+   * Adds data of the form 'aaa', 'aab', etc where key and value are the same.
+   * @param updater  An instance of {@link Incommon}.
+   * @param columnFamily
+   * @param startKeyBytes Where to start the rows inserted
+   * @param endKey Where to stop inserting rows.
+   * @return count of what we added.
+   * @throws IOException
+   */
+  protected static long addContent(final Incommon updater, final String columnFamily,
+      final byte [] startKeyBytes, final byte [] endKey)
+  throws IOException {
+    return addContent(updater, columnFamily, null, startKeyBytes, endKey, -1);
+  }
+
+  protected static long addContent(final Incommon updater, final String family,
+                                   final String column, final byte [] startKeyBytes,
+                                   final byte [] endKey) throws IOException {
+    return addContent(updater, family, column, startKeyBytes, endKey, -1);
+  }
+
+  /**
+   * Add content via the passed <code>updater</code> on the passed column
+   * family <code>columnFamily</code> and column <code>column</code>.
+   * Adds data of the form 'aaa', 'aab', etc where key and value are the same.
+   * @param updater  An instance of {@link Incommon}.
+   * @param columnFamily
+   * @param column
+   * @param startKeyBytes Where to start the rows inserted
+   * @param endKey Where to stop inserting rows.
+   * @param ts Timestamp to write the content with.
+   * @return count of what we added.
+   * @throws IOException
+   */
+  protected static long addContent(final Incommon updater,
+                                   final String columnFamily,
+                                   final String column,
+      final byte [] startKeyBytes, final byte [] endKey, final long ts)
+  throws IOException {
+    long count = 0;
+    // Add rows of three characters.  The first character starts with the
+    // 'a' character and runs up to 'z'.  Per first character, we run the
+    // second character over same range.  And same for the third so rows
+    // (and values) look like this: 'aaa', 'aab', 'aac', etc.
+    char secondCharStart = (char)startKeyBytes[1];
+    char thirdCharStart = (char)startKeyBytes[2];
+    EXIT: for (char c = (char)startKeyBytes[0]; c <= LAST_CHAR; c++) {
+      for (char d = secondCharStart; d <= LAST_CHAR; d++) {
+        for (char e = thirdCharStart; e <= LAST_CHAR; e++) {
+          byte [] t = new byte [] {(byte)c, (byte)d, (byte)e};
+          if (endKey != null && endKey.length > 0
+              && Bytes.compareTo(endKey, t) <= 0) {
+            break EXIT;
+          }
+          try {
+            Put put;
+            if (ts != -1) {
+              put = new Put(t, ts, null);
+            } else {
+              put = new Put(t);
+            }
+            StringBuilder sb = new StringBuilder();
+            if (column != null && column.contains(":")) {
+              sb.append(column);
+            } else {
+              if (columnFamily != null) {
+                sb.append(columnFamily);
+                if (!columnFamily.endsWith(":")) {
+                  sb.append(":");
+                }
+                if (column != null) {
+                  sb.append(column);
+                }
+              }
+            }
+            byte[][] split =
+              KeyValue.parseColumn(Bytes.toBytes(sb.toString()));
+            if (split.length == 1) {
+              put.add(split[0], new byte[0], t);
+            } else {
+              put.add(split[0], split[1], t);
+            }
+            updater.put(put);
+            count++;
+          } catch (RuntimeException ex) {
+            ex.printStackTrace();
+            throw ex;
+          } catch (IOException ex) {
+            ex.printStackTrace();
+            throw ex;
+          }
+        }
+        // Set start character back to FIRST_CHAR after we've done first loop.
+        thirdCharStart = FIRST_CHAR;
+      }
+      secondCharStart = FIRST_CHAR;
+    }
+    return count;
+  }
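+
+  // Illustrative sketch only: with the default start key "aaa" and no end key,
+  // addContent walks 'aaa', 'aab', ... 'zzz', writing each row key as its own
+  // value.  Assuming 'region' is an open HRegion:
+  //   long added = addContent(new HRegionIncommon(region), "colfamily1");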
+
+  /**
+   * Implementors can flushcache.
+   */
+  public static interface FlushCache {
+    /**
+     * @throws IOException
+     */
+    public void flushcache() throws IOException;
+  }
+
+  /**
+   * Interface used by tests so they can do common operations against an
+   * HTable or an HRegion.
+   *
+   * TODO: Come up w/ a better name for this interface.
+   */
+  public static interface Incommon {
+    /**
+     *
+     * @param delete
+     * @param lockid
+     * @param writeToWAL
+     * @throws IOException
+     */
+    public void delete(Delete delete,  Integer lockid, boolean writeToWAL)
+    throws IOException;
+
+    /**
+     * @param put
+     * @throws IOException
+     */
+    public void put(Put put) throws IOException;
+
+    public Result get(Get get) throws IOException;
+
+    /**
+     * @param family
+     * @param qualifiers
+     * @param firstRow
+     * @param ts
+     * @return scanner for specified columns, first row and timestamp
+     * @throws IOException
+     */
+    public ScannerIncommon getScanner(byte [] family, byte [][] qualifiers,
+        byte [] firstRow, long ts)
+    throws IOException;
+  }
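+
+  // Illustrative sketch only: tests wrap whichever backend they have and run
+  // the same code against it.  Assuming 'region' and 'table' already exist:
+  //   Put p = new Put(Bytes.toBytes("row"));
+  //   p.add(fam1, null, Bytes.toBytes("value"));
+  //   new HRegionIncommon(region).put(p);   // or, with the same call shape,
+  //   new HTableIncommon(table).put(p);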
+
+  /**
+   * A class that makes a {@link Incommon} out of a {@link HRegion}
+   */
+  public static class HRegionIncommon implements Incommon, FlushCache {
+    final HRegion region;
+
+    /**
+     * @param region the region to wrap
+     */
+    public HRegionIncommon(final HRegion region) {
+      this.region = region;
+    }
+
+    public void put(Put put) throws IOException {
+      region.put(put);
+    }
+
+    public void delete(Delete delete,  Integer lockid, boolean writeToWAL)
+    throws IOException {
+      this.region.delete(delete, lockid, writeToWAL);
+    }
+
+    public Result get(Get get) throws IOException {
+      return region.get(get, null);
+    }
+
+    public ScannerIncommon getScanner(byte [] family, byte [][] qualifiers,
+        byte [] firstRow, long ts)
+      throws IOException {
+        Scan scan = new Scan(firstRow);
+        if(qualifiers == null || qualifiers.length == 0) {
+          scan.addFamily(family);
+        } else {
+          for(int i=0; i<qualifiers.length; i++){
+            scan.addColumn(family, qualifiers[i]);
+          }
+        }
+        scan.setTimeRange(0, ts);
+        return new
+          InternalScannerIncommon(region.getScanner(scan));
+      }
+
+    public Result get(Get get, Integer lockid) throws IOException{
+      return this.region.get(get, lockid);
+    }
+
+
+    public void flushcache() throws IOException {
+      this.region.flushcache();
+    }
+  }
+
+  /**
+   * A class that makes a {@link Incommon} out of a {@link HTable}
+   */
+  public static class HTableIncommon implements Incommon {
+    final HTable table;
+
+    /**
+     * @param table
+     */
+    public HTableIncommon(final HTable table) {
+      super();
+      this.table = table;
+    }
+
+    public void put(Put put) throws IOException {
+      table.put(put);
+    }
+
+
+    public void delete(Delete delete,  Integer lockid, boolean writeToWAL)
+    throws IOException {
+      this.table.delete(delete);
+    }
+
+    public Result get(Get get) throws IOException {
+      return table.get(get);
+    }
+
+    public ScannerIncommon getScanner(byte [] family, byte [][] qualifiers,
+        byte [] firstRow, long ts)
+      throws IOException {
+      Scan scan = new Scan(firstRow);
+      if(qualifiers == null || qualifiers.length == 0) {
+        scan.addFamily(family);
+      } else {
+        for(int i=0; i<qualifiers.length; i++){
+          scan.addColumn(family, qualifiers[i]);
+        }
+      }
+      scan.setTimeRange(0, ts);
+      return new
+        ClientScannerIncommon(table.getScanner(scan));
+    }
+  }
+
+  public interface ScannerIncommon
+  extends Iterable<Result> {
+    public boolean next(List<KeyValue> values)
+    throws IOException;
+
+    public void close() throws IOException;
+  }
+
+  public static class ClientScannerIncommon implements ScannerIncommon {
+    ResultScanner scanner;
+    public ClientScannerIncommon(ResultScanner scanner) {
+      this.scanner = scanner;
+    }
+
+    public boolean next(List<KeyValue> values)
+    throws IOException {
+      Result results = scanner.next();
+      if (results == null) {
+        return false;
+      }
+      values.clear();
+      values.addAll(results.list());
+      return true;
+    }
+
+    public void close() throws IOException {
+      scanner.close();
+    }
+
+    @SuppressWarnings("unchecked")
+    public Iterator iterator() {
+      return scanner.iterator();
+    }
+  }
+
+  public static class InternalScannerIncommon implements ScannerIncommon {
+    InternalScanner scanner;
+
+    public InternalScannerIncommon(InternalScanner scanner) {
+      this.scanner = scanner;
+    }
+
+    public boolean next(List<KeyValue> results)
+    throws IOException {
+      return scanner.next(results);
+    }
+
+    public void close() throws IOException {
+      scanner.close();
+    }
+
+    public Iterator<Result> iterator() {
+      throw new UnsupportedOperationException();
+    }
+  }
+
+//  protected void assertCellEquals(final HRegion region, final byte [] row,
+//    final byte [] column, final long timestamp, final String value)
+//  throws IOException {
+//    Map<byte [], Cell> result = region.getFull(row, null, timestamp, 1, null);
+//    Cell cell_value = result.get(column);
+//    if (value == null) {
+//      assertEquals(Bytes.toString(column) + " at timestamp " + timestamp, null,
+//        cell_value);
+//    } else {
+//      if (cell_value == null) {
+//        fail(Bytes.toString(column) + " at timestamp " + timestamp +
+//          "\" was expected to be \"" + value + " but was null");
+//      }
+//      if (cell_value != null) {
+//        assertEquals(Bytes.toString(column) + " at timestamp "
+//            + timestamp, value, new String(cell_value.getValue()));
+//      }
+//    }
+//  }
+
+  protected void assertResultEquals(final HRegion region, final byte [] row,
+      final byte [] family, final byte [] qualifier, final long timestamp,
+      final byte [] value)
+    throws IOException {
+      Get get = new Get(row);
+      get.setTimeStamp(timestamp);
+      Result res = region.get(get, null);
+      NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> map =
+        res.getMap();
+      byte [] res_value = map.get(family).get(qualifier).get(timestamp);
+
+      if (value == null) {
+        assertEquals(Bytes.toString(family) + " " + Bytes.toString(qualifier) +
+            " at timestamp " + timestamp, null, res_value);
+      } else {
+        if (res_value == null) {
+          fail(Bytes.toString(family) + " " + Bytes.toString(qualifier) +
+              " at timestamp " + timestamp + "\" was expected to be \"" +
+              Bytes.toStringBinary(value) + " but was null");
+        }
+        if (res_value != null) {
+          assertEquals(Bytes.toString(family) + " " + Bytes.toString(qualifier) +
+              " at timestamp " +
+              timestamp, Bytes.toString(value), new String(res_value));
+        }
+      }
+    }
+
+  /**
+   * Initializes parameters used in the test environment:
+   *
+   * Sets the System property TEST_DIRECTORY_KEY if it is not already set.
+   */
+  public static void initialize() {
+    if (System.getProperty(TEST_DIRECTORY_KEY) == null) {
+      System.setProperty(TEST_DIRECTORY_KEY, new File(
+          "build/hbase/test").getAbsolutePath());
+    }
+  }
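+
+  // Illustrative only: the test output location can be overridden before this
+  // class loads, e.g. by running the tests with a made-up path like
+  //   -Dtest.build.data=/tmp/hbase-test-data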
+
+  /**
+   * Common method to close down a MiniDFSCluster and the associated file system
+   *
+   * @param cluster
+   */
+  public static void shutdownDfs(MiniDFSCluster cluster) {
+    if (cluster != null) {
+      LOG.info("Shutting down Mini DFS ");
+      try {
+        cluster.shutdown();
+      } catch (Exception e) {
+        // Can get a java.lang.reflect.UndeclaredThrowableException thrown
+        // here because of an InterruptedException. Don't let exceptions in
+        // here be cause of test failure.
+      }
+      try {
+        FileSystem fs = cluster.getFileSystem();
+        if (fs != null) {
+          LOG.info("Shutting down FileSystem");
+          fs.close();
+        }
+        FileSystem.closeAll();
+      } catch (IOException e) {
+        LOG.error("error closing file system", e);
+      }
+    }
+  }
+
+  protected void createRootAndMetaRegions() throws IOException {
+    root = HRegion.createHRegion(HRegionInfo.ROOT_REGIONINFO, testDir, conf);
+    meta = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO, testDir,
+        conf);
+    HRegion.addRegionToMETA(root, meta);
+  }
+
+  protected void closeRootAndMeta() throws IOException {
+    if (meta != null) {
+      meta.close();
+      meta.getLog().closeAndDelete();
+    }
+    if (root != null) {
+      root.close();
+      root.getLog().closeAndDelete();
+    }
+  }
+
+  public static void assertByteEquals(byte[] expected,
+                               byte[] actual) {
+    if (Bytes.compareTo(expected, actual) != 0) {
+      throw new AssertionFailedError("expected:<" +
+      Bytes.toString(expected) + "> but was:<" +
+      Bytes.toString(actual) + ">");
+    }
+  }
+
+  public static void assertEquals(byte[] expected,
+                               byte[] actual) {
+    if (Bytes.compareTo(expected, actual) != 0) {
+      throw new AssertionFailedError("expected:<" +
+      Bytes.toStringBinary(expected) + "> but was:<" +
+      Bytes.toStringBinary(actual) + ">");
+    }
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java b/0.90/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
new file mode 100644
index 0000000..8a06292
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
@@ -0,0 +1,1284 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.reflect.Field;
+import java.security.MessageDigest;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.UUID;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.logging.impl.Jdk14Logger;
+import org.apache.commons.logging.impl.Log4JLogger;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.ReadWriteConsistencyControl;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+import org.apache.hadoop.hbase.zookeeper.ZKConfig;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.mapred.MiniMRCluster;
+import org.apache.zookeeper.ZooKeeper;
+
+/**
+ * Facility for testing HBase. Replacement for
+ * old HBaseTestCase and HBaseClusterTestCase functionality.
+ * Create an instance and keep it around while testing HBase.  This class is
+ * meant to be your one-stop shop for anything you might need testing.  Manages
+ * one cluster at a time only.  Depends on log4j being on classpath and
+ * hbase-site.xml for logging and test-run configuration.  It does not set
+ * logging levels nor make changes to configuration parameters.
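+ *
+ * <p>A minimal usage sketch (illustrative only; the table and family names
+ * below are made up):
+ * <pre>
+ *   HBaseTestingUtility util = new HBaseTestingUtility();
+ *   util.startMiniCluster();
+ *   HTable t = util.createTable(Bytes.toBytes("test"), Bytes.toBytes("f"));
+ *   util.loadTable(t, Bytes.toBytes("f"));
+ *   util.shutdownMiniCluster();
+ * </pre>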
+ */
+public class HBaseTestingUtility {
+  private final static Log LOG = LogFactory.getLog(HBaseTestingUtility.class);
+  private Configuration conf;
+  private MiniZooKeeperCluster zkCluster = null;
+  /**
+   * Set if we were passed a zkCluster.  If so, we won't shutdown zk as
+   * part of general shutdown.
+   */
+  private boolean passedZkCluster = false;
+  private MiniDFSCluster dfsCluster = null;
+  private MiniHBaseCluster hbaseCluster = null;
+  private MiniMRCluster mrCluster = null;
+  // If non-null, then already a cluster running.
+  private File clusterTestBuildDir = null;
+
+  /**
+   * System property key to get test directory value.
+   * Name is as it is because mini dfs has hard-codings to put test data here.
+   */
+  public static final String TEST_DIRECTORY_KEY = "test.build.data";
+
+  /**
+   * Default parent directory for test output.
+   */
+  public static final String DEFAULT_TEST_DIRECTORY = "target/test-data";
+
+  public HBaseTestingUtility() {
+    this(HBaseConfiguration.create());
+  }
+
+  public HBaseTestingUtility(Configuration conf) {
+    this.conf = conf;
+  }
+
+  /**
+   * Returns this class's instance of {@link Configuration}.  Be careful how
+   * you use the returned Configuration since {@link HConnection} instances
+   * can be shared.  The Map of HConnections is keyed by the Configuration.  If
+   * say, a Connection was being used against a cluster that had been shutdown,
+   * see {@link #shutdownMiniCluster()}, then the Connection will no longer
+ * be wholesome.  Rather than using the returned instance directly, it is
+ * usually best to make a copy and use that.  Do
+   * <code>Configuration c = new Configuration(INSTANCE.getConfiguration());</code>
+   * @return Instance of Configuration.
+   */
+  public Configuration getConfiguration() {
+    return this.conf;
+  }
+
+  /**
+   * @return Where to write test data on local filesystem; usually
+   * {@link #DEFAULT_TEST_DIRECTORY}
+   * @see #setupClusterTestBuildDir()
+   * @see #clusterTestBuildDir()
+   * @see #getTestFileSystem()
+   */
+  public static Path getTestDir() {
+    return new Path(System.getProperty(TEST_DIRECTORY_KEY,
+      DEFAULT_TEST_DIRECTORY));
+  }
+
+  /**
+   * @param subdirName
+   * @return Path to a subdirectory named <code>subdirName</code> under
+   * {@link #getTestDir()}.
+   * @see #setupClusterTestBuildDir()
+   * @see #clusterTestBuildDir(String)
+   * @see #getTestFileSystem()
+   */
+  public static Path getTestDir(final String subdirName) {
+    return new Path(getTestDir(), subdirName);
+  }
+
+  /**
+   * Home our cluster in a dir under {@link #DEFAULT_TEST_DIRECTORY}.  Give it
+   * a random name so we can have many concurrent clusters running if we need
+   * to.  We need to amend the {@link #TEST_DIRECTORY_KEY} System property,
+   * since that is what minidfscluster bases its data dir on.  Modifying a
+   * System property is not the way to do concurrent instances -- another
+   * instance could grab the temporary value unintentionally -- but there is
+   * nothing we can do about it at the moment; a single instance is all the
+   * minidfscluster supports.
+   * @return The calculated cluster test build directory.
+   */
+  public File setupClusterTestBuildDir() {
+    String randomStr = UUID.randomUUID().toString();
+    String dirStr = getTestDir(randomStr).toString();
+    File dir = new File(dirStr).getAbsoluteFile();
+    // Have it cleaned up on exit
+    dir.deleteOnExit();
+    return dir;
+  }
+
+  /**
+   * @throws IOException If a cluster -- zk, dfs, or hbase -- is already running.
+   */
+  void isRunningCluster(String passedBuildPath) throws IOException {
+    if (this.clusterTestBuildDir == null || passedBuildPath != null) return;
+    throw new IOException("Cluster already running at " +
+      this.clusterTestBuildDir);
+  }
+
+  /**
+   * Start a minidfscluster.
+   * @param servers How many DNs to start.
+   * @throws Exception
+   * @see {@link #shutdownMiniDFSCluster()}
+   * @return The mini dfs cluster created.
+   */
+  public MiniDFSCluster startMiniDFSCluster(int servers) throws Exception {
+    return startMiniDFSCluster(servers, null);
+  }
+
+  /**
+   * Start a minidfscluster.
+   * Can only create one.
+   * @param dir Where to home your dfs cluster.
+   * @param servers How many DNs to start.
+   * @throws Exception
+   * @see {@link #shutdownMiniDFSCluster()}
+   * @return The mini dfs cluster created.
+   */
+  public MiniDFSCluster startMiniDFSCluster(int servers, final File dir)
+  throws Exception {
+    // This does the following to home the minidfscluster
+    //     base_dir = new File(System.getProperty("test.build.data", "build/test/data"), "dfs/");
+    // Some tests also do this:
+    //  System.getProperty("test.cache.data", "build/test/cache");
+    if (dir == null) {
+      this.clusterTestBuildDir = setupClusterTestBuildDir();
+    } else {
+      this.clusterTestBuildDir = dir;
+    }
+    System.setProperty(TEST_DIRECTORY_KEY, this.clusterTestBuildDir.toString());
+    System.setProperty("test.cache.data", this.clusterTestBuildDir.toString());
+    this.dfsCluster = new MiniDFSCluster(0, this.conf, servers, true, true,
+      true, null, null, null, null);
+    // Set this just-started cluster as our filesystem.
+    FileSystem fs = this.dfsCluster.getFileSystem();
+    this.conf.set("fs.defaultFS", fs.getUri().toString());
+    // Do old style too just to be safe.
+    this.conf.set("fs.default.name", fs.getUri().toString());
+    return this.dfsCluster;
+  }
+
+  /**
+   * Shuts down instance created by call to {@link #startMiniDFSCluster(int, File)}
+   * or does nothing.
+   * @throws Exception
+   */
+  public void shutdownMiniDFSCluster() throws Exception {
+    if (this.dfsCluster != null) {
+      // The below throws an exception per dn, AsynchronousCloseException.
+      this.dfsCluster.shutdown();
+    }
+  }
+
+  /**
+   * Call this if you only want a zk cluster.
+   * @see #startMiniCluster() if you want zk + dfs + hbase mini cluster.
+   * @throws Exception
+   * @see #shutdownMiniZKCluster()
+   * @return zk cluster started.
+   */
+  public MiniZooKeeperCluster startMiniZKCluster() throws Exception {
+    return startMiniZKCluster(setupClusterTestBuildDir());
+
+  }
+
+  private MiniZooKeeperCluster startMiniZKCluster(final File dir)
+  throws Exception {
+    this.passedZkCluster = false;
+    if (this.zkCluster != null) {
+      throw new IOException("Cluster already running at " + dir);
+    }
+    this.zkCluster = new MiniZooKeeperCluster();
+    int clientPort = this.zkCluster.startup(dir);
+    this.conf.set("hbase.zookeeper.property.clientPort",
+      Integer.toString(clientPort));
+    return this.zkCluster;
+  }
+
+  /**
+   * Shuts down zk cluster created by call to {@link #startMiniZKCluster(File)}
+   * or does nothing.
+   * @throws IOException
+   * @see #startMiniZKCluster()
+   */
+  public void shutdownMiniZKCluster() throws IOException {
+    if (this.zkCluster != null) {
+      this.zkCluster.shutdown();
+      this.zkCluster = null;
+    }
+  }
+
+  /**
+   * Start up a minicluster of hbase, dfs, and zookeeper.
+   * @throws Exception
+   * @return Mini hbase cluster instance created.
+   * @see {@link #shutdownMiniCluster()}
+   */
+  public MiniHBaseCluster startMiniCluster() throws Exception {
+    return startMiniCluster(1, 1);
+  }
+
+  /**
+   * Start up a minicluster of hbase, optionally dfs, and zookeeper.
+   * Modifies Configuration.  Homes the cluster data directory under a random
+   * subdirectory in a directory under System property test.build.data.
+   * Directory is cleaned up on exit.
+   * @param numSlaves Number of slaves to start up.  We'll start this many
+   * datanodes and regionservers.  If numSlaves is > 1, then make sure
+   * hbase.regionserver.info.port is -1 (i.e. no ui per regionserver) otherwise
+   * you will get bind errors.
+   * @throws Exception
+   * @see {@link #shutdownMiniCluster()}
+   * @return Mini hbase cluster instance created.
+   */
+  public MiniHBaseCluster startMiniCluster(final int numSlaves)
+  throws Exception {
+    return startMiniCluster(1, numSlaves);
+  }
+
+  /**
+   * Start up a minicluster of hbase, optionally dfs, and zookeeper.
+   * Modifies Configuration.  Homes the cluster data directory under a random
+   * subdirectory in a directory under System property test.build.data.
+   * Directory is cleaned up on exit.
+   * @param numMasters Number of masters to start up.  We'll start this many
+   * hbase masters.  If numMasters > 1, you can find the active/primary master
+   * with {@link MiniHBaseCluster#getMaster()}.
+   * @param numSlaves Number of slaves to start up.  We'll start this many
+   * datanodes and regionservers.  If numSlaves is > 1, then make sure
+   * hbase.regionserver.info.port is -1 (i.e. no ui per regionserver) otherwise
+   * you will get bind errors.
+   * @throws Exception
+   * @see {@link #shutdownMiniCluster()}
+   * @return Mini hbase cluster instance created.
+   */
+  public MiniHBaseCluster startMiniCluster(final int numMasters,
+      final int numSlaves)
+  throws Exception {
+    LOG.info("Starting up minicluster with " + numMasters + " master(s) and " +
+        numSlaves + " regionserver(s) and datanode(s)");
+    // If we already put up a cluster, fail.
+    String testBuildPath = conf.get(TEST_DIRECTORY_KEY, null);
+    isRunningCluster(testBuildPath);
+    if (testBuildPath != null) {
+      LOG.info("Using passed path: " + testBuildPath);
+    }
+    // Make a new random dir to home everything in.  Set it as system property.
+    // minidfs reads home from system property.
+    this.clusterTestBuildDir = testBuildPath == null?
+      setupClusterTestBuildDir() : new File(testBuildPath);
+    System.setProperty(TEST_DIRECTORY_KEY, this.clusterTestBuildDir.getPath());
+    // Bring up mini dfs cluster. This spews a bunch of warnings about missing
+    // scheme. Complaints are 'Scheme is undefined for build/test/data/dfs/name1'.
+    startMiniDFSCluster(numSlaves, this.clusterTestBuildDir);
+    this.dfsCluster.waitClusterUp();
+
+    // Start up a zk cluster.
+    if (this.zkCluster == null) {
+      startMiniZKCluster(this.clusterTestBuildDir);
+    }
+    return startMiniHBaseCluster(numMasters, numSlaves);
+  }
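+
+  // Illustrative only: as the javadoc above notes, tests that start more than
+  // one regionserver in the same JVM usually disable the per-regionserver info
+  // UI first to avoid bind errors, e.g.
+  //   util.getConfiguration().setInt("hbase.regionserver.info.port", -1);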
+
+  /**
+   * Starts up mini hbase cluster.  Usually used after call to
+   * {@link #startMiniCluster(int, int)} when doing stepped startup of clusters.
+   * Usually you won't want this.  You'll usually want {@link #startMiniCluster()}.
+   * @param numMasters
+   * @param numSlaves
+   * @return Reference to the hbase mini hbase cluster.
+   * @throws IOException
+   * @throws InterruptedException 
+   * @see {@link #startMiniCluster()}
+   */
+  public MiniHBaseCluster startMiniHBaseCluster(final int numMasters,
+      final int numSlaves)
+  throws IOException, InterruptedException {
+    // Now do the mini hbase cluster.  Set the hbase.rootdir in config.
+    createRootDir();
+    Configuration c = new Configuration(this.conf);
+    this.hbaseCluster = new MiniHBaseCluster(c, numMasters, numSlaves);
+    // Don't leave here till we've done a successful scan of the .META.
+    HTable t = new HTable(c, HConstants.META_TABLE_NAME);
+    ResultScanner s = t.getScanner(new Scan());
+    while (s.next() != null) {
+      continue;
+    }
+    LOG.info("Minicluster is up");
+    return this.hbaseCluster;
+  }
+
+  /**
+   * Starts the hbase cluster up again after shutting it down previously in a
+   * test.  Use this if you want to keep dfs/zk up and just stop/start hbase.
+   * @param servers number of region servers
+   * @throws IOException
+   */
+  public void restartHBaseCluster(int servers) throws IOException, InterruptedException {
+    this.hbaseCluster = new MiniHBaseCluster(this.conf, servers);
+    // Don't leave here till we've done a successful scan of the .META.
+    HTable t = new HTable(new Configuration(this.conf), HConstants.META_TABLE_NAME);
+    ResultScanner s = t.getScanner(new Scan());
+    while (s.next() != null) {
+      continue;
+    }
+    LOG.info("HBase has been restarted");
+  }
+
+  /**
+   * @return Current mini hbase cluster. Only has something in it after a call
+   * to {@link #startMiniCluster()}.
+   * @see #startMiniCluster()
+   */
+  public MiniHBaseCluster getMiniHBaseCluster() {
+    return this.hbaseCluster;
+  }
+
+  /**
+   * Stops mini hbase, zk, and hdfs clusters.
+   * @throws IOException
+   * @see {@link #startMiniCluster(int)}
+   */
+  public void shutdownMiniCluster() throws IOException {
+    LOG.info("Shutting down minicluster");
+    if (this.hbaseCluster != null) {
+      this.hbaseCluster.shutdown();
+      // Wait till hbase is down before going on to shutdown zk.
+      this.hbaseCluster.join();
+    }
+    if (!this.passedZkCluster) shutdownMiniZKCluster();
+    if (this.dfsCluster != null) {
+      // The below throws an exception per dn, AsynchronousCloseException.
+      this.dfsCluster.shutdown();
+    }
+    // Clean up our directory.
+    if (this.clusterTestBuildDir != null && this.clusterTestBuildDir.exists()) {
+      // Need to use deleteDirectory because File.delete requires the dir to be empty.
+      if (!FSUtils.deleteDirectory(FileSystem.getLocal(this.conf),
+          new Path(this.clusterTestBuildDir.toString()))) {
+        LOG.warn("Failed delete of " + this.clusterTestBuildDir.toString());
+      }
+      this.clusterTestBuildDir = null;
+    }
+    LOG.info("Minicluster is down");
+  }
+
+  /**
+   * Creates an hbase rootdir in the user home directory.  Also creates the
+   * hbase version file.  Normally you won't make use of this method; the root
+   * hbasedir is created for you as part of mini cluster startup.  You'd only
+   * use this method if you were doing manual operations.
+   * @return Fully qualified path to hbase root dir
+   * @throws IOException
+   */
+  public Path createRootDir() throws IOException {
+    FileSystem fs = FileSystem.get(this.conf);
+    Path hbaseRootdir = fs.makeQualified(fs.getHomeDirectory());
+    this.conf.set(HConstants.HBASE_DIR, hbaseRootdir.toString());
+    fs.mkdirs(hbaseRootdir);
+    FSUtils.setVersion(fs, hbaseRootdir);
+    return hbaseRootdir;
+  }
+
+  /**
+   * Flushes all caches in the mini hbase cluster
+   * @throws IOException
+   */
+  public void flush() throws IOException {
+    this.hbaseCluster.flushcache();
+  }
+
+  /**
+   * Flushes all caches for the given table in the mini hbase cluster
+   * @throws IOException
+   */
+  public void flush(byte [] tableName) throws IOException {
+    this.hbaseCluster.flushcache(tableName);
+  }
+
+
+  /**
+   * Create a table.
+   * @param tableName
+   * @param family
+   * @return An HTable instance for the created table.
+   * @throws IOException
+   */
+  public HTable createTable(byte[] tableName, byte[] family)
+  throws IOException{
+    return createTable(tableName, new byte[][]{family});
+  }
+
+  /**
+   * Create a table.
+   * @param tableName
+   * @param families
+   * @return An HTable instance for the created table.
+   * @throws IOException
+   */
+  public HTable createTable(byte[] tableName, byte[][] families)
+  throws IOException {
+    return createTable(tableName, families,
+        new Configuration(getConfiguration()));
+  }
+
+  /**
+   * Create a table.
+   * @param tableName
+   * @param families
+   * @param c Configuration to use
+   * @return An HTable instance for the created table.
+   * @throws IOException
+   */
+  public HTable createTable(byte[] tableName, byte[][] families,
+      final Configuration c)
+  throws IOException {
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    for(byte[] family : families) {
+      desc.addFamily(new HColumnDescriptor(family));
+    }
+    getHBaseAdmin().createTable(desc);
+    return new HTable(c, tableName);
+  }
+
+  /**
+   * Create a table.
+   * @param tableName
+   * @param family
+   * @param numVersions
+   * @return An HTable instance for the created table.
+   * @throws IOException
+   */
+  public HTable createTable(byte[] tableName, byte[] family, int numVersions)
+  throws IOException {
+    return createTable(tableName, new byte[][]{family}, numVersions);
+  }
+
+  /**
+   * Create a table.
+   * @param tableName
+   * @param families
+   * @param numVersions
+   * @return An HTable instance for the created table.
+   * @throws IOException
+   */
+  public HTable createTable(byte[] tableName, byte[][] families,
+      int numVersions)
+  throws IOException {
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    for (byte[] family : families) {
+      HColumnDescriptor hcd = new HColumnDescriptor(family, numVersions,
+          HColumnDescriptor.DEFAULT_COMPRESSION,
+          HColumnDescriptor.DEFAULT_IN_MEMORY,
+          HColumnDescriptor.DEFAULT_BLOCKCACHE,
+          Integer.MAX_VALUE, HColumnDescriptor.DEFAULT_TTL,
+          HColumnDescriptor.DEFAULT_BLOOMFILTER,
+          HColumnDescriptor.DEFAULT_REPLICATION_SCOPE);
+      desc.addFamily(hcd);
+    }
+    getHBaseAdmin().createTable(desc);
+    return new HTable(new Configuration(getConfiguration()), tableName);
+  }
+
+  /**
+   * Create a table.
+   * @param tableName
+   * @param families
+   * @param numVersions
+   * @return An HTable instance for the created table.
+   * @throws IOException
+   */
+  public HTable createTable(byte[] tableName, byte[][] families,
+      int[] numVersions)
+  throws IOException {
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    int i = 0;
+    for (byte[] family : families) {
+      HColumnDescriptor hcd = new HColumnDescriptor(family, numVersions[i],
+          HColumnDescriptor.DEFAULT_COMPRESSION,
+          HColumnDescriptor.DEFAULT_IN_MEMORY,
+          HColumnDescriptor.DEFAULT_BLOCKCACHE,
+          Integer.MAX_VALUE, HColumnDescriptor.DEFAULT_TTL,
+          HColumnDescriptor.DEFAULT_BLOOMFILTER,
+          HColumnDescriptor.DEFAULT_REPLICATION_SCOPE);
+      desc.addFamily(hcd);
+      i++;
+    }
+    getHBaseAdmin().createTable(desc);
+    return new HTable(new Configuration(getConfiguration()), tableName);
+  }
+
+  /**
+   * Drop an existing table
+   * @param tableName existing table
+   */
+  public void deleteTable(byte[] tableName) throws IOException {
+    HBaseAdmin admin = new HBaseAdmin(getConfiguration());
+    admin.disableTable(tableName);
+    admin.deleteTable(tableName);
+  }
+
+  /**
+   * Truncate an existing table: delete all of its rows.
+   * @param tableName existing table
+   * @return HTable for the truncated table
+   * @throws IOException
+   */
+  public HTable truncateTable(byte [] tableName) throws IOException {
+    HTable table = new HTable(getConfiguration(), tableName);
+    Scan scan = new Scan();
+    ResultScanner resScan = table.getScanner(scan);
+    for(Result res : resScan) {
+      Delete del = new Delete(res.getRow());
+      table.delete(del);
+    }
+    resScan.close();
+    return table;
+  }
+
+  /**
+   * Load table with rows from 'aaa' to 'zzz'.
+   * @param t Table
+   * @param f Family
+   * @return Count of rows loaded.
+   * @throws IOException
+   */
+  public int loadTable(final HTable t, final byte[] f) throws IOException {
+    t.setAutoFlush(false);
+    byte[] k = new byte[3];
+    int rowCount = 0;
+    for (byte b1 = 'a'; b1 <= 'z'; b1++) {
+      for (byte b2 = 'a'; b2 <= 'z'; b2++) {
+        for (byte b3 = 'a'; b3 <= 'z'; b3++) {
+          k[0] = b1;
+          k[1] = b2;
+          k[2] = b3;
+          Put put = new Put(k);
+          put.add(f, null, k);
+          t.put(put);
+          rowCount++;
+        }
+      }
+    }
+    t.flushCommits();
+    return rowCount;
+  }
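+
+  // Illustrative only: loadTable writes one row per three-letter key, i.e.
+  // 26 * 26 * 26 = 17,576 rows.  Assuming 't' was created via createTable:
+  //   int rows = loadTable(t, Bytes.toBytes("f"));   // rows == 17576
+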
+  /**
+   * Load region with rows from 'aaa' to 'zzz'.
+   * @param r Region
+   * @param f Family
+   * @return Count of rows loaded.
+   * @throws IOException
+   */
+  public int loadRegion(final HRegion r, final byte[] f)
+  throws IOException {
+    byte[] k = new byte[3];
+    int rowCount = 0;
+    for (byte b1 = 'a'; b1 <= 'z'; b1++) {
+      for (byte b2 = 'a'; b2 <= 'z'; b2++) {
+        for (byte b3 = 'a'; b3 <= 'z'; b3++) {
+          k[0] = b1;
+          k[1] = b2;
+          k[2] = b3;
+          Put put = new Put(k);
+          put.add(f, null, k);
+          if (r.getLog() == null) put.setWriteToWAL(false);
+          r.put(put);
+          rowCount++;
+        }
+      }
+    }
+    return rowCount;
+  }
+
+  /**
+   * Return the number of rows in the given table.
+   */
+  public int countRows(final HTable table) throws IOException {
+    Scan scan = new Scan();
+    ResultScanner results = table.getScanner(scan);
+    int count = 0;
+    for (@SuppressWarnings("unused") Result res : results) {
+      count++;
+    }
+    results.close();
+    return count;
+  }
+
+  /**
+   * Return an md5 digest of the entire contents of a table.
+   */
+  public String checksumRows(final HTable table) throws Exception {
+    Scan scan = new Scan();
+    ResultScanner results = table.getScanner(scan);
+    MessageDigest digest = MessageDigest.getInstance("MD5");
+    for (Result res : results) {
+      digest.update(res.getRow());
+    }
+    results.close();
+    // MessageDigest.toString() does not render the digest; return the
+    // computed digest bytes instead.
+    return Bytes.toStringBinary(digest.digest());
+  }
+
+  /**
+   * Creates many regions, splitting the table at the keys in {@link #KEYS}.
+   *
+   * @param table  The table to use for the data.
+   * @param columnFamily  The family to insert the data into.
+   * @return count of regions created.
+   * @throws IOException When creating the regions fails.
+   */
+  public int createMultiRegions(HTable table, byte[] columnFamily)
+  throws IOException {
+    return createMultiRegions(getConfiguration(), table, columnFamily);
+  }
+
+  public static final byte[][] KEYS = {
+    HConstants.EMPTY_BYTE_ARRAY, Bytes.toBytes("bbb"),
+    Bytes.toBytes("ccc"), Bytes.toBytes("ddd"), Bytes.toBytes("eee"),
+    Bytes.toBytes("fff"), Bytes.toBytes("ggg"), Bytes.toBytes("hhh"),
+    Bytes.toBytes("iii"), Bytes.toBytes("jjj"), Bytes.toBytes("kkk"),
+    Bytes.toBytes("lll"), Bytes.toBytes("mmm"), Bytes.toBytes("nnn"),
+    Bytes.toBytes("ooo"), Bytes.toBytes("ppp"), Bytes.toBytes("qqq"),
+    Bytes.toBytes("rrr"), Bytes.toBytes("sss"), Bytes.toBytes("ttt"),
+    Bytes.toBytes("uuu"), Bytes.toBytes("vvv"), Bytes.toBytes("www"),
+    Bytes.toBytes("xxx"), Bytes.toBytes("yyy")
+  };
+
+  /**
+   * Creates many regions, splitting the table at the keys in {@link #KEYS}.
+   * @param c Configuration to use.
+   * @param table  The table to use for the data.
+   * @param columnFamily  The family to insert the data into.
+   * @return count of regions created.
+   * @throws IOException When creating the regions fails.
+   */
+  public int createMultiRegions(final Configuration c, final HTable table,
+      final byte[] columnFamily)
+  throws IOException {
+    return createMultiRegions(c, table, columnFamily, KEYS);
+  }
+
+  /**
+   * Creates the specified number of regions in the specified table.
+   * @param c Configuration to use.
+   * @param table The table to create regions in.
+   * @param family The family to insert the data into.
+   * @param numRegions Number of regions to create; must be at least 3.
+   * @return count of regions created.
+   * @throws IOException When creating the regions fails.
+   */
+  public int createMultiRegions(final Configuration c, final HTable table,
+      final byte [] family, int numRegions)
+  throws IOException {
+    if (numRegions < 3) throw new IOException("Must create at least 3 regions");
+    byte [] startKey = Bytes.toBytes("aaaaa");
+    byte [] endKey = Bytes.toBytes("zzzzz");
+    byte [][] splitKeys = Bytes.split(startKey, endKey, numRegions - 3);
+    byte [][] regionStartKeys = new byte[splitKeys.length+1][];
+    for (int i=0;i<splitKeys.length;i++) {
+      regionStartKeys[i+1] = splitKeys[i];
+    }
+    regionStartKeys[0] = HConstants.EMPTY_BYTE_ARRAY;
+    return createMultiRegions(c, table, family, regionStartKeys);
+  }
+  
+  public int createMultiRegions(final Configuration c, final HTable table,
+      final byte[] columnFamily, byte [][] startKeys)
+  throws IOException {
+    Arrays.sort(startKeys, Bytes.BYTES_COMPARATOR);
+    HTable meta = new HTable(c, HConstants.META_TABLE_NAME);
+    HTableDescriptor htd = table.getTableDescriptor();
+    if(!htd.hasFamily(columnFamily)) {
+      HColumnDescriptor hcd = new HColumnDescriptor(columnFamily);
+      htd.addFamily(hcd);
+    }
+    // remove empty region - this is tricky as the mini cluster during the test
+    // setup already has the "<tablename>,,123456789" row with an empty start
+    // and end key. Adding the custom regions below adds those blindly,
+    // including the new start region from empty to "bbb". lg
+    List<byte[]> rows = getMetaTableRows(htd.getName());
+    List<HRegionInfo> newRegions = new ArrayList<HRegionInfo>(startKeys.length);
+    // add custom ones
+    int count = 0;
+    for (int i = 0; i < startKeys.length; i++) {
+      int j = (i + 1) % startKeys.length;
+      HRegionInfo hri = new HRegionInfo(table.getTableDescriptor(),
+        startKeys[i], startKeys[j]);
+      Put put = new Put(hri.getRegionName());
+      put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(hri));
+      meta.put(put);
+      LOG.info("createMultiRegions: inserted " + hri.toString());
+      newRegions.add(hri);
+      count++;
+    }
+    // see comment above, remove "old" (or previous) single region
+    for (byte[] row : rows) {
+      LOG.info("createMultiRegions: deleting meta row -> " +
+        Bytes.toStringBinary(row));
+      meta.delete(new Delete(row));
+    }
+    // flush cache of regions
+    HConnection conn = table.getConnection();
+    conn.clearRegionCache();
+    // assign all the new regions IF table is enabled.
+    if (getHBaseAdmin().isTableEnabled(table.getTableName())) {
+      for(HRegionInfo hri : newRegions) {
+        hbaseCluster.getMaster().assignRegion(hri);
+      }
+    }
+    return count;
+  }
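+
+  /*
+   * Illustrative usage (sketch; 'util' is an HBaseTestingUtility with a
+   * running mini cluster): carve a freshly created table into the default
+   * KEYS regions and wait for them to be assigned.
+   *
+   *   HTable t = util.createTable(Bytes.toBytes("t"), Bytes.toBytes("f"));
+   *   int regions = util.createMultiRegions(t, Bytes.toBytes("f"));
+   *   util.waitUntilAllRegionsAssigned(regions);
+   */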
+
+  /**
+   * Create rows in META for regions of the specified table with the specified
+   * start keys.  The first startKey should be a 0 length byte array if you
+   * want to form a proper range of regions.
+   * @param conf Configuration to use.
+   * @param htd Descriptor of the table the regions belong to.
+   * @param startKeys Start keys, one per region.
+   * @return list of region info for regions added to meta
+   * @throws IOException
+   */
+  public List<HRegionInfo> createMultiRegionsInMeta(final Configuration conf,
+      final HTableDescriptor htd, byte [][] startKeys)
+  throws IOException {
+    HTable meta = new HTable(conf, HConstants.META_TABLE_NAME);
+    Arrays.sort(startKeys, Bytes.BYTES_COMPARATOR);
+    List<HRegionInfo> newRegions = new ArrayList<HRegionInfo>(startKeys.length);
+    // add custom ones
+    int count = 0;
+    for (int i = 0; i < startKeys.length; i++) {
+      int j = (i + 1) % startKeys.length;
+      HRegionInfo hri = new HRegionInfo(htd, startKeys[i], startKeys[j]);
+      Put put = new Put(hri.getRegionName());
+      put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+        Writables.getBytes(hri));
+      meta.put(put);
+      LOG.info("createMultiRegionsInMeta: inserted " + hri.toString());
+      newRegions.add(hri);
+      count++;
+    }
+    return newRegions;
+  }
+
+  /**
+   * Returns all rows from the .META. table.
+   *
+   * @throws IOException When reading the rows fails.
+   */
+  public List<byte[]> getMetaTableRows() throws IOException {
+    // TODO: Redo using MetaReader class
+    HTable t = new HTable(new Configuration(this.conf), HConstants.META_TABLE_NAME);
+    List<byte[]> rows = new ArrayList<byte[]>();
+    ResultScanner s = t.getScanner(new Scan());
+    for (Result result : s) {
+      LOG.info("getMetaTableRows: row -> " +
+        Bytes.toStringBinary(result.getRow()));
+      rows.add(result.getRow());
+    }
+    s.close();
+    return rows;
+  }
+
+  /**
+   * Returns all rows from the .META. table for a given user table
+   *
+   * @throws IOException When reading the rows fails.
+   */
+  public List<byte[]> getMetaTableRows(byte[] tableName) throws IOException {
+    // TODO: Redo using MetaReader.
+    HTable t = new HTable(new Configuration(this.conf), HConstants.META_TABLE_NAME);
+    List<byte[]> rows = new ArrayList<byte[]>();
+    ResultScanner s = t.getScanner(new Scan());
+    for (Result result : s) {
+      HRegionInfo info = Writables.getHRegionInfo(
+          result.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER));
+      HTableDescriptor desc = info.getTableDesc();
+      if (Bytes.compareTo(desc.getName(), tableName) == 0) {
+        LOG.info("getMetaTableRows: row -> " +
+            Bytes.toStringBinary(result.getRow()));
+        rows.add(result.getRow());
+      }
+    }
+    s.close();
+    return rows;
+  }
+
+  /**
+   * Returns a reference to the region server that is serving the first
+   * region of the specified user table.  It first finds the meta rows for
+   * that table, determines which region server hosts the first region, and
+   * then returns that server's reference.
+   * @param tableName user table to look up in .META.
+   * @return region server that holds the first region, or null if the table
+   * has no rows in .META.
+   * @throws IOException
+   */
+  public HRegionServer getRSForFirstRegionInTable(byte[] tableName)
+      throws IOException {
+    List<byte[]> metaRows = getMetaTableRows(tableName);
+    if (metaRows == null || metaRows.size() == 0) {
+      return null;
+    }
+    int index = hbaseCluster.getServerWith(metaRows.get(0));
+    return hbaseCluster.getRegionServerThreads().get(index).getRegionServer();
+  }
+
+  /**
+   * Starts a <code>MiniMRCluster</code> with a default of two
+   * <code>TaskTracker</code>s.
+   *
+   * @throws IOException When starting the cluster fails.
+   */
+  public void startMiniMapReduceCluster() throws IOException {
+    startMiniMapReduceCluster(2);
+  }
+
+  /**
+   * Starts a <code>MiniMRCluster</code>.
+   *
+   * @param servers  The number of <code>TaskTracker</code>s to start.
+   * @throws IOException When starting the cluster fails.
+   */
+  public void startMiniMapReduceCluster(final int servers) throws IOException {
+    LOG.info("Starting mini mapreduce cluster...");
+    // These are needed for the new and improved Map/Reduce framework
+    Configuration c = getConfiguration();
+    System.setProperty("hadoop.log.dir", c.get("hadoop.log.dir"));
+    c.set("mapred.output.dir", c.get("hadoop.tmp.dir"));
+    mrCluster = new MiniMRCluster(servers,
+      FileSystem.get(c).getUri().toString(), 1);
+    LOG.info("Mini mapreduce cluster started");
+    c.set("mapred.job.tracker",
+        mrCluster.createJobConf().get("mapred.job.tracker"));
+  }
+
+  /**
+   * Stops the previously started <code>MiniMRCluster</code>.
+   */
+  public void shutdownMiniMapReduceCluster() {
+    LOG.info("Stopping mini mapreduce cluster...");
+    if (mrCluster != null) {
+      mrCluster.shutdown();
+    }
+    // Restore configuration to point to local jobtracker
+    conf.set("mapred.job.tracker", "local");
+    LOG.info("Mini mapreduce cluster stopped");
+  }
+
+  /**
+   * Switches the logger for the given class to DEBUG level.
+   *
+   * @param clazz  The class for which to switch to debug logging.
+   */
+  public void enableDebug(Class<?> clazz) {
+    Log l = LogFactory.getLog(clazz);
+    if (l instanceof Log4JLogger) {
+      ((Log4JLogger) l).getLogger().setLevel(org.apache.log4j.Level.DEBUG);
+    } else if (l instanceof Jdk14Logger) {
+      ((Jdk14Logger) l).getLogger().setLevel(java.util.logging.Level.ALL);
+    }
+  }
+
+  /**
+   * Expire the Master's session
+   * @throws Exception
+   */
+  public void expireMasterSession() throws Exception {
+    HMaster master = hbaseCluster.getMaster();
+    expireSession(master.getZooKeeper(), master);
+  }
+
+  /**
+   * Expire a region server's session
+   * @param index which RS
+   * @throws Exception
+   */
+  public void expireRegionServerSession(int index) throws Exception {
+    HRegionServer rs = hbaseCluster.getRegionServer(index);
+    expireSession(rs.getZooKeeper(), rs);
+  }
+
+  public void expireSession(ZooKeeperWatcher nodeZK, Server server)
+  throws Exception {
+    Configuration c = new Configuration(this.conf);
+    String quorumServers = ZKConfig.getZKQuorumServersString(c);
+    int sessionTimeout = 5 * 1000; // 5 seconds
+    ZooKeeper zk = nodeZK.getZooKeeper();
+    byte[] password = zk.getSessionPasswd();
+    long sessionID = zk.getSessionId();
+
+    ZooKeeper newZK = new ZooKeeper(quorumServers,
+        sessionTimeout, EmptyWatcher.instance, sessionID, password);
+    newZK.close();
+    final long sleep = sessionTimeout * 5L;
+    LOG.info("ZK Closed Session 0x" + Long.toHexString(sessionID) +
+      "; sleeping=" + sleep);
+
+    Thread.sleep(sleep);
+
+    // Open a fresh connection to .META. so the caller only proceeds once the
+    // cluster is reachable again after the expiration.
+    new HTable(new Configuration(conf), HConstants.META_TABLE_NAME);
+  }
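+
+  /*
+   * Illustrative usage (sketch): force a region server's ZooKeeper session to
+   * expire from a test, then make sure enough servers are still available.
+   *
+   *   util.expireRegionServerSession(0);
+   *   util.ensureSomeRegionServersAvailable(2);
+   */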
+
+  /**
+   * Get the HBase cluster.
+   *
+   * @return hbase cluster
+   */
+  public MiniHBaseCluster getHBaseCluster() {
+    return hbaseCluster;
+  }
+
+  /**
+   * Returns a new HBaseAdmin instance backed by a copy of this utility's
+   * configuration.
+   *
+   * @return A new HBaseAdmin instance.
+   * @throws IOException
+   */
+  public HBaseAdmin getHBaseAdmin()
+  throws IOException {
+    return new HBaseAdmin(new Configuration(getConfiguration()));
+  }
+
+  /**
+   * Closes the named region.
+   *
+   * @param regionName  The region to close.
+   * @throws IOException
+   */
+  public void closeRegion(String regionName) throws IOException {
+    closeRegion(Bytes.toBytes(regionName));
+  }
+
+  /**
+   * Closes the named region.
+   *
+   * @param regionName  The region to close.
+   * @throws IOException
+   */
+  public void closeRegion(byte[] regionName) throws IOException {
+    HBaseAdmin admin = getHBaseAdmin();
+    admin.closeRegion(regionName, null);
+  }
+
+  /**
+   * Closes the region containing the given row.
+   *
+   * @param row  The row to find the containing region.
+   * @param table  The table to find the region.
+   * @throws IOException
+   */
+  public void closeRegionByRow(String row, HTable table) throws IOException {
+    closeRegionByRow(Bytes.toBytes(row), table);
+  }
+
+  /**
+   * Closes the region containing the given row.
+   *
+   * @param row  The row to find the containing region.
+   * @param table  The table to find the region.
+   * @throws IOException
+   */
+  public void closeRegionByRow(byte[] row, HTable table) throws IOException {
+    HRegionLocation hrl = table.getRegionLocation(row);
+    closeRegion(hrl.getRegionInfo().getRegionName());
+  }
+
+  public MiniZooKeeperCluster getZkCluster() {
+    return zkCluster;
+  }
+
+  public void setZkCluster(MiniZooKeeperCluster zkCluster) {
+    this.passedZkCluster = true;
+    this.zkCluster = zkCluster;
+  }
+
+  public MiniDFSCluster getDFSCluster() {
+    return dfsCluster;
+  }
+
+  public FileSystem getTestFileSystem() throws IOException {
+    return FileSystem.get(conf);
+  }
+
+  /**
+   * @return True if we removed the test dir
+   * @throws IOException
+   */
+  public boolean cleanupTestDir() throws IOException {
+    return deleteDir(getTestDir());
+  }
+
+  /**
+   * @param subdir Test subdir name.
+   * @return True if we removed the test dir
+   * @throws IOException
+   */
+  public boolean cleanupTestDir(final String subdir) throws IOException {
+    return deleteDir(getTestDir(subdir));
+  }
+
+  /**
+   * @param dir Directory to delete
+   * @return True if we deleted it.
+   * @throws IOException
+   */
+  public boolean deleteDir(final Path dir) throws IOException {
+    FileSystem fs = getTestFileSystem();
+    if (fs.exists(dir)) {
+      // Delete the directory that was passed in, not the whole test dir.
+      return fs.delete(dir, true);
+    }
+    return false;
+  }
+
+  public void waitTableAvailable(byte[] table, long timeoutMillis)
+  throws InterruptedException, IOException {
+    HBaseAdmin admin = getHBaseAdmin();
+    long startWait = System.currentTimeMillis();
+    while (!admin.isTableAvailable(table)) {
+      assertTrue("Timed out waiting for table " + Bytes.toStringBinary(table),
+          System.currentTimeMillis() - startWait < timeoutMillis);
+      Thread.sleep(500);
+    }
+  }
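+
+  /*
+   * Illustrative call (sketch): block for up to 30 seconds until a newly
+   * created table is served before issuing reads against it.
+   *
+   *   util.waitTableAvailable(Bytes.toBytes("t"), 30000);
+   */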
+
+  /**
+   * Make sure that at least the specified number of region servers
+   * are running
+   * @param num minimum number of region servers that should be running
+   * @return True if we started some servers
+   * @throws IOException
+   */
+  public boolean ensureSomeRegionServersAvailable(final int num)
+      throws IOException {
+    if (this.getHBaseCluster().getLiveRegionServerThreads().size() < num) {
+      // Need at least "num" servers.
+      LOG.info("Started new server=" +
+        this.getHBaseCluster().startRegionServer());
+      return true;
+    }
+    return false;
+  }
+
+  /**
+   * This method clones the passed <code>c</code> configuration setting a new
+   * user into the clone.  Use it when getting new instances of FileSystem.  Only
+   * works for DistributedFileSystem.
+   * @param c Initial configuration
+   * @param differentiatingSuffix Suffix to differentiate this user from others.
+   * @return A new configuration instance with a different user set into it.
+   * @throws IOException
+   */
+  public static User getDifferentUser(final Configuration c,
+    final String differentiatingSuffix)
+  throws IOException {
+    FileSystem currentfs = FileSystem.get(c);
+    if (!(currentfs instanceof DistributedFileSystem)) {
+      return User.getCurrent();
+    }
+    // Else distributed filesystem.  Make a new instance per daemon.  Below
+    // code is taken from the AppendTestUtil over in hdfs.
+    String username = User.getCurrent().getName() +
+      differentiatingSuffix;
+    User user = User.createUserForTesting(c, username,
+        new String[]{"supergroup"});
+    return user;
+  }
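+
+  /*
+   * Illustrative usage (sketch): obtain a differentiated user so a test
+   * daemon gets its own FileSystem instance, then run work as that user.
+   *
+   *   User u = HBaseTestingUtility.getDifferentUser(conf, ".hfs.0");
+   *   u.runAs(new PrivilegedAction<Object>() {
+   *     public Object run() {
+   *       // FileSystem.get(conf) here returns an instance owned by 'u'
+   *       return null;
+   *     }
+   *   });
+   */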
+
+  /**
+   * Set soft and hard limits in namenode.
+   * You'll get an NPE if you call this before you've started a minidfscluster.
+   * @param soft Soft limit
+   * @param hard Hard limit
+   * @throws NoSuchFieldException
+   * @throws SecurityException
+   * @throws IllegalAccessException
+   * @throws IllegalArgumentException
+   */
+  public void setNameNodeNameSystemLeasePeriod(final int soft, final int hard)
+  throws SecurityException, NoSuchFieldException, IllegalArgumentException, IllegalAccessException {
+    // TODO: If 0.20 hadoop do one thing, if 0.21 hadoop do another.
+    // Not available in 0.20 hdfs.  Use reflection to make it happen.
+
+    // private NameNode nameNode;
+    Field field = this.dfsCluster.getClass().getDeclaredField("nameNode");
+    field.setAccessible(true);
+    NameNode nn = (NameNode)field.get(this.dfsCluster);
+    nn.namesystem.leaseManager.setLeasePeriod(soft, hard);
+  }
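+
+  /*
+   * Illustrative call (sketch, not part of the original utility): log
+   * splitting tests can shorten the namenode lease periods so that appends
+   * recover quickly, for example:
+   *
+   *   util.setNameNodeNameSystemLeasePeriod(100, 50000);
+   */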
+
+  /**
+   * Set maxRecoveryErrorCount in DFSClient.  In 0.20 pre-append it's hard-coded to 5 and
+   * makes tests linger.  Here is the exception you'll see:
+   * <pre>
+   * 2010-06-15 11:52:28,511 WARN  [DataStreamer for file /hbase/.logs/hlog.1276627923013 block blk_928005470262850423_1021] hdfs.DFSClient$DFSOutputStream(2657): Error Recovery for block blk_928005470262850423_1021 failed  because recovery from primary datanode 127.0.0.1:53683 failed 4 times.  Pipeline was 127.0.0.1:53687, 127.0.0.1:53683. Will retry...
+   * </pre>
+   * @param stream A DFSClient.DFSOutputStream.
+   * @param max New maximum recovery error count.
+   * @throws NoSuchFieldException
+   * @throws SecurityException
+   * @throws IllegalAccessException
+   * @throws IllegalArgumentException
+   */
+  public static void setMaxRecoveryErrorCount(final OutputStream stream,
+      final int max) {
+    try {
+      Class<?> [] clazzes = DFSClient.class.getDeclaredClasses();
+      for (Class<?> clazz: clazzes) {
+        String className = clazz.getSimpleName();
+        if (className.equals("DFSOutputStream")) {
+          if (clazz.isInstance(stream)) {
+            Field maxRecoveryErrorCountField =
+              stream.getClass().getDeclaredField("maxRecoveryErrorCount");
+            maxRecoveryErrorCountField.setAccessible(true);
+            maxRecoveryErrorCountField.setInt(stream, max);
+            break;
+          }
+        }
+      }
+    } catch (Exception e) {
+      LOG.info("Could not set max recovery field", e);
+    }
+  }
+
+
+  /**
+   * Wait until <code>countOfRegions</code> rows in .META. have a non-empty
+   * info:server column.  This means all regions have been deployed, and the
+   * master has been informed and has updated .META. with each region's
+   * deployed server.
+   * @param countOfRegions How many regions to expect in .META.
+   * @throws IOException
+   */
+  public void waitUntilAllRegionsAssigned(final int countOfRegions)
+  throws IOException {
+    HTable meta = new HTable(getConfiguration(), HConstants.META_TABLE_NAME);
+    while (true) {
+      int rows = 0;
+      Scan scan = new Scan();
+      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+      ResultScanner s = meta.getScanner(scan);
+      for (Result r = null; (r = s.next()) != null;) {
+        byte [] b =
+          r.getValue(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+        if (b == null || b.length <= 0) {
+          break;
+        }
+        rows++;
+      }
+      s.close();
+      // If I get to here and all rows have a Server, then all have been assigned.
+      if (rows == countOfRegions) {
+        break;
+      }
+      LOG.info("Found=" + rows);
+      Threads.sleep(1000);
+    }
+  }
+
+  /**
+   * Do a small get/scan against one store. This is required because Store
+   * has no methods for querying itself directly; it relies on StoreScanner.
+   */
+  public static List<KeyValue> getFromStoreFile(Store store,
+                                                Get get) throws IOException {
+    ReadWriteConsistencyControl.resetThreadReadPoint();
+    Scan scan = new Scan(get);
+    InternalScanner scanner = (InternalScanner) store.getScanner(scan,
+        scan.getFamilyMap().get(store.getFamily().getName()));
+
+    List<KeyValue> result = new ArrayList<KeyValue>();
+    scanner.next(result);
+    if (!result.isEmpty()) {
+      // verify that we are on the row we want:
+      KeyValue kv = result.get(0);
+      if (!Bytes.equals(kv.getRow(), get.getRow())) {
+        result.clear();
+      }
+    }
+    return result;
+  }
+
+  /**
+   * Do a small get/scan against one store. This is required because Store
+   * has no methods for querying itself directly; it relies on StoreScanner.
+   */
+  public static List<KeyValue> getFromStoreFile(Store store,
+                                                byte [] row,
+                                                NavigableSet<byte[]> columns
+                                                ) throws IOException {
+    Get get = new Get(row);
+    Map<byte[], NavigableSet<byte[]>> s = get.getFamilyMap();
+    s.put(store.getFamily().getName(), columns);
+
+    return getFromStoreFile(store,get);
+  }
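+
+  /*
+   * Illustrative usage (sketch; 'store' and 'row' come from the test): read
+   * the named column of one row straight out of a Store, bypassing the
+   * client API.
+   *
+   *   NavigableSet<byte[]> cols = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+   *   cols.add(Bytes.toBytes("q"));
+   *   List<KeyValue> kvs =
+   *     HBaseTestingUtility.getFromStoreFile(store, row, cols);
+   */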
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java b/0.90/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
new file mode 100644
index 0000000..c8de05c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
@@ -0,0 +1,365 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.math.random.RandomData;
+import org.apache.commons.math.random.RandomDataImpl;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * <p>
+ * This class runs performance benchmarks for {@link HFile}.
+ * </p>
+ */
+public class HFilePerformanceEvaluation {
+
+  private static final int ROW_LENGTH = 10;
+  private static final int ROW_COUNT = 1000000;
+  private static final int RFILE_BLOCKSIZE = 8 * 1024;
+
+  static final Log LOG =
+    LogFactory.getLog(HFilePerformanceEvaluation.class.getName());
+
+  static byte [] format(final int i) {
+    String v = Integer.toString(i);
+    return Bytes.toBytes("0000000000".substring(v.length()) + v);
+  }
+
+  static ImmutableBytesWritable format(final int i, ImmutableBytesWritable w) {
+    w.set(format(i));
+    return w;
+  }
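+
+  // For example, format(1) yields the bytes of "0000000001" and
+  // format(876543) yields "0000876543"; keys are zero-padded to ten digits so
+  // that lexicographic and numeric ordering agree.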
+
+  private void runBenchmarks() throws Exception {
+    final Configuration conf = new Configuration();
+    final FileSystem fs = FileSystem.get(conf);
+    final Path mf = fs.makeQualified(new Path("performanceevaluation.mapfile"));
+    if (fs.exists(mf)) {
+      fs.delete(mf, true);
+    }
+
+    runBenchmark(new SequentialWriteBenchmark(conf, fs, mf, ROW_COUNT),
+        ROW_COUNT);
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new UniformRandomSmallScan(conf, fs, mf, ROW_COUNT),
+            ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new UniformRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+              ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new GaussianRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+              ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new SequentialReadBenchmark(conf, fs, mf, ROW_COUNT),
+              ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+
+  }
+
+  protected void runBenchmark(RowOrientedBenchmark benchmark, int rowCount)
+    throws Exception {
+    LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+        rowCount + " rows.");
+    long elapsedTime = benchmark.run();
+    LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+        rowCount + " rows took " + elapsedTime + "ms.");
+  }
+
+  static abstract class RowOrientedBenchmark {
+
+    protected final Configuration conf;
+    protected final FileSystem fs;
+    protected final Path mf;
+    protected final int totalRows;
+
+    public RowOrientedBenchmark(Configuration conf, FileSystem fs, Path mf,
+        int totalRows) {
+      this.conf = conf;
+      this.fs = fs;
+      this.mf = mf;
+      this.totalRows = totalRows;
+    }
+
+    void setUp() throws Exception {
+      // do nothing
+    }
+
+    abstract void doRow(int i) throws Exception;
+
+    protected int getReportingPeriod() {
+      return this.totalRows / 10;
+    }
+
+    void tearDown() throws Exception {
+      // do nothing
+    }
+
+    /**
+     * Run benchmark
+     * @return elapsed time.
+     * @throws Exception
+     */
+    long run() throws Exception {
+      long elapsedTime;
+      setUp();
+      long startTime = System.currentTimeMillis();
+      try {
+        for (int i = 0; i < totalRows; i++) {
+          if (i > 0 && i % getReportingPeriod() == 0) {
+            LOG.info("Processed " + i + " rows.");
+          }
+          doRow(i);
+        }
+        elapsedTime = System.currentTimeMillis() - startTime;
+      } finally {
+        tearDown();
+      }
+      return elapsedTime;
+    }
+
+  }
+
+  static class SequentialWriteBenchmark extends RowOrientedBenchmark {
+    protected HFile.Writer writer;
+    private Random random = new Random();
+    private byte[] bytes = new byte[ROW_LENGTH];
+
+    public SequentialWriteBenchmark(Configuration conf, FileSystem fs, Path mf,
+        int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void setUp() throws Exception {
+      writer = new HFile.Writer(this.fs, this.mf, RFILE_BLOCKSIZE, (Compression.Algorithm) null, null);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      writer.append(format(i), generateValue());
+    }
+
+    private byte[] generateValue() {
+      random.nextBytes(bytes);
+      return bytes;
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      return this.totalRows; // don't report progress
+    }
+
+    @Override
+    void tearDown() throws Exception {
+      writer.close();
+    }
+
+  }
+
+  static abstract class ReadBenchmark extends RowOrientedBenchmark {
+
+    protected HFile.Reader reader;
+
+    public ReadBenchmark(Configuration conf, FileSystem fs, Path mf,
+        int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void setUp() throws Exception {
+      reader = new HFile.Reader(this.fs, this.mf, null, false);
+      this.reader.loadFileInfo();
+    }
+
+    @Override
+    void tearDown() throws Exception {
+      reader.close();
+    }
+
+  }
+
+  static class SequentialReadBenchmark extends ReadBenchmark {
+    private HFileScanner scanner;
+
+    public SequentialReadBenchmark(Configuration conf, FileSystem fs,
+      Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void setUp() throws Exception {
+      super.setUp();
+      this.scanner = this.reader.getScanner(false, false);
+      this.scanner.seekTo();
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      if (this.scanner.next()) {
+        ByteBuffer k = this.scanner.getKey();
+        PerformanceEvaluationCommons.assertKey(format(i + 1), k);
+        ByteBuffer v = scanner.getValue();
+        PerformanceEvaluationCommons.assertValueSize(v.limit(), ROW_LENGTH);
+      }
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      return this.totalRows; // don't report progress
+    }
+
+  }
+
+  static class UniformRandomReadBenchmark extends ReadBenchmark {
+
+    private Random random = new Random();
+
+    public UniformRandomReadBenchmark(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      HFileScanner scanner = this.reader.getScanner(false, true);
+      byte [] b = getRandomRow();
+      scanner.seekTo(b);
+      ByteBuffer k = scanner.getKey();
+      PerformanceEvaluationCommons.assertKey(b, k);
+      ByteBuffer v = scanner.getValue();
+      PerformanceEvaluationCommons.assertValueSize(v.limit(), ROW_LENGTH);
+    }
+
+    private byte [] getRandomRow() {
+      return format(random.nextInt(totalRows));
+    }
+  }
+
+  static class UniformRandomSmallScan extends ReadBenchmark {
+    private Random random = new Random();
+
+    public UniformRandomSmallScan(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows/10);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      HFileScanner scanner = this.reader.getScanner(false, false);
+      byte [] b = getRandomRow();
+      if (scanner.seekTo(b) != 0) {
+        System.out.println("Nonexistent row: " + new String(b));
+        return;
+      }
+      ByteBuffer k = scanner.getKey();
+      PerformanceEvaluationCommons.assertKey(b, k);
+      // System.out.println("Found row: " + new String(b));
+      for (int ii = 0; ii < 30; ii++) {
+        if (!scanner.next()) {
+          System.out.println("NOTHING FOLLOWS");
+        }
+        ByteBuffer v = scanner.getValue();
+        PerformanceEvaluationCommons.assertValueSize(v.limit(), ROW_LENGTH);
+      }
+    }
+
+    private byte [] getRandomRow() {
+      return format(random.nextInt(totalRows));
+    }
+  }
+
+  static class GaussianRandomReadBenchmark extends ReadBenchmark {
+
+    private RandomData randomData = new RandomDataImpl();
+
+    public GaussianRandomReadBenchmark(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      HFileScanner scanner = this.reader.getScanner(false, true);
+      scanner.seekTo(getGaussianRandomRowBytes());
+      for (int ii = 0; ii < 30; ii++) {
+        if (!scanner.next()) {
+          System.out.println("NOTHING FOLLOWS");
+        }
+        scanner.getKey();
+        scanner.getValue();
+      }
+    }
+
+    private byte [] getGaussianRandomRowBytes() {
+      int r = (int) randomData.nextGaussian((double)totalRows / 2.0,
+          (double)totalRows / 10.0);
+      return format(r);
+    }
+  }
+
+  /**
+   * @param args
+   * @throws Exception
+   * @throws IOException
+   */
+  public static void main(String[] args) throws Exception {
+    new HFilePerformanceEvaluation().runBenchmarks();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/KeyValueTestUtil.java b/0.90/src/test/java/org/apache/hadoop/hbase/KeyValueTestUtil.java
new file mode 100644
index 0000000..36d768a
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/KeyValueTestUtil.java
@@ -0,0 +1,54 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class KeyValueTestUtil {
+
+  public static KeyValue create(
+      String row,
+      String family,
+      String qualifier,
+      long timestamp,
+      String value)
+  {
+    return create(row, family, qualifier, timestamp, KeyValue.Type.Put, value);
+  }
+
+  public static KeyValue create(
+      String row,
+      String family,
+      String qualifier,
+      long timestamp,
+      KeyValue.Type type,
+      String value)
+  {
+      return new KeyValue(
+          Bytes.toBytes(row),
+          Bytes.toBytes(family),
+          Bytes.toBytes(qualifier),
+          timestamp,
+          type,
+          Bytes.toBytes(value)
+      );
+  }
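+
+  /*
+   * Illustrative usage (sketch): build a Put-type KeyValue for assertions in
+   * a test without spelling out the byte[] conversions by hand.
+   *
+   *   KeyValue kv =
+   *     KeyValueTestUtil.create("row1", "family", "qualifier", 1L, "value");
+   */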
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java b/0.90/src/test/java/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java
new file mode 100644
index 0000000..7635949
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/MapFilePerformanceEvaluation.java
@@ -0,0 +1,349 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.math.random.RandomData;
+import org.apache.commons.math.random.RandomDataImpl;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.io.MapFile;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.WritableComparable;
+
+/**
+ * <p>
+ * This class runs performance benchmarks for {@link MapFile}.
+ * </p>
+ */
+public class MapFilePerformanceEvaluation {
+  protected final Configuration conf;
+  private static final int ROW_LENGTH = 10;
+  private static final int ROW_COUNT = 100000;
+
+  static final Log LOG =
+    LogFactory.getLog(MapFilePerformanceEvaluation.class.getName());
+
+  /**
+   * @param c
+   */
+  public MapFilePerformanceEvaluation(final Configuration c) {
+    super();
+    this.conf = c;
+  }
+
+  static ImmutableBytesWritable format(final int i, ImmutableBytesWritable w) {
+    String v = Integer.toString(i);
+    w.set(Bytes.toBytes("0000000000".substring(v.length()) + v));
+    return w;
+  }
+
+  private void runBenchmarks() throws Exception {
+    final FileSystem fs = FileSystem.get(this.conf);
+    final Path mf = fs.makeQualified(new Path("performanceevaluation.mapfile"));
+    if (fs.exists(mf)) {
+      fs.delete(mf, true);
+    }
+    runBenchmark(new SequentialWriteBenchmark(conf, fs, mf, ROW_COUNT),
+        ROW_COUNT);
+
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new UniformRandomSmallScan(conf, fs, mf, ROW_COUNT),
+            ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new UniformRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+              ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new GaussianRandomReadBenchmark(conf, fs, mf, ROW_COUNT),
+              ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
+      public void run() {
+        try {
+          runBenchmark(new SequentialReadBenchmark(conf, fs, mf, ROW_COUNT),
+              ROW_COUNT);
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+      }
+    });
+  }
+
+  protected void runBenchmark(RowOrientedBenchmark benchmark, int rowCount)
+    throws Exception {
+    LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+        rowCount + " rows.");
+    long elapsedTime = benchmark.run();
+    LOG.info("Running " + benchmark.getClass().getSimpleName() + " for " +
+        rowCount + " rows took " + elapsedTime + "ms.");
+  }
+
+  static abstract class RowOrientedBenchmark {
+
+    protected final Configuration conf;
+    protected final FileSystem fs;
+    protected final Path mf;
+    protected final int totalRows;
+
+    public RowOrientedBenchmark(Configuration conf, FileSystem fs, Path mf,
+        int totalRows) {
+      this.conf = conf;
+      this.fs = fs;
+      this.mf = mf;
+      this.totalRows = totalRows;
+    }
+
+    void setUp() throws Exception {
+      // do nothing
+    }
+
+    abstract void doRow(int i) throws Exception;
+
+    protected int getReportingPeriod() {
+      return this.totalRows / 10;
+    }
+
+    void tearDown() throws Exception {
+      // do nothing
+    }
+
+    /**
+     * Run benchmark
+     * @return elapsed time.
+     * @throws Exception
+     */
+    long run() throws Exception {
+      long elapsedTime;
+      setUp();
+      long startTime = System.currentTimeMillis();
+      try {
+        for (int i = 0; i < totalRows; i++) {
+          if (i > 0 && i % getReportingPeriod() == 0) {
+            LOG.info("Processed " + i + " rows.");
+          }
+          doRow(i);
+        }
+        elapsedTime = System.currentTimeMillis() - startTime;
+      } finally {
+        tearDown();
+      }
+      return elapsedTime;
+    }
+
+  }
+
+  static class SequentialWriteBenchmark extends RowOrientedBenchmark {
+
+    protected MapFile.Writer writer;
+    private Random random = new Random();
+    private byte[] bytes = new byte[ROW_LENGTH];
+    private ImmutableBytesWritable key = new ImmutableBytesWritable();
+    private ImmutableBytesWritable value = new ImmutableBytesWritable();
+
+    public SequentialWriteBenchmark(Configuration conf, FileSystem fs, Path mf,
+        int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void setUp() throws Exception {
+      writer = new MapFile.Writer(conf, fs, mf.toString(),
+        ImmutableBytesWritable.class, ImmutableBytesWritable.class);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      value.set(generateValue());
+      writer.append(format(i, key), value);
+    }
+
+    private byte[] generateValue() {
+      random.nextBytes(bytes);
+      return bytes;
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      return this.totalRows; // don't report progress
+    }
+
+    @Override
+    void tearDown() throws Exception {
+      writer.close();
+    }
+
+  }
+
+  static abstract class ReadBenchmark extends RowOrientedBenchmark {
+    ImmutableBytesWritable key = new ImmutableBytesWritable();
+    ImmutableBytesWritable value = new ImmutableBytesWritable();
+
+    protected MapFile.Reader reader;
+
+    public ReadBenchmark(Configuration conf, FileSystem fs, Path mf,
+        int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void setUp() throws Exception {
+      reader = new MapFile.Reader(fs, mf.toString(), conf);
+    }
+
+    @Override
+    void tearDown() throws Exception {
+      reader.close();
+    }
+
+  }
+
+  static class SequentialReadBenchmark extends ReadBenchmark {
+    ImmutableBytesWritable verify = new ImmutableBytesWritable();
+
+    public SequentialReadBenchmark(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      this.reader.next(key, value);
+      PerformanceEvaluationCommons.assertKey(this.key.get(),
+        format(i, this.verify).get());
+      PerformanceEvaluationCommons.assertValueSize(ROW_LENGTH, value.getSize());
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      return this.totalRows; // don't report progress
+    }
+
+  }
+
+  static class UniformRandomReadBenchmark extends ReadBenchmark {
+
+    private Random random = new Random();
+
+    public UniformRandomReadBenchmark(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      ImmutableBytesWritable k = getRandomRow();
+      ImmutableBytesWritable r = (ImmutableBytesWritable)reader.get(k, value);
+      PerformanceEvaluationCommons.assertValueSize(r.getSize(), ROW_LENGTH);
+    }
+
+    private ImmutableBytesWritable getRandomRow() {
+      return format(random.nextInt(totalRows), key);
+    }
+
+  }
+
+  static class UniformRandomSmallScan extends ReadBenchmark {
+    private Random random = new Random();
+
+    public UniformRandomSmallScan(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows/10);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      ImmutableBytesWritable ibw = getRandomRow();
+      WritableComparable<?> wc = this.reader.getClosest(ibw, this.value);
+      if (wc == null) {
+        throw new NullPointerException();
+      }
+      PerformanceEvaluationCommons.assertKey(ibw.get(),
+        ((ImmutableBytesWritable)wc).get());
+      // TODO: Verify we're getting right values.
+      for (int ii = 0; ii < 29; ii++) {
+        this.reader.next(this.key, this.value);
+        PerformanceEvaluationCommons.assertValueSize(this.value.getSize(), ROW_LENGTH);
+      }
+    }
+
+    private ImmutableBytesWritable getRandomRow() {
+      return format(random.nextInt(totalRows), key);
+    }
+  }
+
+  static class GaussianRandomReadBenchmark extends ReadBenchmark {
+    private RandomData randomData = new RandomDataImpl();
+
+    public GaussianRandomReadBenchmark(Configuration conf, FileSystem fs,
+        Path mf, int totalRows) {
+      super(conf, fs, mf, totalRows);
+    }
+
+    @Override
+    void doRow(int i) throws Exception {
+      ImmutableBytesWritable k = getGaussianRandomRow();
+      ImmutableBytesWritable r = (ImmutableBytesWritable)reader.get(k, value);
+      PerformanceEvaluationCommons.assertValueSize(r.getSize(), ROW_LENGTH);
+    }
+
+    private ImmutableBytesWritable getGaussianRandomRow() {
+      int r = (int) randomData.nextGaussian((double)totalRows / 2.0,
+          (double)totalRows / 10.0);
+      return format(r, key);
+    }
+
+  }
+
+  /**
+   * @param args
+   * @throws Exception
+   * @throws IOException
+   */
+  public static void main(String[] args) throws Exception {
+    new MapFilePerformanceEvaluation(HBaseConfiguration.create()).
+      runBenchmarks();
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java b/0.90/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
new file mode 100644
index 0000000..9ad3697
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
@@ -0,0 +1,669 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.security.PrivilegedAction;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * This class creates a single process HBase cluster, running one thread for
+ * each server.  The master uses the 'default' FileSystem.  The RegionServers,
+ * if we are running on DistributedFilesystem, create a FileSystem instance
+ * each and will close down their instance on the way out.
+ */
+public class MiniHBaseCluster {
+  static final Log LOG = LogFactory.getLog(MiniHBaseCluster.class.getName());
+  private Configuration conf;
+  public LocalHBaseCluster hbaseCluster;
+  private static int index;
+
+  /**
+   * Start a MiniHBaseCluster.
+   * @param conf Configuration to be used for cluster
+   * @param numRegionServers initial number of region servers to start.
+   * @throws IOException
+   */
+  public MiniHBaseCluster(Configuration conf, int numRegionServers)
+  throws IOException, InterruptedException {
+    this(conf, 1, numRegionServers);
+  }
+
+  /**
+   * Start a MiniHBaseCluster.
+   * @param conf Configuration to be used for cluster
+   * @param numMasters initial number of masters to start.
+   * @param numRegionServers initial number of region servers to start.
+   * @throws IOException
+   */
+  public MiniHBaseCluster(Configuration conf, int numMasters,
+      int numRegionServers)
+  throws IOException, InterruptedException {
+    this.conf = conf;
+    conf.set(HConstants.MASTER_PORT, "0");
+    init(numMasters, numRegionServers);
+  }
+
+  public Configuration getConfiguration() {
+    return this.conf;
+  }
+
+  /**
+   * Override Master so we can inject behaviors for testing.
+   */
+  public static class MiniHBaseClusterMaster extends HMaster {
+    private final Map<HServerInfo, List<HMsg>> messages =
+      new ConcurrentHashMap<HServerInfo, List<HMsg>>();
+
+    private final Map<HServerInfo, IOException> exceptions =
+      new ConcurrentHashMap<HServerInfo, IOException>();
+
+    public MiniHBaseClusterMaster(final Configuration conf)
+    throws IOException, KeeperException, InterruptedException {
+      super(conf);
+    }
+
+    /**
+     * Add a message to send to a regionserver next time it checks in.
+     * @param hsi RegionServer's HServerInfo.
+     * @param msg Message to add.
+     */
+    void addMessage(final HServerInfo hsi, HMsg msg) {
+      synchronized(this.messages) {
+        List<HMsg> hmsgs = this.messages.get(hsi);
+        if (hmsgs == null) {
+          hmsgs = new ArrayList<HMsg>();
+          this.messages.put(hsi, hmsgs);
+        }
+        hmsgs.add(msg);
+      }
+    }
+
+    void addException(final HServerInfo hsi, final IOException ex) {
+      this.exceptions.put(hsi, ex);
+    }
+
+    /**
+     * This implementation is special: queued exceptions are handled first,
+     * and if one is pending, no messages are sent back to the region server
+     * even if some are queued.
+     * @param hsi the region server's HServerInfo
+     * @param msgs Messages the master would normally send back
+     * @return the messages to send, including any queued test messages
+     * @throws IOException if an exception was queued for this region server
+     */
+    @Override
+    protected HMsg[] adornRegionServerAnswer(final HServerInfo hsi,
+        final HMsg[] msgs) throws IOException {
+      IOException ex = this.exceptions.remove(hsi);
+      if (ex != null) {
+        throw ex;
+      }
+      HMsg [] answerMsgs = msgs;
+      synchronized (this.messages) {
+        List<HMsg> hmsgs = this.messages.get(hsi);
+        if (hmsgs != null && !hmsgs.isEmpty()) {
+          int size = answerMsgs.length;
+          HMsg [] newAnswerMsgs = new HMsg[size + hmsgs.size()];
+          System.arraycopy(answerMsgs, 0, newAnswerMsgs, 0, answerMsgs.length);
+          for (int i = 0; i < hmsgs.size(); i++) {
+            newAnswerMsgs[answerMsgs.length + i] = hmsgs.get(i);
+          }
+          answerMsgs = newAnswerMsgs;
+          hmsgs.clear();
+        }
+      }
+      return super.adornRegionServerAnswer(hsi, answerMsgs);
+    }
+  }
+
+  /**
+   * Subclass so we can get at protected methods (none at the moment).  Also
+   * creates a FileSystem instance per instantiation and adds a shutdown hook
+   * that closes its own FileSystem on the way out; it shuts down its own
+   * filesystem only, not all filesystems as the FileSystem exit hook does.
+   */
+  public static class MiniHBaseClusterRegionServer extends HRegionServer {
+    private Thread shutdownThread = null;
+    private User user = null;
+
+    public MiniHBaseClusterRegionServer(Configuration conf)
+        throws IOException, InterruptedException {
+      super(conf);
+      this.user = User.getCurrent();
+    }
+
+    public void setHServerInfo(final HServerInfo hsi) {
+      this.serverInfo = hsi;
+    }
+
+    @Override
+    protected void handleReportForDutyResponse(MapWritable c) throws IOException {
+      super.handleReportForDutyResponse(c);
+      // Run this thread to shutdown our filesystem on way out.
+      this.shutdownThread = new SingleFileSystemShutdownThread(getFileSystem());
+    }
+
+    @Override
+    public void run() {
+      try {
+        this.user.runAs(new PrivilegedAction<Object>(){
+          public Object run() {
+            runRegionServer();
+            return null;
+          }
+        });
+      } catch (Throwable t) {
+        LOG.error("Exception in run", t);
+      } finally {
+        // Run this on the way out.
+        if (this.shutdownThread != null) {
+          this.shutdownThread.start();
+          Threads.shutdown(this.shutdownThread, 30000);
+        }
+      }
+    }
+
+    private void runRegionServer() {
+      super.run();
+    }
+
+    @Override
+    public void kill() {
+      super.kill();
+    }
+
+    public void abort(final String reason, final Throwable cause) {
+      this.user.runAs(new PrivilegedAction<Object>() {
+        public Object run() {
+          abortRegionServer(reason, cause);
+          return null;
+        }
+      });
+    }
+
+    private void abortRegionServer(String reason, Throwable cause) {
+      super.abort(reason, cause);
+    }
+  }
+
+  /**
+   * Alternate shutdown hook.
+   * Just shuts down the passed fs, not all as default filesystem hook does.
+   */
+  static class SingleFileSystemShutdownThread extends Thread {
+    private final FileSystem fs;
+    SingleFileSystemShutdownThread(final FileSystem fs) {
+      super("Shutdown of " + fs);
+      this.fs = fs;
+    }
+    @Override
+    public void run() {
+      try {
+        LOG.info("Hook closing fs=" + this.fs);
+        this.fs.close();
+      } catch (NullPointerException npe) {
+        LOG.debug("Need to fix these: " + npe.toString());
+      } catch (IOException e) {
+        LOG.warn("Running hook", e);
+      }
+    }
+  }
+
+  private void init(final int nMasterNodes, final int nRegionNodes)
+  throws IOException, InterruptedException {
+    try {
+      // start up a LocalHBaseCluster
+      hbaseCluster = new LocalHBaseCluster(conf, nMasterNodes, 0,
+          MiniHBaseCluster.MiniHBaseClusterMaster.class,
+          MiniHBaseCluster.MiniHBaseClusterRegionServer.class);
+
+      // manually add the regionservers as other users
+      for (int i=0; i<nRegionNodes; i++) {
+        Configuration rsConf = HBaseConfiguration.create(conf);
+        User user = HBaseTestingUtility.getDifferentUser(rsConf,
+            ".hfs."+index++);
+        hbaseCluster.addRegionServer(rsConf, i, user);
+      }
+
+      hbaseCluster.startup();
+    } catch (IOException e) {
+      shutdown();
+      throw e;
+    } catch (Throwable t) {
+      LOG.error("Error starting cluster", t);
+      shutdown();
+      throw new IOException("Shutting down", t);
+    }
+  }
+
+  /**
+   * Starts a region server thread running
+   *
+   * @throws IOException
+   * @return New RegionServerThread
+   */
+  public JVMClusterUtil.RegionServerThread startRegionServer()
+      throws IOException {
+    final Configuration newConf = HBaseConfiguration.create(conf);
+    User rsUser =
+        HBaseTestingUtility.getDifferentUser(newConf, ".hfs."+index++);
+    JVMClusterUtil.RegionServerThread t =  null;
+    try {
+      t = hbaseCluster.addRegionServer(
+          newConf, hbaseCluster.getRegionServers().size(), rsUser);
+      t.start();
+      t.waitForServerOnline();
+    } catch (InterruptedException ie) {
+      throw new IOException("Interrupted executing UserGroupInformation.doAs()", ie);
+    }
+    return t;
+  }
+
+  /**
+   * Cause a region server to exit doing basic clean up only on its way out.
+   * @param serverNumber  Used as index into a list.
+   */
+  public String abortRegionServer(int serverNumber) {
+    HRegionServer server = getRegionServer(serverNumber);
+    LOG.info("Aborting " + server.toString());
+    server.abort("Aborting for tests", new Exception("Trace info"));
+    return server.toString();
+  }
+
+  /**
+   * Shut down the specified region server cleanly
+   *
+   * @param serverNumber  Used as index into a list.
+   * @return the region server that was stopped
+   */
+  public JVMClusterUtil.RegionServerThread stopRegionServer(int serverNumber) {
+    return stopRegionServer(serverNumber, true);
+  }
+
+  /**
+   * Shut down the specified region server cleanly
+   *
+   * @param serverNumber  Used as index into a list.
+   * @param shutdownFS True if we are to shut down the filesystem as part of this
+   * regionserver's shutdown.  Usually we do, but you do not want to do this if
+   * you are running multiple regionservers in a test and you shut down one
+   * before the end of the test.
+   * @return the region server that was stopped
+   */
+  public JVMClusterUtil.RegionServerThread stopRegionServer(int serverNumber,
+      final boolean shutdownFS) {
+    JVMClusterUtil.RegionServerThread server =
+      hbaseCluster.getRegionServers().get(serverNumber);
+    LOG.info("Stopping " + server.toString());
+    server.getRegionServer().stop("Stopping rs " + serverNumber);
+    return server;
+  }
+
+  /**
+   * Wait for the specified region server to stop. Removes this thread from list
+   * of running threads.
+   * @param serverNumber
+   * @return Name of region server that just went down.
+   */
+  public String waitOnRegionServer(final int serverNumber) {
+    return this.hbaseCluster.waitOnRegionServer(serverNumber);
+  }
+
+
+  /**
+   * Starts a master thread running
+   *
+   * @throws IOException
+   * @return New MasterThread
+   */
+  public JVMClusterUtil.MasterThread startMaster() throws IOException {
+    Configuration c = HBaseConfiguration.create(conf);
+    User user =
+        HBaseTestingUtility.getDifferentUser(c, ".hfs."+index++);
+
+    JVMClusterUtil.MasterThread t = null;
+    try {
+      t = hbaseCluster.addMaster(c, hbaseCluster.getMasters().size(), user);
+      t.start();
+      t.waitForServerOnline();
+    } catch (InterruptedException ie) {
+      throw new IOException("Interrupted executing UserGroupInformation.doAs()", ie);
+    }
+    return t;
+  }
+
+  /**
+   * @return Returns the rpc address actually used by the currently active
+   * master server, because the supplied port is not necessarily the actual port
+   * used.
+   */
+  public HServerAddress getHMasterAddress() {
+    return this.hbaseCluster.getActiveMaster().getMasterAddress();
+  }
+
+  /**
+   * Returns the current active master, if available.
+   * @return the active HMaster, null if none is active.
+   */
+  public HMaster getMaster() {
+    return this.hbaseCluster.getActiveMaster();
+  }
+
+  /**
+   * Returns the master at the specified index, if available.
+   * @return the HMaster at the specified index, if available.
+   */
+  public HMaster getMaster(final int serverNumber) {
+    return this.hbaseCluster.getMaster(serverNumber);
+  }
+
+  /**
+   * Cause a master to exit without shutting down the entire cluster.
+   * @param serverNumber  Used as index into a list.
+   */
+  public String abortMaster(int serverNumber) {
+    HMaster server = getMaster(serverNumber);
+    LOG.info("Aborting " + server.toString());
+    server.abort("Aborting for tests", new Exception("Trace info"));
+    return server.toString();
+  }
+
+  /**
+   * Shut down the specified master cleanly
+   *
+   * @param serverNumber  Used as index into a list.
+   * @return the master that was stopped
+   */
+  public JVMClusterUtil.MasterThread stopMaster(int serverNumber) {
+    return stopMaster(serverNumber, true);
+  }
+
+  /**
+   * Shut down the specified master cleanly
+   *
+   * @param serverNumber  Used as index into a list.
+   * @param shutdownFS True if we are to shut down the filesystem as part of this
+   * master's shutdown.  Usually we do, but you do not want to do this if
+   * you are running multiple masters in a test and you shut down one
+   * before the end of the test.
+   * @return the master that was stopped
+   */
+  public JVMClusterUtil.MasterThread stopMaster(int serverNumber,
+      final boolean shutdownFS) {
+    JVMClusterUtil.MasterThread server =
+      hbaseCluster.getMasters().get(serverNumber);
+    LOG.info("Stopping " + server.toString());
+    server.getMaster().stop("Stopping master " + serverNumber);
+    return server;
+  }
+
+  /**
+   * Wait for the specified master to stop. Removes this thread from list
+   * of running threads.
+   * @param serverNumber
+   * @return Name of master that just went down.
+   */
+  public String waitOnMaster(final int serverNumber) {
+    return this.hbaseCluster.waitOnMaster(serverNumber);
+  }
+
+  /**
+   * Blocks until there is an active master and that master has completed
+   * initialization.
+   *
+   * @return true if an active master becomes available.  false if there are no
+   *         masters left.
+   * @throws InterruptedException
+   */
+  public boolean waitForActiveAndReadyMaster() throws InterruptedException {
+    List<JVMClusterUtil.MasterThread> mts;
+    while ((mts = getMasterThreads()).size() > 0) {
+      for (JVMClusterUtil.MasterThread mt : mts) {
+        if (mt.getMaster().isActiveMaster() && mt.getMaster().isInitialized()) {
+          return true;
+        }
+      }
+      Thread.sleep(200);
+    }
+    return false;
+  }
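+  // A minimal sketch of exercising master failover with the methods above
+  // (assumes a test holding a MiniHBaseCluster reference named "cluster"):
+  //   cluster.abortMaster(0);                        // take down a master
+  //   cluster.startMaster();                         // start a replacement
+  //   assertTrue(cluster.waitForActiveAndReadyMaster());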
+
+  /**
+   * @return List of master threads.
+   */
+  public List<JVMClusterUtil.MasterThread> getMasterThreads() {
+    return this.hbaseCluster.getMasters();
+  }
+
+  /**
+   * @return List of live master threads (skips the aborted and the killed)
+   */
+  public List<JVMClusterUtil.MasterThread> getLiveMasterThreads() {
+    return this.hbaseCluster.getLiveMasters();
+  }
+
+  /**
+   * Wait for Mini HBase Cluster to shut down.
+   */
+  public void join() {
+    this.hbaseCluster.join();
+  }
+
+  /**
+   * Shut down the mini HBase cluster
+   * @throws IOException
+   */
+  public void shutdown() throws IOException {
+    if (this.hbaseCluster != null) {
+      this.hbaseCluster.shutdown();
+    }
+    HConnectionManager.deleteAllConnections(false);
+  }
+
+  /**
+   * Call flushCache on all regions on all participating regionservers.
+   * @throws IOException
+   */
+  public void flushcache() throws IOException {
+    for (JVMClusterUtil.RegionServerThread t:
+        this.hbaseCluster.getRegionServers()) {
+      for(HRegion r: t.getRegionServer().getOnlineRegionsLocalContext()) {
+        r.flushcache();
+      }
+    }
+  }
+
+  /**
+   * Call flushCache on all regions of the specified table.
+   * @throws IOException
+   */
+  public void flushcache(byte [] tableName) throws IOException {
+    for (JVMClusterUtil.RegionServerThread t:
+        this.hbaseCluster.getRegionServers()) {
+      for(HRegion r: t.getRegionServer().getOnlineRegionsLocalContext()) {
+        if(Bytes.equals(r.getTableDesc().getName(), tableName)) {
+          r.flushcache();
+        }
+      }
+    }
+  }
+
+  /**
+   * @return List of region server threads.
+   */
+  public List<JVMClusterUtil.RegionServerThread> getRegionServerThreads() {
+    return this.hbaseCluster.getRegionServers();
+  }
+
+  /**
+   * @return List of live region server threads (skips the aborted and the killed)
+   */
+  public List<JVMClusterUtil.RegionServerThread> getLiveRegionServerThreads() {
+    return this.hbaseCluster.getLiveRegionServers();
+  }
+
+  /**
+   * Grab a numbered region server of your choice.
+   * @param serverNumber
+   * @return region server
+   */
+  public HRegionServer getRegionServer(int serverNumber) {
+    return hbaseCluster.getRegionServer(serverNumber);
+  }
+
+  public List<HRegion> getRegions(byte[] tableName) {
+    List<HRegion> ret = new ArrayList<HRegion>();
+    for (JVMClusterUtil.RegionServerThread rst : getRegionServerThreads()) {
+      HRegionServer hrs = rst.getRegionServer();
+      for (HRegion region : hrs.getOnlineRegionsLocalContext()) {
+        if (Bytes.equals(region.getTableDesc().getName(), tableName)) {
+          ret.add(region);
+        }
+      }
+    }
+    return ret;
+  }
+
+  /**
+   * @return Index into List of {@link MiniHBaseCluster#getRegionServerThreads()}
+   * of HRS carrying the .META. region. Returns -1 if none found.
+   */
+  public int getServerWithMeta() {
+    return getServerWith(HRegionInfo.FIRST_META_REGIONINFO.getRegionName());
+  }
+
+  /**
+   * Get the location of the specified region
+   * @param regionName Name of the region in bytes
+   * @return Index into List of {@link MiniHBaseCluster#getRegionServerThreads()}
+   * of HRS carrying the specified region. Returns -1 if none found.
+   */
+  public int getServerWith(byte[] regionName) {
+    int index = -1;
+    int count = 0;
+    for (JVMClusterUtil.RegionServerThread rst: getRegionServerThreads()) {
+      HRegionServer hrs = rst.getRegionServer();
+      HRegion metaRegion =
+        hrs.getOnlineRegion(regionName);
+      if (metaRegion != null) {
+        index = count;
+        break;
+      }
+      count++;
+    }
+    return index;
+  }
+
+  /**
+   * Add an exception to send when a region server checks back in
+   * @param serverNumber Which server to send it to
+   * @param ex The exception that will be sent
+   * @throws IOException
+   */
+  public void addExceptionToSendRegionServer(final int serverNumber,
+      IOException ex) throws IOException {
+    MiniHBaseClusterRegionServer hrs =
+      (MiniHBaseClusterRegionServer)getRegionServer(serverNumber);
+    addExceptionToSendRegionServer(hrs, ex);
+  }
+
+  /**
+   * Add an exception to send when a region server checks back in
+   * @param hrs Which server to send it to
+   * @param ex The exception that will be sent
+   * @throws IOException
+   */
+  public void addExceptionToSendRegionServer(
+      final MiniHBaseClusterRegionServer hrs, IOException ex)
+      throws IOException {
+    ((MiniHBaseClusterMaster)getMaster()).addException(hrs.getHServerInfo(),ex);
+  }
+
+  /**
+   * Add a message to include in the responses sent to a regionserver when it
+   * checks back in.
+   * @param serverNumber Which server to send it to.
+   * @param msg The message to send.
+   * @throws IOException
+   */
+  public void addMessageToSendRegionServer(final int serverNumber,
+    final HMsg msg)
+  throws IOException {
+    MiniHBaseClusterRegionServer hrs =
+      (MiniHBaseClusterRegionServer)getRegionServer(serverNumber);
+    addMessageToSendRegionServer(hrs, msg);
+  }
+
+  /**
+   * Add a message to include in the responses sent to a regionserver when it
+   * checks back in.
+   * @param hrs Which region server.
+   * @param msg The message to send.
+   * @throws IOException
+   */
+  public void addMessageToSendRegionServer(final MiniHBaseClusterRegionServer hrs,
+    final HMsg msg)
+  throws IOException {
+    ((MiniHBaseClusterMaster)getMaster()).addMessage(hrs.getHServerInfo(), msg);
+  }
+
+  /**
+   * Counts the total number of regions being served by the currently online
+   * region servers by asking each how many regions they have.  Does not look
+   * at META at all.  Count includes catalog tables.
+   * @return number of regions being served by all region servers
+   */
+  public long countServedRegions() {
+    long count = 0;
+    for (JVMClusterUtil.RegionServerThread rst : getLiveRegionServerThreads()) {
+      count += rst.getRegionServer().getNumberOfOnlineRegions();
+    }
+    return count;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/MultiRegionTable.java b/0.90/src/test/java/org/apache/hadoop/hbase/MultiRegionTable.java
new file mode 100644
index 0000000..b8ad4c7
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/MultiRegionTable.java
@@ -0,0 +1,118 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Utility class to build a table of multiple regions.
+ */
+public class MultiRegionTable extends HBaseClusterTestCase {
+  protected static final byte [][] KEYS = {
+    HConstants.EMPTY_BYTE_ARRAY,
+    Bytes.toBytes("bbb"),
+    Bytes.toBytes("ccc"),
+    Bytes.toBytes("ddd"),
+    Bytes.toBytes("eee"),
+    Bytes.toBytes("fff"),
+    Bytes.toBytes("ggg"),
+    Bytes.toBytes("hhh"),
+    Bytes.toBytes("iii"),
+    Bytes.toBytes("jjj"),
+    Bytes.toBytes("kkk"),
+    Bytes.toBytes("lll"),
+    Bytes.toBytes("mmm"),
+    Bytes.toBytes("nnn"),
+    Bytes.toBytes("ooo"),
+    Bytes.toBytes("ppp"),
+    Bytes.toBytes("qqq"),
+    Bytes.toBytes("rrr"),
+    Bytes.toBytes("sss"),
+    Bytes.toBytes("ttt"),
+    Bytes.toBytes("uuu"),
+    Bytes.toBytes("vvv"),
+    Bytes.toBytes("www"),
+    Bytes.toBytes("xxx"),
+    Bytes.toBytes("yyy")
+  };
+
+  protected final byte [] columnFamily;
+  protected HTableDescriptor desc;
+
+  /**
+   * @param familyName the family to populate.
+   */
+  public MultiRegionTable(final String familyName) {
+    this(1, familyName);
+  }
+
+  public MultiRegionTable(int nServers, final String familyName) {
+    super(nServers);
+
+    this.columnFamily = Bytes.toBytes(familyName);
+    // These are needed for the new and improved Map/Reduce framework
+    System.setProperty("hadoop.log.dir", conf.get("hadoop.log.dir"));
+    conf.set("mapred.output.dir", conf.get("hadoop.tmp.dir"));
+  }
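+
+  // Typical use (sketch; the subclass, table, and family names are hypothetical):
+  //   public class TestMyJob extends MultiRegionTable {
+  //     public TestMyJob() {
+  //       super("contents");                        // family to populate
+  //       desc = new HTableDescriptor("mytable");   // subclasses supply the descriptor
+  //       desc.addFamily(new HColumnDescriptor("contents"));
+  //     }
+  //   }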
+
+  /**
+   * Run after dfs is ready but before hbase cluster is started up.
+   */
+  @Override
+  protected void preHBaseClusterSetup() throws Exception {
+    try {
+      // Create a bunch of regions
+      HRegion[] regions = new HRegion[KEYS.length];
+      for (int i = 0; i < regions.length; i++) {
+        int j = (i + 1) % regions.length;
+        regions[i] = createARegion(KEYS[i], KEYS[j]);
+      }
+
+      // Now create the root and meta regions and insert the data regions
+      // created above into the meta
+
+      createRootAndMetaRegions();
+
+      for(int i = 0; i < regions.length; i++) {
+        HRegion.addRegionToMETA(meta, regions[i]);
+      }
+
+      closeRootAndMeta();
+    } catch (Exception e) {
+      shutdownDfs(dfsCluster);
+      throw e;
+    }
+  }
+
+  private HRegion createARegion(byte [] startKey, byte [] endKey) throws IOException {
+    HRegion region = createNewHRegion(desc, startKey, endKey);
+    addContent(region, this.columnFamily);
+    closeRegionAndDeleteLog(region);
+    return region;
+  }
+
+  private void closeRegionAndDeleteLog(HRegion region) throws IOException {
+    region.close();
+    region.getLog().closeAndDelete();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/MultithreadedTestUtil.java b/0.90/src/test/java/org/apache/hadoop/hbase/MultithreadedTestUtil.java
new file mode 100644
index 0000000..6675fac
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/MultithreadedTestUtil.java
@@ -0,0 +1,145 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.util.Set;
+import java.util.HashSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+
+public abstract class MultithreadedTestUtil {
+
+  public static final Log LOG =
+    LogFactory.getLog(MultithreadedTestUtil.class);
+
+  public static class TestContext {
+    private final Configuration conf;
+    private Throwable err = null;
+    private boolean stopped = false;
+    private int threadDoneCount = 0;
+    private Set<TestThread> testThreads = new HashSet<TestThread>();
+
+    public TestContext(Configuration configuration) {
+      this.conf = configuration;
+    }
+
+    protected Configuration getConf() {
+      return conf;
+    }
+
+    public synchronized boolean shouldRun()  {
+      return !stopped && err == null;
+    }
+
+    public void addThread(TestThread t) {
+      testThreads.add(t);
+    }
+
+    public void startThreads() {
+      for (TestThread t : testThreads) {
+        t.start();
+      }
+    }
+
+    public void waitFor(long millis) throws Exception {
+      long endTime = System.currentTimeMillis() + millis;
+      while (!stopped) {
+        long left = endTime - System.currentTimeMillis();
+        if (left <= 0) break;
+        synchronized (this) {
+          checkException();
+          wait(left);
+        }
+      }
+    }
+    private synchronized void checkException() throws Exception {
+      if (err != null) {
+        throw new RuntimeException("Deferred", err);
+      }
+    }
+
+    public synchronized void threadFailed(Throwable t) {
+      if (err == null) err = t;
+      LOG.error("Failed!", err);
+      notify();
+    }
+
+    public synchronized void threadDone() {
+      threadDoneCount++;
+    }
+
+    public void stop() throws Exception {
+      synchronized (this) {
+        stopped = true;
+      }
+      for (TestThread t : testThreads) {
+        t.join();
+      }
+      checkException();
+    }
+  }
+
+  /**
+   * A thread that can be added to a test context, and properly
+   * passes exceptions through.
+   */
+  public static abstract class TestThread extends Thread {
+    protected final TestContext ctx;
+    protected boolean stopped;
+
+    public TestThread(TestContext ctx) {
+      this.ctx = ctx;
+    }
+
+    public void run() {
+      try {
+        doWork();
+      } catch (Throwable t) {
+        ctx.threadFailed(t);
+      }
+      ctx.threadDone();
+    }
+
+    public abstract void doWork() throws Exception;
+
+    protected void stopTestThread() {
+      this.stopped = true;
+    }
+  }
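+
+  // A minimal usage sketch of the classes above ("conf" and the action body
+  // are placeholders):
+  //   TestContext ctx = new TestContext(conf);
+  //   ctx.addThread(new RepeatingTestThread(ctx) {
+  //     public void doAnAction() throws Exception {
+  //       // repeated unit of work under test
+  //     }
+  //   });
+  //   ctx.startThreads();
+  //   ctx.waitFor(10000);  // run ~10s; rethrows any failure a thread reported
+  //   ctx.stop();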
+  
+  /**
+   * A test thread that performs a repeating operation.
+   */
+  public static abstract class RepeatingTestThread extends TestThread {
+    public RepeatingTestThread(TestContext ctx) {
+      super(ctx);
+    }
+    
+    public final void doWork() throws Exception {
+      while (ctx.shouldRun() && !stopped) {
+        doAnAction();
+      }
+    }
+    
+    public abstract void doAnAction() throws Exception;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java b/0.90/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
new file mode 100644
index 0000000..3982eff
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
@@ -0,0 +1,1287 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.io.File;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.TreeMap;
+import java.util.Arrays;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.lang.reflect.Constructor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.PageFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.Hash;
+import org.apache.hadoop.hbase.util.MurmurHash;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;
+import org.apache.hadoop.util.LineReader;
+
+/**
+ * Script used to evaluate HBase performance and scalability.  Runs an HBase
+ * client that steps through one of a set of hardcoded tests or 'experiments'
+ * (e.g. a random reads test, a random writes test, etc.). Pass on the
+ * command-line which test to run and how many clients are participating in
+ * this experiment. Run <code>java PerformanceEvaluation --help</code> to
+ * obtain usage.
+ *
+ * <p>This class sets up and runs the evaluation programs described in
+ * Section 7, <i>Performance Evaluation</i>, of the <a
+ * href="http://labs.google.com/papers/bigtable.html">Bigtable</a>
+ * paper, pages 8-10.
+ *
+ * <p>If the number of clients is > 1, we start up a MapReduce job. Each map task
+ * runs an individual client. Each client does about 1GB of data.
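+ *
+ * <p>For example, a single-client run of the sequential write test over ten
+ * thousand rows might be launched as follows (illustrative; see
+ * <code>--help</code> for the authoritative usage):
+ * <pre>
+ *   java PerformanceEvaluation --rows=10000 sequentialWrite 1
+ * </pre>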
+ */
+public class PerformanceEvaluation {
+  protected static final Log LOG = LogFactory.getLog(PerformanceEvaluation.class.getName());
+
+  private static final int ROW_LENGTH = 1000;
+  private static final int ONE_GB = 1024 * 1024 * 1000;
+  private static final int ROWS_PER_GB = ONE_GB / ROW_LENGTH;
+
+  public static final byte[] TABLE_NAME = Bytes.toBytes("TestTable");
+  public static final byte[] FAMILY_NAME = Bytes.toBytes("info");
+  public static final byte[] QUALIFIER_NAME = Bytes.toBytes("data");
+
+  protected static final HTableDescriptor TABLE_DESCRIPTOR;
+  static {
+    TABLE_DESCRIPTOR = new HTableDescriptor(TABLE_NAME);
+    TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(FAMILY_NAME));
+  }
+
+  protected Map<String, CmdDescriptor> commands = new TreeMap<String, CmdDescriptor>();
+
+  volatile Configuration conf;
+  private boolean miniCluster = false;
+  private boolean nomapred = false;
+  private int N = 1;
+  private int R = ROWS_PER_GB;
+  private boolean flushCommits = true;
+  private boolean writeToWAL = true;
+
+  private static final Path PERF_EVAL_DIR = new Path("performance_evaluation");
+  /**
+   * Regex to parse lines in input file passed to mapreduce task.
+   */
+  public static final Pattern LINE_PATTERN =
+    Pattern.compile("startRow=(\\d+),\\s+" +
+        "perClientRunRows=(\\d+),\\s+" +
+        "totalRows=(\\d+),\\s+" +
+        "clients=(\\d+),\\s+" +
+        "flushCommits=(\\w+),\\s+" +
+        "writeToWAL=(\\w+)");
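+  // Lines in that file come from writeInputFile() below; a typical line
+  // (values illustrative) looks like:
+  //   startRow=0, perClientRunRows=104857, totalRows=1048576, clients=1, flushCommits=true, writeToWAL=true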
+
+  /**
+   * Enum for map metrics.  Keep it out here rather than inside the Map
+   * inner-class so we can find associated properties.
+   */
+  protected static enum Counter {
+    /** elapsed time */
+    ELAPSED_TIME,
+    /** number of rows */
+    ROWS}
+
+
+  /**
+   * Constructor
+   * @param c Configuration object
+   */
+  public PerformanceEvaluation(final Configuration c) {
+    this.conf = c;
+
+    addCommandDescriptor(RandomReadTest.class, "randomRead",
+        "Run random read test");
+    addCommandDescriptor(RandomSeekScanTest.class, "randomSeekScan",
+        "Run random seek and scan 100 test");
+    addCommandDescriptor(RandomScanWithRange10Test.class, "scanRange10",
+        "Run random seek scan with both start and stop row (max 10 rows)");
+    addCommandDescriptor(RandomScanWithRange100Test.class, "scanRange100",
+        "Run random seek scan with both start and stop row (max 100 rows)");
+    addCommandDescriptor(RandomScanWithRange1000Test.class, "scanRange1000",
+        "Run random seek scan with both start and stop row (max 1000 rows)");
+    addCommandDescriptor(RandomScanWithRange10000Test.class, "scanRange10000",
+        "Run random seek scan with both start and stop row (max 10000 rows)");
+    addCommandDescriptor(RandomWriteTest.class, "randomWrite",
+        "Run random write test");
+    addCommandDescriptor(SequentialReadTest.class, "sequentialRead",
+        "Run sequential read test");
+    addCommandDescriptor(SequentialWriteTest.class, "sequentialWrite",
+        "Run sequential write test");
+    addCommandDescriptor(ScanTest.class, "scan",
+        "Run scan test (read every row)");
+    addCommandDescriptor(FilteredScanTest.class, "filterScan",
+        "Run scan test using a filter to find a specific row based on its value (make sure to use --rows=20)");
+  }
+
+  protected void addCommandDescriptor(Class<? extends Test> cmdClass,
+      String name, String description) {
+    CmdDescriptor cmdDescriptor =
+      new CmdDescriptor(cmdClass, name, description);
+    commands.put(name, cmdDescriptor);
+  }
+
+  /**
+   * Implementations can have their status set.
+   */
+  static interface Status {
+    /**
+     * Sets status
+     * @param msg status message
+     * @throws IOException
+     */
+    void setStatus(final String msg) throws IOException;
+  }
+
+  /**
+   *  This class works as the InputSplit of the Performance Evaluation
+   *  MapReduce InputFormat, and as the record value of its RecordReader.
+   *  Each map task reads only one record from a PeInputSplit;
+   *  the record value is the PeInputSplit itself.
+   */
+  public static class PeInputSplit extends InputSplit implements Writable {
+    private int startRow = 0;
+    private int rows = 0;
+    private int totalRows = 0;
+    private int clients = 0;
+    private boolean flushCommits = false;
+    private boolean writeToWAL = true;
+
+    public PeInputSplit() {
+      this.startRow = 0;
+      this.rows = 0;
+      this.totalRows = 0;
+      this.clients = 0;
+      this.flushCommits = false;
+      this.writeToWAL = true;
+    }
+
+    public PeInputSplit(int startRow, int rows, int totalRows, int clients,
+        boolean flushCommits, boolean writeToWAL) {
+      this.startRow = startRow;
+      this.rows = rows;
+      this.totalRows = totalRows;
+      this.clients = clients;
+      this.flushCommits = flushCommits;
+      this.writeToWAL = writeToWAL;
+    }
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      this.startRow = in.readInt();
+      this.rows = in.readInt();
+      this.totalRows = in.readInt();
+      this.clients = in.readInt();
+      this.flushCommits = in.readBoolean();
+      this.writeToWAL = in.readBoolean();
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      out.writeInt(startRow);
+      out.writeInt(rows);
+      out.writeInt(totalRows);
+      out.writeInt(clients);
+      out.writeBoolean(flushCommits);
+      out.writeBoolean(writeToWAL);
+    }
+
+    @Override
+    public long getLength() throws IOException, InterruptedException {
+      return 0;
+    }
+
+    @Override
+    public String[] getLocations() throws IOException, InterruptedException {
+      return new String[0];
+    }
+
+    public int getStartRow() {
+      return startRow;
+    }
+
+    public int getRows() {
+      return rows;
+    }
+
+    public int getTotalRows() {
+      return totalRows;
+    }
+
+    public int getClients() {
+      return clients;
+    }
+
+    public boolean isFlushCommits() {
+      return flushCommits;
+    }
+
+    public boolean isWriteToWAL() {
+      return writeToWAL;
+    }
+  }
+
+  /**
+   *  InputFormat of the Performance Evaluation MapReduce job.
+   *  It extends FileInputFormat so that it can reuse methods such as setInputPaths().
+   */
+  public static class PeInputFormat extends FileInputFormat<NullWritable, PeInputSplit> {
+
+    @Override
+    public List<InputSplit> getSplits(JobContext job) throws IOException {
+      // generate splits
+      List<InputSplit> splitList = new ArrayList<InputSplit>();
+
+      for (FileStatus file: listStatus(job)) {
+        Path path = file.getPath();
+        FileSystem fs = path.getFileSystem(job.getConfiguration());
+        FSDataInputStream fileIn = fs.open(path);
+        LineReader in = new LineReader(fileIn, job.getConfiguration());
+        int lineLen = 0;
+        while(true) {
+          Text lineText = new Text();
+          lineLen = in.readLine(lineText);
+          if (lineLen <= 0) {
+            break;
+          }
+          Matcher m = LINE_PATTERN.matcher(lineText.toString());
+          if((m != null) && m.matches()) {
+            int startRow = Integer.parseInt(m.group(1));
+            int rows = Integer.parseInt(m.group(2));
+            int totalRows = Integer.parseInt(m.group(3));
+            int clients = Integer.parseInt(m.group(4));
+            boolean flushCommits = Boolean.parseBoolean(m.group(5));
+            boolean writeToWAL = Boolean.parseBoolean(m.group(6));
+
+            LOG.debug("split["+ splitList.size() + "] " +
+                     " startRow=" + startRow +
+                     " rows=" + rows +
+                     " totalRows=" + totalRows +
+                     " clients=" + clients +
+                     " flushCommits=" + flushCommits +
+                     " writeToWAL=" + writeToWAL);
+
+            PeInputSplit newSplit =
+              new PeInputSplit(startRow, rows, totalRows, clients,
+                flushCommits, writeToWAL);
+            splitList.add(newSplit);
+          }
+        }
+        in.close();
+      }
+
+      LOG.info("Total # of splits: " + splitList.size());
+      return splitList;
+    }
+
+    @Override
+    public RecordReader<NullWritable, PeInputSplit> createRecordReader(InputSplit split,
+                            TaskAttemptContext context) {
+      return new PeRecordReader();
+    }
+
+    public static class PeRecordReader extends RecordReader<NullWritable, PeInputSplit> {
+      private boolean readOver = false;
+      private PeInputSplit split = null;
+      private NullWritable key = null;
+      private PeInputSplit value = null;
+
+      @Override
+      public void initialize(InputSplit split, TaskAttemptContext context)
+                  throws IOException, InterruptedException {
+        this.readOver = false;
+        this.split = (PeInputSplit)split;
+      }
+
+      @Override
+      public boolean nextKeyValue() throws IOException, InterruptedException {
+        if(readOver) {
+          return false;
+        }
+
+        key = NullWritable.get();
+        value = (PeInputSplit)split;
+
+        readOver = true;
+        return true;
+      }
+
+      @Override
+      public NullWritable getCurrentKey() throws IOException, InterruptedException {
+        return key;
+      }
+
+      @Override
+      public PeInputSplit getCurrentValue() throws IOException, InterruptedException {
+        return value;
+      }
+
+      @Override
+      public float getProgress() throws IOException, InterruptedException {
+        if(readOver) {
+          return 1.0f;
+        } else {
+          return 0.0f;
+        }
+      }
+
+      @Override
+      public void close() throws IOException {
+        // do nothing
+      }
+    }
+  }
+
+  /**
+   * MapReduce job that runs a performance evaluation client in each map task.
+   */
+  public static class EvaluationMapTask
+      extends Mapper<NullWritable, PeInputSplit, LongWritable, LongWritable> {
+
+    /** configuration parameter name that contains the command */
+    public final static String CMD_KEY = "EvaluationMapTask.command";
+    /** configuration parameter name that contains the PE impl */
+    public static final String PE_KEY = "EvaluationMapTask.performanceEvalImpl";
+
+    private Class<? extends Test> cmd;
+    private PerformanceEvaluation pe;
+
+    @Override
+    protected void setup(Context context) throws IOException, InterruptedException {
+      this.cmd = forName(context.getConfiguration().get(CMD_KEY), Test.class);
+
+      // this is required so that extensions of PE are instantiated within the
+      // map reduce task...
+      Class<? extends PerformanceEvaluation> peClass =
+          forName(context.getConfiguration().get(PE_KEY), PerformanceEvaluation.class);
+      try {
+        this.pe = peClass.getConstructor(Configuration.class)
+            .newInstance(context.getConfiguration());
+      } catch (Exception e) {
+        throw new IllegalStateException("Could not instantiate PE instance", e);
+      }
+    }
+
+    private <Type> Class<? extends Type> forName(String className, Class<Type> type) {
+      Class<? extends Type> clazz = null;
+      try {
+        clazz = Class.forName(className).asSubclass(type);
+      } catch (ClassNotFoundException e) {
+        throw new IllegalStateException("Could not find class for name: " + className, e);
+      }
+      return clazz;
+    }
+
+    protected void map(NullWritable key, PeInputSplit value, final Context context)
+           throws IOException, InterruptedException {
+
+      Status status = new Status() {
+        public void setStatus(String msg) {
+           context.setStatus(msg);
+        }
+      };
+
+      // Evaluation task
+      long elapsedTime = this.pe.runOneClient(this.cmd, value.getStartRow(),
+                                  value.getRows(), value.getTotalRows(),
+                                  value.isFlushCommits(), value.isWriteToWAL(),
+                                  status);
+      // Collect how much time the thing took. Report as map output and
+      // to the ELAPSED_TIME counter.
+      context.getCounter(Counter.ELAPSED_TIME).increment(elapsedTime);
+      context.getCounter(Counter.ROWS).increment(value.rows);
+      context.write(new LongWritable(value.startRow), new LongWritable(elapsedTime));
+      context.progress();
+    }
+  }
+
+  /*
+   * If the table does not already exist, create it.
+   * @param admin Client to use for the check.
+   * @return True if we created the table.
+   * @throws IOException
+   */
+  private boolean checkTable(HBaseAdmin admin) throws IOException {
+    HTableDescriptor tableDescriptor = getTableDescriptor();
+    boolean tableExists = admin.tableExists(tableDescriptor.getName());
+    if (!tableExists) {
+      admin.createTable(tableDescriptor);
+      LOG.info("Table " + tableDescriptor + " created");
+    }
+    return !tableExists;
+  }
+
+  protected HTableDescriptor getTableDescriptor() {
+    return TABLE_DESCRIPTOR;
+  }
+
+  /*
+   * We're to run multiple clients concurrently.  Set up a mapreduce job.  Run
+   * one map per client.  Then run a single reduce to sum the elapsed times.
+   * @param cmd Command to run.
+   * @throws IOException
+   */
+  private void runNIsMoreThanOne(final Class<? extends Test> cmd)
+  throws IOException, InterruptedException, ClassNotFoundException {
+    checkTable(new HBaseAdmin(conf));
+    if (this.nomapred) {
+      doMultipleClients(cmd);
+    } else {
+      doMapReduce(cmd);
+    }
+  }
+
+  /*
+   * Run all clients in this vm each to its own thread.
+   * @param cmd Command to run.
+   * @throws IOException
+   */
+  private void doMultipleClients(final Class<? extends Test> cmd) throws IOException {
+    final List<Thread> threads = new ArrayList<Thread>(this.N);
+    final int perClientRows = R/N;
+    for (int i = 0; i < this.N; i++) {
+      Thread t = new Thread (Integer.toString(i)) {
+        @Override
+        public void run() {
+          super.run();
+          PerformanceEvaluation pe = new PerformanceEvaluation(conf);
+          int index = Integer.parseInt(getName());
+          try {
+            long elapsedTime = pe.runOneClient(cmd, index * perClientRows,
+               perClientRows, R,
+                flushCommits, writeToWAL, new Status() {
+                  public void setStatus(final String msg) throws IOException {
+                    LOG.info("client-" + getName() + " " + msg);
+                  }
+                });
+            LOG.info("Finished " + getName() + " in " + elapsedTime +
+              "ms writing " + perClientRows + " rows");
+          } catch (IOException e) {
+            throw new RuntimeException(e);
+          }
+        }
+      };
+      threads.add(t);
+    }
+    for (Thread t: threads) {
+      t.start();
+    }
+    for (Thread t: threads) {
+      while(t.isAlive()) {
+        try {
+          t.join();
+        } catch (InterruptedException e) {
+          LOG.debug("Interrupted, continuing: " + e.toString());
+        }
+      }
+    }
+  }
+
+  /*
+   * Run a mapreduce job.  Run as many maps as asked-for clients.
+   * Before we start up the job, write out an input file with an instruction
+   * per client regarding which row it is to start on.
+   * @param cmd Command to run.
+   * @throws IOException
+   */
+  private void doMapReduce(final Class<? extends Test> cmd) throws IOException,
+        InterruptedException, ClassNotFoundException {
+    Path inputDir = writeInputFile(this.conf);
+    this.conf.set(EvaluationMapTask.CMD_KEY, cmd.getName());
+    this.conf.set(EvaluationMapTask.PE_KEY, getClass().getName());
+    Job job = new Job(this.conf);
+    job.setJarByClass(PerformanceEvaluation.class);
+    job.setJobName("HBase Performance Evaluation");
+
+    job.setInputFormatClass(PeInputFormat.class);
+    PeInputFormat.setInputPaths(job, inputDir);
+
+    job.setOutputKeyClass(LongWritable.class);
+    job.setOutputValueClass(LongWritable.class);
+
+    job.setMapperClass(EvaluationMapTask.class);
+    job.setReducerClass(LongSumReducer.class);
+
+    job.setNumReduceTasks(1);
+
+    job.setOutputFormatClass(TextOutputFormat.class);
+    TextOutputFormat.setOutputPath(job, new Path(inputDir,"outputs"));
+
+    TableMapReduceUtil.addDependencyJars(job);
+    job.waitForCompletion(true);
+  }
+
+  /*
+   * Write input file of offsets-per-client for the mapreduce job.
+   * @param c Configuration
+   * @return Directory that contains file written.
+   * @throws IOException
+   */
+  private Path writeInputFile(final Configuration c) throws IOException {
+    FileSystem fs = FileSystem.get(c);
+    if (!fs.exists(PERF_EVAL_DIR)) {
+      fs.mkdirs(PERF_EVAL_DIR);
+    }
+    SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss");
+    Path subdir = new Path(PERF_EVAL_DIR, formatter.format(new Date()));
+    fs.mkdirs(subdir);
+    Path inputFile = new Path(subdir, "input.txt");
+    PrintStream out = new PrintStream(fs.create(inputFile));
+    // Make input random.
+    Map<Integer, String> m = new TreeMap<Integer, String>();
+    Hash h = MurmurHash.getInstance();
+    int perClientRows = (this.R / this.N);
+    try {
+      for (int i = 0; i < 10; i++) {
+        for (int j = 0; j < N; j++) {
+          String s = "startRow=" + ((j * perClientRows) + (i * (perClientRows/10))) +
+          ", perClientRunRows=" + (perClientRows / 10) +
+          ", totalRows=" + this.R +
+          ", clients=" + this.N +
+          ", flushCommits=" + this.flushCommits +
+          ", writeToWAL=" + this.writeToWAL;
+          int hash = h.hash(Bytes.toBytes(s));
+          m.put(hash, s);
+        }
+      }
+      for (Map.Entry<Integer, String> e: m.entrySet()) {
+        out.println(e.getValue());
+      }
+    } finally {
+      out.close();
+    }
+    return subdir;
+  }
+
+  /**
+   * Describes a command.
+   */
+  static class CmdDescriptor {
+    private Class<? extends Test> cmdClass;
+    private String name;
+    private String description;
+
+    CmdDescriptor(Class<? extends Test> cmdClass, String name, String description) {
+      this.cmdClass = cmdClass;
+      this.name = name;
+      this.description = description;
+    }
+
+    public Class<? extends Test> getCmdClass() {
+      return cmdClass;
+    }
+
+    public String getName() {
+      return name;
+    }
+
+    public String getDescription() {
+      return description;
+    }
+  }
+
+  /**
+   * Wraps up options passed to {@link org.apache.hadoop.hbase.PerformanceEvaluation.Test
+   * tests}.  This makes the reflection logic a little easier to understand...
+   */
+  static class TestOptions {
+    private int startRow;
+    private int perClientRunRows;
+    private int totalRows;
+    private byte[] tableName;
+    private boolean flushCommits;
+    private boolean writeToWAL = true;
+
+    TestOptions() {
+    }
+
+    TestOptions(int startRow, int perClientRunRows, int totalRows, byte[] tableName, boolean flushCommits, boolean writeToWAL) {
+      this.startRow = startRow;
+      this.perClientRunRows = perClientRunRows;
+      this.totalRows = totalRows;
+      this.tableName = tableName;
+      this.flushCommits = flushCommits;
+      this.writeToWAL = writeToWAL;
+    }
+
+    public int getStartRow() {
+      return startRow;
+    }
+
+    public int getPerClientRunRows() {
+      return perClientRunRows;
+    }
+
+    public int getTotalRows() {
+      return totalRows;
+    }
+
+    public byte[] getTableName() {
+      return tableName;
+    }
+
+    public boolean isFlushCommits() {
+      return flushCommits;
+    }
+
+    public boolean isWriteToWAL() {
+      return writeToWAL;
+    }
+  }
+
+  /*
+   * A test.
+   * Subclass to particularize what happens per row.
+   */
+  static abstract class Test {
+    // Below makes it so that when Tests are all running in the one
+    // JVM, they each have a differently seeded Random.
+    private static final Random randomSeed =
+      new Random(System.currentTimeMillis());
+    private static long nextRandomSeed() {
+      return randomSeed.nextLong();
+    }
+    protected final Random rand = new Random(nextRandomSeed());
+
+    protected final int startRow;
+    protected final int perClientRunRows;
+    protected final int totalRows;
+    private final Status status;
+    protected byte[] tableName;
+    protected HBaseAdmin admin;
+    protected HTable table;
+    protected volatile Configuration conf;
+    protected boolean flushCommits;
+    protected boolean writeToWAL;
+
+    /**
+     * Note that all subclasses of this class must provide a public constructor
+     * that has the exact same list of arguments.
+     */
+    Test(final Configuration conf, final TestOptions options, final Status status) {
+      super();
+      this.startRow = options.getStartRow();
+      this.perClientRunRows = options.getPerClientRunRows();
+      this.totalRows = options.getTotalRows();
+      this.status = status;
+      this.tableName = options.getTableName();
+      this.table = null;
+      this.conf = conf;
+      this.flushCommits = options.isFlushCommits();
+      this.writeToWAL = options.isWriteToWAL();
+    }
+
+    private String generateStatus(final int sr, final int i, final int lr) {
+      return sr + "/" + i + "/" + lr;
+    }
+
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 10;
+      return period == 0? this.perClientRunRows: period;
+    }
+
+    void testSetup() throws IOException {
+      this.admin = new HBaseAdmin(conf);
+      this.table = new HTable(conf, tableName);
+      this.table.setAutoFlush(false);
+      this.table.setScannerCaching(30);
+    }
+
+    void testTakedown()  throws IOException {
+      if (flushCommits) {
+        this.table.flushCommits();
+      }
+    }
+
+    /*
+     * Run test
+     * @return Elapsed time.
+     * @throws IOException
+     */
+    long test() throws IOException {
+      long elapsedTime;
+      testSetup();
+      long startTime = System.currentTimeMillis();
+      try {
+        testTimed();
+        elapsedTime = System.currentTimeMillis() - startTime;
+      } finally {
+        testTakedown();
+      }
+      return elapsedTime;
+    }
+
+    /**
+     * Provides an extension point for tests that don't want a per row invocation.
+     */
+    void testTimed() throws IOException {
+      int lastRow = this.startRow + this.perClientRunRows;
+      // Report on completion of 1/10th of total.
+      for (int i = this.startRow; i < lastRow; i++) {
+        testRow(i);
+        if (status != null && i > 0 && (i % getReportingPeriod()) == 0) {
+          status.setStatus(generateStatus(this.startRow, i, lastRow));
+        }
+      }
+    }
+
+    /*
+    * Test for individual row.
+    * @param i Row index.
+    */
+    void testRow(final int i) throws IOException {
+    }
+  }
+
+  @SuppressWarnings("unused")
+  static class RandomSeekScanTest extends Test {
+    RandomSeekScanTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Scan scan = new Scan(getRandomRow(this.rand, this.totalRows));
+      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      scan.setFilter(new WhileMatchFilter(new PageFilter(120)));
+      ResultScanner s = this.table.getScanner(scan);
+      //int count = 0;
+      for (Result rr = null; (rr = s.next()) != null;) {
+        // LOG.info("" + count++ + " " + rr.toString());
+      }
+      s.close();
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 100;
+      return period == 0? this.perClientRunRows: period;
+    }
+
+  }
+
+  @SuppressWarnings("unused")
+  static abstract class RandomScanWithRangeTest extends Test {
+    RandomScanWithRangeTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Pair<byte[], byte[]> startAndStopRow = getStartAndStopRow();
+      Scan scan = new Scan(startAndStopRow.getFirst(), startAndStopRow.getSecond());
+      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      ResultScanner s = this.table.getScanner(scan);
+      int count = 0;
+      for (Result rr = null; (rr = s.next()) != null;) {
+        count++;
+      }
+
+      if (i % 100 == 0) {
+        LOG.info(String.format("Scan for key range %s - %s returned %s rows",
+            Bytes.toString(startAndStopRow.getFirst()),
+            Bytes.toString(startAndStopRow.getSecond()), count));
+      }
+
+      s.close();
+    }
+
+    protected abstract Pair<byte[],byte[]> getStartAndStopRow();
+
+    protected Pair<byte[], byte[]> generateStartAndStopRows(int maxRange) {
+      int start = this.rand.nextInt(Integer.MAX_VALUE) % totalRows;
+      int stop = start + maxRange;
+      return new Pair<byte[],byte[]>(format(start), format(stop));
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 100;
+      return period == 0? this.perClientRunRows: period;
+    }
+  }
+
+  static class RandomScanWithRange10Test extends RandomScanWithRangeTest {
+    RandomScanWithRange10Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(10);
+    }
+  }
+
+  static class RandomScanWithRange100Test extends RandomScanWithRangeTest {
+    RandomScanWithRange100Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(100);
+    }
+  }
+
+  static class RandomScanWithRange1000Test extends RandomScanWithRangeTest {
+    RandomScanWithRange1000Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(1000);
+    }
+  }
+
+  static class RandomScanWithRange10000Test extends RandomScanWithRangeTest {
+    RandomScanWithRange10000Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(10000);
+    }
+  }
+
+  static class RandomReadTest extends Test {
+    RandomReadTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Get get = new Get(getRandomRow(this.rand, this.totalRows));
+      get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      this.table.get(get);
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 100;
+      return period == 0? this.perClientRunRows: period;
+    }
+
+  }
+
+  static class RandomWriteTest extends Test {
+    RandomWriteTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      byte [] row = getRandomRow(this.rand, this.totalRows);
+      Put put = new Put(row);
+      byte[] value = generateValue(this.rand);
+      put.add(FAMILY_NAME, QUALIFIER_NAME, value);
+      put.setWriteToWAL(writeToWAL);
+      table.put(put);
+    }
+  }
+
+  static class ScanTest extends Test {
+    private ResultScanner testScanner;
+
+    ScanTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testSetup() throws IOException {
+      super.testSetup();
+    }
+
+    @Override
+    void testTakedown() throws IOException {
+      if (this.testScanner != null) {
+        this.testScanner.close();
+      }
+      super.testTakedown();
+    }
+
+
+    @Override
+    void testRow(final int i) throws IOException {
+      if (this.testScanner == null) {
+        Scan scan = new Scan(format(this.startRow));
+        scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+        this.testScanner = table.getScanner(scan);
+      }
+      testScanner.next();
+    }
+
+  }
+
+  static class SequentialReadTest extends Test {
+    SequentialReadTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Get get = new Get(format(i));
+      get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      table.get(get);
+    }
+
+  }
+
+  static class SequentialWriteTest extends Test {
+    SequentialWriteTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Put put = new Put(format(i));
+      byte[] value = generateValue(this.rand);
+      put.add(FAMILY_NAME, QUALIFIER_NAME, value);
+      put.setWriteToWAL(writeToWAL);
+      table.put(put);
+    }
+
+  }
+
+  static class FilteredScanTest extends Test {
+    protected static final Log LOG = LogFactory.getLog(FilteredScanTest.class.getName());
+
+    FilteredScanTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(int i) throws IOException {
+      byte[] value = generateValue(this.rand);
+      Scan scan = constructScan(value);
+      ResultScanner scanner = null;
+      try {
+        scanner = this.table.getScanner(scan);
+        while (scanner.next() != null) {
+        }
+      } finally {
+        if (scanner != null) scanner.close();
+      }
+    }
+
+    protected Scan constructScan(byte[] valuePrefix) throws IOException {
+      Filter filter = new SingleColumnValueFilter(
+          FAMILY_NAME, QUALIFIER_NAME, CompareFilter.CompareOp.EQUAL,
+          new BinaryComparator(valuePrefix)
+      );
+      Scan scan = new Scan();
+      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      scan.setFilter(filter);
+      return scan;
+    }
+  }
+
+  /*
+   * Format the passed integer.
+   * @param number
+   * @return Zero-prefixed, 10-byte-wide decimal version of the passed
+   * number (uses its absolute value if it is negative); e.g. 123 becomes
+   * the bytes of "0000000123".
+   */
+  public static byte [] format(final int number) {
+    byte [] b = new byte[10];
+    int d = Math.abs(number);
+    for (int i = b.length - 1; i >= 0; i--) {
+      b[i] = (byte)((d % 10) + '0');
+      d /= 10;
+    }
+    return b;
+  }
+
+  /*
+   * This method takes some time and is called inline while uploading data.  For
+   * example, when doing the mapfile test, generating the key and value
+   * consumes about 30% of CPU time.
+   * @return Generated random value to insert into a table cell.
+   */
+  public static byte[] generateValue(final Random r) {
+    byte [] b = new byte [ROW_LENGTH];
+    r.nextBytes(b);
+    return b;
+  }
+
+  static byte [] getRandomRow(final Random random, final int totalRows) {
+    return format(random.nextInt(Integer.MAX_VALUE) % totalRows);
+  }
+
+  long runOneClient(final Class<? extends Test> cmd, final int startRow,
+                    final int perClientRunRows, final int totalRows,
+                    boolean flushCommits, boolean writeToWAL,
+                    final Status status)
+  throws IOException {
+    status.setStatus("Start " + cmd + " at offset " + startRow + " for " +
+      perClientRunRows + " rows");
+    long totalElapsedTime = 0;
+
+    Test t = null;
+    TestOptions options = new TestOptions(startRow, perClientRunRows,
+        totalRows, getTableDescriptor().getName(), flushCommits, writeToWAL);
+    try {
+      Constructor<? extends Test> constructor = cmd.getDeclaredConstructor(
+          Configuration.class, TestOptions.class, Status.class);
+      t = constructor.newInstance(this.conf, options, status);
+    } catch (NoSuchMethodException e) {
+      throw new IllegalArgumentException("Invalid command class: " +
+          cmd.getName() + ".  It does not provide a constructor as described by " +
+          "the javadoc comment.  Available constructors are: " +
+          Arrays.toString(cmd.getConstructors()));
+    } catch (Exception e) {
+      throw new IllegalStateException("Failed to construct command class", e);
+    }
+    totalElapsedTime = t.test();
+
+    status.setStatus("Finished " + cmd + " in " + totalElapsedTime +
+      "ms at offset " + startRow + " for " + perClientRunRows + " rows");
+    return totalElapsedTime;
+  }
+
+  private void runNIsOne(final Class<? extends Test> cmd) {
+    Status status = new Status() {
+      public void setStatus(String msg) throws IOException {
+        LOG.info(msg);
+      }
+    };
+
+    HBaseAdmin admin = null;
+    try {
+      admin = new HBaseAdmin(this.conf);
+      checkTable(admin);
+      runOneClient(cmd, 0, this.R, this.R, this.flushCommits, this.writeToWAL,
+        status);
+    } catch (Exception e) {
+      LOG.error("Failed", e);
+    }
+  }
+
+  private void runTest(final Class<? extends Test> cmd) throws IOException,
+          InterruptedException, ClassNotFoundException {
+    MiniHBaseCluster hbaseMiniCluster = null;
+    MiniDFSCluster dfsCluster = null;
+    MiniZooKeeperCluster zooKeeperCluster = null;
+    if (this.miniCluster) {
+      dfsCluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+      zooKeeperCluster = new MiniZooKeeperCluster();
+      int zooKeeperPort = zooKeeperCluster.startup(new File(System.getProperty("java.io.tmpdir")));
+
+      // mangle the conf so that the fs parameter points to the minidfs we
+      // just started up
+      FileSystem fs = dfsCluster.getFileSystem();
+      conf.set("fs.default.name", fs.getUri().toString());
+      conf.set("hbase.zookeeper.property.clientPort", Integer.toString(zooKeeperPort));
+      Path parentdir = fs.getHomeDirectory();
+      conf.set(HConstants.HBASE_DIR, parentdir.toString());
+      fs.mkdirs(parentdir);
+      FSUtils.setVersion(fs, parentdir);
+      hbaseMiniCluster = new MiniHBaseCluster(this.conf, N);
+    }
+
+    try {
+      if (N == 1) {
+        // If there is only one client and one HRegionServer, we assume nothing
+        // has been set up at all.
+        runNIsOne(cmd);
+      } else {
+        // Else, run
+        runNIsMoreThanOne(cmd);
+      }
+    } finally {
+      if(this.miniCluster) {
+        if (hbaseMiniCluster != null) hbaseMiniCluster.shutdown();
+        if (zooKeeperCluster != null) zooKeeperCluster.shutdown();
+        HBaseTestCase.shutdownDfs(dfsCluster);
+      }
+    }
+  }
+
+  protected void printUsage() {
+    printUsage(null);
+  }
+
+  protected void printUsage(final String message) {
+    if (message != null && message.length() > 0) {
+      System.err.println(message);
+    }
+    System.err.println("Usage: java " + this.getClass().getName() + " \\");
+    System.err.println("  [--miniCluster] [--nomapred] [--rows=ROWS] <command> <nclients>");
+    System.err.println();
+    System.err.println("Options:");
+    System.err.println(" miniCluster     Run the test on an HBaseMiniCluster");
+    System.err.println(" nomapred        Run multiple clients using threads " +
+      "(rather than use mapreduce)");
+    System.err.println(" rows            Rows each client runs. Default: One million");
+    System.err.println(" flushCommits    Used to determine if the test should flush the table.  Default: false");
+    System.err.println(" writeToWAL      Set writeToWAL on puts. Default: True");
+    System.err.println();
+    System.err.println("Command:");
+    for (CmdDescriptor command : commands.values()) {
+      System.err.println(String.format(" %-15s %s", command.getName(), command.getDescription()));
+    }
+    System.err.println();
+    System.err.println("Args:");
+    System.err.println(" nclients        Integer. Required. Total number of " +
+      "clients (and HRegionServers)");
+    System.err.println("                 running: 1 <= value <= 500");
+    System.err.println("Examples:");
+    System.err.println(" To run a single evaluation client:");
+    System.err.println(" $ bin/hbase " + this.getClass().getName()
+        + " sequentialWrite 1");
+  }
+
+  private void getArgs(final int start, final String[] args) {
+    if(start + 1 > args.length) {
+      throw new IllegalArgumentException("must supply the number of clients");
+    }
+    N = Integer.parseInt(args[start]);
+    if (N < 1) {
+      throw new IllegalArgumentException("Number of clients must be > 1");
+    }
+    // Set total number of rows to write.
+    this.R = this.R * N;
+  }
+
+  public int doCommandLine(final String[] args) {
+    // Process command-line args. TODO: Better cmd-line processing
+    // (but hopefully something not as painful as cli options).
+    int errCode = -1;
+    if (args.length < 1) {
+      printUsage();
+      return errCode;
+    }
+
+    try {
+      for (int i = 0; i < args.length; i++) {
+        String cmd = args[i];
+        if (cmd.equals("-h") || cmd.startsWith("--h")) {
+          printUsage();
+          errCode = 0;
+          break;
+        }
+
+        final String miniClusterArgKey = "--miniCluster";
+        if (cmd.startsWith(miniClusterArgKey)) {
+          this.miniCluster = true;
+          continue;
+        }
+
+        final String nmr = "--nomapred";
+        if (cmd.startsWith(nmr)) {
+          this.nomapred = true;
+          continue;
+        }
+
+        final String rows = "--rows=";
+        if (cmd.startsWith(rows)) {
+          this.R = Integer.parseInt(cmd.substring(rows.length()));
+          continue;
+        }
+
+        final String flushCommits = "--flushCommits=";
+        if (cmd.startsWith(flushCommits)) {
+          this.flushCommits = Boolean.parseBoolean(cmd.substring(flushCommits.length()));
+          continue;
+        }
+
+        final String writeToWAL = "--writeToWAL=";
+        if (cmd.startsWith(writeToWAL)) {
+          this.writeToWAL = Boolean.parseBoolean(cmd.substring(writeToWAL.length()));
+          continue;
+        }
+
+        Class<? extends Test> cmdClass = determineCommandClass(cmd);
+        if (cmdClass != null) {
+          getArgs(i + 1, args);
+          runTest(cmdClass);
+          errCode = 0;
+          break;
+        }
+
+        printUsage();
+        break;
+      }
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+
+    return errCode;
+  }
+
+  private Class<? extends Test> determineCommandClass(String cmd) {
+    CmdDescriptor descriptor = commands.get(cmd);
+    return descriptor != null ? descriptor.getCmdClass() : null;
+  }
+
+  /**
+   * @param args
+   */
+  public static void main(final String[] args) {
+    Configuration c = HBaseConfiguration.create();
+    System.exit(new PerformanceEvaluation(c).doCommandLine(args));
+  }
+}
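
The format() helper above defines the row-key scheme every PE command relies on: keys are 10-byte, zero-padded ASCII decimals, so they sort lexicographically in numeric order. A minimal standalone sketch (not part of the patch; it only mirrors the format() logic shown above):

    // Sketch only: reproduces the zero-padded key layout used by format().
    public class FormatKeySketch {
      public static void main(String[] args) {
        byte[] key = new byte[10];
        int d = Math.abs(1234);               // same absolute-value handling as format()
        for (int i = key.length - 1; i >= 0; i--) {
          key[i] = (byte) ((d % 10) + '0');   // fill digits right to left, zero-padded
          d /= 10;
        }
        System.out.println(new String(key));  // prints 0000001234
      }
    }
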
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluationCommons.java b/0.90/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluationCommons.java
new file mode 100644
index 0000000..eac7207
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluationCommons.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+
+/**
+ * Code shared by PE tests.
+ */
+public class PerformanceEvaluationCommons {
+  static final Log LOG =
+    LogFactory.getLog(PerformanceEvaluationCommons.class.getName());
+
+  public static void assertValueSize(final int expectedSize, final int got) {
+    if (got != expectedSize) {
+      throw new AssertionError("Expected " + expectedSize + " but got " + got);
+    }
+  }
+
+  public static void assertKey(final byte [] expected, final ByteBuffer got) {
+    byte [] b = new byte[got.limit()];
+    got.get(b, 0, got.limit());
+    assertKey(expected, b);
+  }
+
+  public static void assertKey(final byte [] expected, final byte [] got) {
+    if (!org.apache.hadoop.hbase.util.Bytes.equals(expected, got)) {
+      throw new AssertionError("Expected " +
+        org.apache.hadoop.hbase.util.Bytes.toString(expected) +
+        " but got " + org.apache.hadoop.hbase.util.Bytes.toString(got));
+    }
+  }
+
+  public static void concurrentReads(final Runnable r) {
+    final int count = 1;
+    long now = System.currentTimeMillis();
+    List<Thread> threads = new ArrayList<Thread>(count);
+    for (int i = 0; i < count; i++) {
+      Thread t = new Thread(r);
+      t.setName("" + i);
+      threads.add(t);
+    }
+    for (Thread t: threads) {
+      t.start();
+    }
+    for (Thread t: threads) {
+      try {
+        t.join();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+    }
+    LOG.info("Test took " + (System.currentTimeMillis() - now));
+  }
+}
\ No newline at end of file
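
PerformanceEvaluationCommons.concurrentReads() above starts the supplied Runnable on a worker thread (the count is currently fixed at 1), joins it, and logs the elapsed time. A hedged usage sketch (the Runnable body is a placeholder; only concurrentReads() and assertValueSize() from the class above are assumed):

    // Sketch only: how a PE test might drive concurrentReads().
    PerformanceEvaluationCommons.concurrentReads(new Runnable() {
      public void run() {
        // a real caller would issue its reads or scans here and check results,
        // e.g. with PerformanceEvaluationCommons.assertValueSize(expected, got)
      }
    });
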
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java
new file mode 100644
index 0000000..4ac6e09
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java
@@ -0,0 +1,331 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.TestContext;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.RepeatingTestThread;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import com.google.common.collect.Lists;
+
+/**
+ * Test case that uses multiple threads to read and write multifamily rows
+ * into a table, verifying that reads never see partially-complete writes.
+ * 
+ * This can run as a JUnit test, or with a main() function which runs against
+ * a real cluster (e.g. for testing with failures, region movement, etc.).
+ */
+public class TestAcidGuarantees {
+  protected static final Log LOG = LogFactory.getLog(TestAcidGuarantees.class);
+  public static final byte [] TABLE_NAME = Bytes.toBytes("TestAcidGuarantees");
+  public static final byte [] FAMILY_A = Bytes.toBytes("A");
+  public static final byte [] FAMILY_B = Bytes.toBytes("B");
+  public static final byte [] FAMILY_C = Bytes.toBytes("C");
+  public static final byte [] QUALIFIER_NAME = Bytes.toBytes("data");
+
+  public static final byte[][] FAMILIES = new byte[][] {
+    FAMILY_A, FAMILY_B, FAMILY_C };
+
+  private HBaseTestingUtility util;
+
+  public static int NUM_COLS_TO_CHECK = 50;
+
+  private void createTableIfMissing()
+    throws IOException {
+    try {
+      util.createTable(TABLE_NAME, FAMILIES);
+    } catch (TableExistsException tee) {
+    }
+  }
+
+  public TestAcidGuarantees() {
+    // Set small flush size for minicluster so we exercise reseeking scanners
+    Configuration conf = HBaseConfiguration.create();
+    conf.set("hbase.hregion.memstore.flush.size", String.valueOf(128*1024));
+    util = new HBaseTestingUtility(conf);
+  }
+  
+  /**
+   * Thread that does random full-row writes into a table.
+   */
+  public static class AtomicityWriter extends RepeatingTestThread {
+    Random rand = new Random();
+    byte data[] = new byte[10];
+    byte targetRows[][];
+    byte targetFamilies[][];
+    HTable table;
+    AtomicLong numWritten = new AtomicLong();
+    
+    public AtomicityWriter(TestContext ctx, byte targetRows[][],
+                           byte targetFamilies[][]) throws IOException {
+      super(ctx);
+      this.targetRows = targetRows;
+      this.targetFamilies = targetFamilies;
+      table = new HTable(ctx.getConf(), TABLE_NAME);
+    }
+    public void doAnAction() throws Exception {
+      // Pick a random row to write into
+      byte[] targetRow = targetRows[rand.nextInt(targetRows.length)];
+      Put p = new Put(targetRow); 
+      rand.nextBytes(data);
+
+      for (byte[] family : targetFamilies) {
+        for (int i = 0; i < NUM_COLS_TO_CHECK; i++) {
+          byte qualifier[] = Bytes.toBytes("col" + i);
+          p.add(family, qualifier, data);
+        }
+      }
+      table.put(p);
+      numWritten.getAndIncrement();
+    }
+  }
+  
+  /**
+   * Thread that does single-row reads in a table, looking for partially
+   * completed rows.
+   */
+  public static class AtomicGetReader extends RepeatingTestThread {
+    byte targetRow[];
+    byte targetFamilies[][];
+    HTable table;
+    int numVerified = 0;
+    AtomicLong numRead = new AtomicLong();
+
+    public AtomicGetReader(TestContext ctx, byte targetRow[],
+                           byte targetFamilies[][]) throws IOException {
+      super(ctx);
+      this.targetRow = targetRow;
+      this.targetFamilies = targetFamilies;
+      table = new HTable(ctx.getConf(), TABLE_NAME);
+    }
+
+    public void doAnAction() throws Exception {
+      Get g = new Get(targetRow);
+      Result res = table.get(g);
+      byte[] gotValue = null;
+      if (res.getRow() == null) {
+        // Trying to verify but we didn't find the row - the writing
+        // thread probably just hasn't started writing yet, so we can
+        // ignore this action
+        return;
+      }
+      
+      for (byte[] family : targetFamilies) {
+        for (int i = 0; i < NUM_COLS_TO_CHECK; i++) {
+          byte qualifier[] = Bytes.toBytes("col" + i);
+          byte thisValue[] = res.getValue(family, qualifier);
+          if (gotValue != null && !Bytes.equals(gotValue, thisValue)) {
+            gotFailure(gotValue, res);
+          }
+          numVerified++;
+          gotValue = thisValue;
+        }
+      }
+      numRead.getAndIncrement();
+    }
+
+    private void gotFailure(byte[] expected, Result res) {
+      StringBuilder msg = new StringBuilder();
+      msg.append("Failed after ").append(numVerified).append("!");
+      msg.append("Expected=").append(Bytes.toStringBinary(expected));
+      msg.append("Got:\n");
+      for (KeyValue kv : res.list()) {
+        msg.append(kv.toString());
+        msg.append(" val= ");
+        msg.append(Bytes.toStringBinary(kv.getValue()));
+        msg.append("\n");
+      }
+      throw new RuntimeException(msg.toString());
+    }
+  }
+  
+  /**
+   * Thread that does full scans of the table looking for any partially completed
+   * rows.
+   */
+  public static class AtomicScanReader extends RepeatingTestThread {
+    byte targetFamilies[][];
+    HTable table;
+    AtomicLong numScans = new AtomicLong();
+    AtomicLong numRowsScanned = new AtomicLong();
+
+    public AtomicScanReader(TestContext ctx,
+                           byte targetFamilies[][]) throws IOException {
+      super(ctx);
+      this.targetFamilies = targetFamilies;
+      table = new HTable(ctx.getConf(), TABLE_NAME);
+    }
+
+    public void doAnAction() throws Exception {
+      Scan s = new Scan();
+      for (byte[] family : targetFamilies) {
+        s.addFamily(family);
+      }
+      ResultScanner scanner = table.getScanner(s);
+      
+      for (Result res : scanner) {
+        byte[] gotValue = null;
+  
+        for (byte[] family : targetFamilies) {
+          for (int i = 0; i < NUM_COLS_TO_CHECK; i++) {
+            byte qualifier[] = Bytes.toBytes("col" + i);
+            byte thisValue[] = res.getValue(family, qualifier);
+            if (gotValue != null && !Bytes.equals(gotValue, thisValue)) {
+              gotFailure(gotValue, res);
+            }
+            gotValue = thisValue;
+          }
+        }
+        numRowsScanned.getAndIncrement();
+      }
+      numScans.getAndIncrement();
+    }
+
+    private void gotFailure(byte[] expected, Result res) {
+      StringBuilder msg = new StringBuilder();
+      msg.append("Failed after ").append(numRowsScanned).append("!");
+      msg.append("Expected=").append(Bytes.toStringBinary(expected));
+      msg.append("Got:\n");
+      for (KeyValue kv : res.list()) {
+        msg.append(kv.toString());
+        msg.append(" val= ");
+        msg.append(Bytes.toStringBinary(kv.getValue()));
+        msg.append("\n");
+      }
+      throw new RuntimeException(msg.toString());
+    }
+  }
+
+
+  public void runTestAtomicity(long millisToRun,
+      int numWriters,
+      int numGetters,
+      int numScanners,
+      int numUniqueRows) throws Exception {
+    createTableIfMissing();
+    TestContext ctx = new TestContext(util.getConfiguration());
+    
+    byte rows[][] = new byte[numUniqueRows][];
+    for (int i = 0; i < numUniqueRows; i++) {
+      rows[i] = Bytes.toBytes("test_row_" + i);
+    }
+    
+    List<AtomicityWriter> writers = Lists.newArrayList();
+    for (int i = 0; i < numWriters; i++) {
+      AtomicityWriter writer = new AtomicityWriter(
+          ctx, rows, FAMILIES);
+      writers.add(writer);
+      ctx.addThread(writer);
+    }
+
+    List<AtomicGetReader> getters = Lists.newArrayList();
+    for (int i = 0; i < numGetters; i++) {
+      AtomicGetReader getter = new AtomicGetReader(
+          ctx, rows[i % numUniqueRows], FAMILIES);
+      getters.add(getter);
+      ctx.addThread(getter);
+    }
+    
+    List<AtomicScanReader> scanners = Lists.newArrayList();
+    for (int i = 0; i < numScanners; i++) {
+      AtomicScanReader scanner = new AtomicScanReader(ctx, FAMILIES);
+      scanners.add(scanner);
+      ctx.addThread(scanner);
+    }
+    
+    ctx.startThreads();
+    ctx.waitFor(millisToRun);
+    ctx.stop();
+    
+    LOG.info("Finished test. Writers:");
+    for (AtomicityWriter writer : writers) {
+      LOG.info("  wrote " + writer.numWritten.get());
+    }
+    LOG.info("Readers:");
+    for (AtomicGetReader reader : getters) {
+      LOG.info("  read " + reader.numRead.get());
+    }
+    LOG.info("Scanners:");
+    for (AtomicScanReader scanner : scanners) {
+      LOG.info("  scanned " + scanner.numScans.get());
+      LOG.info("  verified " + scanner.numRowsScanned.get() + " rows");
+    }
+  }
+
+  @Test
+  @Ignore("Currently not passing - see HBASE-2856")
+  public void testGetAtomicity() throws Exception {
+    util.startMiniCluster(1);
+    try {
+      runTestAtomicity(20000, 5, 5, 0, 3);
+    } finally {
+      util.shutdownMiniCluster();
+    }    
+  }
+
+  @Test
+  @Ignore("Currently not passing - see HBASE-2670")
+  public void testScanAtomicity() throws Exception {
+    util.startMiniCluster(1);
+    try {
+      runTestAtomicity(20000, 5, 0, 5, 3);
+    } finally {
+      util.shutdownMiniCluster();
+    }    
+  }
+
+  @Test
+  @Ignore("Currently not passing - see HBASE-2670")
+  public void testMixedAtomicity() throws Exception {
+    util.startMiniCluster(1);
+    try {
+      runTestAtomicity(20000, 5, 2, 2, 3);
+    } finally {
+      util.shutdownMiniCluster();
+    }    
+  }
+
+  public static void main(String args[]) throws Exception {
+    Configuration c = HBaseConfiguration.create();
+    TestAcidGuarantees test = new TestAcidGuarantees();
+    test.setConf(c);
+    test.runTestAtomicity(5*60*1000, 5, 2, 2, 3);
+  }
+
+  private void setConf(Configuration c) {
+    util = new HBaseTestingUtility(c);
+  }
+}
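
runTestAtomicity() above takes (millisToRun, numWriters, numGetters, numScanners, numUniqueRows), and the class can also be launched standalone via main(). A hedged sketch of a shorter standalone run, assuming the same package and a cluster reachable through the default HBaseConfiguration (the class name AcidSmokeRun is illustrative, not part of the patch):

    // Sketch only: same entry point main() uses, shortened to a one-minute run.
    public class AcidSmokeRun {
      public static void main(String[] args) throws Exception {
        TestAcidGuarantees test = new TestAcidGuarantees();
        // millisToRun, numWriters, numGetters, numScanners, numUniqueRows
        test.runTestAtomicity(60 * 1000, 5, 2, 2, 3);
      }
    }
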
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestCompare.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestCompare.java
new file mode 100644
index 0000000..bbac815
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestCompare.java
@@ -0,0 +1,55 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test comparing HBase objects.
+ */
+public class TestCompare extends TestCase {
+
+  /**
+   * Sort of HRegionInfo.
+   */
+  public void testHRegionInfo() {
+    HRegionInfo a = new HRegionInfo(new HTableDescriptor("a"), null, null);
+    HRegionInfo b = new HRegionInfo(new HTableDescriptor("b"), null, null);
+    assertTrue(a.compareTo(b) != 0);
+    HTableDescriptor t = new HTableDescriptor("t");
+    byte [] midway = Bytes.toBytes("midway");
+    a = new HRegionInfo(t, null, midway);
+    b = new HRegionInfo(t, midway, null);
+    assertTrue(a.compareTo(b) < 0);
+    assertTrue(b.compareTo(a) > 0);
+    assertEquals(a, a);
+    assertTrue(a.compareTo(a) == 0);
+    a = new HRegionInfo(t, Bytes.toBytes("a"), Bytes.toBytes("d"));
+    b = new HRegionInfo(t, Bytes.toBytes("e"), Bytes.toBytes("g"));
+    assertTrue(a.compareTo(b) < 0);
+    a = new HRegionInfo(t, Bytes.toBytes("aaaa"), Bytes.toBytes("dddd"));
+    b = new HRegionInfo(t, Bytes.toBytes("e"), Bytes.toBytes("g"));
+    assertTrue(a.compareTo(b) < 0);
+    a = new HRegionInfo(t, Bytes.toBytes("aaaa"), Bytes.toBytes("dddd"));
+    b = new HRegionInfo(t, Bytes.toBytes("aaaa"), Bytes.toBytes("eeee"));
+    assertTrue(a.compareTo(b) < 0);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java
new file mode 100644
index 0000000..d7e09b6
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java
@@ -0,0 +1,125 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestFullLogReconstruction {
+
+  private final static HBaseTestingUtility
+      TEST_UTIL = new HBaseTestingUtility();
+
+  private final static byte[] TABLE_NAME = Bytes.toBytes("tabletest");
+  private final static byte[] FAMILY = Bytes.toBytes("family");
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    Configuration c = TEST_UTIL.getConfiguration();
+    c.setInt("hbase.regionserver.flushlogentries", 1);
+    c.setBoolean("dfs.support.append", true);
+    // quicker heartbeat interval for faster DN death notification
+    c.setInt("heartbeat.recheck.interval", 5000);
+    c.setInt("dfs.heartbeat.interval", 1);
+    c.setInt("dfs.socket.timeout", 5000);
+    // faster failover with cluster.shutdown();fs.close() idiom
+    c.setInt("ipc.client.connect.max.retries", 1);
+    c.setInt("dfs.client.block.recovery.retries", 1);
+    c.setInt("hbase.regionserver.flushlogentries", 1);
+    TEST_UTIL.startMiniCluster(2);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @After
+  public void tearDown() throws Exception {
+  }
+
+  /**
+   * Test the whole reconstruction loop. Build a table with regions aaa to zzz
+   * and load every one of them multiple times with the same data and do a flush
+   * at some point. Kill one of the region servers and scan the table. We should
+   * see all the rows.
+   * @throws Exception
+   */
+  @Test
+  public void testReconstruction() throws Exception {
+
+    HTable table = TEST_UTIL.createTable(TABLE_NAME, FAMILY);
+
+    TEST_UTIL.createMultiRegions(table, Bytes.toBytes("family"));
+
+    // Load up the table with simple rows and count them
+    int initialCount = TEST_UTIL.loadTable(table, FAMILY);
+    Scan scan = new Scan();
+    ResultScanner results = table.getScanner(scan);
+    int count = 0;
+    for (Result res : results) {
+      count++;
+    }
+    results.close();
+
+    assertEquals(initialCount, count);
+
+    for(int i = 0; i < 4; i++) {
+      TEST_UTIL.loadTable(table, FAMILY);
+    }
+
+    TEST_UTIL.expireRegionServerSession(0);
+    scan = new Scan();
+    results = table.getScanner(scan);
+    int newCount = 0;
+    for (Result res : results) {
+      newCount++;
+    }
+    assertEquals(count, newCount);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java
new file mode 100644
index 0000000..4e46add
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java
@@ -0,0 +1,171 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test our testing utility class
+ */
+public class TestHBaseTestingUtility {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+
+  private HBaseTestingUtility hbt;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    this.hbt = new HBaseTestingUtility();
+    this.hbt.cleanupTestDir();
+  }
+
+  @After
+  public void tearDown() throws Exception {
+  }
+
+  /**
+   * Basic sanity test that spins up multiple HDFS and HBase clusters that share
+   * the same ZK ensemble. We then create the same table in both and make sure
+   * that what we insert in one place doesn't end up in the other.
+   * @throws Exception
+   */
+  @Test (timeout=180000)
+  public void multiClusters() throws Exception {
+    // Create three clusters
+
+    // Cluster 1.
+    HBaseTestingUtility htu1 = new HBaseTestingUtility();
+    // Set a different zk path for each cluster
+    htu1.getConfiguration().set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/1");
+    htu1.startMiniZKCluster();
+
+    // Cluster 2
+    HBaseTestingUtility htu2 = new HBaseTestingUtility();
+    htu2.getConfiguration().set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/2");
+    htu2.setZkCluster(htu1.getZkCluster());
+
+    // Cluster 3.
+    HBaseTestingUtility htu3 = new HBaseTestingUtility();
+    htu3.getConfiguration().set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/3");
+    htu3.setZkCluster(htu1.getZkCluster());
+
+    try {
+      htu1.startMiniCluster();
+      htu2.startMiniCluster();
+      htu3.startMiniCluster();
+
+      final byte[] TABLE_NAME = Bytes.toBytes("test");
+      final byte[] FAM_NAME = Bytes.toBytes("fam");
+      final byte[] ROW = Bytes.toBytes("row");
+      final byte[] QUAL_NAME = Bytes.toBytes("qual");
+      final byte[] VALUE = Bytes.toBytes("value");
+
+      HTable table1 = htu1.createTable(TABLE_NAME, FAM_NAME);
+      HTable table2 = htu2.createTable(TABLE_NAME, FAM_NAME);
+
+      Put put = new Put(ROW);
+      put.add(FAM_NAME, QUAL_NAME, VALUE);
+      table1.put(put);
+
+      Get get = new Get(ROW);
+      get.addColumn(FAM_NAME, QUAL_NAME);
+      Result res = table1.get(get);
+      assertEquals(1, res.size());
+
+      res = table2.get(get);
+      assertEquals(0, res.size());
+
+    } finally {
+      htu3.shutdownMiniCluster();
+      htu2.shutdownMiniCluster();
+      htu1.shutdownMiniCluster();
+    }
+  }
+
+  @Test public void testMiniCluster() throws Exception {
+    MiniHBaseCluster cluster = this.hbt.startMiniCluster();
+    try {
+      assertEquals(1, cluster.getLiveRegionServerThreads().size());
+    } finally {
+      cluster.shutdown();
+    }
+  }
+
+  @Test public void testMiniDFSCluster() throws Exception {
+    MiniDFSCluster cluster = this.hbt.startMiniDFSCluster(1);
+    FileSystem dfs = cluster.getFileSystem();
+    Path dir = new Path("dir");
+    Path qualifiedDir = dfs.makeQualified(dir);
+    LOG.info("dir=" + dir + ", qualifiedDir=" + qualifiedDir);
+    assertFalse(dfs.exists(qualifiedDir));
+    assertTrue(dfs.mkdirs(qualifiedDir));
+    assertTrue(dfs.delete(qualifiedDir, true));
+    try {
+    } finally {
+      cluster.shutdown();
+    }
+  }
+
+  @Test public void testSetupClusterTestBuildDir() {
+    File testdir = this.hbt.setupClusterTestBuildDir();
+    LOG.info("uuid-subdir=" + testdir);
+    assertFalse(testdir.exists());
+    assertTrue(testdir.mkdirs());
+    assertTrue(testdir.exists());
+  }
+
+  @Test public void testTestDir() throws IOException {
+    Path testdir = HBaseTestingUtility.getTestDir();
+    LOG.info("testdir=" + testdir);
+    FileSystem fs = this.hbt.getTestFileSystem();
+    assertTrue(!fs.exists(testdir));
+    assertTrue(fs.mkdirs(testdir));
+    assertTrue(this.hbt.cleanupTestDir());
+  }
+}
\ No newline at end of file
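
The multiClusters() test above keeps several clusters isolated on one ZooKeeper ensemble by giving each utility its own ZOOKEEPER_ZNODE_PARENT and having every utility after the first reuse the first one's ZK cluster. A condensed, hedged sketch of just that wiring (utility names are illustrative):

    // Sketch only: condensed from multiClusters() above.
    HBaseTestingUtility htuA = new HBaseTestingUtility();
    htuA.getConfiguration().set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/A");
    htuA.startMiniZKCluster();                // only the first utility starts ZK

    HBaseTestingUtility htuB = new HBaseTestingUtility();
    htuB.getConfiguration().set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/B");
    htuB.setZkCluster(htuA.getZkCluster());   // later utilities share that ensemble
    // ...then htuA.startMiniCluster(), htuB.startMiniCluster(), and so on.
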
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestHMsg.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestHMsg.java
new file mode 100644
index 0000000..b55956f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestHMsg.java
@@ -0,0 +1,82 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+
+public class TestHMsg extends TestCase {
+  public void testList() {
+    List<HMsg> msgs = new ArrayList<HMsg>();
+    HMsg hmsg = null;
+    final int size = 10;
+    for (int i = 0; i < size; i++) {
+      byte [] b = Bytes.toBytes(i);
+      hmsg = new HMsg(HMsg.Type.STOP_REGIONSERVER,
+        new HRegionInfo(new HTableDescriptor(Bytes.toBytes("test")), b, b));
+      msgs.add(hmsg);
+    }
+    assertEquals(size, msgs.size());
+    int index = msgs.indexOf(hmsg);
+    assertNotSame(-1, index);
+    msgs.remove(index);
+    assertEquals(size - 1, msgs.size());
+    byte [] other = Bytes.toBytes("other");
+    hmsg = new HMsg(HMsg.Type.STOP_REGIONSERVER,
+      new HRegionInfo(new HTableDescriptor(Bytes.toBytes("test")), other, other));
+    assertEquals(-1, msgs.indexOf(hmsg));
+    // Assert that two HMsgs are same if same content.
+    byte [] b = Bytes.toBytes(1);
+    hmsg = new HMsg(HMsg.Type.STOP_REGIONSERVER,
+     new HRegionInfo(new HTableDescriptor(Bytes.toBytes("test")), b, b));
+    assertNotSame(-1, msgs.indexOf(hmsg));
+  }
+
+  public void testSerialization() throws IOException {
+    // Check out new HMsg that carries two daughter split regions.
+    byte [] abytes = Bytes.toBytes("a");
+    byte [] bbytes = Bytes.toBytes("b");
+    byte [] parentbytes = Bytes.toBytes("parent");
+    HRegionInfo parent =
+      new HRegionInfo(new HTableDescriptor(Bytes.toBytes("parent")),
+      parentbytes, parentbytes);
+    // Assert simple HMsg serializes
+    HMsg hmsg = new HMsg(HMsg.Type.STOP_REGIONSERVER, parent);
+    byte [] bytes = Writables.getBytes(hmsg);
+    HMsg close = (HMsg)Writables.getWritable(bytes, new HMsg());
+    assertTrue(close.equals(hmsg));
+    // Assert split serializes
+    HRegionInfo daughtera =
+      new HRegionInfo(new HTableDescriptor(Bytes.toBytes("a")), abytes, abytes);
+    HRegionInfo daughterb =
+      new HRegionInfo(new HTableDescriptor(Bytes.toBytes("b")), bbytes, bbytes);
+    HMsg splithmsg = new HMsg(HMsg.Type.REGION_SPLIT,
+      parent, daughtera, daughterb, Bytes.toBytes("REGION_SPLIT"));
+    bytes = Writables.getBytes(splithmsg);
+    hmsg = (HMsg)Writables.getWritable(bytes, new HMsg());
+    assertTrue(splithmsg.equals(hmsg));
+  }
+}
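
testSerialization() above exercises the Writables round-trip that HMsg, like other 0.90 Writable types, uses on the wire: serialize with Writables.getBytes() and rebuild by deserializing into a blank instance. A minimal hedged sketch of that pattern, using only helpers already visible in the test:

    // Sketch only: the round-trip pattern from testSerialization().
    HRegionInfo info = new HRegionInfo(new HTableDescriptor(Bytes.toBytes("t")),
        Bytes.toBytes("a"), Bytes.toBytes("b"));
    HMsg original = new HMsg(HMsg.Type.STOP_REGIONSERVER, info);
    byte[] wire = Writables.getBytes(original);                  // serialize
    HMsg copy = (HMsg) Writables.getWritable(wire, new HMsg());  // deserialize into a blank HMsg
    assert copy.equals(original);
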
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java
new file mode 100644
index 0000000..daffe02
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java
@@ -0,0 +1,76 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.BufferedInputStream;
+import java.io.IOException;
+import java.net.URL;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.client.HTable;
+
+/**
+ * In testing, info servers are disabled by default.  This test enables them and
+ * checks that they serve pages.
+ */
+public class TestInfoServers extends HBaseClusterTestCase {
+  static final Log LOG = LogFactory.getLog(TestInfoServers.class);
+
+  @Override
+  protected void preHBaseClusterSetup() {
+    // The info servers do not run in tests by default.
+    // Set them to ephemeral ports so they will start
+    conf.setInt("hbase.master.info.port", 0);
+    conf.setInt("hbase.regionserver.info.port", 0);
+  }
+
+  /**
+   * @throws Exception
+   */
+  public void testInfoServersAreUp() throws Exception {
+    // give the cluster time to start up
+    new HTable(conf, ".META.");
+    int port = cluster.getMaster().getInfoServer().getPort();
+    assertHasExpectedContent(new URL("http://localhost:" + port +
+      "/index.html"), "master");
+    port = cluster.getRegionServerThreads().get(0).getRegionServer().
+      getInfoServer().getPort();
+    assertHasExpectedContent(new URL("http://localhost:" + port +
+      "/index.html"), "regionserver");
+  }
+
+  private void assertHasExpectedContent(final URL u, final String expected)
+  throws IOException {
+    LOG.info("Testing " + u.toString() + " has " + expected);
+    java.net.URLConnection c = u.openConnection();
+    c.connect();
+    assertTrue(c.getContentLength() > 0);
+    StringBuilder sb = new StringBuilder(c.getContentLength());
+    BufferedInputStream bis = new BufferedInputStream(c.getInputStream());
+    byte [] bytes = new byte[1024];
+    for (int read = -1; (read = bis.read(bytes)) != -1;) {
+      sb.append(new String(bytes, 0, read));
+    }
+    bis.close();
+    String content = sb.toString();
+    assertTrue(content.contains(expected));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
new file mode 100644
index 0000000..68fff55
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestKeyValue.java
@@ -0,0 +1,347 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.Set;
+import java.util.TreeSet;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue.KVComparator;
+import org.apache.hadoop.hbase.KeyValue.Type;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestKeyValue extends TestCase {
+  private final Log LOG = LogFactory.getLog(this.getClass().getName());
+
+  public void testColumnCompare() throws Exception {
+    final byte [] a = Bytes.toBytes("aaa");
+    byte [] family1 = Bytes.toBytes("abc");
+    byte [] qualifier1 = Bytes.toBytes("def");
+    byte [] family2 = Bytes.toBytes("abcd");
+    byte [] qualifier2 = Bytes.toBytes("ef");
+
+    KeyValue aaa = new KeyValue(a, family1, qualifier1, 0L, Type.Put, a);
+    assertFalse(aaa.matchingColumn(family2, qualifier2));
+    assertTrue(aaa.matchingColumn(family1, qualifier1));
+    aaa = new KeyValue(a, family2, qualifier2, 0L, Type.Put, a);
+    assertFalse(aaa.matchingColumn(family1, qualifier1));
+    assertTrue(aaa.matchingColumn(family2,qualifier2));
+    byte [] nullQualifier = new byte[0];
+    aaa = new KeyValue(a, family1, nullQualifier, 0L, Type.Put, a);
+    assertTrue(aaa.matchingColumn(family1,null));
+    assertFalse(aaa.matchingColumn(family2,qualifier2));
+  }
+
+  public void testBasics() throws Exception {
+    LOG.info("LOWKEY: " + KeyValue.LOWESTKEY.toString());
+    check(Bytes.toBytes(getName()),
+      Bytes.toBytes(getName()), Bytes.toBytes(getName()), 1,
+      Bytes.toBytes(getName()));
+    // Test empty value and empty column -- both should work. (not empty fam)
+    check(Bytes.toBytes(getName()), Bytes.toBytes(getName()), null, 1, null);
+    check(HConstants.EMPTY_BYTE_ARRAY, Bytes.toBytes(getName()), null, 1, null);
+  }
+
+  private void check(final byte [] row, final byte [] family, byte [] qualifier,
+    final long timestamp, final byte [] value) {
+    KeyValue kv = new KeyValue(row, family, qualifier, timestamp, value);
+    assertTrue(Bytes.compareTo(kv.getRow(), row) == 0);
+    assertTrue(kv.matchingColumn(family, qualifier));
+    // Call toString to make sure it works.
+    LOG.info(kv.toString());
+  }
+
+  public void testPlainCompare() throws Exception {
+    final byte [] a = Bytes.toBytes("aaa");
+    final byte [] b = Bytes.toBytes("bbb");
+    final byte [] fam = Bytes.toBytes("col");
+    final byte [] qf = Bytes.toBytes("umn");
+//    final byte [] column = Bytes.toBytes("col:umn");
+    KeyValue aaa = new KeyValue(a, fam, qf, a);
+    KeyValue bbb = new KeyValue(b, fam, qf, b);
+    byte [] keyabb = aaa.getKey();
+    byte [] keybbb = bbb.getKey();
+    assertTrue(KeyValue.COMPARATOR.compare(aaa, bbb) < 0);
+    assertTrue(KeyValue.KEY_COMPARATOR.compare(keyabb, 0, keyabb.length, keybbb,
+      0, keybbb.length) < 0);
+    assertTrue(KeyValue.COMPARATOR.compare(bbb, aaa) > 0);
+    assertTrue(KeyValue.KEY_COMPARATOR.compare(keybbb, 0, keybbb.length, keyabb,
+      0, keyabb.length) > 0);
+    // Compare breaks if passed same ByteBuffer as both left and right arguments.
+    assertTrue(KeyValue.COMPARATOR.compare(bbb, bbb) == 0);
+    assertTrue(KeyValue.KEY_COMPARATOR.compare(keybbb, 0, keybbb.length, keybbb,
+      0, keybbb.length) == 0);
+    assertTrue(KeyValue.COMPARATOR.compare(aaa, aaa) == 0);
+    assertTrue(KeyValue.KEY_COMPARATOR.compare(keyabb, 0, keyabb.length, keyabb,
+      0, keyabb.length) == 0);
+    // Do compare with different timestamps.
+    aaa = new KeyValue(a, fam, qf, 1, a);
+    bbb = new KeyValue(a, fam, qf, 2, a);
+    assertTrue(KeyValue.COMPARATOR.compare(aaa, bbb) > 0);
+    assertTrue(KeyValue.COMPARATOR.compare(bbb, aaa) < 0);
+    assertTrue(KeyValue.COMPARATOR.compare(aaa, aaa) == 0);
+    // Do compare with different types.  Higher numbered types -- Delete
+    // should sort ahead of lower numbers; i.e. Put
+    aaa = new KeyValue(a, fam, qf, 1, KeyValue.Type.Delete, a);
+    bbb = new KeyValue(a, fam, qf, 1, a);
+    assertTrue(KeyValue.COMPARATOR.compare(aaa, bbb) < 0);
+    assertTrue(KeyValue.COMPARATOR.compare(bbb, aaa) > 0);
+    assertTrue(KeyValue.COMPARATOR.compare(aaa, aaa) == 0);
+  }
+
+  public void testMoreComparisons() throws Exception {
+    // Root compares
+    long now = System.currentTimeMillis();
+    KeyValue a = new KeyValue(Bytes.toBytes(".META.,,99999999999999"), now);
+    KeyValue b = new KeyValue(Bytes.toBytes(".META.,,1"), now);
+    KVComparator c = new KeyValue.RootComparator();
+    assertTrue(c.compare(b, a) < 0);
+    KeyValue aa = new KeyValue(Bytes.toBytes(".META.,,1"), now);
+    KeyValue bb = new KeyValue(Bytes.toBytes(".META.,,1"),
+        Bytes.toBytes("info"), Bytes.toBytes("regioninfo"), 1235943454602L,
+        (byte[])null);
+    assertTrue(c.compare(aa, bb) < 0);
+
+    // Meta compares
+    KeyValue aaa = new KeyValue(
+        Bytes.toBytes("TestScanMultipleVersions,row_0500,1236020145502"), now);
+    KeyValue bbb = new KeyValue(
+        Bytes.toBytes("TestScanMultipleVersions,,99999999999999"), now);
+    c = new KeyValue.MetaComparator();
+    assertTrue(c.compare(bbb, aaa) < 0);
+
+    KeyValue aaaa = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,,1236023996656"),
+        Bytes.toBytes("info"), Bytes.toBytes("regioninfo"), 1236024396271L,
+        (byte[])null);
+    assertTrue(c.compare(aaaa, bbb) < 0);
+
+    KeyValue x = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,row_0500,1236034574162"),
+        Bytes.toBytes("info"), Bytes.toBytes(""), 9223372036854775807L,
+        (byte[])null);
+    KeyValue y = new KeyValue(Bytes.toBytes("TestScanMultipleVersions,row_0500,1236034574162"),
+        Bytes.toBytes("info"), Bytes.toBytes("regioninfo"), 1236034574912L,
+        (byte[])null);
+    assertTrue(c.compare(x, y) < 0);
+    comparisons(new KeyValue.MetaComparator());
+    comparisons(new KeyValue.KVComparator());
+    metacomparisons(new KeyValue.RootComparator());
+    metacomparisons(new KeyValue.MetaComparator());
+  }
+
+  /**
+   * Tests cases where row keys have characters below the ','.
+   * See HBASE-832
+   * @throws IOException
+   */
+  public void testKeyValueBorderCases() throws IOException {
+    // % sorts before , so if we don't do special comparator, rowB would
+    // come before rowA.
+    KeyValue rowA = new KeyValue(Bytes.toBytes("testtable,www.hbase.org/,1234"),
+      Bytes.toBytes("fam"), Bytes.toBytes(""), Long.MAX_VALUE, (byte[])null);
+    KeyValue rowB = new KeyValue(Bytes.toBytes("testtable,www.hbase.org/%20,99999"),
+        Bytes.toBytes("fam"), Bytes.toBytes(""), Long.MAX_VALUE, (byte[])null);
+    assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0);
+
+    rowA = new KeyValue(Bytes.toBytes("testtable,,1234"), Bytes.toBytes("fam"),
+        Bytes.toBytes(""), Long.MAX_VALUE, (byte[])null);
+    rowB = new KeyValue(Bytes.toBytes("testtable,$www.hbase.org/,99999"),
+        Bytes.toBytes("fam"), Bytes.toBytes(""), Long.MAX_VALUE, (byte[])null);
+    assertTrue(KeyValue.META_COMPARATOR.compare(rowA, rowB) < 0);
+
+    rowA = new KeyValue(Bytes.toBytes(".META.,testtable,www.hbase.org/,1234,4321"),
+        Bytes.toBytes("fam"), Bytes.toBytes(""), Long.MAX_VALUE, (byte[])null);
+    rowB = new KeyValue(Bytes.toBytes(".META.,testtable,www.hbase.org/%20,99999,99999"),
+        Bytes.toBytes("fam"), Bytes.toBytes(""), Long.MAX_VALUE, (byte[])null);
+    assertTrue(KeyValue.ROOT_COMPARATOR.compare(rowA, rowB) < 0);
+  }
+
+  private void metacomparisons(final KeyValue.MetaComparator c) {
+    long now = System.currentTimeMillis();
+    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now),
+      new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now)) == 0);
+    KeyValue a = new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now);
+    KeyValue b = new KeyValue(Bytes.toBytes(".META.,a,,0,2"), now);
+    assertTrue(c.compare(a, b) < 0);
+    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,a,,0,2"), now),
+      new KeyValue(Bytes.toBytes(".META.,a,,0,1"), now)) > 0);
+  }
+
+  private void comparisons(final KeyValue.KVComparator c) {
+    long now = System.currentTimeMillis();
+    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,1"), now),
+      new KeyValue(Bytes.toBytes(".META.,,1"), now)) == 0);
+    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,1"), now),
+      new KeyValue(Bytes.toBytes(".META.,,2"), now)) < 0);
+    assertTrue(c.compare(new KeyValue(Bytes.toBytes(".META.,,2"), now),
+      new KeyValue(Bytes.toBytes(".META.,,1"), now)) > 0);
+  }
+
+  public void testBinaryKeys() throws Exception {
+    Set<KeyValue> set = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
+    final byte [] fam = Bytes.toBytes("col");
+    final byte [] qf = Bytes.toBytes("umn");
+    final byte [] nb = new byte[0];
+    KeyValue [] keys = {new KeyValue(Bytes.toBytes("aaaaa,\u0000\u0000,2"), fam, qf, 2, nb),
+      new KeyValue(Bytes.toBytes("aaaaa,\u0001,3"), fam, qf, 3, nb),
+      new KeyValue(Bytes.toBytes("aaaaa,,1"), fam, qf, 1, nb),
+      new KeyValue(Bytes.toBytes("aaaaa,\u1000,5"), fam, qf, 5, nb),
+      new KeyValue(Bytes.toBytes("aaaaa,a,4"), fam, qf, 4, nb),
+      new KeyValue(Bytes.toBytes("a,a,0"), fam, qf, 0, nb),
+    };
+    // Add to set with bad comparator
+    for (int i = 0; i < keys.length; i++) {
+      set.add(keys[i]);
+    }
+    // This will output the keys incorrectly.
+    boolean assertion = false;
+    int count = 0;
+    try {
+      for (KeyValue k: set) {
+        assertTrue(count++ == k.getTimestamp());
+      }
+    } catch (junit.framework.AssertionFailedError e) {
+      // Expected
+      assertion = true;
+    }
+    assertTrue(assertion);
+    // Make set with good comparator
+    set = new TreeSet<KeyValue>(new KeyValue.MetaComparator());
+    for (int i = 0; i < keys.length; i++) {
+      set.add(keys[i]);
+    }
+    count = 0;
+    for (KeyValue k: set) {
+      assertTrue(count++ == k.getTimestamp());
+    }
+    // Make up -ROOT- table keys.
+    KeyValue [] rootKeys = {
+        new KeyValue(Bytes.toBytes(".META.,aaaaa,\u0000\u0000,0,2"), fam, qf, 2, nb),
+        new KeyValue(Bytes.toBytes(".META.,aaaaa,\u0001,0,3"), fam, qf, 3, nb),
+        new KeyValue(Bytes.toBytes(".META.,aaaaa,,0,1"), fam, qf, 1, nb),
+        new KeyValue(Bytes.toBytes(".META.,aaaaa,\u1000,0,5"), fam, qf, 5, nb),
+        new KeyValue(Bytes.toBytes(".META.,aaaaa,a,0,4"), fam, qf, 4, nb),
+        new KeyValue(Bytes.toBytes(".META.,,0"), fam, qf, 0, nb),
+      };
+    // This will output the keys incorrectly.
+    set = new TreeSet<KeyValue>(new KeyValue.MetaComparator());
+    // Add to set with bad comparator
+    for (int i = 0; i < keys.length; i++) {
+      set.add(rootKeys[i]);
+    }
+    assertion = false;
+    count = 0;
+    try {
+      for (KeyValue k: set) {
+        assertTrue(count++ == k.getTimestamp());
+      }
+    } catch (junit.framework.AssertionFailedError e) {
+      // Expected
+      assertion = true;
+    }
+    // Now with right comparator
+    set = new TreeSet<KeyValue>(new KeyValue.RootComparator());
+    // Add to set with the right comparator
+    for (int i = 0; i < rootKeys.length; i++) {
+      set.add(rootKeys[i]);
+    }
+    count = 0;
+    for (KeyValue k: set) {
+      assertTrue(count++ == k.getTimestamp());
+    }
+  }
+
+  public void testStackedUpKeyValue() {
+    // Test multiple KeyValues in a single blob.
+
+    // TODO actually write this test!
+
+  }
+
+  private final byte[] rowA = Bytes.toBytes("rowA");
+  private final byte[] rowB = Bytes.toBytes("rowB");
+
+  private final byte[] family = Bytes.toBytes("family");
+  private final byte[] qualA = Bytes.toBytes("qfA");
+
+  private void assertKVLess(KeyValue.KVComparator c,
+                            KeyValue less,
+                            KeyValue greater) {
+    int cmp = c.compare(less,greater);
+    assertTrue(cmp < 0);
+    cmp = c.compare(greater,less);
+    assertTrue(cmp > 0);
+  }
+
+  public void testFirstLastOnRow() {
+    final KVComparator c = KeyValue.COMPARATOR;
+    long ts = 1;
+
+    // These are listed in sort order (ie: every one should be less
+    // than the one on the next line).
+    final KeyValue firstOnRowA = KeyValue.createFirstOnRow(rowA);
+    final KeyValue kvA_1 = new KeyValue(rowA, null, null, ts, Type.Put);
+    final KeyValue kvA_2 = new KeyValue(rowA, family, qualA, ts, Type.Put);
+        
+    final KeyValue lastOnRowA = KeyValue.createLastOnRow(rowA);
+    final KeyValue firstOnRowB = KeyValue.createFirstOnRow(rowB);
+    final KeyValue kvB = new KeyValue(rowB, family, qualA, ts, Type.Put);
+
+    assertKVLess(c, firstOnRowA, firstOnRowB);
+    assertKVLess(c, firstOnRowA, kvA_1);
+    assertKVLess(c, firstOnRowA, kvA_2);
+    assertKVLess(c, kvA_1, kvA_2);
+    assertKVLess(c, kvA_2, firstOnRowB);
+    assertKVLess(c, kvA_1, firstOnRowB);
+
+    assertKVLess(c, lastOnRowA, firstOnRowB);
+    assertKVLess(c, firstOnRowB, kvB);
+    assertKVLess(c, lastOnRowA, kvB);
+
+    assertKVLess(c, kvA_2, lastOnRowA);
+    assertKVLess(c, kvA_1, lastOnRowA);
+    assertKVLess(c, firstOnRowA, lastOnRowA);
+  }
+
+  public void testConvertToKeyOnly() throws Exception {
+    long ts = 1;
+    byte [] value = Bytes.toBytes("a real value");
+    byte [] evalue = new byte[0]; // empty value
+
+    for (byte[] val : new byte[][]{value, evalue}) {
+      for (boolean useLen : new boolean[]{false,true}) {
+        KeyValue kv1 = new KeyValue(rowA, family, qualA, ts, val);
+        KeyValue kv1ko = kv1.clone();
+        assertTrue(kv1.equals(kv1ko));
+        kv1ko.convertToKeyOnly(useLen);
+        // keys are still the same
+        assertTrue(kv1.equals(kv1ko));
+        // but values are not
+        assertTrue(kv1ko.getValue().length == (useLen?Bytes.SIZEOF_INT:0));
+        if (useLen) {
+          assertEquals(kv1.getValueLength(), Bytes.toInt(kv1ko.getValue()));
+        }
+      }
+    }
+  }
+}
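
testMoreComparisons() and testBinaryKeys() above turn on choosing the comparator that matches the table whose keys are being ordered: KeyValue.COMPARATOR for user tables, MetaComparator for .META. rows, and RootComparator for -ROOT- rows. A short hedged sketch of that choice (the sets are illustrative):

    // Sketch only: comparator choice illustrated by the tests above.
    Set<KeyValue> userKeys = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
    Set<KeyValue> metaKeys = new TreeSet<KeyValue>(new KeyValue.MetaComparator());
    Set<KeyValue> rootKeys = new TreeSet<KeyValue>(new KeyValue.RootComparator());
    // Using the plain comparator on .META. keys (or MetaComparator on -ROOT- keys)
    // mis-sorts region names with binary start keys, which is what
    // testBinaryKeys() demonstrates with its deliberately wrong comparator.
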
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
new file mode 100644
index 0000000..7c97d94
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java
@@ -0,0 +1,247 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.util.Threads;
+
+/**
+ * Test whether region rebalancing works. (HBASE-71)
+ */
+public class TestRegionRebalancing extends HBaseClusterTestCase {
+  final Log LOG = LogFactory.getLog(this.getClass().getName());
+  HTable table;
+
+  HTableDescriptor desc;
+
+  final byte[] FIVE_HUNDRED_KBYTES;
+
+  final byte [] FAMILY_NAME = Bytes.toBytes("col");
+
+  /** constructor */
+  public TestRegionRebalancing() {
+    super(1);
+    FIVE_HUNDRED_KBYTES = new byte[500 * 1024];
+    for (int i = 0; i < 500 * 1024; i++) {
+      FIVE_HUNDRED_KBYTES[i] = 'x';
+    }
+
+    desc = new HTableDescriptor("test");
+    desc.addFamily(new HColumnDescriptor(FAMILY_NAME));
+  }
+
+  /**
+   * Before the hbase cluster starts up, create some dummy regions.
+   */
+  @Override
+  public void preHBaseClusterSetup() throws IOException {
+    // create a 20-region table by writing directly to disk
+    List<byte []> startKeys = new ArrayList<byte []>();
+    startKeys.add(null);
+    for (int i = 10; i < 29; i++) {
+      startKeys.add(Bytes.toBytes("row_" + i));
+    }
+    startKeys.add(null);
+    LOG.info(startKeys.size() + " start keys generated");
+
+    List<HRegion> regions = new ArrayList<HRegion>();
+    for (int i = 0; i < 20; i++) {
+      regions.add(createAregion(startKeys.get(i), startKeys.get(i+1)));
+    }
+
+    // Now create the root and meta regions and insert the data regions
+    // created above into the meta
+
+    createRootAndMetaRegions();
+    for (HRegion region : regions) {
+      HRegion.addRegionToMETA(meta, region);
+    }
+    closeRootAndMeta();
+  }
+
+  /**
+   * For HBASE-71. Try a few different configurations of starting and stopping
+   * region servers to see if the assignment of regions stays reasonably balanced.
+   * @throws IOException
+   */
+  public void testRebalancing() throws IOException {
+    table = new HTable(conf, "test");
+    assertEquals("Test table should have 20 regions",
+      20, table.getStartKeys().length);
+
+    // verify that the region assignments are balanced to start out
+    assertRegionsAreBalanced();
+
+    LOG.debug("Adding 2nd region server.");
+    // add a region server - total of 2
+    LOG.info("Started=" +
+      cluster.startRegionServer().getRegionServer().getServerName());
+    cluster.getMaster().balance();
+    assertRegionsAreBalanced();
+
+    // add a region server - total of 3
+    LOG.debug("Adding 3rd region server.");
+    LOG.info("Started=" +
+      cluster.startRegionServer().getRegionServer().getServerName());
+    cluster.getMaster().balance();
+    assertRegionsAreBalanced();
+
+    // kill a region server - total of 2
+    LOG.debug("Killing the 3rd region server.");
+    LOG.info("Stopped=" + cluster.stopRegionServer(2, false));
+    cluster.waitOnRegionServer(2);
+    cluster.getMaster().balance();
+    assertRegionsAreBalanced();
+
+    // start two more region servers - total of 4
+    LOG.debug("Adding 3rd region server");
+    LOG.info("Started=" +
+      cluster.startRegionServer().getRegionServer().getServerName());
+    LOG.debug("Adding 4th region server");
+    LOG.info("Started=" +
+      cluster.startRegionServer().getRegionServer().getServerName());
+    cluster.getMaster().balance();
+    assertRegionsAreBalanced();
+
+    for (int i = 0; i < 6; i++){
+      LOG.debug("Adding " + (i + 5) + "th region server");
+      cluster.startRegionServer();
+    }
+    cluster.getMaster().balance();
+    assertRegionsAreBalanced();
+  }
+
+  /** figure out how many regions are currently being served. */
+  private int getRegionCount() {
+    int total = 0;
+    for (HRegionServer server : getOnlineRegionServers()) {
+      total += server.getOnlineRegions().size();
+    }
+    return total;
+  }
+
+  /**
+   * Determine whether regions are balanced: take the average load reported by
+   * the master and check that each online server's region count falls within
+   * a small slop percentage of that average.
+   */
+  private void assertRegionsAreBalanced() {
+    // TODO: Fix this test.  Old balancer used to run with 'slop'.  New
+    // balancer does not.
+    boolean success = false;
+    float slop = (float)0.1;
+    if (slop <= 0) slop = 1;
+
+    for (int i = 0; i < 5; i++) {
+      success = true;
+      // make sure all the regions are reassigned before we test balance
+      waitForAllRegionsAssigned();
+
+      int regionCount = getRegionCount();
+      List<HRegionServer> servers = getOnlineRegionServers();
+      double avg = cluster.getMaster().getServerManager().getAverageLoad();
+      int avgLoadPlusSlop = (int)Math.ceil(avg * (1 + slop));
+      int avgLoadMinusSlop = (int)Math.floor(avg * (1 - slop)) - 1;
+      LOG.debug("There are " + servers.size() + " servers and " + regionCount
+        + " regions. Load Average: " + avg + " low border: " + avgLoadMinusSlop
+        + ", up border: " + avgLoadPlusSlop + "; attempt: " + i);
+
+      for (HRegionServer server : servers) {
+        int serverLoad = server.getOnlineRegions().size();
+        LOG.debug(server.getServerName() + " Avg: " + avg + " actual: " + serverLoad);
+        if (!(avg > 2.0 && serverLoad <= avgLoadPlusSlop
+            && serverLoad >= avgLoadMinusSlop)) {
+          LOG.debug(server.getServerName() + " Isn't balanced!!! Avg: " + avg +
+              " actual: " + serverLoad + " slop: " + slop);
+          success = false;
+        }
+      }
+
+      if (!success) {
+        // one or more servers are not balanced. sleep a little to give it a
+        // chance to catch up. then, go back to the retry loop.
+        try {
+          Thread.sleep(10000);
+        } catch (InterruptedException e) {}
+
+        cluster.getMaster().balance();
+        continue;
+      }
+
+      // if we get here, all servers were balanced, so we should just return.
+      return;
+    }
+    // if we get here, we tried 5 times and never got to short circuit out of
+    // the retry loop, so this is a failure.
+    fail("After 5 attempts, region assignments were not balanced.");
+  }
+
+  private List<HRegionServer> getOnlineRegionServers() {
+    List<HRegionServer> list = new ArrayList<HRegionServer>();
+    for (JVMClusterUtil.RegionServerThread rst : cluster.getRegionServerThreads()) {
+      if (rst.getRegionServer().isOnline()) {
+        list.add(rst.getRegionServer());
+      }
+    }
+    return list;
+  }
+
+  /**
+   * Wait until all the regions are assigned.
+   */
+  private void waitForAllRegionsAssigned() {
+    while (getRegionCount() < 22) {
+    // while (!cluster.getMaster().allRegionsAssigned()) {
+      LOG.debug("Waiting for there to be 22 regions, but there are " + getRegionCount() + " right now.");
+      try {
+        Thread.sleep(1000);
+      } catch (InterruptedException e) {}
+    }
+  }
+
+  /**
+   * create a region with the specified start and end key and exactly one row
+   * inside.
+   */
+  private HRegion createAregion(byte [] startKey, byte [] endKey)
+  throws IOException {
+    HRegion region = createNewHRegion(desc, startKey, endKey);
+    byte [] keyToWrite = startKey == null ? Bytes.toBytes("row_000") : startKey;
+    Put put = new Put(keyToWrite);
+    put.add(FAMILY_NAME, null, Bytes.toBytes("test"));
+    region.put(put);
+    region.close();
+    region.getLog().closeAndDelete();
+    return region;
+  }
+}
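A small arithmetic sketch (illustrative numbers, not part of the patch) of the slop bounds assertRegionsAreBalanced computes: with 22 regions on 4 servers the average load is 5.5, so a server counts as balanced while it serves between 3 and 7 regions.

    public class SlopBoundsExample {
      public static void main(String[] args) {
        double avg = 22 / 4.0;                               // 5.5 regions per server
        float slop = 0.1f;
        int upper = (int) Math.ceil(avg * (1 + slop));       // 7
        int lower = (int) Math.floor(avg * (1 - slop)) - 1;  // 3
        System.out.println("balanced while " + lower + " <= load <= " + upper);
      }
    }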
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestScanMultipleVersions.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestScanMultipleVersions.java
new file mode 100644
index 0000000..1f51703
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestScanMultipleVersions.java
@@ -0,0 +1,197 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Regression test for HBASE-613
+ */
+public class TestScanMultipleVersions extends HBaseClusterTestCase {
+  private final byte[] TABLE_NAME = Bytes.toBytes("TestScanMultipleVersions");
+  private final HRegionInfo[] INFOS = new HRegionInfo[2];
+  private final HRegion[] REGIONS = new HRegion[2];
+  private final byte[][] ROWS = new byte[][] {
+      Bytes.toBytes("row_0200"),
+      Bytes.toBytes("row_0800")
+  };
+  private final long[] TIMESTAMPS = new long[] {
+      100L,
+      1000L
+  };
+  private HTableDescriptor desc = null;
+
+  @Override
+  protected void preHBaseClusterSetup() throws Exception {
+    testDir = new Path(conf.get(HConstants.HBASE_DIR));
+
+    // Create table description
+
+    this.desc = new HTableDescriptor(TABLE_NAME);
+    this.desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+
+    // Region 0 will contain the key range [,row_0500)
+    INFOS[0] = new HRegionInfo(this.desc, HConstants.EMPTY_START_ROW,
+        Bytes.toBytes("row_0500"));
+    // Region 1 will contain the key range [row_0500,)
+    INFOS[1] = new HRegionInfo(this.desc, Bytes.toBytes("row_0500"),
+        HConstants.EMPTY_END_ROW);
+
+    // Create root and meta regions
+    createRootAndMetaRegions();
+    // Create the regions
+    for (int i = 0; i < REGIONS.length; i++) {
+      REGIONS[i] =
+        HRegion.createHRegion(this.INFOS[i], this.testDir, this.conf);
+      // Insert data
+      for (int j = 0; j < TIMESTAMPS.length; j++) {
+        Put put = new Put(ROWS[i], TIMESTAMPS[j], null);
+        put.add(HConstants.CATALOG_FAMILY, null, TIMESTAMPS[j],
+            Bytes.toBytes(TIMESTAMPS[j]));
+        REGIONS[i].put(put);
+      }
+      // Insert the region we created into the meta
+      HRegion.addRegionToMETA(meta, REGIONS[i]);
+      // Close region
+      REGIONS[i].close();
+      REGIONS[i].getLog().closeAndDelete();
+    }
+    // Close root and meta regions
+    closeRootAndMeta();
+  }
+
+  /**
+   * @throws Exception
+   */
+  public void testScanMultipleVersions() throws Exception {
+    // At this point we have created multiple regions and both HDFS and HBase
+    // are running. There are 5 cases we have to test. Each is described below.
+    HTable t = new HTable(conf, TABLE_NAME);
+    for (int i = 0; i < ROWS.length; i++) {
+      for (int j = 0; j < TIMESTAMPS.length; j++) {
+        Get get = new Get(ROWS[i]);
+        get.addFamily(HConstants.CATALOG_FAMILY);
+        get.setTimeStamp(TIMESTAMPS[j]);
+        Result result = t.get(get);
+        int cellCount = 0;
+        for(@SuppressWarnings("unused")KeyValue kv : result.sorted()) {
+          cellCount++;
+        }
+        assertTrue(cellCount == 1);
+      }
+    }
+
+    // Case 1: scan with LATEST_TIMESTAMP. Should get two rows
+    int count = 0;
+    Scan scan = new Scan();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    ResultScanner s = t.getScanner(scan);
+    try {
+      for (Result rr = null; (rr = s.next()) != null;) {
+        System.out.println(rr.toString());
+        count += 1;
+      }
+      assertEquals("Number of rows should be 2", 2, count);
+    } finally {
+      s.close();
+    }
+
+    // Case 2: Scan with a time range starting at the most recent timestamp
+    // (in this case [1000, LATEST_TIMESTAMP)). Should get 2 rows.
+
+    count = 0;
+    scan = new Scan();
+    scan.setTimeRange(1000L, Long.MAX_VALUE);
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+
+    s = t.getScanner(scan);
+    try {
+      while (s.next() != null) {
+        count += 1;
+      }
+      assertEquals("Number of rows should be 2", 2, count);
+    } finally {
+      s.close();
+    }
+
+    // Case 3: scan with timestamp equal to most recent timestamp
+    // (in this case == 1000). Should get 2 rows.
+
+    count = 0;
+    scan = new Scan();
+    scan.setTimeStamp(1000L);
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+
+    s = t.getScanner(scan);
+    try {
+      while (s.next() != null) {
+        count += 1;
+      }
+      assertEquals("Number of rows should be 2", 2, count);
+    } finally {
+      s.close();
+    }
+
+    // Case 4: scan with a time range covering the first timestamp but ending
+    // before the second one (100 <= timestamp < 1000). Should get 2 rows.
+
+    count = 0;
+    scan = new Scan();
+    scan.setTimeRange(100L, 1000L);
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+
+    s = t.getScanner(scan);
+    try {
+      while (s.next() != null) {
+        count += 1;
+      }
+      assertEquals("Number of rows should be 2", 2, count);
+    } finally {
+      s.close();
+    }
+
+    // Case 5: scan with timestamp equal to first timestamp (100)
+    // Should get 2 rows.
+
+    count = 0;
+    scan = new Scan();
+    scan.setTimeStamp(100L);
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+
+    s = t.getScanner(scan);
+    try {
+      while (s.next() != null) {
+        count += 1;
+      }
+      assertEquals("Number of rows should be 2", 2, count);
+    } finally {
+      s.close();
+    }
+  }
+}
\ No newline at end of file
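For the timestamp cases above, a short standalone sketch (not part of the patch) of the two selectors involved, assuming the usual TimeRange semantics of an inclusive lower bound and an exclusive upper bound; setTimeStamp(ts) narrows the scan to exactly one version.

    import org.apache.hadoop.hbase.client.Scan;

    public class ScanTimestampExample {
      public static void main(String[] args) throws Exception {
        Scan exact = new Scan();
        exact.setTimeStamp(1000L);          // only cells written at t == 1000
        Scan range = new Scan();
        range.setTimeRange(100L, 1000L);    // cells with 100 <= t < 1000
      }
    }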
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestSerialization.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestSerialization.java
new file mode 100644
index 0000000..befcdaf
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestSerialization.java
@@ -0,0 +1,586 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+
+import static org.junit.Assert.*;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableSet;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.RowLock;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.RowFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.io.HbaseMapWritable;
+import org.apache.hadoop.hbase.io.TimeRange;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.io.DataInputBuffer;
+import org.junit.Test;
+
+/**
+ * Test HBase Writables serializations
+ */
+public class TestSerialization {
+
+  @Test public void testCompareFilter() throws Exception {
+    Filter f = new RowFilter(CompareOp.EQUAL,
+      new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    byte [] bytes = Writables.getBytes(f);
+    Filter ff = (Filter)Writables.getWritable(bytes, new RowFilter());
+    assertNotNull(ff);
+  }
+
+  @Test public void testKeyValue() throws Exception {
+    final String name = "testKeyValue";
+    byte [] row = Bytes.toBytes(name);
+    byte [] family = Bytes.toBytes(name);
+    byte [] qualifier = Bytes.toBytes(name);
+    KeyValue original = new KeyValue(row, family, qualifier);
+    byte [] bytes = Writables.getBytes(original);
+    KeyValue newone = (KeyValue)Writables.getWritable(bytes, new KeyValue());
+    assertTrue(KeyValue.COMPARATOR.compare(original, newone) == 0);
+  }
+
+  @SuppressWarnings("unchecked")
+  @Test public void testHbaseMapWritable() throws Exception {
+    HbaseMapWritable<byte [], byte []> hmw =
+      new HbaseMapWritable<byte[], byte[]>();
+    hmw.put("key".getBytes(), "value".getBytes());
+    byte [] bytes = Writables.getBytes(hmw);
+    hmw = (HbaseMapWritable<byte[], byte[]>)
+      Writables.getWritable(bytes, new HbaseMapWritable<byte [], byte []>());
+    assertTrue(hmw.size() == 1);
+    assertTrue(Bytes.equals("value".getBytes(), hmw.get("key".getBytes())));
+  }
+
+  @Test public void testHMsg() throws Exception {
+    final String name = "testHMsg";
+    HMsg  m = new HMsg(HMsg.Type.STOP_REGIONSERVER);
+    byte [] mb = Writables.getBytes(m);
+    HMsg deserializedHMsg = (HMsg)Writables.getWritable(mb, new HMsg());
+    assertTrue(m.equals(deserializedHMsg));
+    m = new HMsg(HMsg.Type.STOP_REGIONSERVER,
+      new HRegionInfo(new HTableDescriptor(name),
+        HConstants.EMPTY_BYTE_ARRAY, HConstants.EMPTY_BYTE_ARRAY),
+        "Some message".getBytes());
+    mb = Writables.getBytes(m);
+    deserializedHMsg = (HMsg)Writables.getWritable(mb, new HMsg());
+    assertTrue(m.equals(deserializedHMsg));
+  }
+
+  @Test public void testTableDescriptor() throws Exception {
+    final String name = "testTableDescriptor";
+    HTableDescriptor htd = createTableDescriptor(name);
+    byte [] mb = Writables.getBytes(htd);
+    HTableDescriptor deserializedHtd =
+      (HTableDescriptor)Writables.getWritable(mb, new HTableDescriptor());
+    assertEquals(htd.getNameAsString(), deserializedHtd.getNameAsString());
+  }
+
+  /**
+   * Test RegionInfo serialization
+   * @throws Exception
+   */
+  @Test public void testRegionInfo() throws Exception {
+    final String name = "testRegionInfo";
+    HTableDescriptor htd = new HTableDescriptor(name);
+    String [] families = new String [] {"info", "anchor"};
+    for (int i = 0; i < families.length; i++) {
+      htd.addFamily(new HColumnDescriptor(families[i]));
+    }
+    HRegionInfo hri = new HRegionInfo(htd,
+      HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+    byte [] hrib = Writables.getBytes(hri);
+    HRegionInfo deserializedHri =
+      (HRegionInfo)Writables.getWritable(hrib, new HRegionInfo());
+    assertEquals(hri.getEncodedName(), deserializedHri.getEncodedName());
+    assertEquals(hri.getTableDesc().getFamilies().size(),
+      deserializedHri.getTableDesc().getFamilies().size());
+  }
+
+  /**
+   * Test ServerInfo serialization
+   * @throws Exception
+   */
+  @Test public void testServerInfo() throws Exception {
+    HServerInfo hsi = new HServerInfo(new HServerAddress("0.0.0.0:123"), -1,
+      1245, "default name");
+    byte [] b = Writables.getBytes(hsi);
+    HServerInfo deserializedHsi =
+      (HServerInfo)Writables.getWritable(b, new HServerInfo());
+    assertTrue(hsi.equals(deserializedHsi));
+  }
+
+  @Test public void testPut() throws Exception{
+    byte[] row = "row".getBytes();
+    byte[] fam = "fam".getBytes();
+    byte[] qf1 = "qf1".getBytes();
+    byte[] qf2 = "qf2".getBytes();
+    byte[] qf3 = "qf3".getBytes();
+    byte[] qf4 = "qf4".getBytes();
+    byte[] qf5 = "qf5".getBytes();
+    byte[] qf6 = "qf6".getBytes();
+    byte[] qf7 = "qf7".getBytes();
+    byte[] qf8 = "qf8".getBytes();
+
+    long ts = System.currentTimeMillis();
+    byte[] val = "val".getBytes();
+
+    Put put = new Put(row);
+    put.add(fam, qf1, ts, val);
+    put.add(fam, qf2, ts, val);
+    put.add(fam, qf3, ts, val);
+    put.add(fam, qf4, ts, val);
+    put.add(fam, qf5, ts, val);
+    put.add(fam, qf6, ts, val);
+    put.add(fam, qf7, ts, val);
+    put.add(fam, qf8, ts, val);
+
+    byte[] sb = Writables.getBytes(put);
+    Put desPut = (Put)Writables.getWritable(sb, new Put());
+
+    //Timing test
+//    long start = System.nanoTime();
+//    desPut = (Put)Writables.getWritable(sb, new Put());
+//    long stop = System.nanoTime();
+//    System.out.println("timer " +(stop-start));
+
+    assertTrue(Bytes.equals(put.getRow(), desPut.getRow()));
+    List<KeyValue> list = null;
+    List<KeyValue> desList = null;
+    for(Map.Entry<byte[], List<KeyValue>> entry : put.getFamilyMap().entrySet()){
+      assertTrue(desPut.getFamilyMap().containsKey(entry.getKey()));
+      list = entry.getValue();
+      desList = desPut.getFamilyMap().get(entry.getKey());
+      for(int i=0; i<list.size(); i++){
+        assertTrue(list.get(i).equals(desList.get(i)));
+      }
+    }
+  }
+
+
+  @Test public void testPut2() throws Exception{
+    byte[] row = "testAbort,,1243116656250".getBytes();
+    byte[] fam = "historian".getBytes();
+    byte[] qf1 = "creation".getBytes();
+
+    long ts = 9223372036854775807L;
+    byte[] val = "dont-care".getBytes();
+
+    Put put = new Put(row);
+    put.add(fam, qf1, ts, val);
+
+    byte[] sb = Writables.getBytes(put);
+    Put desPut = (Put)Writables.getWritable(sb, new Put());
+
+    assertTrue(Bytes.equals(put.getRow(), desPut.getRow()));
+    List<KeyValue> list = null;
+    List<KeyValue> desList = null;
+    for(Map.Entry<byte[], List<KeyValue>> entry : put.getFamilyMap().entrySet()){
+      assertTrue(desPut.getFamilyMap().containsKey(entry.getKey()));
+      list = entry.getValue();
+      desList = desPut.getFamilyMap().get(entry.getKey());
+      for(int i=0; i<list.size(); i++){
+        assertTrue(list.get(i).equals(desList.get(i)));
+      }
+    }
+  }
+
+
+  @Test public void testDelete() throws Exception{
+    byte[] row = "row".getBytes();
+    byte[] fam = "fam".getBytes();
+    byte[] qf1 = "qf1".getBytes();
+
+    long ts = System.currentTimeMillis();
+
+    Delete delete = new Delete(row);
+    delete.deleteColumn(fam, qf1, ts);
+
+    byte[] sb = Writables.getBytes(delete);
+    Delete desDelete = (Delete)Writables.getWritable(sb, new Delete());
+
+    assertTrue(Bytes.equals(delete.getRow(), desDelete.getRow()));
+    List<KeyValue> list = null;
+    List<KeyValue> desList = null;
+    for(Map.Entry<byte[], List<KeyValue>> entry :
+        delete.getFamilyMap().entrySet()){
+      assertTrue(desDelete.getFamilyMap().containsKey(entry.getKey()));
+      list = entry.getValue();
+      desList = desDelete.getFamilyMap().get(entry.getKey());
+      for(int i=0; i<list.size(); i++){
+        assertTrue(list.get(i).equals(desList.get(i)));
+      }
+    }
+  }
+
+  @Test public void testGet() throws Exception{
+    byte[] row = "row".getBytes();
+    byte[] fam = "fam".getBytes();
+    byte[] qf1 = "qf1".getBytes();
+
+    long ts = System.currentTimeMillis();
+    int maxVersions = 2;
+    long lockid = 5;
+    RowLock rowLock = new RowLock(lockid);
+
+    Get get = new Get(row, rowLock);
+    get.addColumn(fam, qf1);
+    get.setTimeRange(ts, ts+1);
+    get.setMaxVersions(maxVersions);
+
+    byte[] sb = Writables.getBytes(get);
+    Get desGet = (Get)Writables.getWritable(sb, new Get());
+
+    assertTrue(Bytes.equals(get.getRow(), desGet.getRow()));
+    Set<byte[]> set = null;
+    Set<byte[]> desSet = null;
+
+    for(Map.Entry<byte[], NavigableSet<byte[]>> entry :
+        get.getFamilyMap().entrySet()){
+      assertTrue(desGet.getFamilyMap().containsKey(entry.getKey()));
+      set = entry.getValue();
+      desSet = desGet.getFamilyMap().get(entry.getKey());
+      for(byte [] qualifier : set){
+        assertTrue(desSet.contains(qualifier));
+      }
+    }
+
+    assertEquals(get.getLockId(), desGet.getLockId());
+    assertEquals(get.getMaxVersions(), desGet.getMaxVersions());
+    TimeRange tr = get.getTimeRange();
+    TimeRange desTr = desGet.getTimeRange();
+    assertEquals(tr.getMax(), desTr.getMax());
+    assertEquals(tr.getMin(), desTr.getMin());
+  }
+
+
+  @Test public void testScan() throws Exception {
+    
+    byte[] startRow = "startRow".getBytes();
+    byte[] stopRow  = "stopRow".getBytes();
+    byte[] fam = "fam".getBytes();
+    byte[] qf1 = "qf1".getBytes();
+
+    long ts = System.currentTimeMillis();
+    int maxVersions = 2;
+
+    Scan scan = new Scan(startRow, stopRow);
+    scan.addColumn(fam, qf1);
+    scan.setTimeRange(ts, ts+1);
+    scan.setMaxVersions(maxVersions);
+
+    byte[] sb = Writables.getBytes(scan);
+    Scan desScan = (Scan)Writables.getWritable(sb, new Scan());
+
+    assertTrue(Bytes.equals(scan.getStartRow(), desScan.getStartRow()));
+    assertTrue(Bytes.equals(scan.getStopRow(), desScan.getStopRow()));
+    assertEquals(scan.getCacheBlocks(), desScan.getCacheBlocks());
+    Set<byte[]> set = null;
+    Set<byte[]> desSet = null;
+
+    for(Map.Entry<byte[], NavigableSet<byte[]>> entry :
+        scan.getFamilyMap().entrySet()){
+      assertTrue(desScan.getFamilyMap().containsKey(entry.getKey()));
+      set = entry.getValue();
+      desSet = desScan.getFamilyMap().get(entry.getKey());
+      for(byte[] column : set){
+        assertTrue(desSet.contains(column));
+      }
+
+      // Test filters are serialized properly.
+      scan = new Scan(startRow);
+      final String name = "testScan";
+      byte [] prefix = Bytes.toBytes(name);
+      scan.setFilter(new PrefixFilter(prefix));
+      sb = Writables.getBytes(scan);
+      desScan = (Scan)Writables.getWritable(sb, new Scan());
+      Filter f = desScan.getFilter();
+      assertTrue(f instanceof PrefixFilter);
+    }
+
+    assertEquals(scan.getMaxVersions(), desScan.getMaxVersions());
+    TimeRange tr = scan.getTimeRange();
+    TimeRange desTr = desScan.getTimeRange();
+    assertEquals(tr.getMax(), desTr.getMax());
+    assertEquals(tr.getMin(), desTr.getMin());
+  }
+
+  @Test public void testResultEmpty() throws Exception {
+    List<KeyValue> keys = new ArrayList<KeyValue>();
+    Result r = new Result(keys);
+    assertTrue(r.isEmpty());
+    byte [] rb = Writables.getBytes(r);
+    Result deserializedR = (Result)Writables.getWritable(rb, new Result());
+    assertTrue(deserializedR.isEmpty());
+  }
+
+
+  @Test public void testResult() throws Exception {
+    byte [] rowA = Bytes.toBytes("rowA");
+    byte [] famA = Bytes.toBytes("famA");
+    byte [] qfA = Bytes.toBytes("qfA");
+    byte [] valueA = Bytes.toBytes("valueA");
+
+    byte [] rowB = Bytes.toBytes("rowB");
+    byte [] famB = Bytes.toBytes("famB");
+    byte [] qfB = Bytes.toBytes("qfB");
+    byte [] valueB = Bytes.toBytes("valueB");
+
+    KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA);
+    KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB);
+
+    Result result = new Result(new KeyValue[]{kvA, kvB});
+
+    byte [] rb = Writables.getBytes(result);
+    Result deResult = (Result)Writables.getWritable(rb, new Result());
+
+    assertTrue("results are not equivalent, first key mismatch",
+        result.sorted()[0].equals(deResult.sorted()[0]));
+
+    assertTrue("results are not equivalent, second key mismatch",
+        result.sorted()[1].equals(deResult.sorted()[1]));
+
+    // Test empty Result
+    Result r = new Result();
+    byte [] b = Writables.getBytes(r);
+    Result deserialized = (Result)Writables.getWritable(b, new Result());
+    assertEquals(r.size(), deserialized.size());
+  }
+
+  @Test public void testResultDynamicBuild() throws Exception {
+    byte [] rowA = Bytes.toBytes("rowA");
+    byte [] famA = Bytes.toBytes("famA");
+    byte [] qfA = Bytes.toBytes("qfA");
+    byte [] valueA = Bytes.toBytes("valueA");
+
+    byte [] rowB = Bytes.toBytes("rowB");
+    byte [] famB = Bytes.toBytes("famB");
+    byte [] qfB = Bytes.toBytes("qfB");
+    byte [] valueB = Bytes.toBytes("valueB");
+
+    KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA);
+    KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB);
+
+    Result result = new Result(new KeyValue[]{kvA, kvB});
+
+    byte [] rb = Writables.getBytes(result);
+
+
+    // Call getRow() first
+    Result deResult = (Result)Writables.getWritable(rb, new Result());
+    byte [] row = deResult.getRow();
+    assertTrue(Bytes.equals(row, rowA));
+
+    // Call sorted() first
+    deResult = (Result)Writables.getWritable(rb, new Result());
+    assertTrue("results are not equivalent, first key mismatch",
+        result.sorted()[0].equals(deResult.sorted()[0]));
+    assertTrue("results are not equivalent, second key mismatch",
+        result.sorted()[1].equals(deResult.sorted()[1]));
+
+    // Call raw() first
+    deResult = (Result)Writables.getWritable(rb, new Result());
+    assertTrue("results are not equivalent, first key mismatch",
+        result.raw()[0].equals(deResult.raw()[0]));
+    assertTrue("results are not equivalent, second key mismatch",
+        result.raw()[1].equals(deResult.raw()[1]));
+
+
+  }
+
+  @Test public void testResultArray() throws Exception {
+    byte [] rowA = Bytes.toBytes("rowA");
+    byte [] famA = Bytes.toBytes("famA");
+    byte [] qfA = Bytes.toBytes("qfA");
+    byte [] valueA = Bytes.toBytes("valueA");
+
+    byte [] rowB = Bytes.toBytes("rowB");
+    byte [] famB = Bytes.toBytes("famB");
+    byte [] qfB = Bytes.toBytes("qfB");
+    byte [] valueB = Bytes.toBytes("valueB");
+
+    KeyValue kvA = new KeyValue(rowA, famA, qfA, valueA);
+    KeyValue kvB = new KeyValue(rowB, famB, qfB, valueB);
+
+
+    Result result1 = new Result(new KeyValue[]{kvA, kvB});
+    Result result2 = new Result(new KeyValue[]{kvB});
+    Result result3 = new Result(new KeyValue[]{kvB});
+
+    Result [] results = new Result [] {result1, result2, result3};
+
+    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(byteStream);
+    Result.writeArray(out, results);
+
+    byte [] rb = byteStream.toByteArray();
+
+    DataInputBuffer in = new DataInputBuffer();
+    in.reset(rb, 0, rb.length);
+
+    Result [] deResults = Result.readArray(in);
+
+    assertTrue(results.length == deResults.length);
+
+    for(int i=0;i<results.length;i++) {
+      KeyValue [] keysA = results[i].sorted();
+      KeyValue [] keysB = deResults[i].sorted();
+      assertTrue(keysA.length == keysB.length);
+      for(int j=0;j<keysA.length;j++) {
+        assertTrue("Expected equivalent keys but found:\n" +
+            "KeyA : " + keysA[j].toString() + "\n" +
+            "KeyB : " + keysB[j].toString() + "\n" +
+            keysA.length + " total keys, " + i + "th so far"
+            ,keysA[j].equals(keysB[j]));
+      }
+    }
+
+  }
+
+  @Test public void testResultArrayEmpty() throws Exception {
+    List<KeyValue> keys = new ArrayList<KeyValue>();
+    Result r = new Result(keys);
+    Result [] results = new Result [] {r};
+
+    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(byteStream);
+
+    Result.writeArray(out, results);
+
+    results = null;
+
+    byteStream = new ByteArrayOutputStream();
+    out = new DataOutputStream(byteStream);
+    Result.writeArray(out, results);
+
+    byte [] rb = byteStream.toByteArray();
+
+    DataInputBuffer in = new DataInputBuffer();
+    in.reset(rb, 0, rb.length);
+
+    Result [] deResults = Result.readArray(in);
+
+    assertTrue(deResults.length == 0);
+
+    results = new Result[0];
+
+    byteStream = new ByteArrayOutputStream();
+    out = new DataOutputStream(byteStream);
+    Result.writeArray(out, results);
+
+    rb = byteStream.toByteArray();
+
+    in = new DataInputBuffer();
+    in.reset(rb, 0, rb.length);
+
+    deResults = Result.readArray(in);
+
+    assertTrue(deResults.length == 0);
+
+  }
+
+  @Test public void testTimeRange() throws Exception{
+    TimeRange tr = new TimeRange(0,5);
+    byte [] mb = Writables.getBytes(tr);
+    TimeRange deserializedTr =
+      (TimeRange)Writables.getWritable(mb, new TimeRange());
+
+    assertEquals(tr.getMax(), deserializedTr.getMax());
+    assertEquals(tr.getMin(), deserializedTr.getMin());
+
+  }
+
+  @Test public void testKeyValue2() throws Exception {
+    final String name = "testKeyValue2";
+    byte[] row = name.getBytes();
+    byte[] fam = "fam".getBytes();
+    byte[] qf = "qf".getBytes();
+    long ts = System.currentTimeMillis();
+    byte[] val = "val".getBytes();
+
+    KeyValue kv = new KeyValue(row, fam, qf, ts, val);
+
+    byte [] mb = Writables.getBytes(kv);
+    KeyValue deserializedKv =
+      (KeyValue)Writables.getWritable(mb, new KeyValue());
+    assertTrue(Bytes.equals(kv.getBuffer(), deserializedKv.getBuffer()));
+    assertEquals(kv.getOffset(), deserializedKv.getOffset());
+    assertEquals(kv.getLength(), deserializedKv.getLength());
+  }
+
+  protected static final int MAXVERSIONS = 3;
+  protected final static byte [] fam1 = Bytes.toBytes("colfamily1");
+  protected final static byte [] fam2 = Bytes.toBytes("colfamily2");
+  protected final static byte [] fam3 = Bytes.toBytes("colfamily3");
+  protected static final byte [][] COLUMNS = {fam1, fam2, fam3};
+
+  /**
+   * Create a table of name <code>name</code> with {@link COLUMNS} for
+   * families.
+   * @param name Name to give table.
+   * @return Table descriptor.
+   */
+  protected HTableDescriptor createTableDescriptor(final String name) {
+    return createTableDescriptor(name, MAXVERSIONS);
+  }
+
+  /**
+   * Create a table of name <code>name</code> with {@link COLUMNS} for
+   * families.
+   * @param name Name to give table.
+   * @param versions How many versions to allow per column.
+   * @return Table descriptor.
+   */
+  protected HTableDescriptor createTableDescriptor(final String name,
+      final int versions) {
+    HTableDescriptor htd = new HTableDescriptor(name);
+    htd.addFamily(new HColumnDescriptor(fam1, versions,
+      HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+      Integer.MAX_VALUE, HConstants.FOREVER,
+      HColumnDescriptor.DEFAULT_BLOOMFILTER,
+      HConstants.REPLICATION_SCOPE_LOCAL));
+    htd.addFamily(new HColumnDescriptor(fam2, versions,
+        HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+        Integer.MAX_VALUE, HConstants.FOREVER,
+        HColumnDescriptor.DEFAULT_BLOOMFILTER,
+        HConstants.REPLICATION_SCOPE_LOCAL));
+    htd.addFamily(new HColumnDescriptor(fam3, versions,
+        HColumnDescriptor.DEFAULT_COMPRESSION, false, false,
+        Integer.MAX_VALUE,  HConstants.FOREVER,
+        HColumnDescriptor.DEFAULT_BLOOMFILTER,
+        HConstants.REPLICATION_SCOPE_LOCAL));
+    return htd;
+  }
+}
\ No newline at end of file
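All of the serialization tests above follow the same Writables round-trip pattern; a minimal standalone sketch (the wrapper class name is illustrative only):

    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.Writables;

    public class WritableRoundTrip {
      public static void main(String[] args) throws Exception {
        Get get = new Get(Bytes.toBytes("row"));
        get.addColumn(Bytes.toBytes("fam"), Bytes.toBytes("qf1"));
        byte[] serialized = Writables.getBytes(get);                    // Writable -> bytes
        Get copy = (Get) Writables.getWritable(serialized, new Get());  // bytes -> fresh instance
        System.out.println(Bytes.equals(get.getRow(), copy.getRow()));  // true
      }
    }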
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java b/0.90/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
new file mode 100644
index 0000000..5852ab7
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java
@@ -0,0 +1,237 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZKConfig;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.ZooKeeper.States;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestZooKeeper {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+
+  private final static HBaseTestingUtility
+      TEST_UTIL = new HBaseTestingUtility();
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    // Test we can first start the ZK cluster by itself
+    TEST_UTIL.startMiniZKCluster();
+    TEST_UTIL.getConfiguration().setBoolean("dfs.support.append", true);
+    TEST_UTIL.startMiniCluster(2);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    TEST_UTIL.ensureSomeRegionServersAvailable(2);
+  }
+
+  /**
+   * See HBASE-1232 and http://wiki.apache.org/hadoop/ZooKeeper/FAQ#4.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testClientSessionExpired()
+  throws IOException, InterruptedException {
+    LOG.info("testClientSessionExpired");
+    Configuration c = new Configuration(TEST_UTIL.getConfiguration());
+    new HTable(c, HConstants.META_TABLE_NAME);
+    String quorumServers = ZKConfig.getZKQuorumServersString(c);
+    int sessionTimeout = 5 * 1000; // 5 seconds
+    HConnection connection = HConnectionManager.getConnection(c);
+    ZooKeeperWatcher connectionZK = connection.getZooKeeperWatcher();
+    long sessionID = connectionZK.getZooKeeper().getSessionId();
+    byte[] password = connectionZK.getZooKeeper().getSessionPasswd();
+    ZooKeeper zk = new ZooKeeper(quorumServers, sessionTimeout,
+        EmptyWatcher.instance, sessionID, password);
+    zk.close();
+
+    Thread.sleep(sessionTimeout * 3L);
+
+    // provoke session expiration by doing something with ZK
+    ZKUtil.dump(connectionZK);
+
+    // Check that the old ZK connection is closed, which means the session did expire
+    System.err.println("ZooKeeper should have timed out");
+    LOG.info("state=" + connectionZK.getZooKeeper().getState());
+    Assert.assertTrue(connectionZK.getZooKeeper().getState().equals(
+        States.CLOSED));
+
+    // Check that the client recovered
+    ZooKeeperWatcher newConnectionZK = connection.getZooKeeperWatcher();
+    LOG.info("state=" + newConnectionZK.getZooKeeper().getState());
+    Assert.assertTrue(newConnectionZK.getZooKeeper().getState().equals(
+        States.CONNECTED));
+  }
+  
+  @Test
+  public void testRegionServerSessionExpired() throws Exception {
+    LOG.info("Starting testRegionServerSessionExpired");
+    int metaIndex = TEST_UTIL.getMiniHBaseCluster().getServerWithMeta();
+    TEST_UTIL.expireRegionServerSession(metaIndex);
+    testSanity();
+  }
+
+  //@Test
+  public void disabledTestMasterSessionExpired() throws Exception {
+    LOG.info("Starting testMasterSessionExpired");
+    TEST_UTIL.expireMasterSession();
+    testSanity();
+  }
+
+  /**
+   * Make sure we can use the cluster
+   * @throws Exception
+   */
+  public void testSanity() throws Exception{
+    HBaseAdmin admin =
+      new HBaseAdmin(new Configuration(TEST_UTIL.getConfiguration()));
+    String tableName = "test"+System.currentTimeMillis();
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    HColumnDescriptor family = new HColumnDescriptor("fam");
+    desc.addFamily(family);
+    LOG.info("Creating table " + tableName);
+    admin.createTable(desc);
+
+    HTable table =
+      new HTable(new Configuration(TEST_UTIL.getConfiguration()), tableName);
+    Put put = new Put(Bytes.toBytes("testrow"));
+    put.add(Bytes.toBytes("fam"),
+        Bytes.toBytes("col"), Bytes.toBytes("testdata"));
+    LOG.info("Putting table " + tableName);
+    table.put(put);
+
+  }
+
+  @Test
+  public void testMultipleZK() {
+    try {
+      HTable localMeta =
+        new HTable(new Configuration(TEST_UTIL.getConfiguration()), HConstants.META_TABLE_NAME);
+      Configuration otherConf = new Configuration(TEST_UTIL.getConfiguration());
+      otherConf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1");
+      HTable ipMeta = new HTable(otherConf, HConstants.META_TABLE_NAME);
+
+      // dummy, just to open the connection
+      localMeta.exists(new Get(HConstants.LAST_ROW));
+      ipMeta.exists(new Get(HConstants.LAST_ROW));
+
+      // make sure they aren't the same
+      assertFalse(HConnectionManager.getConnection(localMeta.getConfiguration()).getZooKeeperWatcher()
+          == HConnectionManager.getConnection(otherConf).getZooKeeperWatcher());
+      assertFalse(HConnectionManager.getConnection(localMeta.getConfiguration())
+          .getZooKeeperWatcher().getQuorum().equals(HConnectionManager
+              .getConnection(otherConf).getZooKeeperWatcher().getQuorum()));
+    } catch (Exception e) {
+      e.printStackTrace();
+      fail();
+    }
+  }
+
+  /**
+   * Create a bunch of znodes in a hierarchy, try deleting one that has children
+   * (it will fail), then delete it recursively, then delete the last znode
+   * @throws Exception
+   */
+  @Test
+  public void testZNodeDeletes() throws Exception {
+    ZooKeeperWatcher zkw = new ZooKeeperWatcher(
+      new Configuration(TEST_UTIL.getConfiguration()), 
+      TestZooKeeper.class.getName(), null);
+    ZKUtil.createWithParents(zkw, "/l1/l2/l3/l4");
+    try {
+      ZKUtil.deleteNode(zkw, "/l1/l2");
+      fail("We should not be able to delete if znode has childs");
+    } catch (KeeperException ex) {
+      assertNotNull(ZKUtil.getDataNoWatch(zkw, "/l1/l2/l3/l4", null));
+    }
+    ZKUtil.deleteNodeRecursively(zkw, "/l1/l2");
+    assertNull(ZKUtil.getDataNoWatch(zkw, "/l1/l2/l3/l4", null));
+    ZKUtil.deleteNode(zkw, "/l1");
+    assertNull(ZKUtil.getDataNoWatch(zkw, "/l1/l2", null));
+  }
+
+  @Test
+  public void testClusterKey() throws Exception {
+    testKey("server", "2181", "hbase");
+    testKey("server1,server2,server3", "2181", "hbase");
+    try {
+      ZKUtil.transformClusterKey("2181:hbase");
+      fail("A malformed cluster key should not parse");
+    } catch (IOException ex) {
+      // OK, expected for a malformed key
+    }
+  }
+
+  private void testKey(String ensemble, String port, String znode)
+      throws IOException {
+    Configuration conf = new Configuration();
+    String key = ensemble+":"+port+":"+znode;
+    String[] parts = ZKUtil.transformClusterKey(key);
+    assertEquals(ensemble, parts[0]);
+    assertEquals(port, parts[1]);
+    assertEquals(znode, parts[2]);
+    ZKUtil.applyClusterKeyToConf(conf, key);
+    assertEquals(parts[0], conf.get(HConstants.ZOOKEEPER_QUORUM));
+    assertEquals(parts[1], conf.get("hbase.zookeeper.property.clientPort"));
+    assertEquals(parts[2], conf.get(HConstants.ZOOKEEPER_ZNODE_PARENT));
+    String reconstructedKey = ZKUtil.getZooKeeperClusterKey(conf);
+    assertEquals(key, reconstructedKey);
+  }
+}
\ No newline at end of file
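A short standalone sketch (not part of the patch) of the cluster key format that testClusterKey and testKey exercise, "<quorum hosts>:<client port>:<znode parent>":

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.zookeeper.ZKUtil;

    public class ClusterKeyExample {
      public static void main(String[] args) throws IOException {
        String key = "server1,server2,server3:2181:hbase";
        String[] parts = ZKUtil.transformClusterKey(key);
        // parts[0] = quorum hosts, parts[1] = client port, parts[2] = znode parent
        Configuration conf = new Configuration();
        ZKUtil.applyClusterKeyToConf(conf, key);                  // sets quorum, port and parent znode
        System.out.println(ZKUtil.getZooKeeperClusterKey(conf));  // reconstructs the key
      }
    }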
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/TimestampTestBase.java b/0.90/src/test/java/org/apache/hadoop/hbase/TimestampTestBase.java
new file mode 100644
index 0000000..1105509
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/TimestampTestBase.java
@@ -0,0 +1,275 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests puts, gets and scans with user-specified timestamps, and the same
+ * operations in the presence of deletes.  The test cores are written so they
+ * can be run against either an HRegion or an HTable: i.e. both locally and remotely.
+ */
+public class TimestampTestBase extends HBaseTestCase {
+  private static final long T0 = 10L;
+  private static final long T1 = 100L;
+  private static final long T2 = 200L;
+
+  private static final byte [] FAMILY_NAME = Bytes.toBytes("colfamily1");
+  private static final byte [] QUALIFIER_NAME = Bytes.toBytes("contents");
+
+  private static final byte [] ROW = Bytes.toBytes("row");
+
+  /*
+   * Run test that delete works according to description in <a
+   * href="https://issues.apache.org/jira/browse/HADOOP-1784">hadoop-1784</a>.
+   * @param incommon
+   * @param flusher
+   * @throws IOException
+   */
+  public static void doTestDelete(final Incommon incommon, FlushCache flusher)
+  throws IOException {
+    // Add values at various timestamps (the values are the timestamps themselves, as bytes).
+    put(incommon, T0);
+    put(incommon, T1);
+    put(incommon, T2);
+    put(incommon);
+    // Verify that returned versions match passed timestamps.
+    assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T2, T1});
+
+    // If I delete w/o specifying a timestamp, this means I'm deleting the
+    // latest.
+    delete(incommon);
+    // Verify that I get back T2 through T0 -- that the latest version has
+    // been deleted.
+    assertVersions(incommon, new long [] {T2, T1, T0});
+
+    // Flush everything out to disk and then retry
+    flusher.flushcache();
+    assertVersions(incommon, new long [] {T2, T1, T0});
+
+    // Now add back a latest version so I can test removing a version other than the latest.
+    put(incommon);
+    assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T2, T1});
+    delete(incommon, T2);
+    assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T1, T0});
+    // Flush everything out to disk and then retry
+    flusher.flushcache();
+    assertVersions(incommon, new long [] {HConstants.LATEST_TIMESTAMP, T1, T0});
+
+    // Now try deleting all versions from T2 back, inclusive.  (We first need
+    // to add T2 back into the mix and, to make things a little interesting,
+    // delete and then re-add T1.)
+    put(incommon, T2);
+    delete(incommon, T1);
+    put(incommon, T1);
+
+    Delete delete = new Delete(ROW);
+    delete.deleteColumns(FAMILY_NAME, QUALIFIER_NAME, T2);
+    incommon.delete(delete, null, true);
+
+    // Should only be current value in set.  Assert this is so
+    assertOnlyLatest(incommon, HConstants.LATEST_TIMESTAMP);
+
+    // Flush everything out to disk and then redo above tests
+    flusher.flushcache();
+    assertOnlyLatest(incommon, HConstants.LATEST_TIMESTAMP);
+  }
+
+  private static void assertOnlyLatest(final Incommon incommon,
+    final long currentTime)
+  throws IOException {
+    Get get = null;
+    get = new Get(ROW);
+    get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+    get.setMaxVersions(3);
+    Result result = incommon.get(get);
+    assertEquals(1, result.size());
+    long time = Bytes.toLong(result.sorted()[0].getValue());
+    assertEquals(time, currentTime);
+  }
+
+  /*
+   * Assert that returned versions match passed in timestamps and that results
+   * are returned in the right order.  Assert that values when converted to
+   * longs match the corresponding passed timestamp.
+   * @param r
+   * @param tss
+   * @throws IOException
+   */
+  public static void assertVersions(final Incommon incommon, final long [] tss)
+  throws IOException {
+    // Assert that 'latest' is what we expect.
+    Get get = null;
+    get = new Get(ROW);
+    get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+    Result r = incommon.get(get);
+    byte [] bytes = r.getValue(FAMILY_NAME, QUALIFIER_NAME);
+    long t = Bytes.toLong(bytes);
+    assertEquals(tss[0], t);
+
+    // Now assert that if we ask for multiple versions, they come back in
+    // order.
+    get = new Get(ROW);
+    get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+    get.setMaxVersions(tss.length);
+    Result result = incommon.get(get);
+    KeyValue [] kvs = result.sorted();
+    assertEquals(kvs.length, tss.length);
+    for(int i=0;i<kvs.length;i++) {
+      t = Bytes.toLong(kvs[i].getValue());
+      assertEquals(tss[i], t);
+    }
+
+    // Determine highest stamp to set as next max stamp
+    long maxStamp = kvs[0].getTimestamp();
+
+    // Specify a time range and fetch multiple versions.
+    get = new Get(ROW);
+    get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+    get.setTimeRange(0, maxStamp);
+    get.setMaxVersions(kvs.length - 1);
+    result = incommon.get(get);
+    kvs = result.sorted();
+    assertEquals(kvs.length, tss.length - 1);
+    for(int i=1;i<kvs.length;i++) {
+      t = Bytes.toLong(kvs[i-1].getValue());
+      assertEquals(tss[i], t);
+    }
+
+    // Test scanner returns expected version
+    assertScanContentTimestamp(incommon, tss[0]);
+  }
+
+  /*
+   * Run test scanning different timestamps.
+   * @param incommon
+   * @param flusher
+   * @throws IOException
+   */
+  public static void doTestTimestampScanning(final Incommon incommon,
+    final FlushCache flusher)
+  throws IOException {
+    // Add a couple of values for three different timestamps.
+    put(incommon, T0);
+    put(incommon, T1);
+    put(incommon, HConstants.LATEST_TIMESTAMP);
+    // Get count of latest items.
+    int count = assertScanContentTimestamp(incommon,
+      HConstants.LATEST_TIMESTAMP);
+    // Assert I get same count when I scan at each timestamp.
+    assertEquals(count, assertScanContentTimestamp(incommon, T0));
+    assertEquals(count, assertScanContentTimestamp(incommon, T1));
+    // Flush everything out to disk and then retry
+    flusher.flushcache();
+    assertEquals(count, assertScanContentTimestamp(incommon, T0));
+    assertEquals(count, assertScanContentTimestamp(incommon, T1));
+  }
+
+  /*
+   * Assert that the scan returns only values < timestamp.
+   * @param r
+   * @param ts
+   * @return Count of items scanned.
+   * @throws IOException
+   */
+  public static int assertScanContentTimestamp(final Incommon in, final long ts)
+  throws IOException {
+    ScannerIncommon scanner =
+      in.getScanner(COLUMNS[0], null, HConstants.EMPTY_START_ROW, ts);
+    int count = 0;
+    try {
+      // TODO FIX
+//      HStoreKey key = new HStoreKey();
+//      TreeMap<byte [], Cell>value =
+//        new TreeMap<byte [], Cell>(Bytes.BYTES_COMPARATOR);
+//      while (scanner.next(key, value)) {
+//        assertTrue(key.getTimestamp() <= ts);
+//        // Content matches the key or HConstants.LATEST_TIMESTAMP.
+//        // (Key does not match content if we 'put' with LATEST_TIMESTAMP).
+//        long l = Bytes.toLong(value.get(COLUMN).getValue());
+//        assertTrue(key.getTimestamp() == l ||
+//          HConstants.LATEST_TIMESTAMP == l);
+//        count++;
+//        value.clear();
+//      }
+    } finally {
+      scanner.close();
+    }
+    return count;
+  }
+
+  public static void put(final Incommon loader, final long ts)
+  throws IOException {
+    put(loader, Bytes.toBytes(ts), ts);
+  }
+
+  public static void put(final Incommon loader)
+  throws IOException {
+    long ts = HConstants.LATEST_TIMESTAMP;
+    put(loader, Bytes.toBytes(ts), ts);
+  }
+
+  /*
+   * Put values.
+   * @param loader
+   * @param bytes
+   * @param ts
+   * @throws IOException
+   */
+  public static void put(final Incommon loader, final byte [] bytes,
+    final long ts)
+  throws IOException {
+    Put put = new Put(ROW, ts, null);
+    put.add(FAMILY_NAME, QUALIFIER_NAME, bytes);
+    loader.put(put);
+  }
+
+  public static void delete(final Incommon loader) throws IOException {
+    delete(loader, null);
+  }
+
+  public static void delete(final Incommon loader, final byte [] column)
+  throws IOException {
+    delete(loader, column, HConstants.LATEST_TIMESTAMP);
+  }
+
+  public static void delete(final Incommon loader, final long ts)
+  throws IOException {
+    delete(loader, null, ts);
+  }
+
+  public static void delete(final Incommon loader, final byte [] column,
+      final long ts)
+  throws IOException {
+    Delete delete = ts == HConstants.LATEST_TIMESTAMP?
+      new Delete(ROW): new Delete(ROW, ts, null);
+    delete.deleteColumn(FAMILY_NAME, QUALIFIER_NAME, ts);
+    loader.delete(delete, null, true);
+  }
+
+  public static Result get(final Incommon loader) throws IOException {
+    return loader.get(new Get(ROW));
+  }
+}
\ No newline at end of file
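A minimal standalone sketch (not part of the patch) of the two delete flavours the helpers above rely on: deleteColumn removes one exact version, while deleteColumns removes every version at or older than the given timestamp.

    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DeleteFlavoursExample {
      public static void main(String[] args) {
        byte[] row = Bytes.toBytes("row");
        byte[] fam = Bytes.toBytes("colfamily1");
        byte[] qual = Bytes.toBytes("contents");

        Delete one = new Delete(row);
        one.deleteColumn(fam, qual, 200L);    // drop only the version written at t = 200

        Delete upTo = new Delete(row);
        upTo.deleteColumns(fam, qual, 200L);  // drop every version with t <= 200
      }
    }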
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/avro/TestAvroServer.java b/0.90/src/test/java/org/apache/hadoop/hbase/avro/TestAvroServer.java
new file mode 100644
index 0000000..703a5b5
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/avro/TestAvroServer.java
@@ -0,0 +1,226 @@
+/** 
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.avro;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.nio.ByteBuffer;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericArray;
+import org.apache.avro.generic.GenericData;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.avro.generated.AColumn;
+import org.apache.hadoop.hbase.avro.generated.AColumnValue;
+import org.apache.hadoop.hbase.avro.generated.AFamilyDescriptor;
+import org.apache.hadoop.hbase.avro.generated.AGet;
+import org.apache.hadoop.hbase.avro.generated.APut;
+import org.apache.hadoop.hbase.avro.generated.ATableDescriptor;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Unit testing for AvroServer.HBaseImpl, a part of the
+ * org.apache.hadoop.hbase.avro package.
+ */
+public class TestAvroServer {
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  // Static names for tables, columns, rows, and values
+  // TODO(hammer): Better style to define these in test method?
+  private static ByteBuffer tableAname = ByteBuffer.wrap(Bytes.toBytes("tableA"));
+  private static ByteBuffer tableBname = ByteBuffer.wrap(Bytes.toBytes("tableB"));
+  private static ByteBuffer familyAname = ByteBuffer.wrap(Bytes.toBytes("FamilyA"));
+  private static ByteBuffer qualifierAname = ByteBuffer.wrap(Bytes.toBytes("QualifierA"));
+  private static ByteBuffer rowAname = ByteBuffer.wrap(Bytes.toBytes("RowA"));
+  private static ByteBuffer valueA = ByteBuffer.wrap(Bytes.toBytes("ValueA"));
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @After
+  public void tearDown() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * Tests for creating, enabling, disabling, modifying, and deleting tables.
+   *
+   * @throws Exception
+   */
+  @Test (timeout=300000)
+  public void testTableAdminAndMetadata() throws Exception {
+    AvroServer.HBaseImpl impl =
+      new AvroServer.HBaseImpl(TEST_UTIL.getConfiguration());
+
+    assertEquals(0, impl.listTables().size());
+
+    ATableDescriptor tableA = new ATableDescriptor();
+    tableA.name = tableAname;
+    impl.createTable(tableA);
+    assertEquals(1, impl.listTables().size());
+    assertTrue(impl.isTableEnabled(tableAname));
+    assertTrue(impl.tableExists(tableAname));
+
+    ATableDescriptor tableB = new ATableDescriptor();
+    tableB.name = tableBname;
+    impl.createTable(tableB);
+    assertEquals(2, impl.listTables().size());
+
+    impl.disableTable(tableBname);
+    assertFalse(impl.isTableEnabled(tableBname));
+
+    impl.deleteTable(tableBname);
+    assertEquals(1, impl.listTables().size());
+
+    impl.disableTable(tableAname);
+    assertFalse(impl.isTableEnabled(tableAname));
+
+    tableA.maxFileSize = 123456L;
+    impl.modifyTable(tableAname, tableA);
+    // It can take a while for the change to take effect.  Wait here a while.
+    while (impl.describeTable(tableAname).maxFileSize != 123456L) {
+      Threads.sleep(100);
+    }
+    assertEquals(123456L, (long) impl.describeTable(tableAname).maxFileSize);
+/* DISABLED FOR NOW TILL WE HAVE BETTER DISABLE/ENABLE
+    impl.enableTable(tableAname);
+    assertTrue(impl.isTableEnabled(tableAname));
+    
+    impl.disableTable(tableAname);
+    */
+    impl.deleteTable(tableAname);
+  }
+
+  /**
+   * Tests for creating, modifying, and deleting column families.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testFamilyAdminAndMetadata() throws Exception {
+    AvroServer.HBaseImpl impl =
+      new AvroServer.HBaseImpl(TEST_UTIL.getConfiguration());
+
+    ATableDescriptor tableA = new ATableDescriptor();
+    tableA.name = tableAname;
+    AFamilyDescriptor familyA = new AFamilyDescriptor();
+    familyA.name = familyAname;
+    Schema familyArraySchema = Schema.createArray(AFamilyDescriptor.SCHEMA$);
+    GenericArray<AFamilyDescriptor> families = new GenericData.Array<AFamilyDescriptor>(1, familyArraySchema);
+    families.add(familyA);
+    tableA.families = families;
+    impl.createTable(tableA);
+    assertEquals(1, impl.describeTable(tableAname).families.size());
+
+    impl.disableTable(tableAname);
+    assertFalse(impl.isTableEnabled(tableAname));
+
+    familyA.maxVersions = 123456;
+    impl.modifyFamily(tableAname, familyAname, familyA);
+    assertEquals(123456, (int) impl.describeFamily(tableAname, familyAname).maxVersions);
+
+    impl.deleteFamily(tableAname, familyAname);
+    assertEquals(0, impl.describeTable(tableAname).families.size());
+
+    impl.disableTable(tableAname);
+    impl.deleteTable(tableAname);
+  }
+
+  /**
+   * Tests for adding, reading, and deleting data.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testDML() throws Exception {
+    AvroServer.HBaseImpl impl =
+      new AvroServer.HBaseImpl(TEST_UTIL.getConfiguration());
+
+    ATableDescriptor tableA = new ATableDescriptor();
+    tableA.name = tableAname;
+    AFamilyDescriptor familyA = new AFamilyDescriptor();
+    familyA.name = familyAname;
+    Schema familyArraySchema = Schema.createArray(AFamilyDescriptor.SCHEMA$);
+    GenericArray<AFamilyDescriptor> families = new GenericData.Array<AFamilyDescriptor>(1, familyArraySchema);
+    families.add(familyA);
+    tableA.families = families;
+    impl.createTable(tableA);
+    assertEquals(1, impl.describeTable(tableAname).families.size());
+
+    AGet getA = new AGet();
+    getA.row = rowAname;
+    Schema columnsSchema = Schema.createArray(AColumn.SCHEMA$);
+    GenericArray<AColumn> columns = new GenericData.Array<AColumn>(1, columnsSchema);
+    AColumn column = new AColumn();
+    column.family = familyAname;
+    column.qualifier = qualifierAname;
+    columns.add(column);
+    getA.columns = columns;
+   
+    assertFalse(impl.exists(tableAname, getA));
+
+    APut putA = new APut();
+    putA.row = rowAname;
+    Schema columnValuesSchema = Schema.createArray(AColumnValue.SCHEMA$);
+    GenericArray<AColumnValue> columnValues = new GenericData.Array<AColumnValue>(1, columnValuesSchema);
+    AColumnValue acv = new AColumnValue();
+    acv.family = familyAname;
+    acv.qualifier = qualifierAname;
+    acv.value = valueA;
+    columnValues.add(acv);
+    putA.columnValues = columnValues;
+
+    impl.put(tableAname, putA);
+    assertTrue(impl.exists(tableAname, getA));
+
+    assertEquals(1, impl.get(tableAname, getA).entries.size());
+
+    impl.disableTable(tableAname);
+    impl.deleteTable(tableAname);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java b/0.90/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java
new file mode 100644
index 0000000..e25184e
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java
@@ -0,0 +1,356 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.catalog;
+
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.net.ConnectException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import junit.framework.Assert;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.util.Progressable;
+import org.apache.zookeeper.KeeperException;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.Matchers;
+import org.mockito.Mockito;
+
+/**
+ * Test {@link CatalogTracker}
+ */
+public class TestCatalogTracker {
+  private static final Log LOG = LogFactory.getLog(TestCatalogTracker.class);
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final HServerAddress HSA =
+    new HServerAddress("example.org:1234");
+  private ZooKeeperWatcher watcher;
+  private Abortable abortable;
+
+  @BeforeClass public static void beforeClass() throws Exception {
+    UTIL.startMiniZKCluster();
+  }
+
+  @AfterClass public static void afterClass() throws IOException {
+    UTIL.getZkCluster().shutdown();
+  }
+
+  @Before public void before() throws IOException {
+    this.abortable = new Abortable() {
+      @Override
+      public void abort(String why, Throwable e) {
+        LOG.info(why, e);
+      }
+    };
+    this.watcher = new ZooKeeperWatcher(UTIL.getConfiguration(),
+      this.getClass().getSimpleName(), this.abortable);
+  }
+
+  @After public void after() {
+    this.watcher.close();
+  }
+
+  private CatalogTracker constructAndStartCatalogTracker()
+  throws IOException, InterruptedException {
+    return constructAndStartCatalogTracker(null);
+  }
+
+  private CatalogTracker constructAndStartCatalogTracker(final HConnection c)
+  throws IOException, InterruptedException {
+    CatalogTracker ct = new CatalogTracker(this.watcher, c, this.abortable);
+    ct.start();
+    return ct;
+  }
+
+  /**
+   * Test that we get notification if .META. moves.
+   * @throws IOException 
+   * @throws InterruptedException 
+   * @throws KeeperException 
+   */
+  @Test public void testThatIfMETAMovesWeAreNotified()
+  throws IOException, InterruptedException, KeeperException {
+    HConnection connection = Mockito.mock(HConnection.class);
+    final CatalogTracker ct = constructAndStartCatalogTracker(connection);
+    try {
+      RootLocationEditor.setRootLocation(this.watcher,
+        new HServerAddress("example.com:1234"));
+    } finally {
+      // Clean out root location or later tests will be confused... they presume
+      // start fresh in zk.
+      RootLocationEditor.deleteRootLocation(this.watcher);
+    }
+  }
+
+  /**
+   * Test that the blocking wait on root and meta can be interrupted.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  @Test public void testInterruptWaitOnMetaAndRoot()
+  throws IOException, InterruptedException {
+    final CatalogTracker ct = constructAndStartCatalogTracker();
+    HServerAddress hsa = ct.getRootLocation();
+    Assert.assertNull(hsa);
+    HServerAddress meta = ct.getMetaLocation();
+    Assert.assertNull(meta);
+    Thread t = new Thread() {
+      @Override
+      public void run() {
+        try {
+          ct.waitForMeta();
+        } catch (InterruptedException e) {
+          throw new RuntimeException("Interrupted", e);
+        }
+      }
+    };
+    t.start();
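+    // Wait for the waiter thread to come alive, then give it a moment to
+    // block inside waitForMeta before stopping the tracker.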
+    while (!t.isAlive()) Threads.sleep(1);
+    Threads.sleep(1);
+    assertTrue(t.isAlive());
+    ct.stop();
+    // Join the thread... should exit shortly.
+    t.join();
+  }
+
+  @Test public void testGetMetaServerConnectionFails()
+  throws IOException, InterruptedException, KeeperException {
+    HConnection connection = Mockito.mock(HConnection.class);
+    ConnectException connectException =
+      new ConnectException("Connection refused");
+    final HRegionInterface implementation =
+      Mockito.mock(HRegionInterface.class);
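+    // Every Get against the mocked region server throws ConnectException, so
+    // verifyMetaRegionLocation below should return false rather than hang.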
+    Mockito.when(implementation.get((byte [])Mockito.any(), (Get)Mockito.any())).
+      thenThrow(connectException);
+    Mockito.when(connection.getHRegionConnection((HServerAddress)Matchers.anyObject(), Matchers.anyBoolean())).
+      thenReturn(implementation);
+    Assert.assertNotNull(connection.getHRegionConnection(new HServerAddress(), false));
+    final CatalogTracker ct = constructAndStartCatalogTracker(connection);
+    try {
+      RootLocationEditor.setRootLocation(this.watcher,
+        new HServerAddress("example.com:1234"));
+      Assert.assertFalse(ct.verifyMetaRegionLocation(100));
+    } finally {
+      // Clean out root location or later tests will be confused... they presume
+      // start fresh in zk.
+      RootLocationEditor.deleteRootLocation(this.watcher);
+    }
+  }
+
+  /**
+   * Test that verification of the root region location fails cleanly when
+   * there is nothing to connect to.
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws KeeperException
+   */
+  @Test
+  public void testVerifyRootRegionLocationFails()
+  throws IOException, InterruptedException, KeeperException {
+    HConnection connection = Mockito.mock(HConnection.class);
+    ConnectException connectException =
+      new ConnectException("Connection refused");
+    final HRegionInterface implementation =
+      Mockito.mock(HRegionInterface.class);
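+    // getRegionInfo on the mocked region server throws ConnectException, so
+    // verifyRootRegionLocation below should return false rather than hang.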
+    Mockito.when(implementation.getRegionInfo((byte [])Mockito.any())).
+      thenThrow(connectException);
+    Mockito.when(connection.getHRegionConnection((HServerAddress)Matchers.anyObject(), Matchers.anyBoolean())).
+      thenReturn(implementation);
+    Assert.assertNotNull(connection.getHRegionConnection(new HServerAddress(), false));
+    final CatalogTracker ct = constructAndStartCatalogTracker(connection);
+    try {
+      RootLocationEditor.setRootLocation(this.watcher,
+        new HServerAddress("example.com:1234"));
+      Assert.assertFalse(ct.verifyRootRegionLocation(100));
+    } finally {
+      // Clean out root location or later tests will be confused... they presume
+      // start fresh in zk.
+      RootLocationEditor.deleteRootLocation(this.watcher);
+    }
+  }
+
+  @Test (expected = NotAllMetaRegionsOnlineException.class)
+  public void testTimeoutWaitForRoot()
+  throws IOException, InterruptedException {
+    final CatalogTracker ct = constructAndStartCatalogTracker();
+    ct.waitForRoot(100);
+  }
+
+  @Test (expected = NotAllMetaRegionsOnlineException.class)
+  public void testTimeoutWaitForMeta()
+  throws IOException, InterruptedException {
+    final CatalogTracker ct = constructAndStartCatalogTracker();
+    ct.waitForMeta(100);
+  }
+
+  /**
+   * Test waiting on root w/ no timeout specified.
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws KeeperException
+   */
+  @Test public void testNoTimeoutWaitForRoot()
+  throws IOException, InterruptedException, KeeperException {
+    final CatalogTracker ct = constructAndStartCatalogTracker();
+    HServerAddress hsa = ct.getRootLocation();
+    Assert.assertNull(hsa);
+
+    // Now test waiting on root location getting set.
+    Thread t = new WaitOnMetaThread(ct);
+    startWaitAliveThenWaitItLives(t, 1000);
+    // Set a root location.
+    hsa = setRootLocation();
+    // Join the thread... should exit shortly.
+    t.join();
+    // Now root is available.
+    Assert.assertTrue(ct.getRootLocation().equals(hsa));
+  }
+
+  private HServerAddress setRootLocation() throws KeeperException {
+    RootLocationEditor.setRootLocation(this.watcher, HSA);
+    return HSA;
+  }
+
+  /**
+   * Test waiting on meta w/ no timeout specified.
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws KeeperException
+   */
+  @Test public void testNoTimeoutWaitForMeta()
+  throws IOException, InterruptedException, KeeperException {
+    // Mock an HConnection and an HRegionInterface implementation.  Have the
+    // HConnection return the HRI.  Have the HRI return a few mocked up responses
+    // to make our test work.
+    HConnection connection = Mockito.mock(HConnection.class);
+    HRegionInterface  mockHRI = Mockito.mock(HRegionInterface.class);
+    // Make the HRI return an answer no matter how Get is called.  Same for
+    // getHRegionInfo.  That's enough for this test.
+    Mockito.when(connection.getHRegionConnection((HServerAddress)Mockito.any(), Mockito.anyBoolean())).
+      thenReturn(mockHRI);
+
+    final CatalogTracker ct = constructAndStartCatalogTracker(connection);
+    HServerAddress hsa = ct.getMetaLocation();
+    Assert.assertNull(hsa);
+
+    // Now test waiting on meta location getting set.
+    Thread t = new WaitOnMetaThread(ct) {
+      @Override
+      void doWaiting() throws InterruptedException {
+        this.ct.waitForMeta();
+      }
+    };
+    startWaitAliveThenWaitItLives(t, 1000);
+
+    // Now the ct is up... set into the mocks some answers that make it look
+    // like things have been getting assigned.  Make it so we'll return a
+    // location (no matter what the Get is).  Same for getHRegionInfo -- always
+    // just return the meta region.
+    List<KeyValue> kvs = new ArrayList<KeyValue>();
+    kvs.add(new KeyValue(HConstants.EMPTY_BYTE_ARRAY,
+      HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
+      Bytes.toBytes(HSA.toString())));
+    final Result result = new Result(kvs);
+    Mockito.when(mockHRI.get((byte [])Mockito.any(), (Get)Mockito.any())).
+      thenReturn(result);
+    Mockito.when(mockHRI.getRegionInfo((byte [])Mockito.any())).
+      thenReturn(HRegionInfo.FIRST_META_REGIONINFO);
+    // This should trigger wake up of the meta wait: it's the removal of the
+    // meta region's unassigned node that tells the CatalogTracker that meta
+    // has been assigned.
+    String node = ct.getMetaNodeTracker().getNode();
+    ZKUtil.createAndFailSilent(this.watcher, node);
+    MetaEditor.updateMetaLocation(ct, HRegionInfo.FIRST_META_REGIONINFO,
+      new HServerInfo(HSA, -1, "example.com"));
+    ZKUtil.deleteNode(this.watcher, node);
+    // Join the thread... should exit shortly.
+    t.join();
+    // Now meta is available.
+    Assert.assertTrue(ct.getMetaLocation().equals(HSA));
+  }
+
+  private void startWaitAliveThenWaitItLives(final Thread t, final int ms) {
+    t.start();
+    while(!t.isAlive()) {
+      // Wait
+    }
+    // Give the thread the specified time to block in its wait.
+    Threads.sleep(ms);
+    Assert.assertTrue("Assert " + t.getName() + " still waiting", t.isAlive());
+  }
+
+  class CountingProgressable implements Progressable {
+    final AtomicInteger counter = new AtomicInteger(0);
+    @Override
+    public void progress() {
+      this.counter.incrementAndGet();
+    }
+  }
+
+  /**
+   * Thread that waits on a catalog location.
+   * Waits on -ROOT- by default; override {@link #doWaiting()} to wait on
+   * .META. instead.
+   */
+  class WaitOnMetaThread extends Thread {
+    final CatalogTracker ct;
+
+    WaitOnMetaThread(final CatalogTracker ct) {
+      super("WaitOnMeta");
+      this.ct = ct;
+    }
+
+    @Override
+    public void run() {
+      try {
+        doWaiting();
+      } catch (InterruptedException e) {
+        throw new RuntimeException("Failed wait", e);
+      }
+      LOG.info("Exiting " + getName());
+    }
+
+    void doWaiting() throws InterruptedException {
+      this.ct.waitForRoot();
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditor.java b/0.90/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditor.java
new file mode 100644
index 0000000..43a8171
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditor.java
@@ -0,0 +1,133 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.catalog;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test {@link MetaReader}, {@link MetaEditor}, and {@link RootLocationEditor}.
+ */
+public class TestMetaReaderEditor {
+  private static final Log LOG = LogFactory.getLog(TestMetaReaderEditor.class);
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private ZooKeeperWatcher zkw;
+  private CatalogTracker ct;
+  private final static Abortable ABORTABLE = new Abortable() {
+    private final AtomicBoolean abort = new AtomicBoolean(false);
+
+    @Override
+    public void abort(String why, Throwable e) {
+      LOG.info(why, e);
+      abort.set(true);
+    }
+  };
+
+  @BeforeClass public static void beforeClass() throws Exception {
+    UTIL.startMiniCluster();
+  }
+
+  @Before public void setup() throws IOException, InterruptedException {
+    Configuration c = new Configuration(UTIL.getConfiguration());
+    zkw = new ZooKeeperWatcher(c, "TestMetaReaderEditor", ABORTABLE);
+    HConnection connection = HConnectionManager.getConnection(c);
+    ct = new CatalogTracker(zkw, connection, ABORTABLE);
+    ct.start();
+  }
+
+  @AfterClass public static void afterClass() throws IOException {
+    UTIL.shutdownMiniCluster();
+  }
+
+  @Test public void testGetRegionsCatalogTables()
+  throws IOException, InterruptedException {
+    List<HRegionInfo> regions =
+      MetaReader.getTableRegions(ct, HConstants.META_TABLE_NAME);
+    assertTrue(regions.size() >= 1);
+    assertTrue(MetaReader.getTableRegionsAndLocations(ct,
+      Bytes.toString(HConstants.META_TABLE_NAME)).size() >= 1);
+    assertTrue(MetaReader.getTableRegionsAndLocations(ct,
+      Bytes.toString(HConstants.ROOT_TABLE_NAME)).size() == 1);
+  }
+
+  @Test public void testTableExists() throws IOException {
+    final String name = "testTableExists";
+    final byte [] nameBytes = Bytes.toBytes(name);
+    assertFalse(MetaReader.tableExists(ct, name));
+    UTIL.createTable(nameBytes, HConstants.CATALOG_FAMILY);
+    assertTrue(MetaReader.tableExists(ct, name));
+    HBaseAdmin admin = UTIL.getHBaseAdmin();
+    admin.disableTable(name);
+    admin.deleteTable(name);
+    assertFalse(MetaReader.tableExists(ct, name));
+    assertTrue(MetaReader.tableExists(ct,
+      Bytes.toString(HConstants.META_TABLE_NAME)));
+    assertTrue(MetaReader.tableExists(ct,
+      Bytes.toString(HConstants.ROOT_TABLE_NAME)));
+  }
+
+  @Test public void testGetRegion() throws IOException, InterruptedException {
+    final String name = "testGetRegion";
+    LOG.info("Started " + name);
+    final byte [] nameBytes = Bytes.toBytes(name);
+    HTable t = UTIL.createTable(nameBytes, HConstants.CATALOG_FAMILY);
+    int regionCount = UTIL.createMultiRegions(t, HConstants.CATALOG_FAMILY);
+
+    // Test it works getting a region from user table.
+    List<HRegionInfo> regions = MetaReader.getTableRegions(ct, nameBytes);
+    assertEquals(regionCount, regions.size());
+    Pair<HRegionInfo, HServerAddress> pair =
+      MetaReader.getRegion(ct, regions.get(0).getRegionName());
+    assertEquals(regions.get(0).getEncodedName(),
+      pair.getFirst().getEncodedName());
+    // Test get on non-existent region.
+    pair = MetaReader.getRegion(ct, Bytes.toBytes("nonexistent-region"));
+    assertNull(pair);
+    // Test it works getting a region from meta/root.
+    pair =
+      MetaReader.getRegion(ct, HRegionInfo.FIRST_META_REGIONINFO.getRegionName());
+    assertEquals(HRegionInfo.FIRST_META_REGIONINFO.getEncodedName(),
+      pair.getFirst().getEncodedName());
+    LOG.info("Finished " + name);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
new file mode 100644
index 0000000..8560d22
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestAdmin.java
@@ -0,0 +1,760 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NotServingRegionException;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.TableNotDisabledException;
+import org.apache.hadoop.hbase.TableNotFoundException;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+
+/**
+ * Class to test HBaseAdmin.
+ * Spins up the minicluster once at test start and then takes it down afterward.
+ * Add any testing of HBaseAdmin functionality here.
+ */
+public class TestAdmin {
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private HBaseAdmin admin;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.getConfiguration().setInt("hbase.regionserver.msginterval", 100);
+    TEST_UTIL.getConfiguration().setInt("hbase.client.pause", 250);
+    TEST_UTIL.getConfiguration().setInt("hbase.client.retries.number", 6);
+    TEST_UTIL.startMiniCluster(3);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    this.admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+  }
+
+  @Test
+  public void testDisableAndEnableTable() throws IOException {
+    final byte [] row = Bytes.toBytes("row");
+    final byte [] qualifier = Bytes.toBytes("qualifier");
+    final byte [] value = Bytes.toBytes("value");
+    final byte [] table = Bytes.toBytes("testDisableAndEnableTable");
+    HTable ht = TEST_UTIL.createTable(table, HConstants.CATALOG_FAMILY);
+    Put put = new Put(row);
+    put.add(HConstants.CATALOG_FAMILY, qualifier, value);
+    ht.put(put);
+    Get get = new Get(row);
+    get.addColumn(HConstants.CATALOG_FAMILY, qualifier);
+    ht.get(get);
+
+    this.admin.disableTable(table);
+
+    // Test that table is disabled
+    get = new Get(row);
+    get.addColumn(HConstants.CATALOG_FAMILY, qualifier);
+    boolean ok = false;
+    try {
+      ht.get(get);
+    } catch (NotServingRegionException e) {
+      ok = true;
+    } catch (RetriesExhaustedException e) {
+      ok = true;
+    }
+    assertTrue(ok);
+    this.admin.enableTable(table);
+
+    // Test that table is enabled
+    try {
+      ht.get(get);
+    } catch (RetriesExhaustedException e) {
+      ok = false;
+    }
+    assertTrue(ok);
+  }
+
+  @Test
+  public void testCreateTable() throws IOException {
+    HTableDescriptor [] tables = admin.listTables();
+    int numTables = tables.length;
+    TEST_UTIL.createTable(Bytes.toBytes("testCreateTable"),
+      HConstants.CATALOG_FAMILY);
+    tables = this.admin.listTables();
+    assertEquals(numTables + 1, tables.length);
+  }
+
+  @Test
+  public void testGetTableDescriptor() throws IOException {
+    HColumnDescriptor fam1 = new HColumnDescriptor("fam1");
+    HColumnDescriptor fam2 = new HColumnDescriptor("fam2");
+    HColumnDescriptor fam3 = new HColumnDescriptor("fam3");
+    HTableDescriptor htd = new HTableDescriptor("myTestTable");
+    htd.addFamily(fam1);
+    htd.addFamily(fam2);
+    htd.addFamily(fam3);
+    this.admin.createTable(htd);
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), "myTestTable");
+    HTableDescriptor confirmedHtd = table.getTableDescriptor();
+    assertEquals(0, htd.compareTo(confirmedHtd));
+  }
+
+  /**
+   * Verify schema modification takes.
+   * @throws IOException
+   */
+  @Test public void testChangeTableSchema() throws IOException {
+    final byte [] tableName = Bytes.toBytes("changeTableSchema");
+    HTableDescriptor [] tables = admin.listTables();
+    int numTables = tables.length;
+    TEST_UTIL.createTable(tableName, HConstants.CATALOG_FAMILY);
+    tables = this.admin.listTables();
+    assertEquals(numTables + 1, tables.length);
+
+    // FIRST, do htabledescriptor changes.
+    HTableDescriptor htd = this.admin.getTableDescriptor(tableName);
+    // Make a copy and assert copy is good.
+    HTableDescriptor copy = new HTableDescriptor(htd);
+    assertTrue(htd.equals(copy));
+    // Now amend the copy. Introduce differences.
+    long newFlushSize = htd.getMemStoreFlushSize() / 2;
+    copy.setMemStoreFlushSize(newFlushSize);
+    final String key = "anyoldkey";
+    assertTrue(htd.getValue(key) == null);
+    copy.setValue(key, key);
+    boolean expectedException = false;
+    try {
+      this.admin.modifyTable(tableName, copy);
+    } catch (TableNotDisabledException re) {
+      expectedException = true;
+    }
+    assertTrue(expectedException);
+    this.admin.disableTable(tableName);
+    assertTrue(this.admin.isTableDisabled(tableName));
+    modifyTable(tableName, copy);
+    HTableDescriptor modifiedHtd = this.admin.getTableDescriptor(tableName);
+    // Assert returned modifiedHtd is same as the copy.
+    assertFalse(htd.equals(modifiedHtd));
+    assertTrue(copy.equals(modifiedHtd));
+    assertEquals(newFlushSize, modifiedHtd.getMemStoreFlushSize());
+    assertEquals(key, modifiedHtd.getValue(key));
+
+    // Reenable table to test it fails if not disabled.
+    this.admin.enableTable(tableName);
+    assertFalse(this.admin.isTableDisabled(tableName));
+
+    // Now work on column family changes.
+    int countOfFamilies = modifiedHtd.getFamilies().size();
+    assertTrue(countOfFamilies > 0);
+    HColumnDescriptor hcd = modifiedHtd.getFamilies().iterator().next();
+    int maxversions = hcd.getMaxVersions();
+    final int newMaxVersions = maxversions + 1;
+    hcd.setMaxVersions(newMaxVersions);
+    final byte [] hcdName = hcd.getName();
+    expectedException = false;
+    try {
+      this.admin.modifyColumn(tableName, hcd);
+    } catch (TableNotDisabledException re) {
+      expectedException = true;
+    }
+    assertTrue(expectedException);
+    this.admin.disableTable(tableName);
+    assertTrue(this.admin.isTableDisabled(tableName));
+    // Modify Column is synchronous
+    this.admin.modifyColumn(tableName, hcd);
+    modifiedHtd = this.admin.getTableDescriptor(tableName);
+    HColumnDescriptor modifiedHcd = modifiedHtd.getFamily(hcdName);
+    assertEquals(newMaxVersions, modifiedHcd.getMaxVersions());
+
+    // Try adding a column
+    // Reenable table to test it fails if not disabled.
+    this.admin.enableTable(tableName);
+    assertFalse(this.admin.isTableDisabled(tableName));
+    final String xtracolName = "xtracol";
+    HColumnDescriptor xtracol = new HColumnDescriptor(xtracolName);
+    xtracol.setValue(xtracolName, xtracolName);
+    try {
+      this.admin.addColumn(tableName, xtracol);
+    } catch (TableNotDisabledException re) {
+      expectedException = true;
+    }
+    assertTrue(expectedException);
+    this.admin.disableTable(tableName);
+    assertTrue(this.admin.isTableDisabled(tableName));
+    this.admin.addColumn(tableName, xtracol);
+    modifiedHtd = this.admin.getTableDescriptor(tableName);
+    hcd = modifiedHtd.getFamily(xtracol.getName());
+    assertTrue(hcd != null);
+    assertTrue(hcd.getValue(xtracolName).equals(xtracolName));
+
+    // Delete the just-added column.
+    this.admin.deleteColumn(tableName, xtracol.getName());
+    modifiedHtd = this.admin.getTableDescriptor(tableName);
+    hcd = modifiedHtd.getFamily(xtracol.getName());
+    assertTrue(hcd == null);
+
+    // Delete the table
+    this.admin.deleteTable(tableName);
+    this.admin.listTables();
+    assertFalse(this.admin.tableExists(tableName));
+  }
+
+  /**
+   * Modify table is async so wait on completion of the table operation in master.
+   * @param tableName
+   * @param htd
+   * @throws IOException
+   */
+  private void modifyTable(final byte [] tableName, final HTableDescriptor htd)
+  throws IOException {
+    MasterServices services = TEST_UTIL.getMiniHBaseCluster().getMaster();
+    ExecutorService executor = services.getExecutorService();
+    AtomicBoolean done = new AtomicBoolean(false);
+    executor.registerListener(EventType.C_M_MODIFY_TABLE, new DoneListener(done));
+    this.admin.modifyTable(tableName, htd);
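+    // Block until the master has processed the modify-table event; the
+    // DoneListener flips 'done' and notifies waiters from afterProcess().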
+    while (!done.get()) {
+      synchronized (done) {
+        try {
+          done.wait(1000);
+        } catch (InterruptedException e) {
+          e.printStackTrace();
+        }
+      }
+    }
+    executor.unregisterListener(EventType.C_M_MODIFY_TABLE);
+  }
+
+  /**
+   * Listens for when an event is done in Master.
+   */
+  static class DoneListener implements EventHandler.EventHandlerListener {
+    private final AtomicBoolean done;
+
+    DoneListener(final AtomicBoolean done) {
+      super();
+      this.done = done;
+    }
+
+    @Override
+    public void afterProcess(EventHandler event) {
+      this.done.set(true);
+      synchronized (this.done) {
+        // Wake anyone waiting on this value to change.
+        this.done.notifyAll();
+      }
+    }
+
+    @Override
+    public void beforeProcess(EventHandler event) {
+      // continue
+    }
+  }
+
+  @Test
+  public void testCreateTableWithRegions() throws IOException, InterruptedException {
+
+    byte[] tableName = Bytes.toBytes("testCreateTableWithRegions");
+
+    byte [][] splitKeys = {
+        new byte [] { 1, 1, 1 },
+        new byte [] { 2, 2, 2 },
+        new byte [] { 3, 3, 3 },
+        new byte [] { 4, 4, 4 },
+        new byte [] { 5, 5, 5 },
+        new byte [] { 6, 6, 6 },
+        new byte [] { 7, 7, 7 },
+        new byte [] { 8, 8, 8 },
+        new byte [] { 9, 9, 9 },
+    };
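+    // n split keys yield n + 1 regions: (-inf, k0), [k0, k1), ..., [k(n-1), +inf).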
+    int expectedRegions = splitKeys.length + 1;
+
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    admin.createTable(desc, splitKeys);
+
+    HTable ht = new HTable(TEST_UTIL.getConfiguration(), tableName);
+    Map<HRegionInfo,HServerAddress> regions = ht.getRegionsInfo();
+    assertEquals("Tried to create " + expectedRegions + " regions " +
+        "but only found " + regions.size(),
+        expectedRegions, regions.size());
+    System.err.println("Found " + regions.size() + " regions");
+
+    Iterator<HRegionInfo> hris = regions.keySet().iterator();
+    HRegionInfo hri = hris.next();
+    assertTrue(hri.getStartKey() == null || hri.getStartKey().length == 0);
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[0]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[0]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[1]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[1]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[2]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[2]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[3]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[3]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[4]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[4]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[5]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[5]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[6]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[6]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[7]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[7]));
+    assertTrue(Bytes.equals(hri.getEndKey(), splitKeys[8]));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), splitKeys[8]));
+    assertTrue(hri.getEndKey() == null || hri.getEndKey().length == 0);
+
+    // Now test using start/end with a number of regions
+
+    // Use 80 bit numbers to make sure we aren't limited
+    byte [] startKey = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
+    byte [] endKey =   { 9, 9, 9, 9, 9, 9, 9, 9, 9, 9 };
+
+    // Splitting into 10 regions, we expect (null,1) ... (9, null)
+    // with (1,2) (2,3) (3,4) (4,5) (5,6) (6,7) (7,8) (8,9) in the middle
+
+    expectedRegions = 10;
+
+    byte [] TABLE_2 = Bytes.add(tableName, Bytes.toBytes("_2"));
+
+    desc = new HTableDescriptor(TABLE_2);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+    admin.createTable(desc, startKey, endKey, expectedRegions);
+
+    ht = new HTable(TEST_UTIL.getConfiguration(), TABLE_2);
+    regions = ht.getRegionsInfo();
+    assertEquals("Tried to create " + expectedRegions + " regions " +
+        "but only found " + regions.size(),
+        expectedRegions, regions.size());
+    System.err.println("Found " + regions.size() + " regions");
+
+    hris = regions.keySet().iterator();
+    hri = hris.next();
+    assertTrue(hri.getStartKey() == null || hri.getStartKey().length == 0);
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {1,1,1,1,1,1,1,1,1,1}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {1,1,1,1,1,1,1,1,1,1}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {2,2,2,2,2,2,2,2,2,2}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {2,2,2,2,2,2,2,2,2,2}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {3,3,3,3,3,3,3,3,3,3}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {3,3,3,3,3,3,3,3,3,3}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {4,4,4,4,4,4,4,4,4,4}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {4,4,4,4,4,4,4,4,4,4}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {5,5,5,5,5,5,5,5,5,5}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {5,5,5,5,5,5,5,5,5,5}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {6,6,6,6,6,6,6,6,6,6}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {6,6,6,6,6,6,6,6,6,6}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {7,7,7,7,7,7,7,7,7,7}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {7,7,7,7,7,7,7,7,7,7}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {8,8,8,8,8,8,8,8,8,8}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {8,8,8,8,8,8,8,8,8,8}));
+    assertTrue(Bytes.equals(hri.getEndKey(), new byte [] {9,9,9,9,9,9,9,9,9,9}));
+    hri = hris.next();
+    assertTrue(Bytes.equals(hri.getStartKey(), new byte [] {9,9,9,9,9,9,9,9,9,9}));
+    assertTrue(hri.getEndKey() == null || hri.getEndKey().length == 0);
+
+    // Try once more with something that divides into something infinite
+
+    startKey = new byte [] { 0, 0, 0, 0, 0, 0 };
+    endKey = new byte [] { 1, 0, 0, 0, 0, 0 };
+
+    expectedRegions = 5;
+
+    byte [] TABLE_3 = Bytes.add(tableName, Bytes.toBytes("_3"));
+
+    desc = new HTableDescriptor(TABLE_3);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+    admin.createTable(desc, startKey, endKey, expectedRegions);
+
+    ht = new HTable(TEST_UTIL.getConfiguration(), TABLE_3);
+    regions = ht.getRegionsInfo();
+    assertEquals("Tried to create " + expectedRegions + " regions " +
+        "but only found " + regions.size(),
+        expectedRegions, regions.size());
+    System.err.println("Found " + regions.size() + " regions");
+
+    // Try an invalid case where there are duplicate split keys
+    splitKeys = new byte [][] {
+        new byte [] { 1, 1, 1 },
+        new byte [] { 2, 2, 2 },
+        new byte [] { 3, 3, 3 },
+        new byte [] { 2, 2, 2 }
+    };
+
+    byte [] TABLE_4 = Bytes.add(tableName, Bytes.toBytes("_4"));
+    desc = new HTableDescriptor(TABLE_4);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+    try {
+      admin.createTable(desc, splitKeys);
+      assertTrue("Should not be able to create this table because of " +
+          "duplicate split keys", false);
+    } catch(IllegalArgumentException iae) {
+      // Expected
+    }
+  }
+
+  @Test
+  public void testTableExist() throws IOException {
+    final byte [] table = Bytes.toBytes("testTableExist");
+    boolean exist = false;
+    exist = this.admin.tableExists(table);
+    assertEquals(false, exist);
+    TEST_UTIL.createTable(table, HConstants.CATALOG_FAMILY);
+    exist = this.admin.tableExists(table);
+    assertEquals(true, exist);
+  }
+
+  /**
+   * Tests forcing split from client and having scanners successfully ride over split.
+   * @throws Exception
+   * @throws IOException
+   */
+  @Test
+  public void testForceSplit() throws Exception {
+    byte [] familyName = HConstants.CATALOG_FAMILY;
+    byte [] tableName = Bytes.toBytes("testForceSplit");
+    final HTable table = TEST_UTIL.createTable(tableName, familyName);
+    byte[] k = new byte[3];
+    int rowCount = 0;
+    for (byte b1 = 'a'; b1 < 'z'; b1++) {
+      for (byte b2 = 'a'; b2 < 'z'; b2++) {
+        for (byte b3 = 'a'; b3 < 'z'; b3++) {
+          k[0] = b1;
+          k[1] = b2;
+          k[2] = b3;
+          Put put = new Put(k);
+          put.add(familyName, new byte[0], k);
+          table.put(put);
+          rowCount++;
+        }
+      }
+    }
+
+    // get the initial layout (should just be one region)
+    Map<HRegionInfo,HServerAddress> m = table.getRegionsInfo();
+    System.out.println("Initial regions (" + m.size() + "): " + m);
+    assertTrue(m.size() == 1);
+
+    // Verify row count
+    Scan scan = new Scan();
+    ResultScanner scanner = table.getScanner(scan);
+    int rows = 0;
+    for(@SuppressWarnings("unused") Result result : scanner) {
+      rows++;
+    }
+    scanner.close();
+    assertEquals(rowCount, rows);
+
+    // Have an outstanding scan going on to make sure we can scan over splits.
+    scan = new Scan();
+    scanner = table.getScanner(scan);
+    // Scan first row so we are into first region before split happens.
+    scanner.next();
+
+    final AtomicInteger count = new AtomicInteger(0);
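+    // Background thread polls the table's region map until the split shows
+    // up (or ~20 seconds elapse), recording the observed region count.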
+    Thread t = new Thread("CheckForSplit") {
+      public void run() {
+        for (int i = 0; i < 20; i++) {
+          try {
+            sleep(1000);
+          } catch (InterruptedException e) {
+            continue;
+          }
+          // Check the region layout again.
+          Map<HRegionInfo, HServerAddress> regions = null;
+          try {
+            regions = table.getRegionsInfo();
+          } catch (IOException e) {
+            e.printStackTrace();
+          }
+          if (regions == null) continue;
+          count.set(regions.size());
+          if (count.get() >= 2) break;
+          LOG.debug("Cycle waiting on split");
+        }
+      }
+    };
+    t.start();
+    // Split the table
+    this.admin.split(Bytes.toString(tableName));
+    t.join();
+
+    // Verify row count
+    rows = 1; // We counted one row above.
+    for (@SuppressWarnings("unused") Result result : scanner) {
+      rows++;
+      if (rows > rowCount) {
+        scanner.close();
+        assertTrue("Scanned more than expected (" + rowCount + ")", false);
+      }
+    }
+    scanner.close();
+    assertEquals(rowCount, rows);
+  }
+
+  /**
+   * HADOOP-2156
+   * @throws IOException
+   */
+  @Test (expected=IllegalArgumentException.class)
+  public void testEmptyHTableDescriptor() throws IOException {
+    this.admin.createTable(new HTableDescriptor());
+  }
+
+  @Test
+  public void testEnableDisableAddColumnDeleteColumn() throws Exception {
+    byte [] tableName = Bytes.toBytes("testMasterAdmin");
+    TEST_UTIL.createTable(tableName, HConstants.CATALOG_FAMILY);
+    this.admin.disableTable(tableName);
+    try {
+      new HTable(TEST_UTIL.getConfiguration(), tableName);
+    } catch (org.apache.hadoop.hbase.client.RegionOfflineException e) {
+      // Expected
+    }
+    this.admin.addColumn(tableName, new HColumnDescriptor("col2"));
+    this.admin.enableTable(tableName);
+    try {
+      this.admin.deleteColumn(tableName, Bytes.toBytes("col2"));
+    } catch(TableNotDisabledException e) {
+      // Expected
+    }
+    this.admin.disableTable(tableName);
+    this.admin.deleteColumn(tableName, Bytes.toBytes("col2"));
+    this.admin.deleteTable(tableName);
+  }
+
+  @Test
+  public void testCreateBadTables() throws IOException {
+    String msg = null;
+    try {
+      this.admin.createTable(HTableDescriptor.ROOT_TABLEDESC);
+    } catch (IllegalArgumentException e) {
+      msg = e.toString();
+    }
+    assertTrue("Unexpected exception message " + msg, msg != null &&
+      msg.startsWith(IllegalArgumentException.class.getName()) &&
+      msg.contains(HTableDescriptor.ROOT_TABLEDESC.getNameAsString()));
+    msg = null;
+    try {
+      this.admin.createTable(HTableDescriptor.META_TABLEDESC);
+    } catch(IllegalArgumentException e) {
+      msg = e.toString();
+    }
+    assertTrue("Unexpected exception message " + msg, msg != null &&
+      msg.startsWith(IllegalArgumentException.class.getName()) &&
+      msg.contains(HTableDescriptor.META_TABLEDESC.getNameAsString()));
+
+    // Now try and do concurrent creation with a bunch of threads.
+    final HTableDescriptor threadDesc =
+      new HTableDescriptor("threaded_testCreateBadTables");
+    threadDesc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    int count = 10;
+    Thread [] threads = new Thread [count];
+    final AtomicInteger successes = new AtomicInteger(0);
+    final AtomicInteger failures = new AtomicInteger(0);
+    final HBaseAdmin localAdmin = this.admin;
+    for (int i = 0; i < count; i++) {
+      threads[i] = new Thread(Integer.toString(i)) {
+        @Override
+        public void run() {
+          try {
+            localAdmin.createTable(threadDesc);
+            successes.incrementAndGet();
+          } catch (TableExistsException e) {
+            failures.incrementAndGet();
+          } catch (IOException e) {
+            throw new RuntimeException("Failed threaded create" + getName(), e);
+          }
+        }
+      };
+    }
+    for (int i = 0; i < count; i++) {
+      threads[i].start();
+    }
+    for (int i = 0; i < count; i++) {
+      while(threads[i].isAlive()) {
+        try {
+          Thread.sleep(1000);
+        } catch (InterruptedException e) {
+          // continue
+        }
+      }
+    }
+    // All threads are now dead.  Count up how many tables were created and
+    // how many failed w/ appropriate exception.
+    assertEquals(1, successes.get());
+    assertEquals(count - 1, failures.get());
+  }
+
+  /**
+   * Test for hadoop-1581 'HBASE: Unopenable tablename bug'.
+   * @throws Exception
+   */
+  @Test
+  public void testTableNameClash() throws Exception {
+    String name = "testTableNameClash";
+    admin.createTable(new HTableDescriptor(name + "SOMEUPPERCASE"));
+    admin.createTable(new HTableDescriptor(name));
+    // Before fix, below would fail throwing a NoServerForRegionException.
+    new HTable(TEST_UTIL.getConfiguration(), name);
+  }
+
+  /**
+   * Test read only tables
+   * @throws Exception
+   */
+  @Test
+  public void testReadOnlyTable() throws Exception {
+    byte [] name = Bytes.toBytes("testReadOnlyTable");
+    HTable table = TEST_UTIL.createTable(name, HConstants.CATALOG_FAMILY);
+    byte[] value = Bytes.toBytes("somedata");
+    // This used to use an empty row... That must have been a bug
+    Put put = new Put(value);
+    put.add(HConstants.CATALOG_FAMILY, HConstants.CATALOG_FAMILY, value);
+    table.put(put);
+  }
+
+  /**
+   * Test that user table names can contain '-' and '.' so long as they do not
+   * start with either. HBASE-771
+   * @throws IOException
+   */
+  @Test
+  public void testTableNames() throws IOException {
+    byte[][] illegalNames = new byte[][] {
+        Bytes.toBytes("-bad"),
+        Bytes.toBytes(".bad"),
+        HConstants.ROOT_TABLE_NAME,
+        HConstants.META_TABLE_NAME
+    };
+    for (int i = 0; i < illegalNames.length; i++) {
+      try {
+        new HTableDescriptor(illegalNames[i]);
+        throw new IOException("Did not detect '" +
+          Bytes.toString(illegalNames[i]) + "' as an illegal user table name");
+      } catch (IllegalArgumentException e) {
+        // expected
+      }
+    }
+    byte[] legalName = Bytes.toBytes("g-oo.d");
+    try {
+      new HTableDescriptor(legalName);
+    } catch (IllegalArgumentException e) {
+      throw new IOException("Legal user table name: '" +
+        Bytes.toString(legalName) + "' caused IllegalArgumentException: " +
+        e.getMessage());
+    }
+  }
+
+  /**
+   * For HADOOP-2579
+   * @throws IOException
+   */
+  @Test (expected=TableExistsException.class)
+  public void testTableNotFoundExceptionWithATable() throws IOException {
+    final byte [] name = Bytes.toBytes("testTableNotFoundExceptionWithATable");
+    TEST_UTIL.createTable(name, HConstants.CATALOG_FAMILY);
+    TEST_UTIL.createTable(name, HConstants.CATALOG_FAMILY);
+  }
+
+  /**
+   * For HADOOP-2579
+   * @throws IOException
+   */
+  @Test (expected=TableNotFoundException.class)
+  public void testTableNotFoundExceptionWithoutAnyTables() throws IOException {
+    new HTable(TEST_UTIL.getConfiguration(),
+        "testTableNotFoundExceptionWithoutAnyTables");
+  }
+
+  @Test
+  public void testHundredsOfTable() throws IOException{
+    final int times = 100;
+    HColumnDescriptor fam1 = new HColumnDescriptor("fam1");
+    HColumnDescriptor fam2 = new HColumnDescriptor("fam2");
+    HColumnDescriptor fam3 = new HColumnDescriptor("fam3");
+
+    for(int i = 0; i < times; i++) {
+      HTableDescriptor htd = new HTableDescriptor("table"+i);
+      htd.addFamily(fam1);
+      htd.addFamily(fam2);
+      htd.addFamily(fam3);
+      this.admin.createTable(htd);
+    }
+
+    for(int i = 0; i < times; i++) {
+      String tableName = "table"+i;
+      this.admin.disableTable(tableName);
+      byte [] tableNameBytes = Bytes.toBytes(tableName);
+      assertTrue(this.admin.isTableDisabled(tableNameBytes));
+      this.admin.enableTable(tableName);
+      assertFalse(this.admin.isTableDisabled(tableNameBytes));
+      this.admin.disableTable(tableName);
+      assertTrue(this.admin.isTableDisabled(tableNameBytes));
+      this.admin.deleteTable(tableName);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
new file mode 100644
index 0000000..199b7ae
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
@@ -0,0 +1,3974 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.UUID;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.KeyOnlyFilter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.QualifierFilter;
+import org.apache.hadoop.hbase.filter.RegexStringComparator;
+import org.apache.hadoop.hbase.filter.RowFilter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+/**
+ * Run tests that use the HBase clients: {@link HTable} and {@link HTablePool}.
+ * Sets up the HBase mini cluster once at start and runs through all client tests.
+ * Each test creates a table named after the test method and runs its operations
+ * against that table.
+ */
+public class TestFromClientSide {
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static byte [] ROW = Bytes.toBytes("testRow");
+  private static byte [] FAMILY = Bytes.toBytes("testFamily");
+  private static byte [] QUALIFIER = Bytes.toBytes("testQualifier");
+  private static byte [] VALUE = Bytes.toBytes("testValue");
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @After
+  public void tearDown() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * Verifies that getConfiguration returns the same Configuration object used
+   * to create the HTable instance.
+   */
+  @Test
+  public void testGetConfiguration() throws Exception {
+    byte[] TABLE = Bytes.toBytes("testGetConfiguration");
+    byte[][] FAMILIES = new byte[][] { Bytes.toBytes("foo") };
+    Configuration conf = TEST_UTIL.getConfiguration();
+    HTable table = TEST_UTIL.createTable(TABLE, FAMILIES, conf);
+    assertSame(conf, table.getConfiguration());
+  }
+
+  /**
+   * Client-side test of an involved filter against a multi-family table that
+   * involves deletes.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testWeirdCacheBehaviour() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testWeirdCacheBehaviour");
+    byte [][] FAMILIES = new byte[][] { Bytes.toBytes("trans-blob"),
+        Bytes.toBytes("trans-type"), Bytes.toBytes("trans-date"),
+        Bytes.toBytes("trans-tags"), Bytes.toBytes("trans-group") };
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES);
+    String value = "this is the value";
+    String value2 = "this is some other value";
+    String keyPrefix1 = UUID.randomUUID().toString();
+    String keyPrefix2 = UUID.randomUUID().toString();
+    String keyPrefix3 = UUID.randomUUID().toString();
+    putRows(ht, 3, value, keyPrefix1);
+    putRows(ht, 3, value, keyPrefix2);
+    putRows(ht, 3, value, keyPrefix3);
+    ht.flushCommits();
+    putRows(ht, 3, value2, keyPrefix1);
+    putRows(ht, 3, value2, keyPrefix2);
+    putRows(ht, 3, value2, keyPrefix3);
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), TABLE);
+    System.out.println("Checking values for key: " + keyPrefix1);
+    assertEquals("Got back incorrect number of rows from scan", 3,
+        getNumberOfRows(keyPrefix1, value2, table));
+    System.out.println("Checking values for key: " + keyPrefix2);
+    assertEquals("Got back incorrect number of rows from scan", 3,
+        getNumberOfRows(keyPrefix2, value2, table));
+    System.out.println("Checking values for key: " + keyPrefix3);
+    assertEquals("Got back incorrect number of rows from scan", 3,
+        getNumberOfRows(keyPrefix3, value2, table));
+    deleteColumns(ht, value2, keyPrefix1);
+    deleteColumns(ht, value2, keyPrefix2);
+    deleteColumns(ht, value2, keyPrefix3);
+    System.out.println("Starting important checks.....");
+    assertEquals("Got back incorrect number of rows from scan: " + keyPrefix1,
+      0, getNumberOfRows(keyPrefix1, value2, table));
+    assertEquals("Got back incorrect number of rows from scan: " + keyPrefix2,
+      0, getNumberOfRows(keyPrefix2, value2, table));
+    assertEquals("Got back incorrect number of rows from scan: " + keyPrefix3,
+      0, getNumberOfRows(keyPrefix3, value2, table));
+    ht.setScannerCaching(0);
+    assertEquals("Got back incorrect number of rows from scan", 0,
+      getNumberOfRows(keyPrefix1, value2, table));
+    ht.setScannerCaching(100);
+    assertEquals("Got back incorrect number of rows from scan", 0,
+      getNumberOfRows(keyPrefix2, value2, table));
+  }
+
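+  /**
+   * Deletes the trans-tags:qual2 cell from every row matched by
+   * {@link #buildScanner(String, String, HTable)} and asserts that exactly
+   * three deletes were issued.
+   */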
+  private void deleteColumns(HTable ht, String value, String keyPrefix)
+  throws IOException {
+    ResultScanner scanner = buildScanner(keyPrefix, value, ht);
+    Iterator<Result> it = scanner.iterator();
+    int count = 0;
+    while (it.hasNext()) {
+      Result result = it.next();
+      Delete delete = new Delete(result.getRow());
+      delete.deleteColumn(Bytes.toBytes("trans-tags"), Bytes.toBytes("qual2"));
+      ht.delete(delete);
+      count++;
+    }
+    assertEquals("Did not perform correct number of deletes", 3, count);
+  }
+
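+  /**
+   * Counts the rows returned by {@link #buildScanner(String, String, HTable)}
+   * for the given key prefix and value, printing each row and its cells as it
+   * goes.
+   */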
+  private int getNumberOfRows(String keyPrefix, String value, HTable ht)
+      throws Exception {
+    ResultScanner resultScanner = buildScanner(keyPrefix, value, ht);
+    Iterator<Result> scanner = resultScanner.iterator();
+    int numberOfResults = 0;
+    while (scanner.hasNext()) {
+      Result result = scanner.next();
+      System.out.println("Got back key: " + Bytes.toString(result.getRow()));
+      for (KeyValue kv : result.raw()) {
+        System.out.println("kv=" + kv.toString() + ", "
+            + Bytes.toString(kv.getValue()));
+      }
+      numberOfResults++;
+    }
+    return numberOfResults;
+  }
+
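+  /**
+   * Opens a scanner over all five families that returns only rows whose key
+   * starts with the given prefix and whose trans-tags:qual2 cell equals the
+   * given value (rows missing that cell are filtered out).
+   */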
+  private ResultScanner buildScanner(String keyPrefix, String value, HTable ht)
+      throws IOException {
+    // OurFilterList allFilters = new OurFilterList();
+    FilterList allFilters = new FilterList(/* FilterList.Operator.MUST_PASS_ALL */);
+    allFilters.addFilter(new PrefixFilter(Bytes.toBytes(keyPrefix)));
+    SingleColumnValueFilter filter = new SingleColumnValueFilter(Bytes
+        .toBytes("trans-tags"), Bytes.toBytes("qual2"), CompareOp.EQUAL, Bytes
+        .toBytes(value));
+    filter.setFilterIfMissing(true);
+    allFilters.addFilter(filter);
+
+    // allFilters.addFilter(new
+    // RowExcludingSingleColumnValueFilter(Bytes.toBytes("trans-tags"),
+    // Bytes.toBytes("qual2"), CompareOp.EQUAL, Bytes.toBytes(value)));
+
+    Scan scan = new Scan();
+    scan.addFamily(Bytes.toBytes("trans-blob"));
+    scan.addFamily(Bytes.toBytes("trans-type"));
+    scan.addFamily(Bytes.toBytes("trans-date"));
+    scan.addFamily(Bytes.toBytes("trans-tags"));
+    scan.addFamily(Bytes.toBytes("trans-group"));
+    scan.setFilter(allFilters);
+
+    return ht.getScanner(scan);
+  }
+
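+  /**
+   * Writes numRows rows keyed by the given prefix plus a random UUID, filling
+   * each of the five families and storing the passed value in trans-tags:qual2.
+   */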
+  private void putRows(HTable ht, int numRows, String value, String key)
+      throws IOException {
+    for (int i = 0; i < numRows; i++) {
+      String row = key + "_" + UUID.randomUUID().toString();
+      System.out.println(String.format("Saving row: %s, with value %s", row,
+          value));
+      Put put = new Put(Bytes.toBytes(row));
+      put.add(Bytes.toBytes("trans-blob"), null, Bytes
+          .toBytes("value for blob"));
+      put.add(Bytes.toBytes("trans-type"), null, Bytes.toBytes("statement"));
+      put.add(Bytes.toBytes("trans-date"), null, Bytes
+          .toBytes("20090921010101999"));
+      put.add(Bytes.toBytes("trans-tags"), Bytes.toBytes("qual2"), Bytes
+          .toBytes(value));
+      put.add(Bytes.toBytes("trans-group"), null, Bytes
+          .toBytes("adhocTransactionGroupId"));
+      ht.put(put);
+    }
+  }
+
+  /**
+   * Test filters when there are multiple regions.  It does counts.  Needs
+   * eye-balling of logs to ensure that we're not scanning more regions than
+   * we're supposed to.
+   * Related to the TestFilterAcrossRegions over in the o.a.h.h.filter package.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testFilterAcrossMultipleRegions()
+  throws IOException, InterruptedException {
+    byte [] name = Bytes.toBytes("testFilterAcrossMultipleRegions");
+    HTable t = TEST_UTIL.createTable(name, FAMILY);
+    int rowCount = TEST_UTIL.loadTable(t, FAMILY);
+    assertRowCount(t, rowCount);
+    // Split the table.  Should split on a reasonable key; 'lqj'
+    Map<HRegionInfo, HServerAddress> regions  = splitTable(t);
+    assertRowCount(t, rowCount);
+    // Get end key of first region.
+    byte [] endKey = regions.keySet().iterator().next().getEndKey();
+    // Count rows with a filter that stops us before passed 'endKey'.
+    // Should be count of rows in first region.
+    int endKeyCount = countRows(t, createScanWithRowFilter(endKey));
+    assertTrue(endKeyCount < rowCount);
+
+    // How do I know I did not go to the second region?  That's tough.  Can't really
+    // do that in a client-side region test.  I verified by tracing in the debugger.
+    // I changed the messages that come out when set to DEBUG so should see
+    // when the scanner is done.  Says "Finished with scanning..." with region name.
+    // Check that it finished in the right region.
+
+    // New test.  Make it so scan goes into next region by one and then two.
+    // Make sure count comes out right.
+    byte [] key = new byte [] {endKey[0], endKey[1], (byte)(endKey[2] + 1)};
+    int plusOneCount = countRows(t, createScanWithRowFilter(key));
+    assertEquals(endKeyCount + 1, plusOneCount);
+    key = new byte [] {endKey[0], endKey[1], (byte)(endKey[2] + 2)};
+    int plusTwoCount = countRows(t, createScanWithRowFilter(key));
+    assertEquals(endKeyCount + 2, plusTwoCount);
+
+    // New test.  Make it so I scan one less than endkey.
+    key = new byte [] {endKey[0], endKey[1], (byte)(endKey[2] - 1)};
+    int minusOneCount = countRows(t, createScanWithRowFilter(key));
+    assertEquals(endKeyCount - 1, minusOneCount);
+    // For above test... study logs.  Make sure we do "Finished with scanning.."
+    // in first region and that we do not fall into the next region.
+
+    key = new byte [] {'a', 'a', 'a'};
+    int countBBB = countRows(t,
+      createScanWithRowFilter(key, null, CompareFilter.CompareOp.EQUAL));
+    assertEquals(1, countBBB);
+
+    int countGreater = countRows(t, createScanWithRowFilter(endKey, null,
+      CompareFilter.CompareOp.GREATER_OR_EQUAL));
+    // Zero because the scan starts at the start of the table, before 'endKey',
+    // so the wrapping WhileMatchFilter stops the scan immediately.
+    assertEquals(0, countGreater);
+    countGreater = countRows(t, createScanWithRowFilter(endKey, endKey,
+      CompareFilter.CompareOp.GREATER_OR_EQUAL));
+    assertEquals(rowCount - endKeyCount, countGreater);
+  }
+
+  /*
+   * @param key
+   * @return Scan with RowFilter that does LESS than passed key.
+   */
+  private Scan createScanWithRowFilter(final byte [] key) {
+    return createScanWithRowFilter(key, null, CompareFilter.CompareOp.LESS);
+  }
+
+  /*
+   * @param key
+   * @param op
+   * @param startRow
+   * @return Scan with RowFilter that does CompareOp op on passed key.
+   */
+  private Scan createScanWithRowFilter(final byte [] key,
+      final byte [] startRow, CompareFilter.CompareOp op) {
+    // Make sure key is of some substance... non-null and at least as big as the first key.
+    assertTrue(key != null && key.length > 0 &&
+      Bytes.BYTES_COMPARATOR.compare(key, new byte [] {'a', 'a', 'a'}) >= 0);
+    LOG.info("Key=" + Bytes.toString(key));
+    Scan s = startRow == null? new Scan(): new Scan(startRow);
+    Filter f = new RowFilter(op, new BinaryComparator(key));
+    f = new WhileMatchFilter(f);
+    s.setFilter(f);
+    return s;
+  }
+
+  /*
+   * @param t
+   * @param s
+   * @return Count of rows in table.
+   * @throws IOException
+   */
+  private int countRows(final HTable t, final Scan s)
+  throws IOException {
+    // Assert all rows in table.
+    ResultScanner scanner = t.getScanner(s);
+    int count = 0;
+    for (Result result: scanner) {
+      count++;
+      assertTrue(result.size() > 0);
+      // LOG.info("Count=" + count + ", row=" + Bytes.toString(result.getRow()));
+    }
+    return count;
+  }
+
+  private void assertRowCount(final HTable t, final int expected)
+  throws IOException {
+    assertEquals(expected, countRows(t, new Scan()));
+  }
+
+  /*
+   * Split table into multiple regions.
+   * @param t Table to split.
+   * @return Map of regions to servers.
+   * @throws IOException
+   */
+  private Map<HRegionInfo, HServerAddress> splitTable(final HTable t)
+  throws IOException, InterruptedException {
+    // Split this table in two.
+    HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+    admin.split(t.getTableName());
+    Map<HRegionInfo, HServerAddress> regions = waitOnSplit(t);
+    assertTrue(regions.size() > 1);
+    return regions;
+  }
+
+  /*
+   * Wait on table split.  May return because we waited long enough on the split
+   * and it didn't happen.  Caller should check.
+   * @param t
+   * @return Map of table regions; caller needs to check table actually split.
+   */
+  private Map<HRegionInfo, HServerAddress> waitOnSplit(final HTable t)
+  throws IOException {
+    Map<HRegionInfo, HServerAddress> regions = t.getRegionsInfo();
+    int originalCount = regions.size();
+    for (int i = 0; i < TEST_UTIL.getConfiguration().getInt("hbase.test.retries", 30); i++) {
+      try {
+        Thread.sleep(1000);
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+      regions = t.getRegionsInfo();
+      if (regions.size() > originalCount) break;
+    }
+    return regions;
+  }
+
+  @Test
+  public void testSuperSimple() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testSuperSimple");
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, VALUE);
+    ht.put(put);
+    Scan scan = new Scan();
+    scan.addColumn(FAMILY, TABLE);
+    ResultScanner scanner = ht.getScanner(scan);
+    Result result = scanner.next();
+    assertTrue("Expected null result", result == null);
+    scanner.close();
+    System.out.println("Done.");
+  }
+
+  @Test
+  public void testMaxKeyValueSize() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testMaxKeyValueSize");
+    Configuration conf = TEST_UTIL.getConfiguration();
+    String oldMaxSize = conf.get("hbase.client.keyvalue.maxsize");
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+    byte[] value = new byte[4 * 1024 * 1024];
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, value);
+    ht.put(put);
+    try {
+      conf.setInt("hbase.client.keyvalue.maxsize", 2 * 1024 * 1024);
+      TABLE = Bytes.toBytes("testMaxKeyValueSize2");
+      ht = TEST_UTIL.createTable(TABLE, FAMILY);
+      put = new Put(ROW);
+      put.add(FAMILY, QUALIFIER, value);
+      ht.put(put);
+      fail("Inserting a too large KeyValue worked, should throw exception");
+    } catch(Exception e) {}
+    conf.set("hbase.client.keyvalue.maxsize", oldMaxSize);
+  }
+
+  @Test
+  public void testFilters() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testFilters");
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+    byte [][] ROWS = makeN(ROW, 10);
+    byte [][] QUALIFIERS = {
+        Bytes.toBytes("col0-<d2v1>-<d3v2>"), Bytes.toBytes("col1-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col2-<d2v1>-<d3v2>"), Bytes.toBytes("col3-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col4-<d2v1>-<d3v2>"), Bytes.toBytes("col5-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col6-<d2v1>-<d3v2>"), Bytes.toBytes("col7-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col8-<d2v1>-<d3v2>"), Bytes.toBytes("col9-<d2v1>-<d3v2>")
+    };
+    for(int i=0;i<10;i++) {
+      Put put = new Put(ROWS[i]);
+      put.add(FAMILY, QUALIFIERS[i], VALUE);
+      ht.put(put);
+    }
+    Scan scan = new Scan();
+    scan.addFamily(FAMILY);
+    Filter filter = new QualifierFilter(CompareOp.EQUAL,
+      new RegexStringComparator("col[1-5]"));
+    scan.setFilter(filter);
+    ResultScanner scanner = ht.getScanner(scan);
+    int expectedIndex = 1;
+    for(Result result : scanner) {
+      assertEquals(1, result.size());
+      assertTrue(Bytes.equals(result.raw()[0].getRow(), ROWS[expectedIndex]));
+      assertTrue(Bytes.equals(result.raw()[0].getQualifier(),
+          QUALIFIERS[expectedIndex]));
+      expectedIndex++;
+    }
+    assertEquals(6, expectedIndex);
+    scanner.close();
+  }
+
+  @Test
+  public void testKeyOnlyFilter() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testKeyOnlyFilter");
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+    byte [][] ROWS = makeN(ROW, 10);
+    byte [][] QUALIFIERS = {
+        Bytes.toBytes("col0-<d2v1>-<d3v2>"), Bytes.toBytes("col1-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col2-<d2v1>-<d3v2>"), Bytes.toBytes("col3-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col4-<d2v1>-<d3v2>"), Bytes.toBytes("col5-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col6-<d2v1>-<d3v2>"), Bytes.toBytes("col7-<d2v1>-<d3v2>"),
+        Bytes.toBytes("col8-<d2v1>-<d3v2>"), Bytes.toBytes("col9-<d2v1>-<d3v2>")
+    };
+    for(int i=0;i<10;i++) {
+      Put put = new Put(ROWS[i]);
+      put.add(FAMILY, QUALIFIERS[i], VALUE);
+      ht.put(put);
+    }
+    Scan scan = new Scan();
+    scan.addFamily(FAMILY);
+    Filter filter = new KeyOnlyFilter(true);
+    scan.setFilter(filter);
+    ResultScanner scanner = ht.getScanner(scan);
+    int count = 0;
+    for(Result result : scanner) {
+      assertEquals(1, result.size());
+      assertEquals(Bytes.SIZEOF_INT, result.raw()[0].getValueLength());
+      assertEquals(VALUE.length, Bytes.toInt(result.raw()[0].getValue()));
+      count++;
+    }
+    assertEquals(10, count);
+    scanner.close();
+  }
+  
+  /**
+   * Test simple table and non-existent row cases.
+   */
+  @Test
+  public void testSimpleMissing() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testSimpleMissing");
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+    byte [][] ROWS = makeN(ROW, 4);
+
+    // Try to get a row on an empty table
+    Get get = new Get(ROWS[0]);
+    Result result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILY);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILY, QUALIFIER);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    Scan scan = new Scan();
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+
+    scan = new Scan(ROWS[0]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan(ROWS[0],ROWS[1]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan();
+    scan.addFamily(FAMILY);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan();
+    scan.addColumn(FAMILY, QUALIFIER);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Insert a row
+
+    Put put = new Put(ROWS[2]);
+    put.add(FAMILY, QUALIFIER, VALUE);
+    ht.put(put);
+
+    // Try to get empty rows around it
+
+    get = new Get(ROWS[1]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILY);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[3]);
+    get.addColumn(FAMILY, QUALIFIER);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to scan empty rows around it
+
+    scan = new Scan(ROWS[3]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan(ROWS[0],ROWS[2]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Make sure we can actually get the row
+
+    get = new Get(ROWS[2]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[2], FAMILY, QUALIFIER, VALUE);
+
+    get = new Get(ROWS[2]);
+    get.addFamily(FAMILY);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[2], FAMILY, QUALIFIER, VALUE);
+
+    get = new Get(ROWS[2]);
+    get.addColumn(FAMILY, QUALIFIER);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[2], FAMILY, QUALIFIER, VALUE);
+
+    // Make sure we can scan the row
+
+    scan = new Scan();
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[2], FAMILY, QUALIFIER, VALUE);
+
+    scan = new Scan(ROWS[0],ROWS[3]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[2], FAMILY, QUALIFIER, VALUE);
+
+    scan = new Scan(ROWS[2],ROWS[3]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[2], FAMILY, QUALIFIER, VALUE);
+  }
+
+  /**
+   * Test basic puts, gets, scans, and deletes for a single row
+   * in a multiple family table.
+   */
+  @Test
+  public void testSingleRowMultipleFamily() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testSingleRowMultipleFamily");
+    byte [][] ROWS = makeN(ROW, 3);
+    byte [][] FAMILIES = makeNAscii(FAMILY, 10);
+    byte [][] QUALIFIERS = makeN(QUALIFIER, 10);
+    byte [][] VALUES = makeN(VALUE, 10);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES);
+
+    Get get;
+    Scan scan;
+    Delete delete;
+    Put put;
+    Result result;
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Insert one column to one family
+    ////////////////////////////////////////////////////////////////////////////
+
+    put = new Put(ROWS[0]);
+    put.add(FAMILIES[4], QUALIFIERS[0], VALUES[0]);
+    ht.put(put);
+
+    // Get the single column
+    getVerifySingleColumn(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0, VALUES, 0);
+
+    // Scan the single column
+    scanVerifySingleColumn(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0, VALUES, 0);
+
+    // Get empty results around inserted column
+    getVerifySingleEmpty(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0);
+
+    // Scan empty results around inserted column
+    scanVerifySingleEmpty(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Flush memstore and run same tests from storefiles
+    ////////////////////////////////////////////////////////////////////////////
+
+    TEST_UTIL.flush();
+
+    // Redo get and scan tests from storefile
+    getVerifySingleColumn(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0, VALUES, 0);
+    scanVerifySingleColumn(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0, VALUES, 0);
+    getVerifySingleEmpty(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0);
+    scanVerifySingleEmpty(ht, ROWS, 0, FAMILIES, 4, QUALIFIERS, 0);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Now, Test reading from memstore and storefiles at once
+    ////////////////////////////////////////////////////////////////////////////
+
+    // Insert multiple columns to two other families
+    put = new Put(ROWS[0]);
+    put.add(FAMILIES[2], QUALIFIERS[2], VALUES[2]);
+    put.add(FAMILIES[2], QUALIFIERS[4], VALUES[4]);
+    put.add(FAMILIES[4], QUALIFIERS[4], VALUES[4]);
+    put.add(FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+    put.add(FAMILIES[6], QUALIFIERS[7], VALUES[7]);
+    put.add(FAMILIES[7], QUALIFIERS[7], VALUES[7]);
+    put.add(FAMILIES[9], QUALIFIERS[0], VALUES[0]);
+    ht.put(put);
+
+    // Get multiple columns across multiple families and get empties around it
+    singleRowGetTest(ht, ROWS, FAMILIES, QUALIFIERS, VALUES);
+
+    // Scan multiple columns across multiple families and scan empties around it
+    singleRowScanTest(ht, ROWS, FAMILIES, QUALIFIERS, VALUES);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Flush the table again
+    ////////////////////////////////////////////////////////////////////////////
+
+    TEST_UTIL.flush();
+
+    // Redo tests again
+    singleRowGetTest(ht, ROWS, FAMILIES, QUALIFIERS, VALUES);
+    singleRowScanTest(ht, ROWS, FAMILIES, QUALIFIERS, VALUES);
+
+    // Insert more data to memstore
+    put = new Put(ROWS[0]);
+    put.add(FAMILIES[6], QUALIFIERS[5], VALUES[5]);
+    put.add(FAMILIES[6], QUALIFIERS[8], VALUES[8]);
+    put.add(FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+    put.add(FAMILIES[4], QUALIFIERS[3], VALUES[3]);
+    ht.put(put);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Delete a storefile column
+    ////////////////////////////////////////////////////////////////////////////
+    delete = new Delete(ROWS[0]);
+    delete.deleteColumns(FAMILIES[6], QUALIFIERS[7]);
+    ht.delete(delete);
+
+    // Try to get deleted column
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[7]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to scan deleted column
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[7]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Make sure we can still get a column before it and after it
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[8]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[8], VALUES[8]);
+
+    // Make sure we can still scan a column before it and after it
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[8]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[8], VALUES[8]);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Delete a memstore column
+    ////////////////////////////////////////////////////////////////////////////
+    delete = new Delete(ROWS[0]);
+    delete.deleteColumns(FAMILIES[6], QUALIFIERS[8]);
+    ht.delete(delete);
+
+    // Try to get deleted column
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[8]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to scan deleted column
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[8]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Make sure we can still get a column before it and after it
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[9]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+
+    // Make sure we can still scan a column before it and after it
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[9]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Delete joint storefile/memstore family
+    ////////////////////////////////////////////////////////////////////////////
+
+    delete = new Delete(ROWS[0]);
+    delete.deleteFamily(FAMILIES[4]);
+    ht.delete(delete);
+
+    // Try to get storefile column in deleted family
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to get memstore column in deleted family
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[3]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to get deleted family
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[4]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to scan storefile column in deleted family
+    scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Try to scan memstore column in deleted family
+    scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[3]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Try to scan deleted family
+    scan = new Scan();
+    scan.addFamily(FAMILIES[4]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Make sure we can still get another family
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[2], QUALIFIERS[2]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[2], QUALIFIERS[2], VALUES[2]);
+
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[9]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+
+    // Make sure we can still scan another family
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[9]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+
+    ////////////////////////////////////////////////////////////////////////////
+    // Flush everything and rerun delete tests
+    ////////////////////////////////////////////////////////////////////////////
+
+    TEST_UTIL.flush();
+
+    // Try to get storefile column in deleted family
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to get memstore column in deleted family
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[3]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to get deleted family
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[4]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    // Try to scan storefile column in deleted family
+    scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Try to scan memstore column in deleted family
+    scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[3]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Try to scan deleted family
+    scan = new Scan();
+    scan.addFamily(FAMILIES[4]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    // Make sure we can still get another family
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[2], QUALIFIERS[2]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[2], QUALIFIERS[2], VALUES[2]);
+
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[9]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+
+    // Make sure we can still scan another family
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[6], VALUES[6]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[6], QUALIFIERS[9]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[6], QUALIFIERS[9], VALUES[9]);
+
+  }
+
+  @Test
+  public void testNull() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testNull");
+
+    // Null table name (should NOT work)
+    try {
+      TEST_UTIL.createTable(null, FAMILY);
+      fail("Creating a table with null name passed, should have failed");
+    } catch(Exception e) {}
+
+    // Null family (should NOT work)
+    try {
+      TEST_UTIL.createTable(TABLE, (byte[])null);
+      fail("Creating a table with a null family passed, should fail");
+    } catch(Exception e) {}
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+
+    // Null row (should NOT work)
+    try {
+      Put put = new Put((byte[])null);
+      put.add(FAMILY, QUALIFIER, VALUE);
+      ht.put(put);
+      fail("Inserting a null row worked, should throw exception");
+    } catch(Exception e) {}
+
+    // Null qualifier (should work)
+    {
+      Put put = new Put(ROW);
+      put.add(FAMILY, null, VALUE);
+      ht.put(put);
+
+      getTestNull(ht, ROW, FAMILY, VALUE);
+
+      scanTestNull(ht, ROW, FAMILY, VALUE);
+
+      Delete delete = new Delete(ROW);
+      delete.deleteColumns(FAMILY, null);
+      ht.delete(delete);
+
+      Get get = new Get(ROW);
+      Result result = ht.get(get);
+      assertEmptyResult(result);
+    }
+
+    // Use a new table
+    byte [] TABLE2 = Bytes.toBytes("testNull2");
+    ht = TEST_UTIL.createTable(TABLE2, FAMILY);
+
+    // Empty qualifier, byte[0] instead of null (should work)
+    try {
+      Put put = new Put(ROW);
+      put.add(FAMILY, HConstants.EMPTY_BYTE_ARRAY, VALUE);
+      ht.put(put);
+
+      getTestNull(ht, ROW, FAMILY, VALUE);
+
+      scanTestNull(ht, ROW, FAMILY, VALUE);
+
+      // Flush and try again
+
+      TEST_UTIL.flush();
+
+      getTestNull(ht, ROW, FAMILY, VALUE);
+
+      scanTestNull(ht, ROW, FAMILY, VALUE);
+
+      Delete delete = new Delete(ROW);
+      delete.deleteColumns(FAMILY, HConstants.EMPTY_BYTE_ARRAY);
+      ht.delete(delete);
+
+      Get get = new Get(ROW);
+      Result result = ht.get(get);
+      assertEmptyResult(result);
+
+    } catch(Exception e) {
+      throw new IOException(
+        "Using a row with an empty qualifier should work, but an exception was thrown", e);
+    }
+
+    // Null value
+    try {
+      Put put = new Put(ROW);
+      put.add(FAMILY, QUALIFIER, null);
+      ht.put(put);
+
+      Get get = new Get(ROW);
+      get.addColumn(FAMILY, QUALIFIER);
+      Result result = ht.get(get);
+      assertSingleResult(result, ROW, FAMILY, QUALIFIER, null);
+
+      Scan scan = new Scan();
+      scan.addColumn(FAMILY, QUALIFIER);
+      result = getSingleScanResult(ht, scan);
+      assertSingleResult(result, ROW, FAMILY, QUALIFIER, null);
+
+      Delete delete = new Delete(ROW);
+      delete.deleteColumns(FAMILY, QUALIFIER);
+      ht.delete(delete);
+
+      get = new Get(ROW);
+      result = ht.get(get);
+      assertEmptyResult(result);
+
+    } catch(Exception e) {
+      throw new IOException("Null values should be allowed, but an exception was thrown", e);
+    }
+  }
+
+  @Test
+  public void testVersions() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testVersions");
+
+    long [] STAMPS = makeStamps(20);
+    byte [][] VALUES = makeNAscii(VALUE, 20);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Insert 4 versions of same column
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    put.add(FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    ht.put(put);
+
+    // Verify we can get each one properly
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+
+    // Verify we don't accidentally get others
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+
+    // Ensure maxVersions in query is respected
+    Get get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(2);
+    Result result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+    Scan scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(2);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+    // Flush and redo
+
+    TEST_UTIL.flush();
+
+    // Verify we can get each one properly
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+
+    // Verify we don't accidentally get others
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+
+    // Ensure maxVersions in query is respected
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(2);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(2);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+
+    // Add some memstore and retest
+
+    // Insert 4 more versions of same column and a dupe
+    put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILY, QUALIFIER, STAMPS[6], VALUES[6]);
+    put.add(FAMILY, QUALIFIER, STAMPS[7], VALUES[7]);
+    put.add(FAMILY, QUALIFIER, STAMPS[8], VALUES[8]);
+    ht.put(put);
+
+    // Ensure maxVersions in query is respected
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions();
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 7);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions();
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 7);
+
+    get = new Get(ROW);
+    get.setMaxVersions();
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 7);
+
+    scan = new Scan(ROW);
+    scan.setMaxVersions();
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 7);
+
+    // Verify we can get each one properly
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[7], VALUES[7]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[7], VALUES[7]);
+
+    // Verify we don't accidentally get others
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[9]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[9]);
+
+    // Ensure maxVersions of table is respected
+
+    TEST_UTIL.flush();
+
+    // Insert 4 more versions of same column and a dupe
+    put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[9], VALUES[9]);
+    put.add(FAMILY, QUALIFIER, STAMPS[11], VALUES[11]);
+    put.add(FAMILY, QUALIFIER, STAMPS[13], VALUES[13]);
+    put.add(FAMILY, QUALIFIER, STAMPS[15], VALUES[15]);
+    ht.put(put);
+
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8], STAMPS[9], STAMPS[11], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[7], VALUES[8], VALUES[9], VALUES[11], VALUES[13], VALUES[15]},
+        0, 9);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8], STAMPS[9], STAMPS[11], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[7], VALUES[8], VALUES[9], VALUES[11], VALUES[13], VALUES[15]},
+        0, 9);
+
+    // Delete a version in the memstore and a version in a storefile
+    Delete delete = new Delete(ROW);
+    delete.deleteColumn(FAMILY, QUALIFIER, STAMPS[11]);
+    delete.deleteColumn(FAMILY, QUALIFIER, STAMPS[7]);
+    ht.delete(delete);
+
+    // Test that it's gone
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[8], STAMPS[9], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[8], VALUES[9], VALUES[13], VALUES[15]},
+        0, 9);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[8], STAMPS[9], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6], VALUES[8], VALUES[9], VALUES[13], VALUES[15]},
+        0, 9);
+
+  }
+
+  @Test
+  public void testVersionLimits() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testVersionLimits");
+    byte [][] FAMILIES = makeNAscii(FAMILY, 3);
+    int [] LIMITS = {1,3,5};
+    long [] STAMPS = makeStamps(10);
+    byte [][] VALUES = makeNAscii(VALUE, 10);
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, LIMITS);
+
+    // Insert more versions than the limit on each family
+    Put put = new Put(ROW);
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[4], VALUES[4]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[5], VALUES[5]);
+    put.add(FAMILIES[2], QUALIFIER, STAMPS[6], VALUES[6]);
+    ht.put(put);
+
+    // Verify we only get the right number out of each
+
+    // Family0
+
+    Get get = new Get(ROW);
+    get.addColumn(FAMILIES[0], QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    Result result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {STAMPS[1]},
+        new byte[][] {VALUES[1]},
+        0, 0);
+
+    get = new Get(ROW);
+    get.addFamily(FAMILIES[0]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {STAMPS[1]},
+        new byte[][] {VALUES[1]},
+        0, 0);
+
+    Scan scan = new Scan(ROW);
+    scan.addColumn(FAMILIES[0], QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {STAMPS[1]},
+        new byte[][] {VALUES[1]},
+        0, 0);
+
+    scan = new Scan(ROW);
+    scan.addFamily(FAMILIES[0]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {STAMPS[1]},
+        new byte[][] {VALUES[1]},
+        0, 0);
+
+    // Family1
+
+    get = new Get(ROW);
+    get.addColumn(FAMILIES[1], QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[1], QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    get = new Get(ROW);
+    get.addFamily(FAMILIES[1]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[1], QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILIES[1], QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[1], QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    scan = new Scan(ROW);
+    scan.addFamily(FAMILIES[1]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[1], QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    // Family2
+
+    get = new Get(ROW);
+    get.addColumn(FAMILIES[2], QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[2], QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6]},
+        0, 4);
+
+    get = new Get(ROW);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[2], QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6]},
+        0, 4);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILIES[2], QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[2], QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6]},
+        0, 4);
+
+    scan = new Scan(ROW);
+    scan.addFamily(FAMILIES[2]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[2], QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[4], VALUES[5], VALUES[6]},
+        0, 4);
+
+    // Try all families
+
+    get = new Get(ROW);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 9 keys but received " + result.size(),
+        result.size() == 9);
+
+    get = new Get(ROW);
+    get.addFamily(FAMILIES[0]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 9 keys but received " + result.size(),
+        result.size() == 9);
+
+    get = new Get(ROW);
+    get.addColumn(FAMILIES[0], QUALIFIER);
+    get.addColumn(FAMILIES[1], QUALIFIER);
+    get.addColumn(FAMILIES[2], QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 9 keys but received " + result.size(),
+        result.size() == 9);
+
+    scan = new Scan(ROW);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertTrue("Expected 9 keys but received " + result.size(),
+        result.size() == 9);
+
+    scan = new Scan(ROW);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    scan.addFamily(FAMILIES[0]);
+    scan.addFamily(FAMILIES[1]);
+    scan.addFamily(FAMILIES[2]);
+    result = getSingleScanResult(ht, scan);
+    assertTrue("Expected 9 keys but received " + result.size(),
+        result.size() == 9);
+
+    scan = new Scan(ROW);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    scan.addColumn(FAMILIES[0], QUALIFIER);
+    scan.addColumn(FAMILIES[1], QUALIFIER);
+    scan.addColumn(FAMILIES[2], QUALIFIER);
+    result = getSingleScanResult(ht, scan);
+    assertTrue("Expected 9 keys but received " + result.size(),
+        result.size() == 9);
+
+  }
+
+  @Test
+  public void testDeletes() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testDeletes");
+
+    byte [][] ROWS = makeNAscii(ROW, 6);
+    byte [][] FAMILIES = makeNAscii(FAMILY, 3);
+    byte [][] VALUES = makeN(VALUE, 5);
+    long [] ts = {1000, 2000, 3000, 4000, 5000};
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES);
+
+    Put put = new Put(ROW);
+    put.add(FAMILIES[0], QUALIFIER, ts[0], VALUES[0]);
+    put.add(FAMILIES[0], QUALIFIER, ts[1], VALUES[1]);
+    ht.put(put);
+
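+    // A family delete at ts[0] masks every cell in FAMILIES[0] with a
+    // timestamp <= ts[0], so only the ts[1] version should survive.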
+    Delete delete = new Delete(ROW);
+    delete.deleteFamily(FAMILIES[0], ts[0]);
+    ht.delete(delete);
+
+    Get get = new Get(ROW);
+    get.addFamily(FAMILIES[0]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    Result result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {ts[1]},
+        new byte[][] {VALUES[1]},
+        0, 0);
+
+    Scan scan = new Scan(ROW);
+    scan.addFamily(FAMILIES[0]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {ts[1]},
+        new byte[][] {VALUES[1]},
+        0, 0);
+
+    // Test delete latest version
+    put = new Put(ROW);
+    put.add(FAMILIES[0], QUALIFIER, ts[4], VALUES[4]);
+    put.add(FAMILIES[0], QUALIFIER, ts[2], VALUES[2]);
+    put.add(FAMILIES[0], QUALIFIER, ts[3], VALUES[3]);
+    put.add(FAMILIES[0], null, ts[4], VALUES[4]);
+    put.add(FAMILIES[0], null, ts[2], VALUES[2]);
+    put.add(FAMILIES[0], null, ts[3], VALUES[3]);
+    ht.put(put);
+
+    delete = new Delete(ROW);
+    delete.deleteColumn(FAMILIES[0], QUALIFIER); // ts[4]
+    ht.delete(delete);
+
+    get = new Get(ROW);
+    get.addColumn(FAMILIES[0], QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {ts[1], ts[2], ts[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILIES[0], QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {ts[1], ts[2], ts[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    // Test for HBASE-1847
+    delete = new Delete(ROW);
+    delete.deleteColumn(FAMILIES[0], null);
+    ht.delete(delete);
+
+    // Cleanup null qualifier
+    delete = new Delete(ROW);
+    delete.deleteColumns(FAMILIES[0], null);
+    ht.delete(delete);
+
+    // Expected client behavior might be that you can re-put deleted values
+    // But alas, this is not to be.  We can't put them back in either case.
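+    // The delete markers written above still cover these timestamps (the family
+    // delete at ts[0] and the column delete at ts[4]), so the re-put cells stay
+    // masked until the markers are purged at a major compaction.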
+
+    put = new Put(ROW);
+    put.add(FAMILIES[0], QUALIFIER, ts[0], VALUES[0]); // 1000
+    put.add(FAMILIES[0], QUALIFIER, ts[4], VALUES[4]); // 5000
+    ht.put(put);
+
+
+    // Due to the old internal implementation of Get, the Get() call used to
+    // return ts[4], UNLIKE the Scan below.  With the switch to using Scan for
+    // Get this is no longer the case.
+    get = new Get(ROW);
+    get.addFamily(FAMILIES[0]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {ts[1], ts[2], ts[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    // The scanner likewise returns only the previous values; naively you might
+    // expect the re-put cells to show up, but they do not.
+
+    scan = new Scan(ROW);
+    scan.addFamily(FAMILIES[0]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILIES[0], QUALIFIER,
+        new long [] {ts[1], ts[2], ts[3]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3]},
+        0, 2);
+
+    // Test deleting an entire family from one row but not the others, in various ways
+
+    put = new Put(ROWS[0]);
+    put.add(FAMILIES[1], QUALIFIER, ts[0], VALUES[0]);
+    put.add(FAMILIES[1], QUALIFIER, ts[1], VALUES[1]);
+    put.add(FAMILIES[2], QUALIFIER, ts[2], VALUES[2]);
+    put.add(FAMILIES[2], QUALIFIER, ts[3], VALUES[3]);
+    ht.put(put);
+
+    put = new Put(ROWS[1]);
+    put.add(FAMILIES[1], QUALIFIER, ts[0], VALUES[0]);
+    put.add(FAMILIES[1], QUALIFIER, ts[1], VALUES[1]);
+    put.add(FAMILIES[2], QUALIFIER, ts[2], VALUES[2]);
+    put.add(FAMILIES[2], QUALIFIER, ts[3], VALUES[3]);
+    ht.put(put);
+
+    put = new Put(ROWS[2]);
+    put.add(FAMILIES[1], QUALIFIER, ts[0], VALUES[0]);
+    put.add(FAMILIES[1], QUALIFIER, ts[1], VALUES[1]);
+    put.add(FAMILIES[2], QUALIFIER, ts[2], VALUES[2]);
+    put.add(FAMILIES[2], QUALIFIER, ts[3], VALUES[3]);
+    ht.put(put);
+
+    // Assert that above went in.
+    get = new Get(ROWS[2]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 4 key but received " + result.size() + ": " + result,
+        result.size() == 4);
+
+    delete = new Delete(ROWS[0]);
+    delete.deleteFamily(FAMILIES[2]);
+    ht.delete(delete);
+
+    delete = new Delete(ROWS[1]);
+    delete.deleteColumns(FAMILIES[1], QUALIFIER);
+    ht.delete(delete);
+
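+    // Calling deleteColumn twice on FAMILIES[1] is expected to remove its two
+    // latest versions (ts[0] and ts[1]); the single call on FAMILIES[2] removes
+    // only the latest version, ts[3].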
+    delete = new Delete(ROWS[2]);
+    delete.deleteColumn(FAMILIES[1], QUALIFIER);
+    delete.deleteColumn(FAMILIES[1], QUALIFIER);
+    delete.deleteColumn(FAMILIES[2], QUALIFIER);
+    ht.delete(delete);
+
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 2 keys but received " + result.size(),
+        result.size() == 2);
+    assertNResult(result, ROWS[0], FAMILIES[1], QUALIFIER,
+        new long [] {ts[0], ts[1]},
+        new byte[][] {VALUES[0], VALUES[1]},
+        0, 1);
+
+    scan = new Scan(ROWS[0]);
+    scan.addFamily(FAMILIES[1]);
+    scan.addFamily(FAMILIES[2]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertTrue("Expected 2 keys but received " + result.size(),
+        result.size() == 2);
+    assertNResult(result, ROWS[0], FAMILIES[1], QUALIFIER,
+        new long [] {ts[0], ts[1]},
+        new byte[][] {VALUES[0], VALUES[1]},
+        0, 1);
+
+    get = new Get(ROWS[1]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 2 keys but received " + result.size(),
+        result.size() == 2);
+
+    scan = new Scan(ROWS[1]);
+    scan.addFamily(FAMILIES[1]);
+    scan.addFamily(FAMILIES[2]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertTrue("Expected 2 keys but received " + result.size(),
+        result.size() == 2);
+
+    get = new Get(ROWS[2]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertEquals(1, result.size());
+    assertNResult(result, ROWS[2], FAMILIES[2], QUALIFIER,
+        new long [] {ts[2]},
+        new byte[][] {VALUES[2]},
+        0, 0);
+
+    scan = new Scan(ROWS[2]);
+    scan.addFamily(FAMILIES[1]);
+    scan.addFamily(FAMILIES[2]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertEquals(1, result.size());
+    assertNResult(result, ROWS[2], FAMILIES[2], QUALIFIER,
+        new long [] {ts[2]},
+        new byte[][] {VALUES[2]},
+        0, 0);
+
+    // Test if we delete the family first in one row (HBASE-1541)
+
+    delete = new Delete(ROWS[3]);
+    delete.deleteFamily(FAMILIES[1]);
+    ht.delete(delete);
+
+    put = new Put(ROWS[3]);
+    put.add(FAMILIES[2], QUALIFIER, VALUES[0]);
+    ht.put(put);
+
+    put = new Put(ROWS[4]);
+    put.add(FAMILIES[1], QUALIFIER, VALUES[1]);
+    put.add(FAMILIES[2], QUALIFIER, VALUES[2]);
+    ht.put(put);
+
+    get = new Get(ROWS[3]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 1 key but received " + result.size(),
+        result.size() == 1);
+
+    get = new Get(ROWS[4]);
+    get.addFamily(FAMILIES[1]);
+    get.addFamily(FAMILIES[2]);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertTrue("Expected 2 keys but received " + result.size(),
+        result.size() == 2);
+
+    scan = new Scan(ROWS[3]);
+    scan.addFamily(FAMILIES[1]);
+    scan.addFamily(FAMILIES[2]);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    ResultScanner scanner = ht.getScanner(scan);
+    result = scanner.next();
+    assertTrue("Expected 1 key but received " + result.size(),
+        result.size() == 1);
+    assertTrue(Bytes.equals(result.sorted()[0].getRow(), ROWS[3]));
+    assertTrue(Bytes.equals(result.sorted()[0].getValue(), VALUES[0]));
+    result = scanner.next();
+    assertTrue("Expected 2 keys but received " + result.size(),
+        result.size() == 2);
+    assertTrue(Bytes.equals(result.sorted()[0].getRow(), ROWS[4]));
+    assertTrue(Bytes.equals(result.sorted()[1].getRow(), ROWS[4]));
+    assertTrue(Bytes.equals(result.sorted()[0].getValue(), VALUES[1]));
+    assertTrue(Bytes.equals(result.sorted()[1].getValue(), VALUES[2]));
+    scanner.close();
+
+    // Add test of bulk deleting.
+    for (int i = 0; i < 10; i++) {
+      byte [] bytes = Bytes.toBytes(i);
+      put = new Put(bytes);
+      put.add(FAMILIES[0], QUALIFIER, bytes);
+      ht.put(put);
+    }
+    for (int i = 0; i < 10; i++) {
+      byte [] bytes = Bytes.toBytes(i);
+      get = new Get(bytes);
+      get.addFamily(FAMILIES[0]);
+      result = ht.get(get);
+      assertTrue(result.size() == 1);
+    }
+    ArrayList<Delete> deletes = new ArrayList<Delete>();
+    for (int i = 0; i < 10; i++) {
+      byte [] bytes = Bytes.toBytes(i);
+      delete = new Delete(bytes);
+      delete.deleteFamily(FAMILIES[0]);
+      deletes.add(delete);
+    }
+    ht.delete(deletes);
+    for (int i = 0; i < 10; i++) {
+      byte [] bytes = Bytes.toBytes(i);
+      get = new Get(bytes);
+      get.addFamily(FAMILIES[0]);
+      result = ht.get(get);
+      assertTrue(result.size() == 0);
+    }
+  }
+
+  /*
+   * Baseline "scalability" test.
+   *
+   * Tests one hundred families, one million columns, one million versions
+   */
+  @Ignore @Test
+  public void testMillions() throws Exception {
+
+    // 100 families
+
+    // millions of columns
+
+    // millions of versions
+
+  }
+
+  @Ignore @Test
+  public void testMultipleRegionsAndBatchPuts() throws Exception {
+    // Two family table
+
+    // Insert lots of rows
+
+    // Insert to the same row with batched puts
+
+    // Insert to multiple rows with batched puts
+
+    // Split the table
+
+    // Get row from first region
+
+    // Get row from second region
+
+    // Scan all rows
+
+    // Insert to multiple regions with batched puts
+
+    // Get row from first region
+
+    // Get row from second region
+
+    // Scan all rows
+
+
+  }
+
+  @Ignore @Test
+  public void testMultipleRowMultipleFamily() throws Exception {
+
+  }
+
+  //
+  // JIRA Testers
+  //
+
+  /**
+   * HBASE-867
+   *    If millions of columns in a column family, hbase scanner won't come up
+   *
+   *    Test will create numRows rows, each with numColsPerRow columns
+   *    (1 version each), and attempt to scan them all.
+   *
+   *    To test at scale, up numColsPerRow to the millions
+   *    (have not gotten that to work running as junit though)
+   */
+  @Test
+  public void testJiraTest867() throws Exception {
+    int numRows = 10;
+    int numColsPerRow = 2000;
+
+    byte [] TABLE = Bytes.toBytes("testJiraTest867");
+
+    byte [][] ROWS = makeN(ROW, numRows);
+    byte [][] QUALIFIERS = makeN(QUALIFIER, numColsPerRow);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY);
+
+    // Insert rows
+
+    for(int i=0;i<numRows;i++) {
+      Put put = new Put(ROWS[i]);
+      for(int j=0;j<numColsPerRow;j++) {
+        put.add(FAMILY, QUALIFIERS[j], QUALIFIERS[j]);
+      }
+      assertTrue("Put expected to contain " + numColsPerRow + " columns but " +
+          "only contains " + put.size(), put.size() == numColsPerRow);
+      ht.put(put);
+    }
+
+    // Get a row
+    Get get = new Get(ROWS[numRows-1]);
+    Result result = ht.get(get);
+    assertNumKeys(result, numColsPerRow);
+    KeyValue [] keys = result.sorted();
+    for(int i=0;i<result.size();i++) {
+      assertKey(keys[i], ROWS[numRows-1], FAMILY, QUALIFIERS[i], QUALIFIERS[i]);
+    }
+
+    // Scan the rows
+    Scan scan = new Scan();
+    ResultScanner scanner = ht.getScanner(scan);
+    int rowCount = 0;
+    while((result = scanner.next()) != null) {
+      assertNumKeys(result, numColsPerRow);
+      KeyValue [] kvs = result.sorted();
+      for(int i=0;i<numColsPerRow;i++) {
+        assertKey(kvs[i], ROWS[rowCount], FAMILY, QUALIFIERS[i], QUALIFIERS[i]);
+      }
+      rowCount++;
+    }
+    scanner.close();
+    assertTrue("Expected to scan " + numRows + " rows but actually scanned "
+        + rowCount + " rows", rowCount == numRows);
+
+    // flush and try again
+
+    TEST_UTIL.flush();
+
+    // Get a row
+    get = new Get(ROWS[numRows-1]);
+    result = ht.get(get);
+    assertNumKeys(result, numColsPerRow);
+    keys = result.sorted();
+    for(int i=0;i<result.size();i++) {
+      assertKey(keys[i], ROWS[numRows-1], FAMILY, QUALIFIERS[i], QUALIFIERS[i]);
+    }
+
+    // Scan the rows
+    scan = new Scan();
+    scanner = ht.getScanner(scan);
+    rowCount = 0;
+    while((result = scanner.next()) != null) {
+      assertNumKeys(result, numColsPerRow);
+      KeyValue [] kvs = result.sorted();
+      for(int i=0;i<numColsPerRow;i++) {
+        assertKey(kvs[i], ROWS[rowCount], FAMILY, QUALIFIERS[i], QUALIFIERS[i]);
+      }
+      rowCount++;
+    }
+    scanner.close();
+    assertTrue("Expected to scan " + numRows + " rows but actually scanned "
+        + rowCount + " rows", rowCount == numRows);
+
+  }
+
+  /**
+   * HBASE-861
+   *    get with timestamp will return a value if there is a version with an
+   *    earlier timestamp
+   */
+  @Test
+  public void testJiraTest861() throws Exception {
+
+    byte [] TABLE = Bytes.toBytes("testJiraTest861");
+    byte [][] VALUES = makeNAscii(VALUE, 7);
+    long [] STAMPS = makeStamps(7);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Insert three versions
+
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    ht.put(put);
+
+    // Get the middle value
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+
+    // Try to get one version before (expect fail)
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[1]);
+
+    // Try to get one version after (expect fail)
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[5]);
+
+    // Try same from storefile
+    TEST_UTIL.flush();
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[1]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[5]);
+
+    // Insert two more versions surrounding others, into memstore
+    put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILY, QUALIFIER, STAMPS[6], VALUES[6]);
+    ht.put(put);
+
+    // Check we can get everything we should and can't get what we shouldn't
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[0], VALUES[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[5]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[6], VALUES[6]);
+
+    // Try same from two storefiles
+    TEST_UTIL.flush();
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[0], VALUES[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[5]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[6], VALUES[6]);
+
+  }
+
+  /**
+   * HBASE-33
+   *    Add a HTable get/obtainScanner method that retrieves all versions of a
+   *    particular column and row between two timestamps
+   */
+  @Test
+  public void testJiraTest33() throws Exception {
+
+    byte [] TABLE = Bytes.toBytes("testJiraTest33");
+    byte [][] VALUES = makeNAscii(VALUE, 7);
+    long [] STAMPS = makeStamps(7);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Insert lots of versions
+
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    put.add(FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    ht.put(put);
+
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 2);
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 3);
+
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 2);
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 3);
+
+    // Try same from storefile
+    TEST_UTIL.flush();
+
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 2);
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+    getVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 3);
+
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 2);
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+    scanVersionRangeAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 3);
+
+  }
+
+  /**
+   * HBASE-1014
+   *    commit(BatchUpdate) method should return timestamp
+   */
+  @Test
+  public void testJiraTest1014() throws Exception {
+
+    byte [] TABLE = Bytes.toBytes("testJiraTest1014");
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    long manualStamp = 12345;
+
+    // Insert lots of versions
+
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, manualStamp, VALUE);
+    ht.put(put);
+
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, manualStamp, VALUE);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, manualStamp-1);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, manualStamp+1);
+
+  }
+
+  /**
+   * HBASE-1182
+   *    Scan for columns > some timestamp
+   */
+  @Test
+  public void testJiraTest1182() throws Exception {
+
+    byte [] TABLE = Bytes.toBytes("testJiraTest1182");
+    byte [][] VALUES = makeNAscii(VALUE, 7);
+    long [] STAMPS = makeStamps(7);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Insert lots of versions
+
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    put.add(FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    ht.put(put);
+
+    getVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    getVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 5);
+    getVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+
+    scanVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    scanVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 5);
+    scanVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+
+    // Try same from storefile
+    TEST_UTIL.flush();
+
+    getVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    getVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 5);
+    getVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+
+    scanVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+    scanVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 2, 5);
+    scanVersionRangeAndVerifyGreaterThan(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 4, 5);
+  }
+
+  /**
+   * HBASE-52
+   *    Add a means of scanning over all versions
+   */
+  @Test
+  public void testJiraTest52() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testJiraTest52");
+    byte [][] VALUES = makeNAscii(VALUE, 7);
+    long [] STAMPS = makeStamps(7);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Insert lots of versions
+
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[0], VALUES[0]);
+    put.add(FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    put.add(FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    ht.put(put);
+
+    getAllVersionsAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+
+    scanAllVersionsAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+
+    // Try same from storefile
+    TEST_UTIL.flush();
+
+    getAllVersionsAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+
+    scanAllVersionsAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS, VALUES, 0, 5);
+  }
+
+  //
+  // Bulk Testers
+  //
+
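+  // Get all versions with timestamps strictly greater than stamps[start]
+  // (i.e. stamps[start+1] through stamps[end]) and verify them.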
+  private void getVersionRangeAndVerifyGreaterThan(HTable ht, byte [] row,
+      byte [] family, byte [] qualifier, long [] stamps, byte [][] values,
+      int start, int end)
+  throws IOException {
+    Get get = new Get(row);
+    get.addColumn(family, qualifier);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    get.setTimeRange(stamps[start+1], Long.MAX_VALUE);
+    Result result = ht.get(get);
+    assertNResult(result, row, family, qualifier, stamps, values, start+1, end);
+  }
+
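+  // Get all versions with timestamps in the inclusive range
+  // stamps[start]..stamps[end] and verify them.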
+  private void getVersionRangeAndVerify(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long [] stamps, byte [][] values, int start, int end)
+  throws IOException {
+    Get get = new Get(row);
+    get.addColumn(family, qualifier);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    get.setTimeRange(stamps[start], stamps[end]+1);
+    Result result = ht.get(get);
+    assertNResult(result, row, family, qualifier, stamps, values, start, end);
+  }
+
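+  // Get every stored version of the column and verify stamps[start..end].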
+  private void getAllVersionsAndVerify(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long [] stamps, byte [][] values, int start, int end)
+  throws IOException {
+    Get get = new Get(row);
+    get.addColumn(family, qualifier);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    Result result = ht.get(get);
+    assertNResult(result, row, family, qualifier, stamps, values, start, end);
+  }
+
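+  // Scan-based counterparts of the Get helpers above.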
+  private void scanVersionRangeAndVerifyGreaterThan(HTable ht, byte [] row,
+      byte [] family, byte [] qualifier, long [] stamps, byte [][] values,
+      int start, int end)
+  throws IOException {
+    Scan scan = new Scan(row);
+    scan.addColumn(family, qualifier);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    scan.setTimeRange(stamps[start+1], Long.MAX_VALUE);
+    Result result = getSingleScanResult(ht, scan);
+    assertNResult(result, row, family, qualifier, stamps, values, start+1, end);
+  }
+
+  private void scanVersionRangeAndVerify(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long [] stamps, byte [][] values, int start, int end)
+  throws IOException {
+    Scan scan = new Scan(row);
+    scan.addColumn(family, qualifier);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    scan.setTimeRange(stamps[start], stamps[end]+1);
+    Result result = getSingleScanResult(ht, scan);
+    assertNResult(result, row, family, qualifier, stamps, values, start, end);
+  }
+
+  private void scanAllVersionsAndVerify(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long [] stamps, byte [][] values, int start, int end)
+  throws IOException {
+    Scan scan = new Scan(row);
+    scan.addColumn(family, qualifier);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    Result result = getSingleScanResult(ht, scan);
+    assertNResult(result, row, family, qualifier, stamps, values, start, end);
+  }
+
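+  // Get exactly one version at the given timestamp and verify its value;
+  // the Missing variant below expects an empty result instead.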
+  private void getVersionAndVerify(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long stamp, byte [] value)
+  throws Exception {
+    Get get = new Get(row);
+    get.addColumn(family, qualifier);
+    get.setTimeStamp(stamp);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    Result result = ht.get(get);
+    assertSingleResult(result, row, family, qualifier, stamp, value);
+  }
+
+  private void getVersionAndVerifyMissing(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long stamp)
+  throws Exception {
+    Get get = new Get(row);
+    get.addColumn(family, qualifier);
+    get.setTimeStamp(stamp);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    Result result = ht.get(get);
+    assertEmptyResult(result);
+  }
+
+  private void scanVersionAndVerify(HTable ht, byte [] row, byte [] family,
+      byte [] qualifier, long stamp, byte [] value)
+  throws Exception {
+    Scan scan = new Scan(row);
+    scan.addColumn(family, qualifier);
+    scan.setTimeStamp(stamp);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    Result result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, row, family, qualifier, stamp, value);
+  }
+
+  private void scanVersionAndVerifyMissing(HTable ht, byte [] row,
+      byte [] family, byte [] qualifier, long stamp)
+  throws Exception {
+    Scan scan = new Scan(row);
+    scan.addColumn(family, qualifier);
+    scan.setTimeStamp(stamp);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    Result result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+  }
+
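+  // Verify a value stored under a null qualifier is returned whether the Get
+  // names a null qualifier, an empty qualifier, just the family, or the whole row.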
+  private void getTestNull(HTable ht, byte [] row, byte [] family,
+      byte [] value)
+  throws Exception {
+
+    Get get = new Get(row);
+    get.addColumn(family, null);
+    Result result = ht.get(get);
+    assertSingleResult(result, row, family, null, value);
+
+    get = new Get(row);
+    get.addColumn(family, HConstants.EMPTY_BYTE_ARRAY);
+    result = ht.get(get);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+    get = new Get(row);
+    get.addFamily(family);
+    result = ht.get(get);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+    get = new Get(row);
+    result = ht.get(get);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+  }
+
+  private void scanTestNull(HTable ht, byte [] row, byte [] family,
+      byte [] value)
+  throws Exception {
+
+    Scan scan = new Scan();
+    scan.addColumn(family, null);
+    Result result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+    scan = new Scan();
+    scan.addColumn(family, HConstants.EMPTY_BYTE_ARRAY);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+    scan = new Scan();
+    scan.addFamily(family);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+    scan = new Scan();
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, row, family, HConstants.EMPTY_BYTE_ARRAY, value);
+
+  }
+
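+  // Exercise Get combinations (single column, family wildcard, multiple
+  // families, explicit column lists, whole row) against one pre-populated row.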
+  private void singleRowGetTest(HTable ht, byte [][] ROWS, byte [][] FAMILIES,
+      byte [][] QUALIFIERS, byte [][] VALUES)
+  throws Exception {
+
+    // Single column from memstore
+    Get get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[0]);
+    Result result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[4], QUALIFIERS[0], VALUES[0]);
+
+    // Single column from storefile
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[2], QUALIFIERS[2]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[2], QUALIFIERS[2], VALUES[2]);
+
+    // Single column from storefile, family match
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[7]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[0], FAMILIES[7], QUALIFIERS[7], VALUES[7]);
+
+    // Two columns, one from memstore one from storefile, same family,
+    // wildcard match
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[4]);
+    result = ht.get(get);
+    assertDoubleResult(result, ROWS[0], FAMILIES[4], QUALIFIERS[0], VALUES[0],
+        FAMILIES[4], QUALIFIERS[4], VALUES[4]);
+
+    // Two columns, one from memstore one from storefile, same family,
+    // explicit match
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    result = ht.get(get);
+    assertDoubleResult(result, ROWS[0], FAMILIES[4], QUALIFIERS[0], VALUES[0],
+        FAMILIES[4], QUALIFIERS[4], VALUES[4]);
+
+    // Three columns, one from memstore and two from storefile, different families,
+    // wildcard match
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[4]);
+    get.addFamily(FAMILIES[7]);
+    result = ht.get(get);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] { {4, 0, 0}, {4, 4, 4}, {7, 7, 7} });
+
+    // Multiple columns from memstore and storefiles, many families, wildcard match
+    get = new Get(ROWS[0]);
+    get.addFamily(FAMILIES[2]);
+    get.addFamily(FAMILIES[4]);
+    get.addFamily(FAMILIES[6]);
+    get.addFamily(FAMILIES[7]);
+    result = ht.get(get);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] {
+          {2, 2, 2}, {2, 4, 4}, {4, 0, 0}, {4, 4, 4}, {6, 6, 6}, {6, 7, 7}, {7, 7, 7}
+    });
+
+    // Multiple columns from memstore and storefiles, many families, explicit match
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[2], QUALIFIERS[2]);
+    get.addColumn(FAMILIES[2], QUALIFIERS[4]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    get.addColumn(FAMILIES[6], QUALIFIERS[7]);
+    get.addColumn(FAMILIES[7], QUALIFIERS[7]);
+    get.addColumn(FAMILIES[7], QUALIFIERS[8]);
+    result = ht.get(get);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] {
+          {2, 2, 2}, {2, 4, 4}, {4, 0, 0}, {4, 4, 4}, {6, 6, 6}, {6, 7, 7}, {7, 7, 7}
+    });
+
+    // Everything
+    get = new Get(ROWS[0]);
+    result = ht.get(get);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] {
+          {2, 2, 2}, {2, 4, 4}, {4, 0, 0}, {4, 4, 4}, {6, 6, 6}, {6, 7, 7}, {7, 7, 7}, {9, 0, 0}
+    });
+
+    // Gets outside the inserted columns should come back empty
+
+    get = new Get(ROWS[1]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[0]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[3]);
+    get.addColumn(FAMILIES[2], QUALIFIERS[3]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+  }
+
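+  // Same verification as singleRowGetTest, driven through scans.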
+  private void singleRowScanTest(HTable ht, byte [][] ROWS, byte [][] FAMILIES,
+      byte [][] QUALIFIERS, byte [][] VALUES)
+  throws Exception {
+
+    // Single column from memstore
+    Scan scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[0]);
+    Result result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[4], QUALIFIERS[0], VALUES[0]);
+
+    // Single column from storefile
+    scan = new Scan();
+    scan.addColumn(FAMILIES[2], QUALIFIERS[2]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[2], QUALIFIERS[2], VALUES[2]);
+
+    // Single column from storefile, family match
+    scan = new Scan();
+    scan.addFamily(FAMILIES[7]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[0], FAMILIES[7], QUALIFIERS[7], VALUES[7]);
+
+    // Two columns, one from memstore one from storefile, same family,
+    // wildcard match
+    scan = new Scan();
+    scan.addFamily(FAMILIES[4]);
+    result = getSingleScanResult(ht, scan);
+    assertDoubleResult(result, ROWS[0], FAMILIES[4], QUALIFIERS[0], VALUES[0],
+        FAMILIES[4], QUALIFIERS[4], VALUES[4]);
+
+    // Two columns, one from memstore one from storefile, same family,
+    // explicit match
+    scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[0]);
+    scan.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    result = getSingleScanResult(ht, scan);
+    assertDoubleResult(result, ROWS[0], FAMILIES[4], QUALIFIERS[0], VALUES[0],
+        FAMILIES[4], QUALIFIERS[4], VALUES[4]);
+
+    // Three columns, one from memstore and two from storefile, different families,
+    // wildcard match
+    scan = new Scan();
+    scan.addFamily(FAMILIES[4]);
+    scan.addFamily(FAMILIES[7]);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] { {4, 0, 0}, {4, 4, 4}, {7, 7, 7} });
+
+    // Multiple columns from memstore and storefiles, many families, wildcard match
+    scan = new Scan();
+    scan.addFamily(FAMILIES[2]);
+    scan.addFamily(FAMILIES[4]);
+    scan.addFamily(FAMILIES[6]);
+    scan.addFamily(FAMILIES[7]);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] {
+          {2, 2, 2}, {2, 4, 4}, {4, 0, 0}, {4, 4, 4}, {6, 6, 6}, {6, 7, 7}, {7, 7, 7}
+    });
+
+    // Multiple columns from memstore and storefiles, many families, explicit match
+    scan = new Scan();
+    scan.addColumn(FAMILIES[2], QUALIFIERS[2]);
+    scan.addColumn(FAMILIES[2], QUALIFIERS[4]);
+    scan.addColumn(FAMILIES[4], QUALIFIERS[0]);
+    scan.addColumn(FAMILIES[4], QUALIFIERS[4]);
+    scan.addColumn(FAMILIES[6], QUALIFIERS[6]);
+    scan.addColumn(FAMILIES[6], QUALIFIERS[7]);
+    scan.addColumn(FAMILIES[7], QUALIFIERS[7]);
+    scan.addColumn(FAMILIES[7], QUALIFIERS[8]);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] {
+          {2, 2, 2}, {2, 4, 4}, {4, 0, 0}, {4, 4, 4}, {6, 6, 6}, {6, 7, 7}, {7, 7, 7}
+    });
+
+    // Everything
+    scan = new Scan();
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROWS[0], FAMILIES, QUALIFIERS, VALUES,
+        new int [][] {
+          {2, 2, 2}, {2, 4, 4}, {4, 0, 0}, {4, 4, 4}, {6, 6, 6}, {6, 7, 7}, {7, 7, 7}, {9, 0, 0}
+    });
+
+    // Scans outside the inserted columns should come back empty
+
+    scan = new Scan(ROWS[1]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[4], QUALIFIERS[3]);
+    scan.addColumn(FAMILIES[2], QUALIFIERS[3]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+  }
+
+  /**
+   * Verify a single column using gets.
+   * Expects family and qualifier arrays to be valid for at least
+   * the range:  idx-2 to idx+2
+   */
+  private void getVerifySingleColumn(HTable ht,
+      byte [][] ROWS, int ROWIDX,
+      byte [][] FAMILIES, int FAMILYIDX,
+      byte [][] QUALIFIERS, int QUALIFIERIDX,
+      byte [][] VALUES, int VALUEIDX)
+  throws Exception {
+
+    Get get = new Get(ROWS[ROWIDX]);
+    Result result = ht.get(get);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addFamily(FAMILIES[FAMILYIDX]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addFamily(FAMILIES[FAMILYIDX-2]);
+    get.addFamily(FAMILIES[FAMILYIDX]);
+    get.addFamily(FAMILIES[FAMILYIDX+2]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addColumn(FAMILIES[FAMILYIDX], QUALIFIERS[0]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addColumn(FAMILIES[FAMILYIDX], QUALIFIERS[1]);
+    get.addFamily(FAMILIES[FAMILYIDX]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addFamily(FAMILIES[FAMILYIDX]);
+    get.addColumn(FAMILIES[FAMILYIDX+1], QUALIFIERS[1]);
+    get.addColumn(FAMILIES[FAMILYIDX-2], QUALIFIERS[1]);
+    get.addFamily(FAMILIES[FAMILYIDX-1]);
+    get.addFamily(FAMILIES[FAMILYIDX+2]);
+    result = ht.get(get);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+  }
+
+
+  /**
+   * Verify a single column using scanners.
+   * Expects family and qualifier arrays to be valid for at least
+   * the range:  idx-2 to idx+2
+   * Expects row array to be valid for at least idx to idx+2
+   */
+  private void scanVerifySingleColumn(HTable ht,
+      byte [][] ROWS, int ROWIDX,
+      byte [][] FAMILIES, int FAMILYIDX,
+      byte [][] QUALIFIERS, int QUALIFIERIDX,
+      byte [][] VALUES, int VALUEIDX)
+  throws Exception {
+
+    Scan scan = new Scan();
+    Result result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan(ROWS[ROWIDX]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan(ROWS[ROWIDX], ROWS[ROWIDX+1]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan(HConstants.EMPTY_START_ROW, ROWS[ROWIDX+1]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan();
+    scan.addFamily(FAMILIES[FAMILYIDX]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[FAMILYIDX], QUALIFIERS[QUALIFIERIDX]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[FAMILYIDX], QUALIFIERS[QUALIFIERIDX+1]);
+    scan.addFamily(FAMILIES[FAMILYIDX]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[FAMILYIDX-1], QUALIFIERS[QUALIFIERIDX+1]);
+    scan.addColumn(FAMILIES[FAMILYIDX], QUALIFIERS[QUALIFIERIDX]);
+    scan.addFamily(FAMILIES[FAMILYIDX+1]);
+    result = getSingleScanResult(ht, scan);
+    assertSingleResult(result, ROWS[ROWIDX], FAMILIES[FAMILYIDX],
+        QUALIFIERS[QUALIFIERIDX], VALUES[VALUEIDX]);
+
+  }
+
+  /**
+   * Verify we do not read any values by accident around a single column
+   * Same requirements as getVerifySingleColumn
+   */
+  private void getVerifySingleEmpty(HTable ht,
+      byte [][] ROWS, int ROWIDX,
+      byte [][] FAMILIES, int FAMILYIDX,
+      byte [][] QUALIFIERS, int QUALIFIERIDX)
+  throws Exception {
+
+    Get get = new Get(ROWS[ROWIDX]);
+    get.addFamily(FAMILIES[4]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[1]);
+    Result result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addFamily(FAMILIES[4]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[2]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[ROWIDX]);
+    get.addFamily(FAMILIES[3]);
+    get.addColumn(FAMILIES[4], QUALIFIERS[2]);
+    get.addFamily(FAMILIES[5]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+    get = new Get(ROWS[ROWIDX+1]);
+    result = ht.get(get);
+    assertEmptyResult(result);
+
+  }
+
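+  // Scan-based counterpart of getVerifySingleEmpty.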
+  private void scanVerifySingleEmpty(HTable ht,
+      byte [][] ROWS, int ROWIDX,
+      byte [][] FAMILIES, int FAMILYIDX,
+      byte [][] QUALIFIERS, int QUALIFIERIDX)
+  throws Exception {
+
+    Scan scan = new Scan(ROWS[ROWIDX+1]);
+    Result result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan(ROWS[ROWIDX+1],ROWS[ROWIDX+2]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan(HConstants.EMPTY_START_ROW, ROWS[ROWIDX]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+    scan = new Scan();
+    scan.addColumn(FAMILIES[FAMILYIDX], QUALIFIERS[QUALIFIERIDX+1]);
+    scan.addFamily(FAMILIES[FAMILYIDX-1]);
+    result = getSingleScanResult(ht, scan);
+    assertNullResult(result);
+
+  }
+
+  //
+  // Verifiers
+  //
+
+  private void assertKey(KeyValue key, byte [] row, byte [] family,
+      byte [] qualifier, byte [] value)
+  throws Exception {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(key.getRow()) +"]",
+        equals(row, key.getRow()));
+    assertTrue("Expected family [" + Bytes.toString(family) + "] " +
+        "Got family [" + Bytes.toString(key.getFamily()) + "]",
+        equals(family, key.getFamily()));
+    assertTrue("Expected qualifier [" + Bytes.toString(qualifier) + "] " +
+        "Got qualifier [" + Bytes.toString(key.getQualifier()) + "]",
+        equals(qualifier, key.getQualifier()));
+    assertTrue("Expected value [" + Bytes.toString(value) + "] " +
+        "Got value [" + Bytes.toString(key.getValue()) + "]",
+        equals(value, key.getValue()));
+  }
+
+  private void assertIncrementKey(KeyValue key, byte [] row, byte [] family,
+      byte [] qualifier, long value)
+  throws Exception {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(key.getRow()) +"]",
+        equals(row, key.getRow()));
+    assertTrue("Expected family [" + Bytes.toString(family) + "] " +
+        "Got family [" + Bytes.toString(key.getFamily()) + "]",
+        equals(family, key.getFamily()));
+    assertTrue("Expected qualifier [" + Bytes.toString(qualifier) + "] " +
+        "Got qualifier [" + Bytes.toString(key.getQualifier()) + "]",
+        equals(qualifier, key.getQualifier()));
+    assertTrue("Expected value [" + value + "] " +
+        "Got value [" + Bytes.toLong(key.getValue()) + "]",
+        Bytes.toLong(key.getValue()) == value);
+  }
+
+  private void assertNumKeys(Result result, int n) throws Exception {
+    assertTrue("Expected " + n + " keys but got " + result.size(),
+        result.size() == n);
+  }
+
+  private void assertNResult(Result result, byte [] row,
+      byte [][] families, byte [][] qualifiers, byte [][] values,
+      int [][] idxs)
+  throws Exception {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(result.getRow()) +"]",
+        equals(row, result.getRow()));
+    assertTrue("Expected " + idxs.length + " keys but result contains "
+        + result.size(), result.size() == idxs.length);
+
+    KeyValue [] keys = result.sorted();
+
+    for(int i=0;i<keys.length;i++) {
+      byte [] family = families[idxs[i][0]];
+      byte [] qualifier = qualifiers[idxs[i][1]];
+      byte [] value = values[idxs[i][2]];
+      KeyValue key = keys[i];
+
+      assertTrue("(" + i + ") Expected family [" + Bytes.toString(family)
+          + "] " + "Got family [" + Bytes.toString(key.getFamily()) + "]",
+          equals(family, key.getFamily()));
+      assertTrue("(" + i + ") Expected qualifier [" + Bytes.toString(qualifier)
+          + "] " + "Got qualifier [" + Bytes.toString(key.getQualifier()) + "]",
+          equals(qualifier, key.getQualifier()));
+      assertTrue("(" + i + ") Expected value [" + Bytes.toString(value) + "] "
+          + "Got value [" + Bytes.toString(key.getValue()) + "]",
+          equals(value, key.getValue()));
+    }
+  }
+
+  private void assertNResult(Result result, byte [] row,
+      byte [] family, byte [] qualifier, long [] stamps, byte [][] values,
+      int start, int end)
+  throws IOException {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(result.getRow()) +"]",
+        equals(row, result.getRow()));
+    int expectedResults = end - start + 1;
+    assertEquals(expectedResults, result.size());
+
+    KeyValue [] keys = result.sorted();
+
+    for (int i=0; i<keys.length; i++) {
+      byte [] value = values[end-i];
+      long ts = stamps[end-i];
+      KeyValue key = keys[i];
+
+      assertTrue("(" + i + ") Expected family [" + Bytes.toString(family)
+          + "] " + "Got family [" + Bytes.toString(key.getFamily()) + "]",
+          equals(family, key.getFamily()));
+      assertTrue("(" + i + ") Expected qualifier [" + Bytes.toString(qualifier)
+          + "] " + "Got qualifier [" + Bytes.toString(key.getQualifier()) + "]",
+          equals(qualifier, key.getQualifier()));
+      assertTrue("Expected ts [" + ts + "] " +
+          "Got ts [" + key.getTimestamp() + "]", ts == key.getTimestamp());
+      assertTrue("(" + i + ") Expected value [" + Bytes.toString(value) + "] "
+          + "Got value [" + Bytes.toString(key.getValue()) + "]",
+          equals(value, key.getValue()));
+    }
+  }
+
+  /**
+   * Validate that result contains two specified keys, exactly.
+   * It is assumed key A sorts before key B.
+   */
+  private void assertDoubleResult(Result result, byte [] row,
+      byte [] familyA, byte [] qualifierA, byte [] valueA,
+      byte [] familyB, byte [] qualifierB, byte [] valueB)
+  throws Exception {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(result.getRow()) +"]",
+        equals(row, result.getRow()));
+    assertTrue("Expected two keys but result contains " + result.size(),
+        result.size() == 2);
+    KeyValue [] kv = result.sorted();
+    KeyValue kvA = kv[0];
+    assertTrue("(A) Expected family [" + Bytes.toString(familyA) + "] " +
+        "Got family [" + Bytes.toString(kvA.getFamily()) + "]",
+        equals(familyA, kvA.getFamily()));
+    assertTrue("(A) Expected qualifier [" + Bytes.toString(qualifierA) + "] " +
+        "Got qualifier [" + Bytes.toString(kvA.getQualifier()) + "]",
+        equals(qualifierA, kvA.getQualifier()));
+    assertTrue("(A) Expected value [" + Bytes.toString(valueA) + "] " +
+        "Got value [" + Bytes.toString(kvA.getValue()) + "]",
+        equals(valueA, kvA.getValue()));
+    KeyValue kvB = kv[1];
+    assertTrue("(B) Expected family [" + Bytes.toString(familyB) + "] " +
+        "Got family [" + Bytes.toString(kvB.getFamily()) + "]",
+        equals(familyB, kvB.getFamily()));
+    assertTrue("(B) Expected qualifier [" + Bytes.toString(qualifierB) + "] " +
+        "Got qualifier [" + Bytes.toString(kvB.getQualifier()) + "]",
+        equals(qualifierB, kvB.getQualifier()));
+    assertTrue("(B) Expected value [" + Bytes.toString(valueB) + "] " +
+        "Got value [" + Bytes.toString(kvB.getValue()) + "]",
+        equals(valueB, kvB.getValue()));
+  }
+
+  private void assertSingleResult(Result result, byte [] row, byte [] family,
+      byte [] qualifier, byte [] value)
+  throws Exception {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(result.getRow()) +"]",
+        equals(row, result.getRow()));
+    assertTrue("Expected a single key but result contains " + result.size(),
+        result.size() == 1);
+    KeyValue kv = result.sorted()[0];
+    assertTrue("Expected family [" + Bytes.toString(family) + "] " +
+        "Got family [" + Bytes.toString(kv.getFamily()) + "]",
+        equals(family, kv.getFamily()));
+    assertTrue("Expected qualifier [" + Bytes.toString(qualifier) + "] " +
+        "Got qualifier [" + Bytes.toString(kv.getQualifier()) + "]",
+        equals(qualifier, kv.getQualifier()));
+    assertTrue("Expected value [" + Bytes.toString(value) + "] " +
+        "Got value [" + Bytes.toString(kv.getValue()) + "]",
+        equals(value, kv.getValue()));
+  }
+
+  private void assertSingleResult(Result result, byte [] row, byte [] family,
+      byte [] qualifier, long ts, byte [] value)
+  throws Exception {
+    assertTrue("Expected row [" + Bytes.toString(row) + "] " +
+        "Got row [" + Bytes.toString(result.getRow()) +"]",
+        equals(row, result.getRow()));
+    assertTrue("Expected a single key but result contains " + result.size(),
+        result.size() == 1);
+    KeyValue kv = result.sorted()[0];
+    assertTrue("Expected family [" + Bytes.toString(family) + "] " +
+        "Got family [" + Bytes.toString(kv.getFamily()) + "]",
+        equals(family, kv.getFamily()));
+    assertTrue("Expected qualifier [" + Bytes.toString(qualifier) + "] " +
+        "Got qualifier [" + Bytes.toString(kv.getQualifier()) + "]",
+        equals(qualifier, kv.getQualifier()));
+    assertTrue("Expected ts [" + ts + "] " +
+        "Got ts [" + kv.getTimestamp() + "]", ts == kv.getTimestamp());
+    assertTrue("Expected value [" + Bytes.toString(value) + "] " +
+        "Got value [" + Bytes.toString(kv.getValue()) + "]",
+        equals(value, kv.getValue()));
+  }
+
+  private void assertEmptyResult(Result result) throws Exception {
+    assertTrue("expected an empty result but result contains " +
+        result.size() + " keys", result.isEmpty());
+  }
+
+  private void assertNullResult(Result result) throws Exception {
+    assertTrue("expected null result but received a non-null result",
+        result == null);
+  }
+
+  //
+  // Helpers
+  //
+
+  private Result getSingleScanResult(HTable ht, Scan scan) throws IOException {
+    ResultScanner scanner = ht.getScanner(scan);
+    Result result = scanner.next();
+    scanner.close();
+    return result;
+  }
+
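+  // Generate n keys by appending an ASCII decimal suffix ("0", "1", ...) to base.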
+  private byte [][] makeNAscii(byte [] base, int n) {
+    if(n > 256) {
+      return makeNBig(base, n);
+    }
+    byte [][] ret = new byte[n][];
+    for(int i=0;i<n;i++) {
+      byte [] tail = Bytes.toBytes(Integer.toString(i));
+      ret[i] = Bytes.add(base, tail);
+    }
+    return ret;
+  }
+
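+  // Generate n keys by appending a single byte (0..n-1) to base.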
+  private byte [][] makeN(byte [] base, int n) {
+    if (n > 256) {
+      return makeNBig(base, n);
+    }
+    byte [][] ret = new byte[n][];
+    for(int i=0;i<n;i++) {
+      ret[i] = Bytes.add(base, new byte[]{(byte)i});
+    }
+    return ret;
+  }
+
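+  // Generate n keys for n > 256 by appending a two-byte big-endian index to base.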
+  private byte [][] makeNBig(byte [] base, int n) {
+    byte [][] ret = new byte[n][];
+    for(int i=0;i<n;i++) {
+      int byteA = (i % 256);
+      int byteB = (i >> 8);
+      ret[i] = Bytes.add(base, new byte[]{(byte)byteB,(byte)byteA});
+    }
+    return ret;
+  }
+
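+  // Generate n timestamps 1..n.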
+  private long [] makeStamps(int n) {
+    long [] stamps = new long[n];
+    for(int i=0;i<n;i++) stamps[i] = i+1;
+    return stamps;
+  }
+
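+  // Byte-array equality that treats null and the empty array as equivalent.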
+  private boolean equals(byte [] left, byte [] right) {
+    if (left == null && right == null) return true;
+    if (left == null && right.length == 0) return true;
+    if (right == null && left.length == 0) return true;
+    return Bytes.equals(left, right);
+  }
+
+  @Test
+  public void testDuplicateVersions() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testDuplicateVersions");
+
+    long [] STAMPS = makeStamps(20);
+    byte [][] VALUES = makeNAscii(VALUE, 20);
+
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Insert 4 versions of same column
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    put.add(FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    put.add(FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    ht.put(put);
+
+    // Verify we can get each one properly
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+
+    // Verify we don't accidentally get others
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+
+    // Ensure maxVersions in query is respected
+    Get get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(2);
+    Result result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+    Scan scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(2);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+    // Flush and redo
+
+    TEST_UTIL.flush();
+
+    // Verify we can get each one properly
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[4]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[5], VALUES[5]);
+
+    // Verify we don't accidentally get others
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[3]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[6]);
+
+    // Ensure maxVersions in query is respected
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(2);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(2);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[4], STAMPS[5]},
+        new byte[][] {VALUES[4], VALUES[5]},
+        0, 1);
+
+
+    // Add some memstore and retest
+
+    // Insert 4 more versions of same column and a dupe
+    put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[3], VALUES[3]);
+    put.add(FAMILY, QUALIFIER, STAMPS[4], VALUES[14]);
+    put.add(FAMILY, QUALIFIER, STAMPS[6], VALUES[6]);
+    put.add(FAMILY, QUALIFIER, STAMPS[7], VALUES[7]);
+    put.add(FAMILY, QUALIFIER, STAMPS[8], VALUES[8]);
+    ht.put(put);
+
+    // Ensure maxVersions in query is respected
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(7);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 6);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(7);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 6);
+
+    get = new Get(ROW);
+    get.setMaxVersions(7);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 6);
+
+    scan = new Scan(ROW);
+    scan.setMaxVersions(7);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8]},
+        new byte[][] {VALUES[2], VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[7], VALUES[8]},
+        0, 6);
+
+    // Verify we can get each one properly
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[14]);
+    getVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[7], VALUES[7]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[1], VALUES[1]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[2], VALUES[2]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[4], VALUES[14]);
+    scanVersionAndVerify(ht, ROW, FAMILY, QUALIFIER, STAMPS[7], VALUES[7]);
+
+    // Verify we don't accidentally get others
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    getVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[9]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[0]);
+    scanVersionAndVerifyMissing(ht, ROW, FAMILY, QUALIFIER, STAMPS[9]);
+
+    // Ensure maxVersions of table is respected
+
+    TEST_UTIL.flush();
+
+    // Insert 4 more versions of same column and a dupe
+    put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, STAMPS[9], VALUES[9]);
+    put.add(FAMILY, QUALIFIER, STAMPS[11], VALUES[11]);
+    put.add(FAMILY, QUALIFIER, STAMPS[13], VALUES[13]);
+    put.add(FAMILY, QUALIFIER, STAMPS[15], VALUES[15]);
+    ht.put(put);
+
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8], STAMPS[9], STAMPS[11], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[7], VALUES[8], VALUES[9], VALUES[11], VALUES[13], VALUES[15]},
+        0, 9);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[7], STAMPS[8], STAMPS[9], STAMPS[11], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[7], VALUES[8], VALUES[9], VALUES[11], VALUES[13], VALUES[15]},
+        0, 9);
+
+    // Delete a version in the memstore and a version in a storefile
+    Delete delete = new Delete(ROW);
+    delete.deleteColumn(FAMILY, QUALIFIER, STAMPS[11]);
+    delete.deleteColumn(FAMILY, QUALIFIER, STAMPS[7]);
+    ht.delete(delete);
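+    // (Deleting two versions frees room under the version limit, which is why
+    // STAMPS[1] and STAMPS[2] show up again in the full-version reads below.)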
+
+    // Test that it's gone
+    get = new Get(ROW);
+    get.addColumn(FAMILY, QUALIFIER);
+    get.setMaxVersions(Integer.MAX_VALUE);
+    result = ht.get(get);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[8], STAMPS[9], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[8], VALUES[9], VALUES[13], VALUES[15]},
+        0, 9);
+
+    scan = new Scan(ROW);
+    scan.addColumn(FAMILY, QUALIFIER);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+    result = getSingleScanResult(ht, scan);
+    assertNResult(result, ROW, FAMILY, QUALIFIER,
+        new long [] {STAMPS[1], STAMPS[2], STAMPS[3], STAMPS[4], STAMPS[5], STAMPS[6], STAMPS[8], STAMPS[9], STAMPS[13], STAMPS[15]},
+        new byte[][] {VALUES[1], VALUES[2], VALUES[3], VALUES[14], VALUES[5], VALUES[6], VALUES[8], VALUES[9], VALUES[13], VALUES[15]},
+        0, 9);
+  }
+
+  @Test
+  public void testUpdates() throws Exception {
+
+    byte [] TABLE = Bytes.toBytes("testUpdates");
+    HTable hTable = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+
+    // Write a column with values at timestamp 1, 2 and 3
+    byte[] row = Bytes.toBytes("row1");
+    byte[] qualifier = Bytes.toBytes("myCol");
+    Put put = new Put(row);
+    put.add(FAMILY, qualifier, 1L, Bytes.toBytes("AAA"));
+    hTable.put(put);
+
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 2L, Bytes.toBytes("BBB"));
+    hTable.put(put);
+
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 3L, Bytes.toBytes("EEE"));
+    hTable.put(put);
+
+    Get get = new Get(row);
+    get.addColumn(FAMILY, qualifier);
+    get.setMaxVersions();
+
+    // Check that the column indeed has the right values at timestamps 1 and 2
+    Result result = hTable.get(get);
+    NavigableMap<Long, byte[]> navigableMap =
+        result.getMap().get(FAMILY).get(qualifier);
+    assertEquals("AAA", Bytes.toString(navigableMap.get(1L)));
+    assertEquals("BBB", Bytes.toString(navigableMap.get(2L)));
+
+    // Update the value at timestamp 1
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 1L, Bytes.toBytes("CCC"));
+    hTable.put(put);
+
+    // Update the value at timestamp 2
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 2L, Bytes.toBytes("DDD"));
+    hTable.put(put);
+
+    // Check that the values at timestamp 2 and 1 got updated
+    result = hTable.get(get);
+    navigableMap = result.getMap().get(FAMILY).get(qualifier);
+    assertEquals("CCC", Bytes.toString(navigableMap.get(1L)));
+    assertEquals("DDD", Bytes.toString(navigableMap.get(2L)));
+  }
+
+  @Test
+  public void testUpdatesWithMajorCompaction() throws Exception {
+
+    String tableName = "testUpdatesWithMajorCompaction";
+    byte [] TABLE = Bytes.toBytes(tableName);
+    HTable hTable = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+    HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+
+    // Write a column with values at timestamp 1, 2 and 3
+    byte[] row = Bytes.toBytes("row2");
+    byte[] qualifier = Bytes.toBytes("myCol");
+    Put put = new Put(row);
+    put.add(FAMILY, qualifier, 1L, Bytes.toBytes("AAA"));
+    hTable.put(put);
+
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 2L, Bytes.toBytes("BBB"));
+    hTable.put(put);
+
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 3L, Bytes.toBytes("EEE"));
+    hTable.put(put);
+
+    Get get = new Get(row);
+    get.addColumn(FAMILY, qualifier);
+    get.setMaxVersions();
+
+    // Check that the column indeed has the right values at timestamps 1 and 2
+    Result result = hTable.get(get);
+    NavigableMap<Long, byte[]> navigableMap =
+        result.getMap().get(FAMILY).get(qualifier);
+    assertEquals("AAA", Bytes.toString(navigableMap.get(1L)));
+    assertEquals("BBB", Bytes.toString(navigableMap.get(2L)));
+
+    // Trigger a major compaction
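+    // (flush and majorCompact only request the work from the region server;
+    // the sleep gives the compaction time to complete.)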
+    admin.flush(tableName);
+    admin.majorCompact(tableName);
+    Thread.sleep(6000);
+
+    // Update the value at timestamp 1
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 1L, Bytes.toBytes("CCC"));
+    hTable.put(put);
+
+    // Update the value at timestamp 2
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 2L, Bytes.toBytes("DDD"));
+    hTable.put(put);
+
+    // Trigger a major compaction
+    admin.flush(tableName);
+    admin.majorCompact(tableName);
+    Thread.sleep(6000);
+
+    // Check that the values at timestamp 2 and 1 got updated
+    result = hTable.get(get);
+    navigableMap = result.getMap().get(FAMILY).get(qualifier);
+    assertEquals("CCC", Bytes.toString(navigableMap.get(1L)));
+    assertEquals("DDD", Bytes.toString(navigableMap.get(2L)));
+  }
+
+  @Test
+  public void testMajorCompactionBetweenTwoUpdates() throws Exception {
+
+    String tableName = "testMajorCompactionBetweenTwoUpdates";
+    byte [] TABLE = Bytes.toBytes(tableName);
+    HTable hTable = TEST_UTIL.createTable(TABLE, FAMILY, 10);
+    HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+
+    // Write a column with values at timestamp 1, 2 and 3
+    byte[] row = Bytes.toBytes("row3");
+    byte[] qualifier = Bytes.toBytes("myCol");
+    Put put = new Put(row);
+    put.add(FAMILY, qualifier, 1L, Bytes.toBytes("AAA"));
+    hTable.put(put);
+
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 2L, Bytes.toBytes("BBB"));
+    hTable.put(put);
+
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 3L, Bytes.toBytes("EEE"));
+    hTable.put(put);
+
+    Get get = new Get(row);
+    get.addColumn(FAMILY, qualifier);
+    get.setMaxVersions();
+
+    // Check that the column indeed has the right values at timestamps 1 and 2
+    Result result = hTable.get(get);
+    NavigableMap<Long, byte[]> navigableMap =
+        result.getMap().get(FAMILY).get(qualifier);
+    assertEquals("AAA", Bytes.toString(navigableMap.get(1L)));
+    assertEquals("BBB", Bytes.toString(navigableMap.get(2L)));
+
+    // Trigger a major compaction
+    admin.flush(tableName);
+    admin.majorCompact(tableName);
+    Thread.sleep(6000);
+
+    // Update the value at timestamp 1
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 1L, Bytes.toBytes("CCC"));
+    hTable.put(put);
+
+    // Trigger a major compaction
+    admin.flush(tableName);
+    admin.majorCompact(tableName);
+    Thread.sleep(6000);
+
+    // Update the value at timestamp 2
+    put = new Put(row);
+    put.add(FAMILY, qualifier, 2L, Bytes.toBytes("DDD"));
+    hTable.put(put);
+
+    // Trigger a major compaction
+    admin.flush(tableName);
+    admin.majorCompact(tableName);
+    Thread.sleep(6000);
+
+    // Check that the values at timestamp 2 and 1 got updated
+    result = hTable.get(get);
+    navigableMap = result.getMap().get(FAMILY).get(qualifier);
+
+    assertEquals("CCC", Bytes.toString(navigableMap.get(1L)));
+    assertEquals("DDD", Bytes.toString(navigableMap.get(2L)));
+  }
+
+  @Test
+  public void testGet_EmptyTable() throws IOException {
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testGet_EmptyTable"), FAMILY);
+    Get get = new Get(ROW);
+    get.addFamily(FAMILY);
+    Result r = table.get(get);
+    assertTrue(r.isEmpty());
+  }
+
+  @Test
+  public void testGet_NonExistentRow() throws IOException {
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testGet_NonExistentRow"), FAMILY);
+    Put put = new Put(ROW);
+    put.add(FAMILY, QUALIFIER, VALUE);
+    table.put(put);
+    LOG.info("Row put");
+
+    Get get = new Get(ROW);
+    get.addFamily(FAMILY);
+    Result r = table.get(get);
+    assertFalse(r.isEmpty());
+    System.out.println("Row retrieved successfully");
+
+    byte [] missingrow = Bytes.toBytes("missingrow");
+    get = new Get(missingrow);
+    get.addFamily(FAMILY);
+    r = table.get(get);
+    assertTrue(r.isEmpty());
+    LOG.info("Row missing as it should be");
+  }
+
+  @Test
+  public void testPut() throws IOException {
+    final byte [] CONTENTS_FAMILY = Bytes.toBytes("contents");
+    final byte [] SMALL_FAMILY = Bytes.toBytes("smallfam");
+    final byte [] row1 = Bytes.toBytes("row1");
+    final byte [] row2 = Bytes.toBytes("row2");
+    final byte [] value = Bytes.toBytes("abcd");
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testPut"),
+      new byte [][] {CONTENTS_FAMILY, SMALL_FAMILY});
+    Put put = new Put(row1);
+    put.add(CONTENTS_FAMILY, null, value);
+    table.put(put);
+
+    put = new Put(row2);
+    put.add(CONTENTS_FAMILY, null, value);
+
+    assertEquals(1, put.size());
+    assertEquals(1, put.getFamilyMap().get(CONTENTS_FAMILY).size());
+
+    KeyValue kv = put.getFamilyMap().get(CONTENTS_FAMILY).get(0);
+
+    assertTrue(Bytes.equals(kv.getFamily(), CONTENTS_FAMILY));
+    // a null qualifier is stored as an empty byte array, as asserted below
+    assertTrue(Bytes.equals(kv.getQualifier(), new byte[0]));
+
+    assertTrue(Bytes.equals(kv.getValue(), value));
+
+    table.put(put);
+
+    Scan scan = new Scan();
+    scan.addColumn(CONTENTS_FAMILY, null);
+    ResultScanner scanner = table.getScanner(scan);
+    for (Result r : scanner) {
+      for(KeyValue key : r.sorted()) {
+        System.out.println(Bytes.toString(r.getRow()) + ": " + key.toString());
+      }
+    }
+  }
+
+  @Test
+  public void testRowsPut() throws IOException {
+    final byte[] CONTENTS_FAMILY = Bytes.toBytes("contents");
+    final byte[] SMALL_FAMILY = Bytes.toBytes("smallfam");
+    final int NB_BATCH_ROWS = 10;
+    final byte[] value = Bytes.toBytes("abcd");
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testRowsPut"),
+      new byte[][] {CONTENTS_FAMILY, SMALL_FAMILY });
+    ArrayList<Put> rowsUpdate = new ArrayList<Put>();
+    for (int i = 0; i < NB_BATCH_ROWS; i++) {
+      byte[] row = Bytes.toBytes("row" + i);
+      Put put = new Put(row);
+      put.add(CONTENTS_FAMILY, null, value);
+      rowsUpdate.add(put);
+    }
+    table.put(rowsUpdate);
+    Scan scan = new Scan();
+    scan.addFamily(CONTENTS_FAMILY);
+    ResultScanner scanner = table.getScanner(scan);
+    int nbRows = 0;
+    for (@SuppressWarnings("unused")
+    Result row : scanner)
+      nbRows++;
+    assertEquals(NB_BATCH_ROWS, nbRows);
+  }
+
+  @Test
+  public void testRowsPutBufferedOneFlush() throws IOException {
+    final byte [] CONTENTS_FAMILY = Bytes.toBytes("contents");
+    final byte [] SMALL_FAMILY = Bytes.toBytes("smallfam");
+    final byte [] value = Bytes.toBytes("abcd");
+    final int NB_BATCH_ROWS = 10;
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testRowsPutBufferedOneFlush"),
+      new byte [][] {CONTENTS_FAMILY, SMALL_FAMILY});
+    table.setAutoFlush(false);
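+    // With autoFlush off, puts accumulate in the client-side write buffer
+    // until flushCommits() is called.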
+    ArrayList<Put> rowsUpdate = new ArrayList<Put>();
+    for (int i = 0; i < NB_BATCH_ROWS * 10; i++) {
+      byte[] row = Bytes.toBytes("row" + i);
+      Put put = new Put(row);
+      put.add(CONTENTS_FAMILY, null, value);
+      rowsUpdate.add(put);
+    }
+    table.put(rowsUpdate);
+
+    Scan scan = new Scan();
+    scan.addFamily(CONTENTS_FAMILY);
+    ResultScanner scanner = table.getScanner(scan);
+    int nbRows = 0;
+    for (@SuppressWarnings("unused")
+    Result row : scanner)
+      nbRows++;
+    assertEquals(0, nbRows);
+    scanner.close();
+
+    table.flushCommits();
+
+    scan = new Scan();
+    scan.addFamily(CONTENTS_FAMILY);
+    scanner = table.getScanner(scan);
+    nbRows = 0;
+    for (@SuppressWarnings("unused")
+    Result row : scanner)
+      nbRows++;
+    assertEquals(NB_BATCH_ROWS * 10, nbRows);
+  }
+
+  @Test
+  public void testRowsPutBufferedManyManyFlushes() throws IOException {
+    final byte[] CONTENTS_FAMILY = Bytes.toBytes("contents");
+    final byte[] SMALL_FAMILY = Bytes.toBytes("smallfam");
+    final byte[] value = Bytes.toBytes("abcd");
+    final int NB_BATCH_ROWS = 10;
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testRowsPutBufferedManyManyFlushes"),
+      new byte[][] {CONTENTS_FAMILY, SMALL_FAMILY });
+    table.setAutoFlush(false);
+    table.setWriteBufferSize(10);
+    ArrayList<Put> rowsUpdate = new ArrayList<Put>();
+    for (int i = 0; i < NB_BATCH_ROWS * 10; i++) {
+      byte[] row = Bytes.toBytes("row" + i);
+      Put put = new Put(row);
+      put.add(CONTENTS_FAMILY, null, value);
+      rowsUpdate.add(put);
+    }
+    table.put(rowsUpdate);
+
+    table.flushCommits();
+
+    Scan scan = new Scan();
+    scan.addFamily(CONTENTS_FAMILY);
+    ResultScanner scanner = table.getScanner(scan);
+    int nbRows = 0;
+    for (@SuppressWarnings("unused")
+    Result row : scanner)
+      nbRows++;
+    assertEquals(NB_BATCH_ROWS * 10, nbRows);
+  }
+
+  @Test
+  public void testAddKeyValue() throws IOException {
+    final byte[] CONTENTS_FAMILY = Bytes.toBytes("contents");
+    final byte[] value = Bytes.toBytes("abcd");
+    final byte[] row1 = Bytes.toBytes("row1");
+    final byte[] row2 = Bytes.toBytes("row2");
+    byte[] qualifier = Bytes.toBytes("qf1");
+    Put put = new Put(row1);
+
+    // Adding KeyValue with the same row
+    KeyValue kv = new KeyValue(row1, CONTENTS_FAMILY, qualifier, value);
+    boolean ok = true;
+    try {
+      put.add(kv);
+    } catch (IOException e) {
+      ok = false;
+    }
+    assertEquals(true, ok);
+
+    // Adding KeyValue with the different row
+    kv = new KeyValue(row2, CONTENTS_FAMILY, qualifier, value);
+    ok = false;
+    try {
+      put.add(kv);
+    } catch (IOException e) {
+      ok = true;
+    }
+    assertEquals(true, ok);
+  }
+
+  /**
+   * test for HBASE-737
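+   * Verifies that timestamps on successive puts are strictly increasing,
+   * both when read from the memstore and again after a flush.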
+   * @throws IOException
+   */
+  @Test
+  public void testHBase737 () throws IOException {
+    final byte [] FAM1 = Bytes.toBytes("fam1");
+    final byte [] FAM2 = Bytes.toBytes("fam2");
+    // Open table
+    HTable table = TEST_UTIL.createTable(Bytes.toBytes("testHBase737"),
+      new byte [][] {FAM1, FAM2});
+    // Insert some values
+    Put put = new Put(ROW);
+    put.add(FAM1, Bytes.toBytes("letters"), Bytes.toBytes("abcdefg"));
+    table.put(put);
+    try {
+      Thread.sleep(1000);
+    } catch (InterruptedException i) {
+      //ignore
+    }
+
+    put = new Put(ROW);
+    put.add(FAM1, Bytes.toBytes("numbers"), Bytes.toBytes("123456"));
+    table.put(put);
+
+    try {
+      Thread.sleep(1000);
+    } catch (InterruptedException i) {
+      //ignore
+    }
+
+    put = new Put(ROW);
+    put.add(FAM2, Bytes.toBytes("letters"), Bytes.toBytes("hijklmnop"));
+    table.put(put);
+
+    long[] times = new long[3];
+
+    // First scan the memstore
+
+    Scan scan = new Scan();
+    scan.addFamily(FAM1);
+    scan.addFamily(FAM2);
+    ResultScanner s = table.getScanner(scan);
+    try {
+      int index = 0;
+      Result r = null;
+      while ((r = s.next()) != null) {
+        for(KeyValue key : r.sorted()) {
+          times[index++] = key.getTimestamp();
+        }
+      }
+    } finally {
+      s.close();
+    }
+    for (int i = 0; i < times.length - 1; i++) {
+      for (int j = i + 1; j < times.length; j++) {
+        assertTrue(times[j] > times[i]);
+      }
+    }
+
+    // Flush data to disk and try again
+    TEST_UTIL.flush();
+
+    // Reset times
+    for(int i=0;i<times.length;i++) {
+      times[i] = 0;
+    }
+
+    try {
+      Thread.sleep(1000);
+    } catch (InterruptedException i) {
+      //ignore
+    }
+    scan = new Scan();
+    scan.addFamily(FAM1);
+    scan.addFamily(FAM2);
+    s = table.getScanner(scan);
+    try {
+      int index = 0;
+      Result r = null;
+      while ((r = s.next()) != null) {
+        for(KeyValue key : r.sorted()) {
+          times[index++] = key.getTimestamp();
+        }
+      }
+    } finally {
+      s.close();
+    }
+    for (int i = 0; i < times.length - 1; i++) {
+      for (int j = i + 1; j < times.length; j++) {
+        assertTrue(times[j] > times[i]);
+      }
+    }
+  }
+
+  @Test
+  public void testListTables() throws IOException, InterruptedException {
+    byte [] t1 = Bytes.toBytes("testListTables1");
+    byte [] t2 = Bytes.toBytes("testListTables2");
+    byte [] t3 = Bytes.toBytes("testListTables3");
+    byte [][] tables = new byte[][] { t1, t2, t3 };
+    for (int i = 0; i < tables.length; i++) {
+      TEST_UTIL.createTable(tables[i], FAMILY);
+    }
+    HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+    HTableDescriptor[] ts = admin.listTables();
+    HashSet<HTableDescriptor> result = new HashSet<HTableDescriptor>(ts.length);
+    for (int i = 0; i < ts.length; i++) {
+      result.add(ts[i]);
+    }
+    int size = result.size();
+    assertTrue(size >= tables.length);
+    for (int i = 0; i < tables.length && i < size; i++) {
+      boolean found = false;
+      for (int j = 0; j < ts.length; j++) {
+        if (Bytes.equals(ts[j].getName(), tables[i])) {
+          found = true;
+          break;
+        }
+      }
+      assertTrue("Not found: " + Bytes.toString(tables[i]), found);
+    }
+  }
+
+  @Test
+  public void testMiscHTableStuff() throws IOException {
+    final byte[] tableAname = Bytes.toBytes("testMiscHTableStuffA");
+    final byte[] tableBname = Bytes.toBytes("testMiscHTableStuffB");
+    final byte[] attrName = Bytes.toBytes("TESTATTR");
+    final byte[] attrValue = Bytes.toBytes("somevalue");
+    byte[] value = Bytes.toBytes("value");
+
+    HTable a = TEST_UTIL.createTable(tableAname, HConstants.CATALOG_FAMILY);
+    HTable b = TEST_UTIL.createTable(tableBname, HConstants.CATALOG_FAMILY);
+    Put put = new Put(ROW);
+    put.add(HConstants.CATALOG_FAMILY, null, value);
+    a.put(put);
+
+    // open a new connection to A and a connection to b
+    HTable newA = new HTable(TEST_UTIL.getConfiguration(), tableAname);
+
+    // copy data from A to B
+    Scan scan = new Scan();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    ResultScanner s = newA.getScanner(scan);
+    try {
+      for (Result r : s) {
+        put = new Put(r.getRow());
+        for (KeyValue kv : r.sorted()) {
+          put.add(kv);
+        }
+        b.put(put);
+      }
+    } finally {
+      s.close();
+    }
+
+    // Opening a new connection to A will cause the tables to be reloaded
+    HTable anotherA = new HTable(TEST_UTIL.getConfiguration(), tableAname);
+    Get get = new Get(ROW);
+    get.addFamily(HConstants.CATALOG_FAMILY);
+    anotherA.get(get);
+
+    // We can still access A through newA because it has the table's region
+    // information cached. If that cache ever goes stale, the locations are
+    // simply looked up again.
+
+    // Test user metadata
+    HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+    // make a modifiable descriptor
+    HTableDescriptor desc = new HTableDescriptor(a.getTableDescriptor());
+    // offline the table
+    admin.disableTable(tableAname);
+    // add a user attribute to HTD
+    desc.setValue(attrName, attrValue);
+    // add a user attribute to HCD
+    for (HColumnDescriptor c : desc.getFamilies())
+      c.setValue(attrName, attrValue);
+    // update metadata for all regions of this table
+    admin.modifyTable(tableAname, desc);
+    // enable the table
+    admin.enableTable(tableAname);
+
+    // Test that attribute changes were applied
+    desc = a.getTableDescriptor();
+    assertTrue("wrong table descriptor returned",
+      Bytes.compareTo(desc.getName(), tableAname) == 0);
+    // check HTD attribute
+    value = desc.getValue(attrName);
+    assertFalse("missing HTD attribute value", value == null);
+    assertFalse("HTD attribute value is incorrect",
+      Bytes.compareTo(value, attrValue) != 0);
+    // check HCD attribute
+    for (HColumnDescriptor c : desc.getFamilies()) {
+      value = c.getValue(attrName);
+      assertFalse("missing HCD attribute value", value == null);
+      assertFalse("HCD attribute value is incorrect",
+        Bytes.compareTo(value, attrValue) != 0);
+    }
+  }
+
+  @Test
+  public void testGetClosestRowBefore() throws IOException {
+    final byte [] tableAname = Bytes.toBytes("testGetClosestRowBefore");
+    final byte [] row = Bytes.toBytes("row");
+
+
+    byte[] firstRow = Bytes.toBytes("ro");
+    byte[] beforeFirstRow = Bytes.toBytes("rn");
+    byte[] beforeSecondRow = Bytes.toBytes("rov");
+
+    HTable table = TEST_UTIL.createTable(tableAname,
+      new byte [][] {HConstants.CATALOG_FAMILY, Bytes.toBytes("info2")});
+    Put put = new Put(firstRow);
+    Put put2 = new Put(row);
+    byte[] zero = new byte[]{0};
+    byte[] one = new byte[]{1};
+
+    put.add(HConstants.CATALOG_FAMILY, null, zero);
+    put2.add(HConstants.CATALOG_FAMILY, null, one);
+
+    table.put(put);
+    table.put(put2);
+
+    Result result = null;
+
+    // Test before first that null is returned
+    result = table.getRowOrBefore(beforeFirstRow, HConstants.CATALOG_FAMILY);
+    assertTrue(result == null);
+
+    // Test at first that first is returned
+    result = table.getRowOrBefore(firstRow, HConstants.CATALOG_FAMILY);
+    assertTrue(result.containsColumn(HConstants.CATALOG_FAMILY, null));
+    assertTrue(Bytes.equals(result.getValue(HConstants.CATALOG_FAMILY, null), zero));
+
+    // Test in between first and second that first is returned
+    result = table.getRowOrBefore(beforeSecondRow, HConstants.CATALOG_FAMILY);
+    assertTrue(result.containsColumn(HConstants.CATALOG_FAMILY, null));
+    assertTrue(Bytes.equals(result.getValue(HConstants.CATALOG_FAMILY, null), zero));
+
+    // Test at second make sure second is returned
+    result = table.getRowOrBefore(row, HConstants.CATALOG_FAMILY);
+    assertTrue(result.containsColumn(HConstants.CATALOG_FAMILY, null));
+    assertTrue(Bytes.equals(result.getValue(HConstants.CATALOG_FAMILY, null), one));
+
+    // Test after second, make sure second is returned
+    result = table.getRowOrBefore(Bytes.add(row,one), HConstants.CATALOG_FAMILY);
+    assertTrue(result.containsColumn(HConstants.CATALOG_FAMILY, null));
+    assertTrue(Bytes.equals(result.getValue(HConstants.CATALOG_FAMILY, null), one));
+  }
+
+  /**
+   * For HBASE-2156
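+   * Verifies the Scan family map state when addColumn follows addFamily,
+   * and when a rebuilt Scan uses addFamily alone.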
+   * @throws Exception
+   */
+  @Test
+  public void testScanVariableReuse() throws Exception {
+    Scan scan = new Scan();
+    scan.addFamily(FAMILY);
+    scan.addColumn(FAMILY, ROW);
+
+    assertTrue(scan.getFamilyMap().get(FAMILY).size() == 1);
+
+    scan = new Scan();
+    scan.addFamily(FAMILY);
+
+    assertTrue(scan.getFamilyMap().get(FAMILY) == null);
+    assertTrue(scan.getFamilyMap().containsKey(FAMILY));
+  }
+
+  /**
+   * HBASE-2468 use case 1 and 2: region info de/serialization
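+   * Serializes the region cache to a local file, reads it back, prewarms the
+   * connection cache with it, and checks the result against a fresh meta scan.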
+   */
+   @Test
+   public void testRegionCacheDeSerialization() throws Exception {
+     // 1. test serialization.
+     LOG.info("Starting testRegionCacheDeSerialization");
+     final byte[] TABLENAME = Bytes.toBytes("testCachePrewarm2");
+     final byte[] FAMILY = Bytes.toBytes("family");
+     Configuration conf = TEST_UTIL.getConfiguration();
+     TEST_UTIL.createTable(TABLENAME, FAMILY);
+
+     // Set up test table:
+     // Create table:
+     HTable table = new HTable(conf, TABLENAME);
+
+     // Create multiple regions for this table
+     TEST_UTIL.createMultiRegions(table, FAMILY);
+
+     Path tempPath = new Path(HBaseTestingUtility.getTestDir(), "regions.dat");
+
+     final String tempFileName = tempPath.toString();
+
+     FileOutputStream fos = new FileOutputStream(tempFileName);
+     DataOutputStream dos = new DataOutputStream(fos);
+
+     // serialize the region info and output to a local file.
+     table.serializeRegionInfo(dos);
+     dos.flush();
+     dos.close();
+
+     // read a local file and deserialize the region info from it.
+     FileInputStream fis = new FileInputStream(tempFileName);
+     DataInputStream dis = new DataInputStream(fis);
+
+     Map<HRegionInfo, HServerAddress> deserRegions =
+       table.deserializeRegionInfo(dis);
+     dis.close();
+
+     // regions obtained from meta scanner.
+     Map<HRegionInfo, HServerAddress> loadedRegions =
+       table.getRegionsInfo();
+
+     // set the deserialized regions to the global cache.
+     table.getConnection().clearRegionCache();
+
+     table.getConnection().prewarmRegionCache(table.getTableName(),
+         deserRegions);
+
+     // verify the prewarmed cache holds as many regions as the meta scan found.
+     assertEquals("Number of cached region is incorrect",
+         HConnectionManager.getCachedRegionCount(conf, TABLENAME),
+         loadedRegions.size());
+
+     // verify that each region from the meta scan is present in the cache.
+     for (Map.Entry<HRegionInfo, HServerAddress> e: loadedRegions.entrySet()) {
+       HRegionInfo hri = e.getKey();
+       assertTrue(HConnectionManager.isRegionCached(conf,
+           hri.getTableDesc().getName(), hri.getStartKey()));
+     }
+
+     // delete the temp file
+     File f = new java.io.File(tempFileName);
+     f.delete();
+     LOG.info("Finishing testRegionCacheDeSerialization");
+   }
+
+  /**
+   * HBASE-2468 use case 3:
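+   * Verifies that region cache prefetch can be toggled per table and that a
+   * single Get populates the expected number of cached regions.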
+   */
+  @Test
+  public void testRegionCachePreWarm() throws Exception {
+    LOG.info("Starting testRegionCachePreWarm");
+    final byte [] TABLENAME = Bytes.toBytes("testCachePrewarm");
+    Configuration conf = TEST_UTIL.getConfiguration();
+
+    // Set up test table:
+    // Create table:
+    TEST_UTIL.createTable(TABLENAME, FAMILY);
+
+    // disable region cache for the table.
+    HTable.setRegionCachePrefetch(conf, TABLENAME, false);
+    assertFalse("The table is disabled for region cache prefetch",
+        HTable.getRegionCachePrefetch(conf, TABLENAME));
+
+    HTable table = new HTable(conf, TABLENAME);
+
+    // create many regions for the table.
+    TEST_UTIL.createMultiRegions(table, FAMILY);
+    // This count effectively waits until the regions have been
+    // fully assigned
+    TEST_UTIL.countRows(table);
+    table.getConnection().clearRegionCache();
+    assertEquals("Clearing cache should have 0 cached ", 0,
+        HConnectionManager.getCachedRegionCount(conf, TABLENAME));
+
+    // A Get is supposed to trigger a region lookup request
+    Get g = new Get(Bytes.toBytes("aaa"));
+    table.get(g);
+
+    // only one region should be cached if the cache prefetch is disabled.
+    assertEquals("Number of cached region is incorrect ", 1,
+        HConnectionManager.getCachedRegionCount(conf, TABLENAME));
+
+    // now we enable cache prefetch.
+    HTable.setRegionCachePrefetch(conf, TABLENAME, true);
+    assertTrue("The table is enabled for region cache prefetch",
+        HTable.getRegionCachePrefetch(conf, TABLENAME));
+
+    HTable.setRegionCachePrefetch(conf, TABLENAME, false);
+    assertFalse("The table is disabled for region cache prefetch",
+        HTable.getRegionCachePrefetch(conf, TABLENAME));
+
+    HTable.setRegionCachePrefetch(conf, TABLENAME, true);
+    assertTrue("The table is enabled for region cache prefetch",
+        HTable.getRegionCachePrefetch(conf, TABLENAME));
+
+    table.getConnection().clearRegionCache();
+
+    assertEquals("Number of cached region is incorrect ", 0,
+        HConnectionManager.getCachedRegionCount(conf, TABLENAME));
+
+    // if there is a cache miss, some additional regions should be prefetched.
+    Get g2 = new Get(Bytes.toBytes("bbb"));
+    table.get(g2);
+
+    // Get the configured number of cache read-ahead regions.
+    int prefetchRegionNumber = conf.getInt("hbase.client.prefetch.limit", 10);
+
+    // the total number of cached regions == region("aaa") + prefetched regions.
+    LOG.info("Testing how many regions cached");
+    assertEquals("Number of cached region is incorrect ", prefetchRegionNumber,
+        HConnectionManager.getCachedRegionCount(conf, TABLENAME));
+
+    table.getConnection().clearRegionCache();
+
+    Get g3 = new Get(Bytes.toBytes("abc"));
+    table.get(g3);
+    assertEquals("Number of cached region is incorrect ", prefetchRegionNumber,
+        HConnectionManager.getCachedRegionCount(conf, TABLENAME));
+
+    LOG.info("Finishing testRegionCachePreWarm");
+  }
+
+  @Test
+  public void testIncrement() throws Exception {
+    LOG.info("Starting testIncrement");
+    final byte [] TABLENAME = Bytes.toBytes("testIncrement");
+    HTable ht = TEST_UTIL.createTable(TABLENAME, FAMILY);
+
+    byte [][] ROWS = new byte [][] {
+        Bytes.toBytes("a"), Bytes.toBytes("b"), Bytes.toBytes("c"),
+        Bytes.toBytes("d"), Bytes.toBytes("e"), Bytes.toBytes("f"),
+        Bytes.toBytes("g"), Bytes.toBytes("h"), Bytes.toBytes("i")
+    };
+    byte [][] QUALIFIERS = new byte [][] {
+        Bytes.toBytes("a"), Bytes.toBytes("b"), Bytes.toBytes("c"),
+        Bytes.toBytes("d"), Bytes.toBytes("e"), Bytes.toBytes("f"),
+        Bytes.toBytes("g"), Bytes.toBytes("h"), Bytes.toBytes("i")
+    };
+
+    // Do some simple single-column increments
+
+    // First with old API
+    ht.incrementColumnValue(ROW, FAMILY, QUALIFIERS[0], 1);
+    ht.incrementColumnValue(ROW, FAMILY, QUALIFIERS[1], 2);
+    ht.incrementColumnValue(ROW, FAMILY, QUALIFIERS[2], 3);
+    ht.incrementColumnValue(ROW, FAMILY, QUALIFIERS[3], 4);
+
+    // Now increment things incremented with old and do some new
+    Increment inc = new Increment(ROW);
+    inc.addColumn(FAMILY, QUALIFIERS[1], 1);
+    inc.addColumn(FAMILY, QUALIFIERS[3], 1);
+    inc.addColumn(FAMILY, QUALIFIERS[4], 1);
+    ht.increment(inc);
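+    // Expected values for ROW: qual0=1, qual1=2+1=3, qual2=3, qual3=4+1=5, qual4=1.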
+
+    // Verify expected results
+    Result r = ht.get(new Get(ROW));
+    KeyValue [] kvs = r.raw();
+    assertEquals(5, kvs.length);
+    assertIncrementKey(kvs[0], ROW, FAMILY, QUALIFIERS[0], 1);
+    assertIncrementKey(kvs[1], ROW, FAMILY, QUALIFIERS[1], 3);
+    assertIncrementKey(kvs[2], ROW, FAMILY, QUALIFIERS[2], 3);
+    assertIncrementKey(kvs[3], ROW, FAMILY, QUALIFIERS[3], 5);
+    assertIncrementKey(kvs[4], ROW, FAMILY, QUALIFIERS[4], 1);
+
+    // Now try multiple columns by different amounts
+    inc = new Increment(ROWS[0]);
+    for (int i=0;i<QUALIFIERS.length;i++) {
+      inc.addColumn(FAMILY, QUALIFIERS[i], i+1);
+    }
+    ht.increment(inc);
+    // Verify
+    r = ht.get(new Get(ROWS[0]));
+    kvs = r.raw();
+    assertEquals(QUALIFIERS.length, kvs.length);
+    for (int i=0;i<QUALIFIERS.length;i++) {
+      assertIncrementKey(kvs[i], ROWS[0], FAMILY, QUALIFIERS[i], i+1);
+    }
+
+    // Re-increment them
+    inc = new Increment(ROWS[0]);
+    for (int i=0;i<QUALIFIERS.length;i++) {
+      inc.addColumn(FAMILY, QUALIFIERS[i], i+1);
+    }
+    ht.increment(inc);
+    // Verify
+    r = ht.get(new Get(ROWS[0]));
+    kvs = r.raw();
+    assertEquals(QUALIFIERS.length, kvs.length);
+    for (int i=0;i<QUALIFIERS.length;i++) {
+      assertIncrementKey(kvs[i], ROWS[0], FAMILY, QUALIFIERS[i], 2*(i+1));
+    }
+  }
+}
+
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestGetRowVersions.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestGetRowVersions.java
new file mode 100644
index 0000000..27842ed
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestGetRowVersions.java
@@ -0,0 +1,102 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test versions.
+ * Does shutdown in middle of test to prove versions work across restart.
+ */
+public class TestGetRowVersions extends HBaseClusterTestCase {
+  private static final Log LOG = LogFactory.getLog(TestGetRowVersions.class);
+
+  private static final String TABLE_NAME = "test";
+  private static final byte [] CONTENTS = Bytes.toBytes("contents");
+  private static final byte [] ROW = Bytes.toBytes("row");
+  private static final byte [] VALUE1 = Bytes.toBytes("value1");
+  private static final byte [] VALUE2 = Bytes.toBytes("value2");
+  private static final long TIMESTAMP1 = 100L;
+  private static final long TIMESTAMP2 = 200L;
+
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+    HTableDescriptor desc = new HTableDescriptor(TABLE_NAME);
+    desc.addFamily(new HColumnDescriptor(CONTENTS));
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    admin.createTable(desc);
+  }
+
+  /** @throws Exception */
+  public void testGetRowMultipleVersions() throws Exception {
+    Put put = new Put(ROW, TIMESTAMP1, null);
+    put.add(CONTENTS, CONTENTS, VALUE1);
+    HTable table = new HTable(new Configuration(conf), TABLE_NAME);
+    table.put(put);
+    // Shut down and restart the HBase cluster
+    this.cluster.shutdown();
+    this.zooKeeperCluster.shutdown();
+    LOG.debug("HBase cluster shut down -- restarting");
+    this.hBaseClusterSetup();
+    // Make a new connection.  Use new Configuration instance because old one
+    // is tied to an HConnection that has since gone stale.
+    table = new HTable(new Configuration(conf), TABLE_NAME);
+    // Overwrite previous value
+    put = new Put(ROW, TIMESTAMP2, null);
+    put.add(CONTENTS, CONTENTS, VALUE2);
+    table.put(put);
+    // Now verify that a plain Get returns only the latest version
+    Get get = new Get(ROW);
+    // Should get one version by default
+    Result r = table.get(get);
+    assertNotNull(r);
+    assertFalse(r.isEmpty());
+    assertTrue(r.size() == 1);
+    byte [] value = r.getValue(CONTENTS, CONTENTS);
+    assertTrue(value.length != 0);
+    assertTrue(Bytes.equals(value, VALUE2));
+    // Now check a Get that asks for multiple versions
+    get = new Get(ROW);
+    get.setMaxVersions();
+    r = table.get(get);
+    assertTrue(r.size() == 2);
+    value = r.getValue(CONTENTS, CONTENTS);
+    assertTrue(value.length != 0);
+    assertTrue(Bytes.equals(value, VALUE2));
+    NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> map =
+      r.getMap();
+    NavigableMap<byte[], NavigableMap<Long, byte[]>> familyMap =
+      map.get(CONTENTS);
+    NavigableMap<Long, byte[]> versionMap = familyMap.get(CONTENTS);
+    assertTrue(versionMap.size() == 2);
+    assertTrue(Bytes.equals(VALUE1, versionMap.get(TIMESTAMP1)));
+    assertTrue(Bytes.equals(VALUE2, versionMap.get(TIMESTAMP2)));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
new file mode 100644
index 0000000..b01a2d2
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java
@@ -0,0 +1,159 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.lang.reflect.Field;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertNotNull;
+
+/**
+ * This class is for testing HCM features
+ */
+public class TestHCM {
+  private static final Log LOG = LogFactory.getLog(TestHCM.class);
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final byte[] TABLE_NAME = Bytes.toBytes("test");
+  private static final byte[] FAM_NAM = Bytes.toBytes("f");
+  private static final byte[] ROW = Bytes.toBytes("bbb");
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(1);
+  }
+
+  @AfterClass public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
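+   * Creates more distinct Configurations than the connection cache can hold and
+   * checks that the cache stays bounded at MAX_CACHED_HBASE_INSTANCES.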
+   * @throws InterruptedException 
+   * @throws IllegalAccessException 
+   * @throws NoSuchFieldException 
+   * @throws ZooKeeperConnectionException 
+   * @throws IllegalArgumentException 
+   * @throws SecurityException 
+   * @see https://issues.apache.org/jira/browse/HBASE-2925
+   */
+  @Test public void testManyNewConnectionsDoesnotOOME()
+  throws SecurityException, IllegalArgumentException,
+  ZooKeeperConnectionException, NoSuchFieldException, IllegalAccessException,
+  InterruptedException {
+    createNewConfigurations();
+  }
+
+  private static Random _randy = new Random();
+
+  public static void createNewConfigurations() throws SecurityException,
+  IllegalArgumentException, NoSuchFieldException,
+  IllegalAccessException, InterruptedException, ZooKeeperConnectionException {
+    HConnection last = null;
+    for (int i = 0; i <= (HConnectionManager.MAX_CACHED_HBASE_INSTANCES * 2); i++) {
+      // set random key to differentiate the connection from previous ones
+      Configuration configuration = HBaseConfiguration.create();
+      configuration.set("somekey", String.valueOf(_randy.nextInt()));
+      System.out.println("Hash Code: " + configuration.hashCode());
+      HConnection connection =
+        HConnectionManager.getConnection(configuration);
+      if (last != null) {
+        if (last == connection) {
+          System.out.println("!! Got same connection for once !!");
+        }
+      }
+      // change the configuration once, and the cached connection is lost forever:
+      //      the hashtable holding the cache won't be able to find its own keys
+      //      to remove them, so the LRU strategy does not work.
+      configuration.set("someotherkey", String.valueOf(_randy.nextInt()));
+      last = connection;
+      LOG.info("Cache Size: "
+          + getHConnectionManagerCacheSize() + ", Valid Keys: "
+          + getValidKeyCount());
+      Thread.sleep(100);
+    }
+    Assert.assertEquals(HConnectionManager.MAX_CACHED_HBASE_INSTANCES,
+      getHConnectionManagerCacheSize());
+    Assert.assertEquals(HConnectionManager.MAX_CACHED_HBASE_INSTANCES,
+      getValidKeyCount());
+  }
+
+  private static int getHConnectionManagerCacheSize()
+  throws SecurityException, NoSuchFieldException,
+  IllegalArgumentException, IllegalAccessException {
+    Field cacheField =
+      HConnectionManager.class.getDeclaredField("HBASE_INSTANCES");
+    cacheField.setAccessible(true);
+    Map<?, ?> cache = (Map<?, ?>) cacheField.get(null);
+    return cache.size();
+  }
+
+  private static int getValidKeyCount() throws SecurityException,
+  NoSuchFieldException, IllegalArgumentException,
+  IllegalAccessException {
+    Field cacheField =
+      HConnectionManager.class.getDeclaredField("HBASE_INSTANCES");
+    cacheField.setAccessible(true);
+    Map<?, ?> cache = (Map<?, ?>) cacheField.get(null);
+    List<Object> keys = new ArrayList<Object>(cache.keySet());
+    Set<Object> values = new HashSet<Object>();
+    for (Object key : keys) {
+      values.add(cache.get(key));
+    }
+    return values.size();
+  }
+
+  /**
+   * Test that when we delete a location using the first row of a region
+   * that we really delete it.
+   * @throws Exception
+   */
+  @Test
+  public void testRegionCaching() throws Exception{
+    HTable table = TEST_UTIL.createTable(TABLE_NAME, FAM_NAM);
+    TEST_UTIL.createMultiRegions(table, FAM_NAM);
+    Put put = new Put(ROW);
+    put.add(FAM_NAM, ROW, ROW);
+    table.put(put);
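+    // The put forces a location lookup, so the region for ROW should now be cached.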
+    HConnectionManager.HConnectionImplementation conn =
+        (HConnectionManager.HConnectionImplementation)table.getConnection();
+    assertNotNull(conn.getCachedLocation(TABLE_NAME, ROW));
+    conn.deleteCachedLocation(TABLE_NAME, ROW);
+    HRegionLocation rl = conn.getCachedLocation(TABLE_NAME, ROW);
+    assertNull("What is this location?? " + rl, rl);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java
new file mode 100644
index 0000000..a55935b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java
@@ -0,0 +1,168 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import junit.framework.Assert;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests HTablePool.
+ */
+public class TestHTablePool {
+  private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private final static byte [] TABLENAME = Bytes.toBytes("TestHTablePool");
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(1);
+    TEST_UTIL.createTable(TABLENAME, HConstants.CATALOG_FAMILY);
+  }
+
+  @AfterClass
+  public static void afterClass() throws IOException {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testTableWithStringName() {
+    HTablePool pool =
+      new HTablePool(TEST_UTIL.getConfiguration(), Integer.MAX_VALUE);
+    String tableName = Bytes.toString(TABLENAME);
+
+    // Request a table from an empty pool
+    HTableInterface table = pool.getTable(tableName);
+    Assert.assertNotNull(table);
+
+    // Return the table to the pool
+    pool.putTable(table);
+
+    // Request a table of the same name
+    HTableInterface sameTable = pool.getTable(tableName);
+    Assert.assertSame(table, sameTable);
+  }
+
+  @Test
+  public void testTableWithByteArrayName() throws IOException {
+    HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), Integer.MAX_VALUE);
+
+    // Request a table from an empty pool
+    HTableInterface table = pool.getTable(TABLENAME);
+    Assert.assertNotNull(table);
+
+    // Return the table to the pool
+    pool.putTable(table);
+
+    // Request a table of the same name
+    HTableInterface sameTable = pool.getTable(TABLENAME);
+    Assert.assertSame(table, sameTable);
+  }
+
+  @Test
+  public void testTableWithMaxSize() {
+    HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 2);
+
+    // Request tables from an empty pool
+    HTableInterface table1 = pool.getTable(TABLENAME);
+    HTableInterface table2 = pool.getTable(TABLENAME);
+    HTableInterface table3 = pool.getTable(TABLENAME);
+
+    // Return the tables to the pool
+    pool.putTable(table1);
+    pool.putTable(table2);
+    // The pool should reject this one since it is already full
+    pool.putTable(table3);
+
+    // Request tables of the same name
+    HTableInterface sameTable1 = pool.getTable(TABLENAME);
+    HTableInterface sameTable2 = pool.getTable(TABLENAME);
+    HTableInterface sameTable3 = pool.getTable(TABLENAME);
+    Assert.assertSame(table1, sameTable1);
+    Assert.assertSame(table2, sameTable2);
+    Assert.assertNotSame(table3, sameTable3);
+  }
+
+  @Test
+  public void testTablesWithDifferentNames() throws IOException {
+    HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), Integer.MAX_VALUE);
+    byte [] otherTable = Bytes.toBytes("OtherTable");
+    TEST_UTIL.createTable(otherTable, HConstants.CATALOG_FAMILY);
+
+    // Request a table from an empty pool
+    HTableInterface table1 = pool.getTable(TABLENAME);
+    HTableInterface table2 = pool.getTable(otherTable);
+    Assert.assertNotNull(table2);
+
+    // Return the tables to the pool
+    pool.putTable(table1);
+    pool.putTable(table2);
+
+    // Request tables of the same names
+    HTableInterface sameTable1 = pool.getTable(TABLENAME);
+    HTableInterface sameTable2 = pool.getTable(otherTable);
+    Assert.assertSame(table1, sameTable1);
+    Assert.assertSame(table2, sameTable2);
+  }
+
+
+  @Test
+  public void testCloseTablePool() throws IOException {
+    HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 4);
+    HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration());
+
+    if (admin.tableExists(TABLENAME)) {
+      admin.disableTable(TABLENAME);
+      admin.deleteTable(TABLENAME);
+    }
+
+    HTableDescriptor tableDescriptor = new HTableDescriptor(TABLENAME);
+    tableDescriptor.addFamily(new HColumnDescriptor("randomFamily"));
+    admin.createTable(tableDescriptor);
+
+
+    // Request tables from an empty pool
+    HTableInterface[] tables = new HTableInterface[4];
+    for (int i = 0; i < 4; ++i ) {
+      tables[i] = pool.getTable(TABLENAME);
+    }
+
+    pool.closeTablePool(TABLENAME);
+
+    for (int i = 0; i < 4; ++i ) {
+      pool.putTable(tables[i]);
+    }
+
+    Assert.assertEquals(4, pool.getCurrentPoolSize(Bytes.toString(TABLENAME)));
+
+    pool.closeTablePool(TABLENAME);
+
+    Assert.assertEquals(0, pool.getCurrentPoolSize(Bytes.toString(TABLENAME)));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java
new file mode 100644
index 0000000..4ec12a2
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java
@@ -0,0 +1,96 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.mockito.Mockito.*;
+
+public class TestMetaScanner {
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(1);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testMetaScanner() throws Exception {
+    LOG.info("Starting testMetaScanner");
+    final byte[] TABLENAME = Bytes.toBytes("testMetaScanner");
+    final byte[] FAMILY = Bytes.toBytes("family");
+    TEST_UTIL.createTable(TABLENAME, FAMILY);
+    Configuration conf = TEST_UTIL.getConfiguration();
+    HTable table = new HTable(conf, TABLENAME);
+    TEST_UTIL.createMultiRegions(conf, table, FAMILY,
+        new byte[][]{
+          HConstants.EMPTY_START_ROW,
+          Bytes.toBytes("region_a"),
+          Bytes.toBytes("region_b")});
+    // Make sure all the regions are deployed
+    TEST_UTIL.countRows(table);
+    
+    MetaScanner.MetaScannerVisitor visitor = 
+      mock(MetaScanner.MetaScannerVisitor.class);
+    doReturn(true).when(visitor).processRow((Result)anyObject());
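+    // (Returning true from processRow tells the scanner to keep visiting rows.)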
+
+    // Scanning the entire table should give us three rows
+    MetaScanner.metaScan(conf, visitor, TABLENAME);
+    verify(visitor, times(3)).processRow((Result)anyObject());
+    
+    // Scanning the table with a specified empty start row should also
+    // give us three META rows
+    reset(visitor);
+    doReturn(true).when(visitor).processRow((Result)anyObject());
+    MetaScanner.metaScan(conf, visitor, TABLENAME, HConstants.EMPTY_BYTE_ARRAY, 1000);
+    verify(visitor, times(3)).processRow((Result)anyObject());
+    
+    // Scanning the table starting in the middle should give us two rows:
+    // region_a and region_b
+    reset(visitor);
+    doReturn(true).when(visitor).processRow((Result)anyObject());
+    MetaScanner.metaScan(conf, visitor, TABLENAME, Bytes.toBytes("region_ac"), 1000);
+    verify(visitor, times(2)).processRow((Result)anyObject());
+    
+    // Scanning with a limit of 1 should only give us one row
+    reset(visitor);
+    doReturn(true).when(visitor).processRow((Result)anyObject());
+    MetaScanner.metaScan(conf, visitor, TABLENAME, Bytes.toBytes("region_ac"), 1);
+    verify(visitor, times(1)).processRow((Result)anyObject());
+        
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
new file mode 100644
index 0000000..6974f88
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java
@@ -0,0 +1,465 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.junit.Assert.*;
+
+public class TestMultiParallel {
+  private static final Log LOG = LogFactory.getLog(TestMultiParallel.class);
+  private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final byte[] VALUE = Bytes.toBytes("value");
+  private static final byte[] QUALIFIER = Bytes.toBytes("qual");
+  private static final String FAMILY = "family";
+  private static final String TEST_TABLE = "multi_test_table";
+  private static final byte[] BYTES_FAMILY = Bytes.toBytes(FAMILY);
+  private static final byte[] ONE_ROW = Bytes.toBytes("xxx");
+  private static final byte [][] KEYS = makeKeys();
+
+  @BeforeClass public static void beforeClass() throws Exception {
+    UTIL.startMiniCluster(2);
+    HTable t = UTIL.createTable(Bytes.toBytes(TEST_TABLE), Bytes.toBytes(FAMILY));
+    UTIL.createMultiRegions(t, Bytes.toBytes(FAMILY));
+  }
+
+  @AfterClass public static void afterClass() throws IOException {
+    UTIL.getMiniHBaseCluster().shutdown();
+  }
+
+  @Before public void before() throws IOException {
+    LOG.info("before");
+    if (UTIL.ensureSomeRegionServersAvailable(2)) {
+      // Distribute regions
+      UTIL.getMiniHBaseCluster().getMaster().balance();
+    }
+    LOG.info("before done");
+  }
+
+  private static byte[][] makeKeys() {
+    byte [][] starterKeys = HBaseTestingUtility.KEYS;
+    // Create a "non-uniform" test set with the following characteristics:
+    // a) Unequal number of keys per region
+
+    // Don't use integer as a multiple, so that we have a number of keys that is
+    // not a multiple of the number of regions
+    int numKeys = (int) ((float) starterKeys.length * 10.33F);
+
+    List<byte[]> keys = new ArrayList<byte[]>();
+    for (int i = 0; i < numKeys; i++) {
+      int kIdx = i % starterKeys.length;
+      byte[] k = starterKeys[kIdx];
+      byte[] cp = new byte[k.length + 1];
+      System.arraycopy(k, 0, cp, 0, k.length);
+      cp[k.length] = new Integer(i % 256).byteValue();
+      keys.add(cp);
+    }
+
+    // b) Some duplicate keys (showing multiple Gets/Puts to the same row, which
+    // should work)
+    // c) keys are not in sorted order (within a region), to ensure that the
+    // sorting code and index mapping doesn't break the functionality
+    for (int i = 0; i < 100; i++) {
+      int kIdx = i % starterKeys.length;
+      byte[] k = starterKeys[kIdx];
+      byte[] cp = new byte[k.length + 1];
+      System.arraycopy(k, 0, cp, 0, k.length);
+      cp[k.length] = (byte) (i % 256);
+      keys.add(cp);
+    }
+    return keys.toArray(new byte[0][]);
+  }
+
+  @Test public void testBatchWithGet() throws Exception {
+    LOG.info("test=testBatchWithGet");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    // load test data
+    List<Row> puts = constructPutRequests();
+    table.batch(puts);
+
+    // create a list of gets and run it
+    List<Row> gets = new ArrayList<Row>();
+    for (byte[] k : KEYS) {
+      Get get = new Get(k);
+      get.addColumn(BYTES_FAMILY, QUALIFIER);
+      gets.add(get);
+    }
+    Result[] multiRes = new Result[gets.size()];
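+    // batch(List, Object[]) fills the supplied results array in the same order as the submitted actions.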
+    table.batch(gets, multiRes);
+
+    // Same gets using individual call API
+    List<Result> singleRes = new ArrayList<Result>();
+    for (Row get : gets) {
+      singleRes.add(table.get((Get) get));
+    }
+
+    // Compare results
+    Assert.assertEquals(singleRes.size(), multiRes.length);
+    for (int i = 0; i < singleRes.size(); i++) {
+      Assert.assertTrue(singleRes.get(i).containsColumn(BYTES_FAMILY, QUALIFIER));
+      KeyValue[] singleKvs = singleRes.get(i).raw();
+      KeyValue[] multiKvs = multiRes[i].raw();
+      for (int j = 0; j < singleKvs.length; j++) {
+        Assert.assertEquals(singleKvs[j], multiKvs[j]);
+        Assert.assertEquals(0, Bytes.compareTo(singleKvs[j].getValue(), multiKvs[j]
+            .getValue()));
+      }
+    }
+  }
+
+  @Test
+  public void testBadFam() throws Exception {
+    LOG.info("test=testBadFam");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    List<Row> actions = new ArrayList<Row>();
+    Put p = new Put(Bytes.toBytes("row1"));
+    p.add(Bytes.toBytes("bad_family"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
+    actions.add(p);
+    p = new Put(Bytes.toBytes("row2"));
+    p.add(BYTES_FAMILY, Bytes.toBytes("qual"), Bytes.toBytes("value"));
+    actions.add(p);
+
+    // row1 and row2 should be in the same region.
+
+    Object [] r = new Object[actions.size()];
+    try {
+      table.batch(actions, r);
+      fail();
+    } catch (RetriesExhaustedWithDetailsException ex) {
+      LOG.debug(ex);
+      // good!
+      assertFalse(ex.mayHaveClusterIssues());
+    }
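+    // Even though the batch threw, the results array is filled: the bad-family put
+    // surfaces as a Throwable and the valid put as an (empty) Result.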
+    assertEquals(2, r.length);
+    assertTrue(r[0] instanceof Throwable);
+    assertTrue(r[1] instanceof Result);
+  }
+
+  /**
+   * Only run one Multi test with a forced RegionServer abort. Otherwise, the
+   * unit tests will take an unnecessarily long time to run.
+   *
+   * @throws Exception
+   */
+  @Test public void testFlushCommitsWithAbort() throws Exception {
+    LOG.info("test=testFlushCommitsWithAbort");
+    doTestFlushCommits(true);
+  }
+
+  @Test public void testFlushCommitsNoAbort() throws Exception {
+    LOG.info("test=testFlushCommitsNoAbort");
+    doTestFlushCommits(false);
+  }
+
+  private void doTestFlushCommits(boolean doAbort) throws Exception {
+    // Load the data
+    LOG.info("get new table");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+    table.setAutoFlush(false);
+    table.setWriteBufferSize(10 * 1024 * 1024);
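+    // With autoFlush disabled, puts accumulate in the client-side write buffer
+    // (10MB here) until flushCommits() is called or the buffer fills.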
+
+    LOG.info("constructPutRequests");
+    List<Row> puts = constructPutRequests();
+    for (Row put : puts) {
+      table.put((Put) put);
+    }
+    LOG.info("puts");
+    table.flushCommits();
+    if (doAbort) {
+      LOG.info("Aborted=" + UTIL.getMiniHBaseCluster().abortRegionServer(0));
+
+      // Try putting more keys after the abort (same key/qual), just validating
+      // that no exceptions are thrown.
+      puts = constructPutRequests();
+      for (Row put : puts) {
+        table.put((Put) put);
+      }
+
+      table.flushCommits();
+    }
+
+    LOG.info("validating loaded data");
+    validateLoadedData(table);
+
+    // Validate server and region count
+    List<JVMClusterUtil.RegionServerThread> liveRSs =
+      UTIL.getMiniHBaseCluster().getLiveRegionServerThreads();
+    int count = 0;
+    for (JVMClusterUtil.RegionServerThread t: liveRSs) {
+      count++;
+      LOG.info("Count=" + count + ", Alive=" + t.getRegionServer());
+    }
+    LOG.info("Count=" + count);
+    Assert.assertEquals("Server count=" + count + ", abort=" + doAbort,
+      (doAbort ? 1 : 2), count);
+    for (JVMClusterUtil.RegionServerThread t: liveRSs) {
+      int regions = t.getRegionServer().getOnlineRegions().size();
+      Assert.assertTrue("Count of regions=" + regions, regions > 10);
+    }
+    LOG.info("done");
+  }
+
+  @Test public void testBatchWithPut() throws Exception {
+    LOG.info("test=testBatchWithPut");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    // put multiple rows using a batch
+    List<Row> puts = constructPutRequests();
+
+    Object[] results = table.batch(puts);
+    validateSizeAndEmpty(results, KEYS.length);
+
+    // Abort a region server and verify that batched puts still succeed.
+    UTIL.getMiniHBaseCluster().abortRegionServer(0);
+
+    puts = constructPutRequests();
+    results = table.batch(puts);
+    validateSizeAndEmpty(results, KEYS.length);
+
+    validateLoadedData(table);
+  }
+
+  @Test public void testBatchWithDelete() throws Exception {
+    LOG.info("test=testBatchWithDelete");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    // Load some data
+    List<Row> puts = constructPutRequests();
+    Object[] results = table.batch(puts);
+    validateSizeAndEmpty(results, KEYS.length);
+
+    // Deletes
+    List<Row> deletes = new ArrayList<Row>();
+    for (int i = 0; i < KEYS.length; i++) {
+      Delete delete = new Delete(KEYS[i]);
+      delete.deleteFamily(BYTES_FAMILY);
+      deletes.add(delete);
+    }
+    results = table.batch(deletes);
+    validateSizeAndEmpty(results, KEYS.length);
+
+    // Get to make sure ...
+    for (byte[] k : KEYS) {
+      Get get = new Get(k);
+      get.addColumn(BYTES_FAMILY, QUALIFIER);
+      Assert.assertFalse(table.exists(get));
+    }
+
+  }
+
+  @Test public void testHTableDeleteWithList() throws Exception {
+    LOG.info("test=testHTableDeleteWithList");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    // Load some data
+    List<Row> puts = constructPutRequests();
+    Object[] results = table.batch(puts);
+    validateSizeAndEmpty(results, KEYS.length);
+
+    // Deletes
+    ArrayList<Delete> deletes = new ArrayList<Delete>();
+    for (int i = 0; i < KEYS.length; i++) {
+      Delete delete = new Delete(KEYS[i]);
+      delete.deleteFamily(BYTES_FAMILY);
+      deletes.add(delete);
+    }
+    table.delete(deletes);
+    Assert.assertTrue(deletes.isEmpty());
+
+    // Get to make sure ...
+    for (byte[] k : KEYS) {
+      Get get = new Get(k);
+      get.addColumn(BYTES_FAMILY, QUALIFIER);
+      Assert.assertFalse(table.exists(get));
+    }
+
+  }
+
+  @Test public void testBatchWithManyColsInOneRowGetAndPut() throws Exception {
+    LOG.info("test=testBatchWithManyColsInOneRowGetAndPut");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    List<Row> puts = new ArrayList<Row>();
+    for (int i = 0; i < 100; i++) {
+      Put put = new Put(ONE_ROW);
+      byte[] qual = Bytes.toBytes("column" + i);
+      put.add(BYTES_FAMILY, qual, VALUE);
+      puts.add(put);
+    }
+    Object[] results = table.batch(puts);
+
+    // validate
+    validateSizeAndEmpty(results, 100);
+
+    // get the data back and validate that it is correct
+    List<Row> gets = new ArrayList<Row>();
+    for (int i = 0; i < 100; i++) {
+      Get get = new Get(ONE_ROW);
+      byte[] qual = Bytes.toBytes("column" + i);
+      get.addColumn(BYTES_FAMILY, qual);
+      gets.add(get);
+    }
+
+    Object[] multiRes = table.batch(gets);
+
+    int idx = 0;
+    for (Object r : multiRes) {
+      byte[] qual = Bytes.toBytes("column" + idx);
+      validateResult(r, qual, VALUE);
+      idx++;
+    }
+
+  }
+
+  @Test public void testBatchWithMixedActions() throws Exception {
+    LOG.info("test=testBatchWithMixedActions");
+    HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE);
+
+    // Load some data to start
+    Object[] results = table.batch(constructPutRequests());
+    validateSizeAndEmpty(results, KEYS.length);
+
+    // Batch: get, get, put(new col), delete, get, get of put, get of deleted,
+    // put
+    List<Row> actions = new ArrayList<Row>();
+
+    byte[] qual2 = Bytes.toBytes("qual2");
+    byte[] val2 = Bytes.toBytes("putvalue2");
+
+    // 0 get
+    Get get = new Get(KEYS[10]);
+    get.addColumn(BYTES_FAMILY, QUALIFIER);
+    actions.add(get);
+
+    // 1 get
+    get = new Get(KEYS[11]);
+    get.addColumn(BYTES_FAMILY, QUALIFIER);
+    actions.add(get);
+
+    // 2 put of new column
+    Put put = new Put(KEYS[10]);
+    put.add(BYTES_FAMILY, qual2, val2);
+    actions.add(put);
+
+    // 3 delete
+    Delete delete = new Delete(KEYS[20]);
+    delete.deleteFamily(BYTES_FAMILY);
+    actions.add(delete);
+
+    // 4 get
+    get = new Get(KEYS[30]);
+    get.addColumn(BYTES_FAMILY, QUALIFIER);
+    actions.add(get);
+
+    // There used to be a 'get' of a previous put here, but removed
+    // since this API really cannot guarantee order in terms of mixed
+    // get/puts.
+
+    // 5 put of new column
+    put = new Put(KEYS[40]);
+    put.add(BYTES_FAMILY, qual2, val2);
+    actions.add(put);
+
+    results = table.batch(actions);
+
+    // Validation
+
+    validateResult(results[0]);
+    validateResult(results[1]);
+    validateEmpty(results[2]);
+    validateEmpty(results[3]);
+    validateResult(results[4]);
+    validateEmpty(results[5]);
+
+    // validate last put, externally from the batch
+    get = new Get(KEYS[40]);
+    get.addColumn(BYTES_FAMILY, qual2);
+    Result r = table.get(get);
+    validateResult(r, qual2, val2);
+  }
+
+  // // Helper methods ////
+
+  private void validateResult(Object r) {
+    validateResult(r, QUALIFIER, VALUE);
+  }
+
+  private void validateResult(Object r1, byte[] qual, byte[] val) {
+    // TODO provide nice assert here or something.
+    Result r = (Result)r1;
+    Assert.assertTrue(r.containsColumn(BYTES_FAMILY, qual));
+    Assert.assertEquals(0, Bytes.compareTo(val, r.getValue(BYTES_FAMILY, qual)));
+  }
+
+  private List<Row> constructPutRequests() {
+    List<Row> puts = new ArrayList<Row>();
+    for (byte[] k : KEYS) {
+      Put put = new Put(k);
+      put.add(BYTES_FAMILY, QUALIFIER, VALUE);
+      puts.add(put);
+    }
+    return puts;
+  }
+
+  private void validateLoadedData(HTable table) throws IOException {
+    // get the data back and validate that it is correct
+    for (byte[] k : KEYS) {
+      LOG.info("Assert=" + Bytes.toString(k));
+      Get get = new Get(k);
+      get.addColumn(BYTES_FAMILY, QUALIFIER);
+      Result r = table.get(get);
+      Assert.assertTrue(r.containsColumn(BYTES_FAMILY, QUALIFIER));
+      Assert.assertEquals(0, Bytes.compareTo(VALUE, r
+          .getValue(BYTES_FAMILY, QUALIFIER)));
+    }
+  }
+
+  private void validateEmpty(Object r1) {
+    Result result = (Result)r1;
+    Assert.assertNotNull(result);
+    Assert.assertNull(result.getRow());
+    Assert.assertEquals(0, result.raw().length);
+  }
+
+  private void validateSizeAndEmpty(Object[] results, int expectedSize) {
+    // Validate got back the same number of Result objects, all empty
+    Assert.assertEquals(expectedSize, results.length);
+    for (Object result : results) {
+      validateEmpty(result);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java
new file mode 100644
index 0000000..37c7359
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java
@@ -0,0 +1,516 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Run tests that read back multiple timestamps/versions of cells using the
+ * HBase client APIs. Sets up the HBase mini cluster once at start. Each test
+ * creates a table named after the method and runs against that table.
+ */
+public class TestMultipleTimestamps {
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @After
+  public void tearDown() throws Exception {
+    // Nothing to do.
+  }
+
+  @Test
+  public void testReseeksWithOneColumnMultipleTimestamps() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testReseeksWithOne" +
+    "ColumnMultipleTimestamps");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    Integer[] putRows = new Integer[] {1, 3, 5, 7};
+    Integer[] putColumns = new Integer[] { 1, 3, 5};
+    Long[] putTimestamps = new Long[] {1L, 2L, 3L, 4L, 5L};
+
+    Integer[] scanRows = new Integer[] {3, 5};
+    Integer[] scanColumns = new Integer[] {3};
+    Long[] scanTimestamps = new Long[] {3L, 4L};
+    int scanMaxVersions = 2;
+
+    put(ht, FAMILY, putRows, putColumns, putTimestamps);
+
+    flush(TABLE);
+
+    ResultScanner scanner = scan(ht, FAMILY, scanRows, scanColumns,
+        scanTimestamps, scanMaxVersions);
+
+    KeyValue[] kvs;
+
+    kvs = scanner.next().raw();
+    assertEquals(2, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 3, 3, 4);
+    checkOneCell(kvs[1], FAMILY, 3, 3, 3);
+    kvs = scanner.next().raw();
+    assertEquals(2, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 5, 3, 4);
+    checkOneCell(kvs[1], FAMILY, 5, 3, 3);
+  }
+
+  @Test
+  public void testReseeksWithMultipleColumnOneTimestamp() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testReseeksWithMultiple" +
+    "ColumnOneTimestamps");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    Integer[] putRows = new Integer[] {1, 3, 5, 7};
+    Integer[] putColumns = new Integer[] { 1, 3, 5};
+    Long[] putTimestamps = new Long[] {1L, 2L, 3L, 4L, 5L};
+
+    Integer[] scanRows = new Integer[] {3, 5};
+    Integer[] scanColumns = new Integer[] {3,4};
+    Long[] scanTimestamps = new Long[] {3L};
+    int scanMaxVersions = 2;
+
+    put(ht, FAMILY, putRows, putColumns, putTimestamps);
+
+    flush(TABLE);
+
+    ResultScanner scanner = scan(ht, FAMILY, scanRows, scanColumns,
+        scanTimestamps, scanMaxVersions);
+
+    KeyValue[] kvs;
+
+    kvs = scanner.next().raw();
+    assertEquals(1, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 3, 3, 3);
+    kvs = scanner.next().raw();
+    assertEquals(1, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 5, 3, 3);
+  }
+
+  @Test
+  public void testReseeksWithMultipleColumnMultipleTimestamp() throws
+  IOException {
+    byte [] TABLE = Bytes.toBytes("testReseeksWithMultiple" +
+    "ColumnMiltipleTimestamps");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    Integer[] putRows = new Integer[] {1, 3, 5, 7};
+    Integer[] putColumns = new Integer[] { 1, 3, 5};
+    Long[] putTimestamps = new Long[] {1L, 2L, 3L, 4L, 5L};
+
+    Integer[] scanRows = new Integer[] {5, 7};
+    Integer[] scanColumns = new Integer[] {3, 4, 5};
+    Long[] scanTimestamps = new Long[] {2L, 3L};
+    int scanMaxVersions = 2;
+
+    put(ht, FAMILY, putRows, putColumns, putTimestamps);
+
+    flush(TABLE);
+
+    ResultScanner scanner = scan(ht, FAMILY, scanRows, scanColumns,
+        scanTimestamps, scanMaxVersions);
+
+    KeyValue[] kvs;
+
+    kvs = scanner.next().raw();
+    assertEquals(4, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 5, 3, 3);
+    checkOneCell(kvs[1], FAMILY, 5, 3, 2);
+    checkOneCell(kvs[2], FAMILY, 5, 5, 3);
+    checkOneCell(kvs[3], FAMILY, 5, 5, 2);
+    kvs = scanner.next().raw();
+    assertEquals(4, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 7, 3, 3);
+    checkOneCell(kvs[1], FAMILY, 7, 3, 2);
+    checkOneCell(kvs[2], FAMILY, 7, 5, 3);
+    checkOneCell(kvs[3], FAMILY, 7, 5, 2);
+  }
+
+  @Test
+  public void testReseeksWithMultipleFiles() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testReseeksWithMultipleFiles");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    Integer[] putRows1 = new Integer[] {1, 2, 3};
+    Integer[] putColumns1 = new Integer[] { 2, 5, 6};
+    Long[] putTimestamps1 = new Long[] {1L, 2L, 5L};
+
+    Integer[] putRows2 = new Integer[] {6, 7};
+    Integer[] putColumns2 = new Integer[] {3, 6};
+    Long[] putTimestamps2 = new Long[] {4L, 5L};
+
+    Integer[] putRows3 = new Integer[] {2, 3, 5};
+    Integer[] putColumns3 = new Integer[] {1, 2, 3};
+    Long[] putTimestamps3 = new Long[] {4L,8L};
+
+
+    Integer[] scanRows = new Integer[] {3, 5, 7};
+    Integer[] scanColumns = new Integer[] {3, 4, 5};
+    Long[] scanTimestamps = new Long[] {2L, 4L};
+    int scanMaxVersions = 5;
+
+    put(ht, FAMILY, putRows1, putColumns1, putTimestamps1);
+    flush(TABLE);
+    put(ht, FAMILY, putRows2, putColumns2, putTimestamps2);
+    flush(TABLE);
+    put(ht, FAMILY, putRows3, putColumns3, putTimestamps3);
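+    // The data now spans two HFiles (from the two flushes above) plus the memstore.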
+
+    ResultScanner scanner = scan(ht, FAMILY, scanRows, scanColumns,
+        scanTimestamps, scanMaxVersions);
+
+    KeyValue[] kvs;
+
+    kvs = scanner.next().raw();
+    assertEquals(2, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 3, 3, 4);
+    checkOneCell(kvs[1], FAMILY, 3, 5, 2);
+
+    kvs = scanner.next().raw();
+    assertEquals(1, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 5, 3, 4);
+
+    kvs = scanner.next().raw();
+    assertEquals(1, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 6, 3, 4);
+
+    kvs = scanner.next().raw();
+    assertEquals(1, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 7, 3, 4);
+  }
+
+  @Test
+  public void testWithVersionDeletes() throws Exception {
+
+    // first test from memstore (without flushing).
+    testWithVersionDeletes(false);
+
+    // run same test against HFiles (by forcing a flush).
+    testWithVersionDeletes(true);
+  }
+
+  public void testWithVersionDeletes(boolean flushTables) throws IOException {
+    byte [] TABLE = Bytes.toBytes("testWithVersionDeletes_" +
+        (flushTables ? "flush" : "noflush"));
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    // For row:0, col:0: insert versions 1 through 5.
+    putNVersions(ht, FAMILY, 0, 0, 1, 5);
+
+    if (flushTables) {
+      flush(TABLE);
+    }
+
+    // delete version 4.
+    deleteOneVersion(ht, FAMILY, 0, 0, 4);
+
+    // request a bunch of versions including the deleted version. We should
+    // only get back entries for the versions that exist.
+    KeyValue kvs[] = getNVersions(ht, FAMILY, 0, 0,
+        Arrays.asList(2L, 3L, 4L, 5L));
+    assertEquals(3, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 0, 0, 5);
+    checkOneCell(kvs[1], FAMILY, 0, 0, 3);
+    checkOneCell(kvs[2], FAMILY, 0, 0, 2);
+  }
+
+  @Test
+  public void testWithMultipleVersionDeletes() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testWithMultipleVersionDeletes");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    // For row:0, col:0: insert versions 1 through 5.
+    putNVersions(ht, FAMILY, 0, 0, 1, 5);
+
+    flush(TABLE);
+
+    // delete all versions up to and including 4.
+    deleteAllVersionsBefore(ht, FAMILY, 0, 0, 4);
+
+    // request versions that were all deleted. We should get nothing back.
+    KeyValue kvs[] = getNVersions(ht, FAMILY, 0, 0, Arrays.asList(2L, 3L));
+    assertEquals(0, kvs.length);
+  }
+
+  @Test
+  public void testWithColumnDeletes() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testWithColumnDeletes");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    // For row:0, col:0: insert versions 1 through 5.
+    putNVersions(ht, FAMILY, 0, 0, 1, 5);
+
+    flush(TABLE);
+
+    // delete the entire column (all versions).
+    deleteColumn(ht, FAMILY, 0, 0);
+
+    // request versions from the deleted column. We should get nothing back.
+    KeyValue kvs[] = getNVersions(ht, FAMILY, 0, 0, Arrays.asList(2L, 3L));
+    assertEquals(0, kvs.length);
+  }
+
+  @Test
+  public void testWithFamilyDeletes() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testWithFamilyDeletes");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    // For row:0, col:0: insert versions 1 through 5.
+    putNVersions(ht, FAMILY, 0, 0, 1, 5);
+
+    flush(TABLE);
+
+    // delete the entire family (all versions of all columns).
+    deleteFamily(ht, FAMILY, 0);
+
+    // request versions from the deleted family. We should get nothing back.
+    KeyValue kvs[] = getNVersions(ht, FAMILY, 0, 0, Arrays.asList(2L, 3L));
+    assertEquals(0, kvs.length);
+  }
+
+  // Flush tables. Since flushing is asynchronous, sleep for a bit.
+  private void flush(byte [] tableName) throws IOException {
+    TEST_UTIL.flush(tableName);
+    try {
+      Thread.sleep(3000);
+    } catch (InterruptedException i) {
+      // ignore
+    }
+  }
+
+  /**
+   * Assert that the passed in KeyValue has expected contents for the
+   * specified row, column & timestamp.
+   */
+  private void checkOneCell(KeyValue kv, byte[] cf,
+      int rowIdx, int colIdx, long ts) {
+
+    String ctx = "rowIdx=" + rowIdx + "; colIdx=" + colIdx + "; ts=" + ts;
+
+    assertEquals("Row mismatch which checking: " + ctx,
+        "row:"+ rowIdx, Bytes.toString(kv.getRow()));
+
+    assertEquals("ColumnFamily mismatch while checking: " + ctx,
+        Bytes.toString(cf), Bytes.toString(kv.getFamily()));
+
+    assertEquals("Column qualifier mismatch while checking: " + ctx,
+        "column:" + colIdx,
+        Bytes.toString(kv.getQualifier()));
+
+    assertEquals("Timestamp mismatch while checking: " + ctx,
+        ts, kv.getTimestamp());
+
+    assertEquals("Value mismatch while checking: " + ctx,
+        "value-version-" + ts, Bytes.toString(kv.getValue()));
+  }
+
+  /**
+   * Uses a time range on a Get (spanning the minimum to the maximum of the
+   * specified list of versions) to request versions for the row/column
+   * specified by rowIdx & colIdx.
+   */
+  private  KeyValue[] getNVersions(HTable ht, byte[] cf, int rowIdx,
+      int colIdx, List<Long> versions)
+  throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Get get = new Get(row);
+    get.addColumn(cf, column);
+    get.setMaxVersions();
+    get.setTimeRange(Collections.min(versions), Collections.max(versions)+1);
+    Result result = ht.get(get);
+
+    return result.raw();
+  }
+
+  private  ResultScanner scan(HTable ht, byte[] cf,
+      Integer[] rowIndexes, Integer[] columnIndexes,
+      Long[] versions, int maxVersions)
+  throws IOException {
+    byte startRow[] = Bytes.toBytes("row:" +
+        Collections.min( Arrays.asList(rowIndexes)));
+    byte endRow[] = Bytes.toBytes("row:" +
+        Collections.max( Arrays.asList(rowIndexes))+1);
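+    // Note: the "+1" above is string concatenation (e.g. "row:7" becomes "row:71"),
+    // yielding an end row that sorts just after the maximum row so that row is
+    // still covered by the exclusive end bound.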
+    Scan scan = new Scan(startRow, endRow);
+    for (Integer colIdx: columnIndexes) {
+      byte column[] = Bytes.toBytes("column:" + colIdx);
+      scan.addColumn(cf, column);
+    }
+    scan.setMaxVersions(maxVersions);
+    scan.setTimeRange(Collections.min(Arrays.asList(versions)),
+        Collections.max(Arrays.asList(versions))+1);
+    ResultScanner scanner = ht.getScanner(scan);
+    return scanner;
+  }
+
+  private void put(HTable ht, byte[] cf, Integer[] rowIndexes,
+      Integer[] columnIndexes, Long[] versions)
+  throws IOException {
+    for (int rowIdx: rowIndexes) {
+      byte row[] = Bytes.toBytes("row:" + rowIdx);
+      Put put = new Put(row);
+      for(int colIdx: columnIndexes) {
+        byte column[] = Bytes.toBytes("column:" + colIdx);
+        for (long version: versions) {
+          put.add(cf, column, version, Bytes.toBytes("value-version-" +
+              version));
+        }
+      }
+      ht.put(put);
+    }
+  }
+
+  /**
+   * Insert in specific row/column versions with timestamps
+   * versionStart..versionEnd.
+   */
+  private void putNVersions(HTable ht, byte[] cf, int rowIdx, int colIdx,
+      long versionStart, long versionEnd)
+  throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Put put = new Put(row);
+
+    for (long idx = versionStart; idx <= versionEnd; idx++) {
+      put.add(cf, column, idx, Bytes.toBytes("value-version-" + idx));
+    }
+
+    ht.put(put);
+  }
+
+  /**
+   * For row/column specified by rowIdx/colIdx, delete the cell
+   * corresponding to the specified version.
+   */
+  private void deleteOneVersion(HTable ht, byte[] cf, int rowIdx,
+      int colIdx, long version)
+  throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Delete del = new Delete(row);
+    del.deleteColumn(cf, column, version);
+    ht.delete(del);
+  }
+
+  /**
+   * For row/column specified by rowIdx/colIdx, delete all cells with a
+   * timestamp less than or equal to the specified version.
+   */
+  private void deleteAllVersionsBefore(HTable ht, byte[] cf, int rowIdx,
+      int colIdx, long version)
+  throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Delete del = new Delete(row);
+    del.deleteColumns(cf, column, version);
+    ht.delete(del);
+  }
+
+  private void deleteColumn(HTable ht, byte[] cf, int rowIdx, int colIdx) throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Delete del = new Delete(row);
+    del.deleteColumns(cf, column);
+    ht.delete(del);
+  }
+
+  private void deleteFamily(HTable ht, byte[] cf, int rowIdx) throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    Delete del = new Delete(row);
+    del.deleteFamily(cf);
+    ht.delete(del);
+  }
+}
+
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestResult.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestResult.java
new file mode 100644
index 0000000..becabcf
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestResult.java
@@ -0,0 +1,98 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import junit.framework.TestCase;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.apache.hadoop.hbase.HBaseTestCase.assertByteEquals;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+
+public class TestResult extends TestCase {
+
+  static KeyValue[] genKVs(final byte[] row, final byte[] family,
+                           final byte[] value,
+                    final long timestamp,
+                    final int cols) {
+    KeyValue [] kvs = new KeyValue[cols];
+
+    for (int i = 0; i < cols ; i++) {
+      kvs[i] = new KeyValue(
+          row, family, Bytes.toBytes(i),
+          timestamp,
+          Bytes.add(value, Bytes.toBytes(i)));
+    }
+    return kvs;
+  }
+
+  static final byte [] row = Bytes.toBytes("row");
+  static final byte [] family = Bytes.toBytes("family");
+  static final byte [] value = Bytes.toBytes("value");
+
+  public void testBasic() throws Exception {
+    KeyValue [] kvs = genKVs(row, family, value, 1, 100);
+
+    Arrays.sort(kvs, KeyValue.COMPARATOR);
+
+    Result r = new Result(kvs);
+
+    for (int i = 0; i < 100; ++i) {
+      final byte[] qf = Bytes.toBytes(i);
+
+      List<KeyValue> ks = r.getColumn(family, qf);
+      assertEquals(1, ks.size());
+      assertByteEquals(qf, ks.get(0).getQualifier());
+
+      assertEquals(ks.get(0), r.getColumnLatest(family, qf));
+      assertByteEquals(Bytes.add(value, Bytes.toBytes(i)), r.getValue(family, qf));
+      assertTrue(r.containsColumn(family, qf));
+    }
+  }
+
+  public void testMultiVersion() throws Exception {
+    KeyValue [] kvs1 = genKVs(row, family, value, 1, 100);
+    KeyValue [] kvs2 = genKVs(row, family, value, 200, 100);
+
+    KeyValue [] kvs = new KeyValue[kvs1.length+kvs2.length];
+    System.arraycopy(kvs1, 0, kvs, 0, kvs1.length);
+    System.arraycopy(kvs2, 0, kvs, kvs1.length, kvs2.length);
+
+    Arrays.sort(kvs, KeyValue.COMPARATOR);
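+    // KeyValue.COMPARATOR orders versions of the same column newest-first, so
+    // index 0 of each column's list below is the latest timestamp (200).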
+
+    Result r = new Result(kvs);
+    for (int i = 0; i < 100; ++i) {
+      final byte[] qf = Bytes.toBytes(i);
+
+      List<KeyValue> ks = r.getColumn(family, qf);
+      assertEquals(2, ks.size());
+      assertByteEquals(qf, ks.get(0).getQualifier());
+      assertEquals(200, ks.get(0).getTimestamp());
+
+      assertEquals(ks.get(0), r.getColumnLatest(family, qf));
+      assertByteEquals(Bytes.add(value, Bytes.toBytes(i)), r.getValue(family, qf));
+      assertTrue(r.containsColumn(family, qf));
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
new file mode 100644
index 0000000..4a97f45
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java
@@ -0,0 +1,137 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test various scanner timeout issues.
+ */
+public class TestScannerTimeout {
+
+  private final static HBaseTestingUtility
+      TEST_UTIL = new HBaseTestingUtility();
+
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static byte[] SOME_BYTES = Bytes.toBytes("f");
+  private final static byte[] TABLE_NAME = Bytes.toBytes("t");
+  private final static int NB_ROWS = 10;
+  // Be careful what you set this timeout to... it can get in the way of
+  // the mini cluster coming up -- the verification in particular.
+  private final static int SCANNER_TIMEOUT = 10000;
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    Configuration c = TEST_UTIL.getConfiguration();
+    c.setInt("hbase.regionserver.lease.period", SCANNER_TIMEOUT);
+    TEST_UTIL.startMiniCluster(2);
+    HTable table = TEST_UTIL.createTable(TABLE_NAME, SOME_BYTES);
+    for (int i = 0; i < NB_ROWS; i++) {
+      Put put = new Put(Bytes.toBytes(i));
+      put.add(SOME_BYTES, SOME_BYTES, SOME_BYTES);
+      table.put(put);
+    }
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    TEST_UTIL.ensureSomeRegionServersAvailable(2);
+  }
+
+  /**
+   * Test that we do get a ScannerTimeoutException
+   * @throws Exception
+   */
+  @Test
+  public void test2481() throws Exception {
+    Scan scan = new Scan();
+    HTable table =
+      new HTable(new Configuration(TEST_UTIL.getConfiguration()), TABLE_NAME);
+    ResultScanner r = table.getScanner(scan);
+    int count = 0;
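+    // The region server expires a scanner lease after SCANNER_TIMEOUT ms of
+    // inactivity, so sleeping past it mid-scan should surface a ScannerTimeoutException.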
+    try {
+      Result res = r.next();
+      while (res != null) {
+        count++;
+        if (count == 5) {
+          // Sleep just a bit more to be sure
+          Thread.sleep(SCANNER_TIMEOUT+100);
+        }
+        res = r.next();
+      }
+    } catch (ScannerTimeoutException e) {
+      LOG.info("Got the timeout " + e.getMessage(), e);
+      return;
+    }
+    fail("We should be timing out");
+  }
+
+  /**
+   * Test that a scanner can continue even if the region server it was reading
+   * from failed. Before HBASE-2772, it reused the same scanner id.
+   * @throws Exception
+   */
+  @Test
+  public void test2772() throws Exception {
+    HRegionServer rs = TEST_UTIL.getRSForFirstRegionInTable(TABLE_NAME);
+    Scan scan = new Scan();
+    // Set a very high timeout, we want to test what happens when a RS
+    // fails but the region is recovered before the lease times out.
+    // Since the RS is already created, this conf is client-side only for
+    // this new table
+    Configuration conf = new Configuration(TEST_UTIL.getConfiguration());
+    conf.setInt(
+        HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY, SCANNER_TIMEOUT*100);
+    HTable higherScanTimeoutTable = new HTable(conf, TABLE_NAME);
+    ResultScanner r = higherScanTimeoutTable.getScanner(scan);
+    // This takes way less than SCANNER_TIMEOUT*100
+    rs.abort("die!");
+    Result[] results = r.next(NB_ROWS);
+    assertEquals(NB_ROWS, results.length);
+    r.close();
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestTimestamp.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestTimestamp.java
new file mode 100644
index 0000000..db42192
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestTimestamp.java
@@ -0,0 +1,76 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.client;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TimestampTestBase;
+
+/**
+ * Tests user-specifiable timestamps for putting, getting and scanning.  Also
+ * tests the same in the presence of deletes.  Test cores are written so they
+ * can be run against an HRegion and against an HTable: i.e. both local and remote.
+ */
+public class TestTimestamp extends HBaseClusterTestCase {
+  public static String COLUMN_NAME = "colfamily1";
+
+  /** constructor */
+  public TestTimestamp() {
+    super();
+  }
+
+  /**
+   * Basic test of timestamps.
+   * Do the above tests from client side.
+   * @throws IOException
+   */
+  public void testTimestamps() throws IOException {
+    HTable t = createTable();
+    Incommon incommon = new HTableIncommon(t);
+    TimestampTestBase.doTestDelete(incommon, new FlushCache() {
+      public void flushcache() throws IOException {
+        cluster.flushcache();
+      }
+     });
+
+    // Perhaps drop and re-add the table between tests so the former does
+    // not pollute the latter?  Or put these into separate tests.
+    TimestampTestBase.doTestTimestampScanning(incommon, new FlushCache() {
+      public void flushcache() throws IOException {
+        cluster.flushcache();
+      }
+    });
+  }
+
+  /**
+   * Create a table named TABLE_NAME.
+   * @return An instance of an HTable connected to the created table.
+   * @throws IOException
+   */
+  private HTable createTable() throws IOException {
+    HTableDescriptor desc = new HTableDescriptor(getName());
+    desc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    admin.createTable(desc);
+    return new HTable(conf, getName());
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java
new file mode 100644
index 0000000..e48e5dd
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java
@@ -0,0 +1,342 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.TimestampsFilter;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Run tests related to {@link TimestampsFilter} using HBase client APIs.
+ * Sets up the HBase mini cluster once at start. Each test creates a table
+ * named after the method and runs against that table.
+ */
+public class TestTimestampsFilter {
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @After
+  public void tearDown() throws Exception {
+    // Nothing to do.
+  }
+
+  /**
+   * Test from client side for TimestampsFilter.
+   *
+   * The TimestampsFilter provides the ability to request cells (KeyValues)
+   * whose timestamp/version is in the specified list of timestamps/versions.
+   *
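+   * A minimal usage sketch (assuming an existing HTable {@code ht}, a row key
+   * {@code row}, a column family {@code cf} and qualifier {@code qual}):
+   * <pre>
+   *   Get get = new Get(row);
+   *   get.addColumn(cf, qual);
+   *   get.setFilter(new TimestampsFilter(Arrays.asList(1L, 5L, 10L)));
+   *   get.setMaxVersions();
+   *   Result result = ht.get(get);
+   * </pre>
+   *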
+   * @throws Exception
+   */
+  @Test
+  public void testTimestampsFilter() throws Exception {
+    byte [] TABLE = Bytes.toBytes("testTimestampsFilter");
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+    KeyValue kvs[];
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    for (int rowIdx = 0; rowIdx < 5; rowIdx++) {
+      for (int colIdx = 0; colIdx < 5; colIdx++) {
+        // insert versions 201..300
+        putNVersions(ht, FAMILY, rowIdx, colIdx, 201, 300);
+        // insert versions 1..100
+        putNVersions(ht, FAMILY, rowIdx, colIdx, 1, 100);
+      }
+    }
+
+    // do some verification before flush
+    verifyInsertedValues(ht, FAMILY);
+
+    flush();
+
+    // do some verification after flush
+    verifyInsertedValues(ht, FAMILY);
+
+    // Insert some more versions after flush. These should be in memstore.
+    // After this we should have data in both memstore & HFiles.
+    for (int rowIdx = 0; rowIdx < 5; rowIdx++) {
+      for (int colIdx = 0; colIdx < 5; colIdx++) {
+        putNVersions(ht, FAMILY, rowIdx, colIdx, 301, 400);
+        putNVersions(ht, FAMILY, rowIdx, colIdx, 101, 200);
+      }
+    }
+
+    for (int rowIdx = 0; rowIdx < 5; rowIdx++) {
+      for (int colIdx = 0; colIdx < 5; colIdx++) {
+        kvs = getNVersions(ht, FAMILY, rowIdx, colIdx,
+                           Arrays.asList(505L, 5L, 105L, 305L, 205L));
+        assertEquals(4, kvs.length);
+        checkOneCell(kvs[0], FAMILY, rowIdx, colIdx, 305);
+        checkOneCell(kvs[1], FAMILY, rowIdx, colIdx, 205);
+        checkOneCell(kvs[2], FAMILY, rowIdx, colIdx, 105);
+        checkOneCell(kvs[3], FAMILY, rowIdx, colIdx, 5);
+      }
+    }
+
+    // Request an empty list of versions using the Timestamps filter;
+    // Should return none.
+    kvs = getNVersions(ht, FAMILY, 2, 2, new ArrayList<Long>());
+    assertEquals(0, kvs.length);
+
+    //
+    // Test the filter using a Scan operation
+    // Scan rows 0..4. For each row, get all its columns, but only
+    // those versions of the columns with the specified timestamps.
+    Result[] results = scanNVersions(ht, FAMILY, 0, 4,
+                                     Arrays.asList(6L, 106L, 306L));
+    assertEquals("# of rows returned from scan", 5, results.length);
+    for (int rowIdx = 0; rowIdx < 5; rowIdx++) {
+      kvs = results[rowIdx].raw();
+      // each row should have 5 columns.
+      // And we have requested 3 versions for each.
+      assertEquals("Number of KeyValues in result for row:" + rowIdx,
+                   3*5, kvs.length);
+      for (int colIdx = 0; colIdx < 5; colIdx++) {
+        int offset = colIdx * 3;
+        checkOneCell(kvs[offset + 0], FAMILY, rowIdx, colIdx, 306);
+        checkOneCell(kvs[offset + 1], FAMILY, rowIdx, colIdx, 106);
+        checkOneCell(kvs[offset + 2], FAMILY, rowIdx, colIdx, 6);
+      }
+    }
+  }
+
+  /**
+   * Test TimestampsFilter in the presence of version deletes.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testWithVersionDeletes() throws Exception {
+
+    // first test from memstore (without flushing).
+    testWithVersionDeletes(false);
+
+    // run same test against HFiles (by forcing a flush).
+    testWithVersionDeletes(true);
+  }
+
+  private void testWithVersionDeletes(boolean flushTables) throws IOException {
+    byte [] TABLE = Bytes.toBytes("testWithVersionDeletes_" +
+                                   (flushTables ? "flush" : "noflush")); 
+    byte [] FAMILY = Bytes.toBytes("event_log");
+    byte [][] FAMILIES = new byte[][] { FAMILY };
+
+    // create table; set versions to max...
+    HTable ht = TEST_UTIL.createTable(TABLE, FAMILIES, Integer.MAX_VALUE);
+
+    // For row:0, col:0: insert versions 1 through 5.
+    putNVersions(ht, FAMILY, 0, 0, 1, 5);
+
+    // delete version 4.
+    deleteOneVersion(ht, FAMILY, 0, 0, 4);
+
+    if (flushTables) {
+      flush();
+    }
+
+    // request a bunch of versions including the deleted version. We should
+    // only get back entries for the versions that exist.
+    KeyValue kvs[] = getNVersions(ht, FAMILY, 0, 0, Arrays.asList(2L, 3L, 4L, 5L));
+    assertEquals(3, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 0, 0, 5);
+    checkOneCell(kvs[1], FAMILY, 0, 0, 3);
+    checkOneCell(kvs[2], FAMILY, 0, 0, 2);
+  }
+
+  private void verifyInsertedValues(HTable ht, byte[] cf) throws IOException {
+    for (int rowIdx = 0; rowIdx < 5; rowIdx++) {
+      for (int colIdx = 0; colIdx < 5; colIdx++) {
+        // ask for versions that exist.
+        KeyValue[] kvs = getNVersions(ht, cf, rowIdx, colIdx,
+                                      Arrays.asList(5L, 300L, 6L, 80L));
+        assertEquals(4, kvs.length);
+        checkOneCell(kvs[0], cf, rowIdx, colIdx, 300);
+        checkOneCell(kvs[1], cf, rowIdx, colIdx, 80);
+        checkOneCell(kvs[2], cf, rowIdx, colIdx, 6);
+        checkOneCell(kvs[3], cf, rowIdx, colIdx, 5);
+
+        // ask for versions that do not exist.
+        kvs = getNVersions(ht, cf, rowIdx, colIdx,
+                           Arrays.asList(101L, 102L));
+        assertEquals(0, kvs.length);
+
+        // ask for some versions that exist and some that do not.
+        kvs = getNVersions(ht, cf, rowIdx, colIdx,
+                           Arrays.asList(1L, 300L, 105L, 70L, 115L));
+        assertEquals(3, kvs.length);
+        checkOneCell(kvs[0], cf, rowIdx, colIdx, 300);
+        checkOneCell(kvs[1], cf, rowIdx, colIdx, 70);
+        checkOneCell(kvs[2], cf, rowIdx, colIdx, 1);
+      }
+    }
+  }
+
+  // Flush tables. Since flushing is asynchronous, sleep for a bit.
+  private void flush() throws IOException {
+    TEST_UTIL.flush();
+    try {
+      Thread.sleep(3000);
+    } catch (InterruptedException i) {
+      // ignore
+    }
+  }
+
+  /**
+   * Assert that the passed in KeyValue has expected contents for the
+   * specified row, column & timestamp.
+   */
+  private void checkOneCell(KeyValue kv, byte[] cf,
+                             int rowIdx, int colIdx, long ts) {
+
+    String ctx = "rowIdx=" + rowIdx + "; colIdx=" + colIdx + "; ts=" + ts;
+
+    assertEquals("Row mismatch which checking: " + ctx,
+                 "row:"+ rowIdx, Bytes.toString(kv.getRow()));
+
+    assertEquals("ColumnFamily mismatch while checking: " + ctx,
+                 Bytes.toString(cf), Bytes.toString(kv.getFamily()));
+
+    assertEquals("Column qualifier mismatch while checking: " + ctx,
+                 "column:" + colIdx,
+                  Bytes.toString(kv.getQualifier()));
+
+    assertEquals("Timestamp mismatch while checking: " + ctx,
+                 ts, kv.getTimestamp());
+
+    assertEquals("Value mismatch while checking: " + ctx,
+                 "value-version-" + ts, Bytes.toString(kv.getValue()));
+  }
+
+  /**
+   * Uses the TimestampsFilter on a Get to request a specified list of
+   * versions for the row/column specified by rowIdx & colIdx.
+   *
+   */
+  private  KeyValue[] getNVersions(HTable ht, byte[] cf, int rowIdx,
+                                   int colIdx, List<Long> versions)
+    throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Filter filter = new TimestampsFilter(versions);
+    Get get = new Get(row);
+    get.addColumn(cf, column);
+    get.setFilter(filter);
+    get.setMaxVersions();
+    Result result = ht.get(get);
+
+    return result.raw();
+  }
+
+  /**
+   * Uses the TimestampsFilter on a Scan to request a specified list of
+   * versions for the rows from startRowIdx to endRowIdx (both inclusive).
+   */
+  private Result[] scanNVersions(HTable ht, byte[] cf, int startRowIdx,
+                                 int endRowIdx, List<Long> versions)
+    throws IOException {
+    byte startRow[] = Bytes.toBytes("row:" + startRowIdx);
+    byte endRow[] = Bytes.toBytes("row:" + endRowIdx + 1); // exclusive
+    Filter filter = new TimestampsFilter(versions);
+    Scan scan = new Scan(startRow, endRow);
+    scan.setFilter(filter);
+    scan.setMaxVersions();
+    ResultScanner scanner = ht.getScanner(scan);
+    return scanner.next(endRowIdx - startRowIdx + 1);
+  }
+
+  /**
+   * Insert in specific row/column versions with timestamps
+   * versionStart..versionEnd.
+   */
+  private void putNVersions(HTable ht, byte[] cf, int rowIdx, int colIdx,
+                            long versionStart, long versionEnd)
+      throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Put put = new Put(row);
+
+    for (long idx = versionStart; idx <= versionEnd; idx++) {
+      put.add(cf, column, idx, Bytes.toBytes("value-version-" + idx));
+    }
+
+    ht.put(put);
+  }
+
+  /**
+   * For row/column specified by rowIdx/colIdx, delete the cell
+   * corresponding to the specified version.
+   */
+  private void deleteOneVersion(HTable ht, byte[] cf, int rowIdx,
+                                int colIdx, long version)
+    throws IOException {
+    byte row[] = Bytes.toBytes("row:" + rowIdx);
+    byte column[] = Bytes.toBytes("column:" + colIdx);
+    Delete del = new Delete(row);
+    del.deleteColumn(cf, column, version);
+    ht.delete(del);
+  }
+}
+
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java b/0.90/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
new file mode 100644
index 0000000..645b8ce
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java
@@ -0,0 +1,92 @@
+package org.apache.hadoop.hbase.client.replication;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.junit.Assert.fail;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Unit testing of ReplicationAdmin
+ */
+public class TestReplicationAdmin {
+
+  private static final Log LOG =
+      LogFactory.getLog(TestReplicationAdmin.class);
+  private final static HBaseTestingUtility TEST_UTIL =
+      new HBaseTestingUtility();
+
+  private final String ID_ONE = "1";
+  private final String KEY_ONE = "127.0.0.1:2181:/hbase";
+  private final String ID_SECOND = "2";
+  private final String KEY_SECOND = "127.0.0.1:2181:/hbase2";
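+  // Peer cluster keys take the form zkQuorum:zkClientPort:zkParentZnode.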
+
+  private static ReplicationSourceManager manager;
+  private static ReplicationAdmin admin;
+  private static AtomicBoolean replicating = new AtomicBoolean(true);
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniZKCluster();
+    Configuration conf = TEST_UTIL.getConfiguration();
+    conf.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    admin = new ReplicationAdmin(conf);
+    Path oldLogDir = new Path(TEST_UTIL.getTestDir(),
+        HConstants.HREGION_OLDLOGDIR_NAME);
+    Path logDir = new Path(TEST_UTIL.getTestDir(),
+        HConstants.HREGION_LOGDIR_NAME);
+    manager = new ReplicationSourceManager(admin.getReplicationZk(),
+        conf, null, FileSystem.get(conf), replicating, logDir, oldLogDir);
+  }
+
+  /**
+   * Simple testing of adding and removing peers, basically shows that
+   * all interactions with ZK work
+   * @throws Exception
+   */
+  @Test
+  public void testAddRemovePeer() throws Exception {
+    assertEquals(0, manager.getSources().size());
+    // Add a valid peer
+    admin.addPeer(ID_ONE, KEY_ONE);
+    // try adding the same peer again (should fail)
+    try {
+      admin.addPeer(ID_ONE, KEY_ONE);
+      fail();
+    } catch (IllegalArgumentException iae) {
+      // OK!
+    }
+    assertEquals(1, admin.getPeersCount());
+    // Try to remove a nonexistent peer
+    try {
+      admin.removePeer(ID_SECOND);
+      fail();
+    } catch (IllegalArgumentException iae) {
+      // OK!
+    }
+    assertEquals(1, admin.getPeersCount());
+    // Adding a second peer fails since multi-slave isn't supported
+    try {
+      admin.addPeer(ID_SECOND, KEY_SECOND);
+      fail();
+    } catch (IllegalStateException iae) {
+      // OK!
+    }
+    assertEquals(1, admin.getPeersCount());
+    // Remove the first peer we added
+    admin.removePeer(ID_ONE);
+    assertEquals(0, admin.getPeersCount());
+  }
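+
+  // A minimal usage sketch (assumed, not exercised by this test): client code
+  // would drive replication the same way the assertions above do, e.g.
+  //   ReplicationAdmin repAdmin = new ReplicationAdmin(conf);
+  //   repAdmin.addPeer("1", "zk-quorum-host:2181:/hbase");  // hypothetical cluster key
+  //   repAdmin.removePeer("1");
+  // The test only verifies the ZooKeeper bookkeeping behind these calls.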
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java b/0.90/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java
new file mode 100644
index 0000000..3ecd652
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java
@@ -0,0 +1,139 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.executor;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.IOException;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.executor.ExecutorService.Executor;
+import org.apache.hadoop.hbase.executor.ExecutorService.ExecutorType;
+import org.junit.Test;
+
+public class TestExecutorService {
+  private static final Log LOG = LogFactory.getLog(TestExecutorService.class);
+
+  @Test
+  public void testExecutorService() throws Exception {
+    int maxThreads = 5;
+    int maxTries = 10;
+    int sleepInterval = 10;
+
+    // Start an executor service pool with max 5 threads
+    ExecutorService executorService = new ExecutorService("unit_test");
+    executorService.startExecutorService(
+      ExecutorType.MASTER_SERVER_OPERATIONS, maxThreads);
+
+    Executor executor =
+      executorService.getExecutor(ExecutorType.MASTER_SERVER_OPERATIONS);
+    ThreadPoolExecutor pool = executor.threadPoolExecutor;
+
+    // Assert no threads yet
+    assertEquals(0, pool.getPoolSize());
+
+    AtomicBoolean lock = new AtomicBoolean(true);
+    AtomicInteger counter = new AtomicInteger(0);
+
+    // Submit maxThreads executors.
+    for (int i = 0; i < maxThreads; i++) {
+      executorService.submit(
+        new TestEventHandler(EventType.M_SERVER_SHUTDOWN, lock, counter));
+    }
+
+    // The TestEventHandler will increment counter when it starts.
+    int tries = 0;
+    while (counter.get() < maxThreads && tries < maxTries) {
+      LOG.info("Waiting for all event handlers to start...");
+      Thread.sleep(sleepInterval);
+      tries++;
+    }
+
+    // Assert that pool is at max threads.
+    assertEquals(maxThreads, counter.get());
+    assertEquals(maxThreads, pool.getPoolSize());
+
+    // Now interrupt the running Executor
+    synchronized (lock) {
+      lock.set(false);
+      lock.notifyAll();
+    }
+
+    // Each handler increments the counter again on its way out; verify that happened.
+    tries = 0;
+    while (counter.get() < (maxThreads * 2) && tries < maxTries) {
+      LOG.info("Waiting for all event handlers to finish...");
+      Thread.sleep(sleepInterval);
+      tries++;
+    }
+
+    assertEquals(maxThreads * 2, counter.get());
+    assertEquals(maxThreads, pool.getPoolSize());
+
+    // Add more than the number of threads items.
+    // Make sure we don't get RejectedExecutionException.
+    for (int i = 0; i < (2 * maxThreads); i++) {
+      executorService.submit(
+        new TestEventHandler(EventType.M_SERVER_SHUTDOWN, lock, counter));
+    }
+    // Now interrupt the running Executor
+    synchronized (lock) {
+      lock.set(false);
+      lock.notifyAll();
+    }
+
+    // Make sure threads are still around even after their time-to-live expires.
+    Thread.sleep(executor.keepAliveTimeInMillis * 2);
+    assertEquals(maxThreads, pool.getPoolSize());
+  }
+
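+  /**
+   * Handler used above: it bumps the shared counter once when it starts
+   * running and once more on the way out, and in between blocks on the
+   * shared AtomicBoolean until the test flips it to false and notifies.
+   */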
+  public static class TestEventHandler extends EventHandler {
+    private AtomicBoolean lock;
+    private AtomicInteger counter;
+
+    public TestEventHandler(EventType eventType, AtomicBoolean lock,
+        AtomicInteger counter) {
+      super(null, eventType);
+      this.lock = lock;
+      this.counter = counter;
+    }
+
+    @Override
+    public void process() throws IOException {
+      int num = counter.incrementAndGet();
+      LOG.info("Running process #" + num + ", threadName=" +
+        Thread.currentThread().getName());
+      synchronized (lock) {
+        while (lock.get()) {
+          try {
+            lock.wait();
+          } catch (InterruptedException e) {
+            // do nothing
+          }
+        }
+      }
+      counter.incrementAndGet();
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java
new file mode 100644
index 0000000..2c8af2a
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java
@@ -0,0 +1,94 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Test for the ColumnPaginationFilter, used mainly to test the successful serialization of the filter.
+ * More test functionality can be found within {@link org.apache.hadoop.hbase.filter.TestFilter#testColumnPaginationFilter()}
+ */
+public class TestColumnPaginationFilter extends TestCase
+{
+    private static final byte[] ROW = Bytes.toBytes("row_1_test");
+    private static final byte[] COLUMN_FAMILY = Bytes.toBytes("test");
+    private static final byte[] VAL_1 = Bytes.toBytes("a");
+    private static final byte [] COLUMN_QUALIFIER = Bytes.toBytes("foo");
+
+    private Filter columnPaginationFilter;
+
+    @Override
+    protected void setUp() throws Exception {
+        super.setUp();
+        columnPaginationFilter = getColumnPaginationFilter();
+
+    }
+    private Filter getColumnPaginationFilter() {
+        // limit of one column per row, starting at offset 0
+        return new ColumnPaginationFilter(1, 0);
+    }
+
+    private Filter serializationTest(Filter filter) throws Exception {
+        ByteArrayOutputStream stream = new ByteArrayOutputStream();
+        DataOutputStream out = new DataOutputStream(stream);
+        filter.write(out);
+        out.close();
+        byte[] buffer = stream.toByteArray();
+
+        DataInputStream in =
+            new DataInputStream(new ByteArrayInputStream(buffer));
+        Filter newFilter = new ColumnPaginationFilter();
+        newFilter.readFields(in);
+
+        return newFilter;
+    }
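+
+    // A minimal sketch of how this filter is typically used by a client
+    // (assumed usage, not exercised here):
+    //   Scan scan = new Scan();
+    //   scan.setFilter(new ColumnPaginationFilter(10, 20)); // return at most 10 columns
+    //                                                       // per row, skipping the first 20
+    //   ResultScanner rs = table.getScanner(scan);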
+
+
+    /**
+     * The more specific functionality tests are contained within the TestFilter class.
+     * This check only verifies that the deserialized filter still behaves correctly.
+     *
+     * @param filter
+     * @throws Exception
+     */
+    private void basicFilterTests(ColumnPaginationFilter filter) throws Exception
+    {
+      KeyValue kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_1);
+      assertTrue("basicFilter1", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    }
+
+    /**
+     * Tests serialization
+     * @throws Exception
+     */
+    public void testSerialization() throws Exception {
+      Filter newFilter = serializationTest(columnPaginationFilter);
+      basicFilterTests((ColumnPaginationFilter)newFilter);
+    }
+
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java
new file mode 100644
index 0000000..78181c5
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java
@@ -0,0 +1,104 @@
+package org.apache.hadoop.hbase.filter;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValueTestUtil;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+public class TestColumnPrefixFilter {
+
+  private final static HBaseTestingUtility TEST_UTIL = new
+      HBaseTestingUtility();
+
+  @Test
+  public void testColumnPrefixFilter() throws IOException {
+    String family = "Family";
+    HTableDescriptor htd = new HTableDescriptor("TestColumnPrefixFilter");
+    htd.addFamily(new HColumnDescriptor(family));
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    HRegion region = HRegion.createHRegion(info, HBaseTestingUtility.
+        getTestDir(), TEST_UTIL.getConfiguration());
+
+    List<String> rows = generateRandomWords(100, "row");
+    List<String> columns = generateRandomWords(10000, "column");
+    long maxTimestamp = 2;
+
+    List<KeyValue> kvList = new ArrayList<KeyValue>();
+
+    Map<String, List<KeyValue>> prefixMap = new HashMap<String,
+        List<KeyValue>>();
+
+    prefixMap.put("p", new ArrayList<KeyValue>());
+    prefixMap.put("s", new ArrayList<KeyValue>());
+
+    String valueString = "ValueString";
+
+    for (String row: rows) {
+      Put p = new Put(Bytes.toBytes(row));
+      for (String column: columns) {
+        for (long timestamp = 1; timestamp <= maxTimestamp; timestamp++) {
+          KeyValue kv = KeyValueTestUtil.create(row, family, column, timestamp,
+              valueString);
+          p.add(kv);
+          kvList.add(kv);
+          for (String s: prefixMap.keySet()) {
+            if (column.startsWith(s)) {
+              prefixMap.get(s).add(kv);
+            }
+          }
+        }
+      }
+      region.put(p);
+    }
+
+    ColumnPrefixFilter filter;
+    Scan scan = new Scan();
+    scan.setMaxVersions();
+    for (String s: prefixMap.keySet()) {
+      filter = new ColumnPrefixFilter(Bytes.toBytes(s));
+      scan.setFilter(filter);
+      InternalScanner scanner = region.getScanner(scan);
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      // drain the scanner; matching KeyValues accumulate in results
+      while (scanner.next(results));
+      assertEquals(prefixMap.get(s).size(), results.size());
+    }
+  }
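+
+  // For reference, a typical client-side use of the filter mirrors the scan
+  // above (a sketch under assumed table/connection setup):
+  //   Scan scan = new Scan();
+  //   scan.setFilter(new ColumnPrefixFilter(Bytes.toBytes("p")));
+  //   ResultScanner rs = table.getScanner(scan);  // only columns starting with "p"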
+
+  List<String> generateRandomWords(int numberOfWords, String suffix) {
+    Set<String> wordSet = new HashSet<String>();
+    for (int i = 0; i < numberOfWords; i++) {
+      int lengthOfWords = (int) (Math.random()*2) + 1;
+      char[] wordChar = new char[lengthOfWords];
+      for (int j = 0; j < wordChar.length; j++) {
+        wordChar[j] = (char) (Math.random() * 26 + 97);
+      }
+      String word;
+      if (suffix == null) {
+        word = new String(wordChar);
+      } else {
+        word = new String(wordChar) + suffix;
+      }
+      wordSet.add(word);
+    }
+    List<String> wordList = new ArrayList<String>(wordSet);
+    return wordList;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java
new file mode 100644
index 0000000..04705c3
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java
@@ -0,0 +1,245 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.filter.Filter.ReturnCode;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+public class TestDependentColumnFilter extends TestCase {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  private static final byte[][] ROWS = {
+    Bytes.toBytes("test1"), Bytes.toBytes("test2")
+  };
+  private static final byte[][] FAMILIES = {
+    Bytes.toBytes("familyOne"), Bytes.toBytes("familyTwo")
+  };
+  private static final long STAMP_BASE = System.currentTimeMillis();
+  private static final long[] STAMPS = {
+    STAMP_BASE - 100, STAMP_BASE - 200, STAMP_BASE - 300
+  };
+  private static final byte[] QUALIFIER = Bytes.toBytes("qualifier");
+  private static final byte[][] BAD_VALS = {
+    Bytes.toBytes("bad1"), Bytes.toBytes("bad2"), Bytes.toBytes("bad3")
+  };
+  private static final byte[] MATCH_VAL = Bytes.toBytes("match");
+  private HBaseTestingUtility testUtil;
+
+  List<KeyValue> testVals;
+  private HRegion region;
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+
+    testUtil = new HBaseTestingUtility();
+
+    testVals = makeTestVals();
+
+    HTableDescriptor htd = new HTableDescriptor(getName());
+    htd.addFamily(new HColumnDescriptor(FAMILIES[0]));
+    htd.addFamily(new HColumnDescriptor(FAMILIES[1]));
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    this.region = HRegion.createHRegion(info, testUtil.getTestDir(), testUtil.getConfiguration());
+    addData();
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    super.tearDown();
+    this.region.close();
+  }
+
+  private void addData() throws IOException {
+    Put put = new Put(ROWS[0]);
+    // add an entry for each stamp; STAMPS[2] carries the matching "good" value
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[0], BAD_VALS[0]);
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[1], BAD_VALS[1]);
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[2], MATCH_VAL);
+    // add entries for stamps 0 and 2 in the second family.
+    // without a value check both will be "accepted";
+    // with a value check only stamp 2 is accepted (the corresponding
+    // dependent-column entry has the matching value)
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[0], BAD_VALS[0]);
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[2], BAD_VALS[2]);
+
+    this.region.put(put);
+
+    put = new Put(ROWS[1]);
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[0], BAD_VALS[0]);
+    // there is no corresponding timestamp for this so it should never pass
+    put.add(FAMILIES[0], QUALIFIER, STAMPS[2], MATCH_VAL);
+    // if we reverse the qualifiers this one should pass
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[0], MATCH_VAL);
+    // should pass
+    put.add(FAMILIES[1], QUALIFIER, STAMPS[1], BAD_VALS[2]);
+
+    this.region.put(put);
+  }
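+
+  // How the filter is expected to read this data (summary of the comments above):
+  // DependentColumnFilter(FAMILIES[0], QUALIFIER) keeps a cell only if the
+  // dependent column FAMILIES[0]:QUALIFIER has an entry with the same timestamp;
+  // adding a CompareOp/comparator additionally requires that entry's value to match,
+  // and the dropDependentColumn flag removes the dependent column cells themselves.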
+
+  private List<KeyValue> makeTestVals() {
+    List<KeyValue> testVals = new ArrayList<KeyValue>();
+    testVals.add(new KeyValue(ROWS[0], FAMILIES[0], QUALIFIER, STAMPS[0], BAD_VALS[0]));
+    testVals.add(new KeyValue(ROWS[0], FAMILIES[0], QUALIFIER, STAMPS[1], BAD_VALS[1]));
+    testVals.add(new KeyValue(ROWS[0], FAMILIES[1], QUALIFIER, STAMPS[1], BAD_VALS[2]));
+    testVals.add(new KeyValue(ROWS[0], FAMILIES[1], QUALIFIER, STAMPS[0], MATCH_VAL));
+    testVals.add(new KeyValue(ROWS[0], FAMILIES[1], QUALIFIER, STAMPS[2], BAD_VALS[2]));
+
+    return testVals;
+  }
+
+  /**
+   * This shouldn't be confused with TestFilter#verifyScan:
+   * here expectedCells is not the per-row total but the total for the whole scan.
+   *
+   * @param s
+   * @param expectedRows
+   * @param expectedCells
+   * @throws IOException
+   */
+  private void verifyScan(Scan s, long expectedRows, long expectedCells)
+  throws IOException {
+    InternalScanner scanner = this.region.getScanner(s);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    int i = 0;
+    int cells = 0;
+    for (boolean done = true; done; i++) {
+      done = scanner.next(results);
+      Arrays.sort(results.toArray(new KeyValue[results.size()]),
+          KeyValue.COMPARATOR);
+      LOG.info("counter=" + i + ", " + results);
+      if (results.isEmpty()) break;
+      cells += results.size();
+      assertTrue("Scanned too many rows! Only expected " + expectedRows +
+          " total but already scanned " + (i+1), expectedRows > i);
+      assertTrue("Expected " + expectedCells + " cells total but " +
+          "already scanned " + cells, expectedCells >= cells);
+      results.clear();
+    }
+    assertEquals("Expected " + expectedRows + " rows but scanned " + i +
+        " rows", expectedRows, i);
+    assertEquals("Expected " + expectedCells + " cells but scanned " + cells +
+            " cells", expectedCells, cells);
+  }
+
+  /**
+   * Test scans using a DependentColumnFilter
+   */
+  public void testScans() throws Exception {
+    Filter filter = new DependentColumnFilter(FAMILIES[0], QUALIFIER);
+
+    Scan scan = new Scan();
+    scan.setFilter(filter);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+
+    verifyScan(scan, 2, 8);
+
+    // drop the filtering cells
+    filter = new DependentColumnFilter(FAMILIES[0], QUALIFIER, true);
+    scan = new Scan();
+    scan.setFilter(filter);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+
+    verifyScan(scan, 2, 3);
+
+    // include a comparator operation
+    filter = new DependentColumnFilter(FAMILIES[0], QUALIFIER, false,
+        CompareOp.EQUAL, new BinaryComparator(MATCH_VAL));
+    scan = new Scan();
+    scan.setFilter(filter);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+
+    /*
+     * expecting to get the following 3 cells
+     * row 0
+     *   put.add(FAMILIES[0], QUALIFIER, STAMPS[2], MATCH_VAL);
+     *   put.add(FAMILIES[1], QUALIFIER, STAMPS[2], BAD_VALS[2]);
+     * row 1
+     *   put.add(FAMILIES[0], QUALIFIER, STAMPS[2], MATCH_VAL);
+     */
+    verifyScan(scan, 2, 3);
+
+    // include a comparator operation and drop the dependent column
+    filter = new DependentColumnFilter(FAMILIES[0], QUALIFIER, true,
+        CompareOp.EQUAL, new BinaryComparator(MATCH_VAL));
+    scan = new Scan();
+    scan.setFilter(filter);
+    scan.setMaxVersions(Integer.MAX_VALUE);
+
+    /*
+     * expecting to get the following 1 cell
+     * row 0
+     *   put.add(FAMILIES[1], QUALIFIER, STAMPS[2], BAD_VALS[2]);
+     */
+    verifyScan(scan, 1, 1);
+
+  }
+
+  /**
+   * Test that the filter correctly drops rows without a corresponding timestamp
+   *
+   * @throws Exception
+   */
+  public void testFilterDropping() throws Exception {
+    Filter filter = new DependentColumnFilter(FAMILIES[0], QUALIFIER);
+    List<KeyValue> accepted = new ArrayList<KeyValue>();
+    for(KeyValue val : testVals) {
+      if(filter.filterKeyValue(val) == ReturnCode.INCLUDE) {
+        accepted.add(val);
+      }
+    }
+    assertEquals("check all values accepted from filterKeyValue", 5, accepted.size());
+
+    filter.filterRow(accepted);
+    assertEquals("check filterRow(List<KeyValue>) dropped cell without corresponding column entry", 4, accepted.size());
+
+    // now do it again with dependent column dropping on
+    filter = new DependentColumnFilter(FAMILIES[1], QUALIFIER, true);
+    accepted.clear();
+    for(KeyValue val : testVals) {
+      if(filter.filterKeyValue(val) == ReturnCode.INCLUDE) {
+        accepted.add(val);
+      }
+    }
+    assertEquals("check the filtering column cells got dropped", 2, accepted.size());
+
+    filter.filterRow(accepted);
+    assertEquals("check cell retention", 2, accepted.size());
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java
new file mode 100644
index 0000000..bfa3c72
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java
@@ -0,0 +1,1513 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import junit.framework.Assert;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.filter.FilterList.Operator;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test filters at the HRegion doorstep.
+ */
+public class TestFilter extends HBaseTestCase {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  private HRegion region;
+
+  //
+  // Rows, Qualifiers, and Values are in two groups, One and Two.
+  //
+
+  private static final byte [][] ROWS_ONE = {
+      Bytes.toBytes("testRowOne-0"), Bytes.toBytes("testRowOne-1"),
+      Bytes.toBytes("testRowOne-2"), Bytes.toBytes("testRowOne-3")
+  };
+
+  private static final byte [][] ROWS_TWO = {
+      Bytes.toBytes("testRowTwo-0"), Bytes.toBytes("testRowTwo-1"),
+      Bytes.toBytes("testRowTwo-2"), Bytes.toBytes("testRowTwo-3")
+  };
+
+  private static final byte [][] FAMILIES = {
+    Bytes.toBytes("testFamilyOne"), Bytes.toBytes("testFamilyTwo")
+  };
+
+  private static final byte [][] QUALIFIERS_ONE = {
+    Bytes.toBytes("testQualifierOne-0"), Bytes.toBytes("testQualifierOne-1"),
+    Bytes.toBytes("testQualifierOne-2"), Bytes.toBytes("testQualifierOne-3")
+  };
+
+  private static final byte [][] QUALIFIERS_TWO = {
+    Bytes.toBytes("testQualifierTwo-0"), Bytes.toBytes("testQualifierTwo-1"),
+    Bytes.toBytes("testQualifierTwo-2"), Bytes.toBytes("testQualifierTwo-3")
+  };
+
+  private static final byte [][] VALUES = {
+    Bytes.toBytes("testValueOne"), Bytes.toBytes("testValueTwo")
+  };
+
+  private long numRows = ROWS_ONE.length + ROWS_TWO.length;
+  private long colsPerRow = FAMILIES.length * QUALIFIERS_ONE.length;
+
+
+  protected void setUp() throws Exception {
+    super.setUp();
+    HTableDescriptor htd = new HTableDescriptor(getName());
+    htd.addFamily(new HColumnDescriptor(FAMILIES[0]));
+    htd.addFamily(new HColumnDescriptor(FAMILIES[1]));
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    this.region = HRegion.createHRegion(info, this.testDir, this.conf);
+
+    // Insert first half
+    for(byte [] ROW : ROWS_ONE) {
+      Put p = new Put(ROW);
+      for(byte [] QUALIFIER : QUALIFIERS_ONE) {
+        p.add(FAMILIES[0], QUALIFIER, VALUES[0]);
+      }
+      this.region.put(p);
+    }
+    for(byte [] ROW : ROWS_TWO) {
+      Put p = new Put(ROW);
+      for(byte [] QUALIFIER : QUALIFIERS_TWO) {
+        p.add(FAMILIES[1], QUALIFIER, VALUES[1]);
+      }
+      this.region.put(p);
+    }
+
+    // Flush
+    this.region.flushcache();
+
+    // Insert second half (reverse families)
+    for(byte [] ROW : ROWS_ONE) {
+      Put p = new Put(ROW);
+      for(byte [] QUALIFIER : QUALIFIERS_ONE) {
+        p.add(FAMILIES[1], QUALIFIER, VALUES[0]);
+      }
+      this.region.put(p);
+    }
+    for(byte [] ROW : ROWS_TWO) {
+      Put p = new Put(ROW);
+      for(byte [] QUALIFIER : QUALIFIERS_TWO) {
+        p.add(FAMILIES[0], QUALIFIER, VALUES[1]);
+      }
+      this.region.put(p);
+    }
+
+    // Delete the second qualifier from all rows and families
+    for(byte [] ROW : ROWS_ONE) {
+      Delete d = new Delete(ROW);
+      d.deleteColumns(FAMILIES[0], QUALIFIERS_ONE[1]);
+      d.deleteColumns(FAMILIES[1], QUALIFIERS_ONE[1]);
+      this.region.delete(d, null, false);
+    }
+    for(byte [] ROW : ROWS_TWO) {
+      Delete d = new Delete(ROW);
+      d.deleteColumns(FAMILIES[0], QUALIFIERS_TWO[1]);
+      d.deleteColumns(FAMILIES[1], QUALIFIERS_TWO[1]);
+      this.region.delete(d, null, false);
+    }
+    colsPerRow -= 2;
+
+    // Delete the second rows from both groups, one column at a time
+    for(byte [] QUALIFIER : QUALIFIERS_ONE) {
+      Delete d = new Delete(ROWS_ONE[1]);
+      d.deleteColumns(FAMILIES[0], QUALIFIER);
+      d.deleteColumns(FAMILIES[1], QUALIFIER);
+      this.region.delete(d, null, false);
+    }
+    for(byte [] QUALIFIER : QUALIFIERS_TWO) {
+      Delete d = new Delete(ROWS_TWO[1]);
+      d.deleteColumns(FAMILIES[0], QUALIFIER);
+      d.deleteColumns(FAMILIES[1], QUALIFIER);
+      this.region.delete(d, null, false);
+    }
+    numRows -= 2;
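+    // Net result: 6 rows remain (testRowOne-0,2,3 and testRowTwo-0,2,3),
+    // each holding 6 columns (qualifiers 0, 2 and 3 in both families).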
+  }
+
+  protected void tearDown() throws Exception {
+    this.region.close();
+    super.tearDown();
+  }
+
+  public void testNoFilter() throws Exception {
+    // No filter
+    long expectedRows = this.numRows;
+    long expectedKeys = this.colsPerRow;
+
+    // Both families
+    Scan s = new Scan();
+    verifyScan(s, expectedRows, expectedKeys);
+
+    // One family
+    s = new Scan();
+    s.addFamily(FAMILIES[0]);
+    verifyScan(s, expectedRows, expectedKeys/2);
+  }
+
+  public void testPrefixFilter() throws Exception {
+    // Grab rows from group one (half of total)
+    long expectedRows = this.numRows / 2;
+    long expectedKeys = this.colsPerRow;
+    Scan s = new Scan();
+    s.setFilter(new PrefixFilter(Bytes.toBytes("testRowOne")));
+    verifyScan(s, expectedRows, expectedKeys);
+  }
+
+  public void testPageFilter() throws Exception {
+
+    // KVs in first 6 rows
+    KeyValue [] expectedKVs = {
+      // testRowOne-0
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowOne-2
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowOne-3
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowTwo-0
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      // testRowTwo-2
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      // testRowTwo-3
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1])
+    };
+
+    // Grab all 6 rows
+    long expectedRows = 6;
+    long expectedKeys = this.colsPerRow;
+    Scan s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, expectedKVs);
+
+    // Grab first 4 rows (6 cols per row)
+    expectedRows = 4;
+    expectedKeys = this.colsPerRow;
+    s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, Arrays.copyOf(expectedKVs, 24));
+
+    // Grab first 2 rows
+    expectedRows = 2;
+    expectedKeys = this.colsPerRow;
+    s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, Arrays.copyOf(expectedKVs, 12));
+
+    // Grab first row
+    expectedRows = 1;
+    expectedKeys = this.colsPerRow;
+    s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, Arrays.copyOf(expectedKVs, 6));
+
+  }
+
+  /**
+   * Tests that the {@link WhileMatchFilter} works in combination with a
+   * {@link Filter} that uses the
+   * {@link Filter#filterRow()} method.
+   *
+   * See HBASE-2258.
+   *
+   * @throws Exception
+   */
+  public void testWhileMatchFilterWithFilterRow() throws Exception {
+    final int pageSize = 4;
+
+    Scan s = new Scan();
+    WhileMatchFilter filter = new WhileMatchFilter(new PageFilter(pageSize));
+    s.setFilter(filter);
+
+    InternalScanner scanner = this.region.getScanner(s);
+    int scannerCounter = 0;
+    while (true) {
+      boolean isMoreResults = scanner.next(new ArrayList<KeyValue>());
+      scannerCounter++;
+
+      if (scannerCounter >= pageSize) {
+        Assert.assertTrue("The WhileMatchFilter should now filter all remaining", filter.filterAllRemaining());
+      }
+      if (!isMoreResults) {
+        break;
+      }
+    }
+    Assert.assertEquals("The page filter returned more rows than expected", pageSize, scannerCounter);
+  }
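+
+  // WhileMatchFilter in a nutshell (behaviour relied on above): it wraps another
+  // filter and, as soon as the wrapped filter rejects anything, answers true from
+  // filterAllRemaining() so the scan stops early instead of merely skipping rows.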
+
+  /**
+   * Tests that the {@link WhileMatchFilter} works in combination with a
+   * {@link Filter} that uses the
+   * {@link Filter#filterRowKey(byte[], int, int)} method.
+   *
+   * See HBASE-2258.
+   *
+   * @throws Exception
+   */
+  public void testWhileMatchFilterWithFilterRowKey() throws Exception {
+    Scan s = new Scan();
+    String prefix = "testRowOne";
+    WhileMatchFilter filter = new WhileMatchFilter(new PrefixFilter(Bytes.toBytes(prefix)));
+    s.setFilter(filter);
+
+    InternalScanner scanner = this.region.getScanner(s);
+    while (true) {
+      ArrayList<KeyValue> values = new ArrayList<KeyValue>();
+      boolean isMoreResults = scanner.next(values);
+      if (!isMoreResults || !Bytes.toString(values.get(0).getRow()).startsWith(prefix)) {
+        Assert.assertTrue("The WhileMatchFilter should now filter all remaining", filter.filterAllRemaining());
+      }
+      if (!isMoreResults) {
+        break;
+      }
+    }
+  }
+
+  /**
+   * Tests that the {@link WhileMatchFilter} works in combination with a
+   * {@link Filter} that uses the
+   * {@link Filter#filterKeyValue(org.apache.hadoop.hbase.KeyValue)} method.
+   *
+   * See HBASE-2258.
+   *
+   * @throws Exception
+   */
+  public void testWhileMatchFilterWithFilterKeyValue() throws Exception {
+    Scan s = new Scan();
+    WhileMatchFilter filter = new WhileMatchFilter(
+        new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[0], CompareOp.EQUAL, Bytes.toBytes("foo"))
+    );
+    s.setFilter(filter);
+
+    InternalScanner scanner = this.region.getScanner(s);
+    while (true) {
+      ArrayList<KeyValue> values = new ArrayList<KeyValue>();
+      boolean isMoreResults = scanner.next(values);
+      Assert.assertTrue("The WhileMatchFilter should now filter all remaining", filter.filterAllRemaining());
+      if (!isMoreResults) {
+        break;
+      }
+    }
+  }
+
+  public void testInclusiveStopFilter() throws IOException {
+
+    // Grab rows from group one
+
+    // If we just use start/stop row, we get total/2 - 1 rows
+    long expectedRows = (this.numRows / 2) - 1;
+    long expectedKeys = this.colsPerRow;
+    Scan s = new Scan(Bytes.toBytes("testRowOne-0"),
+        Bytes.toBytes("testRowOne-3"));
+    verifyScan(s, expectedRows, expectedKeys);
+
+    // Now use start row with inclusive stop filter
+    expectedRows = this.numRows / 2;
+    s = new Scan(Bytes.toBytes("testRowOne-0"));
+    s.setFilter(new InclusiveStopFilter(Bytes.toBytes("testRowOne-3")));
+    verifyScan(s, expectedRows, expectedKeys);
+
+    // Grab rows from group two
+
+    // If we just use start/stop row, we get total/2 - 1 rows
+    expectedRows = (this.numRows / 2) - 1;
+    expectedKeys = this.colsPerRow;
+    s = new Scan(Bytes.toBytes("testRowTwo-0"),
+        Bytes.toBytes("testRowTwo-3"));
+    verifyScan(s, expectedRows, expectedKeys);
+
+    // Now use start row with inclusive stop filter
+    expectedRows = this.numRows / 2;
+    s = new Scan(Bytes.toBytes("testRowTwo-0"));
+    s.setFilter(new InclusiveStopFilter(Bytes.toBytes("testRowTwo-3")));
+    verifyScan(s, expectedRows, expectedKeys);
+
+  }
+
+  public void testQualifierFilter() throws IOException {
+
+    // Match two keys (one from each family) in half the rows
+    long expectedRows = this.numRows / 2;
+    long expectedKeys = 2;
+    Filter f = new QualifierFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    Scan s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys less than same qualifier
+    // Expect only two keys (one from each family) in half the rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = 2;
+    f = new QualifierFilter(CompareOp.LESS,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys less than or equal
+    // Expect four keys (two from each family) in half the rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = 4;
+    f = new QualifierFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys not equal
+    // Expect four keys (two from each family)
+    // Only look in first group of rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = 4;
+    f = new QualifierFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys greater or equal
+    // Expect four keys (two from each family)
+    // Only look in first group of rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = 4;
+    f = new QualifierFilter(CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys greater
+    // Expect two keys (one from each family)
+    // Only look in first group of rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = 2;
+    f = new QualifierFilter(CompareOp.GREATER,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys not equal to
+    // Look across rows and fully validate the keys and ordering
+    // Expect varied numbers of keys, 4 per row in group one, 6 per row in group two
+    f = new QualifierFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(QUALIFIERS_ONE[2]));
+    s = new Scan();
+    s.setFilter(f);
+
+    KeyValue [] kvs = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+
+
+    // Test across rows and groups with a regex
+    // Filter out "test*-2"
+    // Expect 4 keys per row across both groups
+    f = new QualifierFilter(CompareOp.NOT_EQUAL,
+        new RegexStringComparator("test.+-2"));
+    s = new Scan();
+    s.setFilter(f);
+
+    kvs = new KeyValue [] {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+
+  }
+
+    public void testFamilyFilter() throws IOException {
+
+      // Match family, only half of columns returned.
+      long expectedRows = this.numRows;
+      long expectedKeys = this.colsPerRow / 2;
+      Filter f = new FamilyFilter(CompareOp.EQUAL,
+          new BinaryComparator(Bytes.toBytes("testFamilyOne")));
+      Scan s = new Scan();
+      s.setFilter(f);
+      verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+      // Match keys less than given family, should return nothing
+      expectedRows = 0;
+      expectedKeys = 0;
+      f = new FamilyFilter(CompareOp.LESS,
+          new BinaryComparator(Bytes.toBytes("testFamily")));
+      s = new Scan();
+      s.setFilter(f);
+      verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+      // Match keys less than or equal, should return half of columns
+      expectedRows = this.numRows;
+      expectedKeys = this.colsPerRow / 2;
+      f = new FamilyFilter(CompareOp.LESS_OR_EQUAL,
+          new BinaryComparator(Bytes.toBytes("testFamilyOne")));
+      s = new Scan();
+      s.setFilter(f);
+      verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+      // Match keys from second family
+      // look only in second group of rows
+      expectedRows = this.numRows / 2;
+      expectedKeys = this.colsPerRow / 2;
+      f = new FamilyFilter(CompareOp.NOT_EQUAL,
+          new BinaryComparator(Bytes.toBytes("testFamilyOne")));
+      s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+      s.setFilter(f);
+      verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+      // Match all columns
+      // look only in second group of rows
+      expectedRows = this.numRows / 2;
+      expectedKeys = this.colsPerRow;
+      f = new FamilyFilter(CompareOp.GREATER_OR_EQUAL,
+          new BinaryComparator(Bytes.toBytes("testFamilyOne")));
+      s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+      s.setFilter(f);
+      verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+      // Match all columns in second family
+      // look only in second group of rows        
+      expectedRows = this.numRows / 2;
+      expectedKeys = this.colsPerRow / 2;
+      f = new FamilyFilter(CompareOp.GREATER,
+          new BinaryComparator(Bytes.toBytes("testFamilyOne")));
+      s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+      s.setFilter(f);
+      verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+      // Match keys not equal to given family
+      // Look across rows and fully validate the keys and ordering
+      f = new FamilyFilter(CompareOp.NOT_EQUAL,
+          new BinaryComparator(FAMILIES[1]));
+      s = new Scan();
+      s.setFilter(f);
+
+      KeyValue [] kvs = {
+          // testRowOne-0
+          new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+          new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+          new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+          // testRowOne-2
+          new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+          new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+          new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+          // testRowOne-3
+          new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+          new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+          new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+          // testRowTwo-0
+          new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+          new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+          new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+          // testRowTwo-2
+          new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+          new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+          new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+          // testRowTwo-3
+          new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+          new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+          new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      };
+      verifyScanFull(s, kvs);
+
+
+      // Test across rows and groups with a regex
+      // Filter out the first family ("test.*One")
+      // Expect 3 keys per row, all from the second family
+      f = new FamilyFilter(CompareOp.NOT_EQUAL,
+          new RegexStringComparator("test.*One"));
+      s = new Scan();
+      s.setFilter(f);
+
+      kvs = new KeyValue [] {
+          // testRowOne-0
+          new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+          new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+          new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+          // testRowOne-2
+          new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+          new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+          new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+          // testRowOne-3
+          new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+          new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+          new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+          // testRowTwo-0
+          new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+          new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+          new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+          // testRowTwo-2
+          new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+          new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+          new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+          // testRowTwo-3
+          new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+          new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+          new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      };
+      verifyScanFull(s, kvs);
+
+    }
+
+
+  public void testRowFilter() throws IOException {
+
+    // Match a single row, all keys
+    long expectedRows = 1;
+    long expectedKeys = this.colsPerRow;
+    Filter f = new RowFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    Scan s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match a two rows, one from each group, using regex
+    expectedRows = 2;
+    expectedKeys = this.colsPerRow;
+    f = new RowFilter(CompareOp.EQUAL,
+        new RegexStringComparator("testRow.+-2"));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match rows less than
+    // Expect all keys in one row
+    expectedRows = 1;
+    expectedKeys = this.colsPerRow;
+    f = new RowFilter(CompareOp.LESS,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match rows less than or equal
+    // Expect all keys in two rows
+    expectedRows = 2;
+    expectedKeys = this.colsPerRow;
+    f = new RowFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match rows not equal
+    // Expect all keys in all but one row
+    expectedRows = this.numRows - 1;
+    expectedKeys = this.colsPerRow;
+    f = new RowFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys greater or equal
+    // Expect all keys in all but one row
+    expectedRows = this.numRows - 1;
+    expectedKeys = this.colsPerRow;
+    f = new RowFilter(CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match keys greater
+    // Expect all keys in all but two rows
+    expectedRows = this.numRows - 2;
+    expectedKeys = this.colsPerRow;
+    f = new RowFilter(CompareOp.GREATER,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match rows not equal to testRowOne-2
+    // Look across rows and fully validate the keys and ordering
+    // Should see all keys in all rows but testRowOne-2
+    f = new RowFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+
+    KeyValue [] kvs = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+
+
+    // Test across rows and groups with a regex
+    // Filter out every row whose key does not match the regex ".+-2"
+    // Expect all keys in the two matching rows
+    f = new RowFilter(CompareOp.EQUAL,
+        new RegexStringComparator(".+-2"));
+    s = new Scan();
+    s.setFilter(f);
+
+    kvs = new KeyValue [] {
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1])
+    };
+    verifyScanFull(s, kvs);
+
+  }
+
+  public void testValueFilter() throws IOException {
+
+    // Match group one rows
+    long expectedRows = this.numRows / 2;
+    long expectedKeys = this.colsPerRow;
+    Filter f = new ValueFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    Scan s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match group two rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueTwo")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match all values using regex
+    expectedRows = this.numRows;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.EQUAL,
+        new RegexStringComparator("testValue((One)|(Two))"));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values less than
+    // Expect group one rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.LESS,
+        new BinaryComparator(Bytes.toBytes("testValueTwo")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values less than or equal
+    // Expect all rows
+    expectedRows = this.numRows;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueTwo")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values less than or equal
+    // Expect group one rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values not equal
+    // Expect half the rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values greater or equal
+    // Expect all rows
+    expectedRows = this.numRows;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values greater
+    // Expect half the rows
+    expectedRows = this.numRows / 2;
+    expectedKeys = this.colsPerRow;
+    f = new ValueFilter(CompareOp.GREATER,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values not equal to testValueOne
+    // Look across rows and fully validate the keys and ordering
+    // Should see all keys in all group two rows
+    f = new ValueFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+
+    KeyValue [] kvs = {
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+  }
+
+  public void testSkipFilter() throws IOException {
+
+    // Skip any row containing the qualifier "testQualifierOne-2"
+    // Should only get rows from the second group, and all their keys
+    Filter f = new SkipFilter(new QualifierFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2"))));
+    Scan s = new Scan();
+    s.setFilter(f);
+
+    KeyValue [] kvs = {
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+  }
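+  // Informal note on SkipFilter, which the test above relies on: SkipFilter wraps
+  // another filter and drops an entire row as soon as the wrapped filter rejects
+  // any single KeyValue in that row. A minimal sketch of the same pattern outside
+  // this test (the "someValue" literal and the scan are illustrative only):
+  //
+  //   Scan scan = new Scan();
+  //   scan.setFilter(new SkipFilter(new ValueFilter(CompareOp.NOT_EQUAL,
+  //       new BinaryComparator(Bytes.toBytes("someValue")))));
+  //   // rows containing even one cell equal to "someValue" are skipped entirely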
+
+  // TODO: This is important... need many more tests for ordering, etc
+  // There are limited tests elsewhere but we need HRegion level ones here
+  public void testFilterList() throws IOException {
+
+    // Test getting a single row, single key using Row, Qualifier, and Value
+    // regular expression and substring filters
+    // Use must pass all
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new RowFilter(CompareOp.EQUAL, new RegexStringComparator(".+-2")));
+    filters.add(new QualifierFilter(CompareOp.EQUAL, new RegexStringComparator(".+-2")));
+    filters.add(new ValueFilter(CompareOp.EQUAL, new SubstringComparator("One")));
+    Filter f = new FilterList(Operator.MUST_PASS_ALL, filters);
+    Scan s = new Scan();
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(f);
+    KeyValue [] kvs = {
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0])
+    };
+    verifyScanFull(s, kvs);
+
+    // Test getting everything with a MUST_PASS_ONE filter combining row, qualifier,
+    // and value regular expression and substring filters. Every row and key is
+    // expected: group one cells match the value substring "One", and group two rows
+    // match the row regex ".+Two.+"
+    filters.clear();
+    filters.add(new RowFilter(CompareOp.EQUAL, new RegexStringComparator(".+Two.+")));
+    filters.add(new QualifierFilter(CompareOp.EQUAL, new RegexStringComparator(".+-2")));
+    filters.add(new ValueFilter(CompareOp.EQUAL, new SubstringComparator("One")));
+    f = new FilterList(Operator.MUST_PASS_ONE, filters);
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, this.numRows, this.colsPerRow);
+
+
+  }
+
+  public void testFirstKeyOnlyFilter() throws IOException {
+    Scan s = new Scan();
+    s.setFilter(new FirstKeyOnlyFilter());
+    // Expected KVs, the first KV from each of the remaining 6 rows
+    KeyValue [] kvs = {
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1])
+    };
+    verifyScanFull(s, kvs);
+  }
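+  // FirstKeyOnlyFilter returns at most one KeyValue per row (the first one the
+  // scanner encounters), which is why the expected set above has exactly one KV
+  // for each of the six remaining rows. A common (illustrative, not part of this
+  // test) use is cheap row counting:
+  //
+  //   Scan countScan = new Scan();
+  //   countScan.setFilter(new FirstKeyOnlyFilter());  // one KV per row is enough to count rows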
+
+  public void testFilterListWithSingleColumnValueFilter() throws IOException {
+    // Test for HBASE-3191
+
+    // Scan using SingleColumnValueFilter
+    SingleColumnValueFilter f1 = new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[0],
+          CompareOp.EQUAL, VALUES[0]);
+    f1.setFilterIfMissing( true );
+    Scan s1 = new Scan();
+    s1.addFamily(FAMILIES[0]);
+    s1.setFilter(f1);
+    KeyValue [] kvs1 = {
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+    };
+    verifyScanNoEarlyOut(s1, 3, 3);
+    verifyScanFull(s1, kvs1);
+
+    // Scan using another SingleColumnValueFilter, expect disjoint result
+    SingleColumnValueFilter f2 = new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_TWO[0],
+        CompareOp.EQUAL, VALUES[1]);
+    f2.setFilterIfMissing( true );
+    Scan s2 = new Scan();
+    s2.addFamily(FAMILIES[0]);
+    s2.setFilter(f2);
+    KeyValue [] kvs2 = {
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanNoEarlyOut(s2, 3, 3);
+    verifyScanFull(s2, kvs2);
+
+    // Scan, ORing the two previous filters, expect unified result
+    FilterList f = new FilterList(Operator.MUST_PASS_ONE);
+    f.addFilter(f1);
+    f.addFilter(f2);
+    Scan s = new Scan();
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(f);
+    KeyValue [] kvs = {
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanNoEarlyOut(s, 6, 3);
+    verifyScanFull(s, kvs);
+  }
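+  // The three scans above show FilterList(Operator.MUST_PASS_ONE) acting as a
+  // logical OR: every row accepted by either f1 or f2 appears in the combined
+  // result, so kvs is the union of kvs1 and kvs2. A sketch of the same pattern
+  // in isolation (filterA, filterB, and someScan are placeholder names):
+  //
+  //   FilterList orList = new FilterList(Operator.MUST_PASS_ONE);
+  //   orList.addFilter(filterA);
+  //   orList.addFilter(filterB);
+  //   someScan.setFilter(orList);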
+
+  public void testSingleColumnValueFilter() throws IOException {
+
+    // From HBASE-1821
+    // Desired action is to combine two SCVF in a FilterList
+    // Want to return only rows that match both conditions
+
+    // Need to change one of the group one columns to use group two value
+    Put p = new Put(ROWS_ONE[2]);
+    p.add(FAMILIES[0], QUALIFIERS_ONE[2], VALUES[1]);
+    this.region.put(p);
+
+    // Now let's grab rows that have Q_ONE[0](VALUES[0]) and Q_ONE[2](VALUES[1])
+    // Group two rows don't have these qualifiers at all, so without filterIfMissing
+    // they would also pass; limit the scan to group one to keep them out
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[0],
+        CompareOp.EQUAL, VALUES[0]));
+    filters.add(new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[2],
+        CompareOp.EQUAL, VALUES[1]));
+    Filter f = new FilterList(Operator.MUST_PASS_ALL, filters);
+    Scan s = new Scan(ROWS_ONE[0], ROWS_TWO[0]);
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(f);
+    // Expect only one row, all qualifiers
+    KeyValue [] kvs = {
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[1]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0])
+    };
+    verifyScanNoEarlyOut(s, 1, 3);
+    verifyScanFull(s, kvs);
+
+    // In order to get expected behavior without limiting to group one
+    // need to wrap SCVFs in SkipFilters
+    filters = new ArrayList<Filter>();
+    filters.add(new SkipFilter(new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[0],
+        CompareOp.EQUAL, VALUES[0])));
+    filters.add(new SkipFilter(new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[2],
+        CompareOp.EQUAL, VALUES[1])));
+    f = new FilterList(Operator.MUST_PASS_ALL, filters);
+    s = new Scan(ROWS_ONE[0], ROWS_TWO[0]);
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(f);
+    // Expect same KVs
+    verifyScanNoEarlyOut(s, 1, 3);
+    verifyScanFull(s, kvs);
+
+    // More tests from HBASE-1821 (for Clint) exercising the filterIfMissing flag
+
+    byte [][] ROWS_THREE = {
+        Bytes.toBytes("rowThree-0"), Bytes.toBytes("rowThree-1"),
+        Bytes.toBytes("rowThree-2"), Bytes.toBytes("rowThree-3")
+    };
+
+    // Give rows 0 and 1 QUALIFIERS_ONE[0] (with VALUES[0] and VALUES[1] respectively)
+    // Give rows 2 and 3 QUALIFIERS_ONE[1] (with VALUES[0] and VALUES[1] respectively)
+
+    KeyValue [] srcKVs = new KeyValue [] {
+        new KeyValue(ROWS_THREE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_THREE[1], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[1]),
+        new KeyValue(ROWS_THREE[2], FAMILIES[0], QUALIFIERS_ONE[1], VALUES[0]),
+        new KeyValue(ROWS_THREE[3], FAMILIES[0], QUALIFIERS_ONE[1], VALUES[1])
+    };
+
+    for(KeyValue kv : srcKVs) {
+      this.region.put(new Put(kv.getRow()).add(kv));
+    }
+
+    // Match VALUES[0] against QUALIFIERS_ONE[0] with filterIfMissing = false
+    // Expect 3 rows (0, 2, 3)
+    SingleColumnValueFilter scvf = new SingleColumnValueFilter(FAMILIES[0],
+        QUALIFIERS_ONE[0], CompareOp.EQUAL, VALUES[0]);
+    s = new Scan(ROWS_THREE[0], Bytes.toBytes("rowThree-4"));
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(scvf);
+    kvs = new KeyValue [] { srcKVs[0], srcKVs[2], srcKVs[3] };
+    verifyScanFull(s, kvs);
+
+    // Match VALUES[0] against QUALIFIERS_ONE[0] with filterIfMissing = true
+    // Expect 1 row (0)
+    scvf = new SingleColumnValueFilter(FAMILIES[0], QUALIFIERS_ONE[0],
+        CompareOp.EQUAL, VALUES[0]);
+    scvf.setFilterIfMissing(true);
+    s = new Scan(ROWS_THREE[0], Bytes.toBytes("rowThree-4"));
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(scvf);
+    kvs = new KeyValue [] { srcKVs[0] };
+    verifyScanFull(s, kvs);
+
+    // Match VALUES[1] against QUALIFIERS_ONE[1] with filterIfMissing = true
+    // Expect 1 row (3)
+    scvf = new SingleColumnValueFilter(FAMILIES[0],
+        QUALIFIERS_ONE[1], CompareOp.EQUAL, VALUES[1]);
+    scvf.setFilterIfMissing(true);
+    s = new Scan(ROWS_THREE[0], Bytes.toBytes("rowThree-4"));
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(scvf);
+    kvs = new KeyValue [] { srcKVs[3] };
+    verifyScanFull(s, kvs);
+
+    // Add QUALIFIERS_ONE[1] to ROWS_THREE[0] with VALUES[0]
+    KeyValue kvA = new KeyValue(ROWS_THREE[0], FAMILIES[0], QUALIFIERS_ONE[1], VALUES[0]);
+    this.region.put(new Put(kvA.getRow()).add(kvA));
+
+    // Match VALUES[1] against QUALIFIERS_ONE[1] with filterIfMissing = true
+    // Expect 1 row (3)
+    scvf = new SingleColumnValueFilter(FAMILIES[0],
+        QUALIFIERS_ONE[1], CompareOp.EQUAL, VALUES[1]);
+    scvf.setFilterIfMissing(true);
+    s = new Scan(ROWS_THREE[0], Bytes.toBytes("rowThree-4"));
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(scvf);
+    kvs = new KeyValue [] { srcKVs[3] };
+    verifyScanFull(s, kvs);
+
+  }
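+  // Summary of the filterIfMissing behavior exercised above: with the default
+  // (false) a row that does not contain the tested column passes the filter,
+  // while setFilterIfMissing(true) drops such rows. A minimal sketch, with
+  // family, qualifier, value, and someScan as placeholder names:
+  //
+  //   SingleColumnValueFilter onlyMatching =
+  //       new SingleColumnValueFilter(family, qualifier, CompareOp.EQUAL, value);
+  //   onlyMatching.setFilterIfMissing(true);  // rows lacking family:qualifier are filtered out
+  //   someScan.setFilter(onlyMatching);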
+
+  private void verifyScan(Scan s, long expectedRows, long expectedKeys)
+  throws IOException {
+    InternalScanner scanner = this.region.getScanner(s);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    int i = 0;
+    for (boolean more = true; more; i++) {
+      more = scanner.next(results);
+      Arrays.sort(results.toArray(new KeyValue[results.size()]),
+          KeyValue.COMPARATOR);
+      LOG.info("counter=" + i + ", " + results);
+      if (results.isEmpty()) break;
+      assertTrue("Scanned too many rows! Only expected " + expectedRows +
+          " total but already scanned " + (i+1), expectedRows > i);
+      assertEquals("Expected " + expectedKeys + " keys per row but " +
+          "returned " + results.size(), expectedKeys, results.size());
+      results.clear();
+    }
+    assertEquals("Expected " + expectedRows + " rows but scanned " + i +
+        " rows", expectedRows, i);
+  }
+
+  private void verifyScanNoEarlyOut(Scan s, long expectedRows,
+      long expectedKeys)
+  throws IOException {
+    InternalScanner scanner = this.region.getScanner(s);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    int i = 0;
+    for (boolean more = true; more; i++) {
+      more = scanner.next(results);
+      Arrays.sort(results.toArray(new KeyValue[results.size()]),
+          KeyValue.COMPARATOR);
+      LOG.info("counter=" + i + ", " + results);
+      if(results.isEmpty()) break;
+      assertTrue("Scanned too many rows! Only expected " + expectedRows +
+          " total but already scanned " + (i+1), expectedRows > i);
+      assertEquals("Expected " + expectedKeys + " keys per row but " +
+          "returned " + results.size(), expectedKeys, results.size());
+      results.clear();
+    }
+    assertEquals("Expected " + expectedRows + " rows but scanned " + i +
+        " rows", expectedRows, i);
+  }
+
+  private void verifyScanFull(Scan s, KeyValue [] kvs)
+  throws IOException {
+    InternalScanner scanner = this.region.getScanner(s);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    int row = 0;
+    int idx = 0;
+    for (boolean more = true; more; row++) {
+      more = scanner.next(results);
+      Arrays.sort(results.toArray(new KeyValue[results.size()]),
+          KeyValue.COMPARATOR);
+      if(results.isEmpty()) break;
+      assertTrue("Scanned too many keys! Only expected " + kvs.length +
+          " total but already scanned " + (results.size() + idx) +
+          (results.isEmpty() ? "" : "(" + results.get(0).toString() + ")"),
+          kvs.length >= idx + results.size());
+      for(KeyValue kv : results) {
+        LOG.info("row=" + row + ", result=" + kv.toString() +
+            ", match=" + kvs[idx].toString());
+        assertTrue("Row mismatch",
+            Bytes.equals(kv.getRow(), kvs[idx].getRow()));
+        assertTrue("Family mismatch",
+            Bytes.equals(kv.getFamily(), kvs[idx].getFamily()));
+        assertTrue("Qualifier mismatch",
+            Bytes.equals(kv.getQualifier(), kvs[idx].getQualifier()));
+        assertTrue("Value mismatch",
+            Bytes.equals(kv.getValue(), kvs[idx].getValue()));
+        idx++;
+      }
+      results.clear();
+    }
+    LOG.info("Looked at " + row + " rows with " + idx + " keys");
+    assertEquals("Expected " + kvs.length + " total keys but scanned " + idx,
+        kvs.length, idx);
+  }
+
+  private void verifyScanFullNoValues(Scan s, KeyValue [] kvs, boolean useLen)
+  throws IOException {
+    InternalScanner scanner = this.region.getScanner(s);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    int row = 0;
+    int idx = 0;
+    for (boolean more = true; more; row++) {
+      more = scanner.next(results);
+      Arrays.sort(results.toArray(new KeyValue[results.size()]),
+          KeyValue.COMPARATOR);
+      if(results.isEmpty()) break;
+      assertTrue("Scanned too many keys! Only expected " + kvs.length +
+          " total but already scanned " + (results.size() + idx) +
+          (results.isEmpty() ? "" : "(" + results.get(0).toString() + ")"),
+          kvs.length >= idx + results.size());
+      for(KeyValue kv : results) {
+        LOG.info("row=" + row + ", result=" + kv.toString() +
+            ", match=" + kvs[idx].toString());
+        assertTrue("Row mismatch",
+            Bytes.equals(kv.getRow(), kvs[idx].getRow()));
+        assertTrue("Family mismatch",
+            Bytes.equals(kv.getFamily(), kvs[idx].getFamily()));
+        assertTrue("Qualifier mismatch",
+            Bytes.equals(kv.getQualifier(), kvs[idx].getQualifier()));
+        assertFalse("Should not have returned whole value",
+            Bytes.equals(kv.getValue(), kvs[idx].getValue()));
+        if (useLen) {
+          assertEquals("Value in result is not SIZEOF_INT", 
+                     kv.getValue().length, Bytes.SIZEOF_INT);
+          LOG.info("idx = "  + idx + ", len=" + kvs[idx].getValueLength()
+              + ", actual=" +  Bytes.toInt(kv.getValue()));
+          assertEquals("Scan value should be the length of the actual value. ",
+                     kvs[idx].getValueLength(), Bytes.toInt(kv.getValue()) );
+          LOG.info("good");
+        } else {
+          assertEquals("Value in result is not empty", 
+                     kv.getValue().length, 0);
+        }
+        idx++;
+      }
+      results.clear();
+    }
+    LOG.info("Looked at " + row + " rows with " + idx + " keys");
+    assertEquals("Expected " + kvs.length + " total keys but scanned " + idx,
+        kvs.length, idx);
+  }
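+  // verifyScanFullNoValues is paired with KeyOnlyFilter below: with
+  // lenAsVal=false the returned values are empty, and with lenAsVal=true each
+  // value is replaced by a 4-byte int (SIZEOF_INT) holding the length of the
+  // original value, which is what the useLen branch above asserts. Sketch
+  // (someScan is a placeholder name):
+  //
+  //   someScan.setFilter(new KeyOnlyFilter(true));  // values become Bytes.toBytes(originalValueLength)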
+
+
+  public void testColumnPaginationFilter() throws Exception {
+
+     // Set of KVs (page: 1; pageSize: 1) - the first set of 1 column per row
+      KeyValue [] expectedKVs = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1])
+      };
+
+
+      // Set of KVs (page: 3; pageSize: 1)  - the third set of 1 column per row
+      KeyValue [] expectedKVs2 = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      };
+
+      // Set of KVs (page: 2; pageSize 2)  - the 2nd set of 2 columns per row
+      KeyValue [] expectedKVs3 = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      };
+
+
+      // Set of KVs for a page past the end of the data (no results expected)
+      KeyValue [] expectedKVs4 = {
+
+      };
+
+      long expectedRows = this.numRows;
+      long expectedKeys = 1;
+      Scan s = new Scan();
+
+
+      // Page 1; 1 Column per page  (Limit 1, Offset 0)
+      s.setFilter(new ColumnPaginationFilter(1,0));
+      verifyScan(s, expectedRows, expectedKeys);
+      this.verifyScanFull(s, expectedKVs);
+
+      // Page 3; 1 Result per page  (Limit 1, Offset 2)
+      s.setFilter(new ColumnPaginationFilter(1,2));
+      verifyScan(s, expectedRows, expectedKeys);
+      this.verifyScanFull(s, expectedKVs2);
+
+      // Page 2; 2 Results per page (Limit 2, Offset 2)
+      s.setFilter(new ColumnPaginationFilter(2,2));
+      expectedKeys = 2;
+      verifyScan(s, expectedRows, expectedKeys);
+      this.verifyScanFull(s, expectedKVs3);
+
+      // Page 8; 20 Results per page (no results) (Limit 20, Offset 140)
+      s.setFilter(new ColumnPaginationFilter(20,140));
+      expectedKeys = 0;
+      expectedRows = 0;
+      verifyScan(s, expectedRows, 0);
+      this.verifyScanFull(s, expectedKVs4);
+    }
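+  // ColumnPaginationFilter(limit, offset) returns at most 'limit' columns per
+  // row starting at column index 'offset', so a 1-based page number N with page
+  // size P maps to new ColumnPaginationFilter(P, (N - 1) * P). For example, the
+  // "Page 3; 1 Result per page" case above is:
+  //
+  //   s.setFilter(new ColumnPaginationFilter(1, 2));  // pageSize = 1, page = 3 -> offset = (3 - 1) * 1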
+
+  public void testKeyOnlyFilter() throws Exception {
+
+    // KVs in the 6 remaining rows
+    KeyValue [] expectedKVs = {
+      // testRowOne-0
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowOne-2
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowOne-3
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowTwo-0
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      // testRowTwo-2
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      // testRowTwo-3
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1])
+    };
+
+    // Grab all 6 rows
+    long expectedRows = 6;
+    long expectedKeys = this.colsPerRow;
+    for (boolean useLen : new boolean[]{false,true}) {
+      Scan s = new Scan();
+      s.setFilter(new KeyOnlyFilter(useLen));
+      verifyScan(s, expectedRows, expectedKeys);
+      verifyScanFullNoValues(s, expectedKVs, useLen);
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
new file mode 100644
index 0000000..b39ca3a
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java
@@ -0,0 +1,228 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.KeyValue;
+
+
+import junit.framework.TestCase;
+
+/**
+ * Tests filter sets
+ *
+ */
+public class TestFilterList extends TestCase {
+  static final int MAX_PAGES = 2;
+  static final char FIRST_CHAR = 'a';
+  static final char LAST_CHAR = 'e';
+  static byte[] GOOD_BYTES = Bytes.toBytes("abc");
+  static byte[] BAD_BYTES = Bytes.toBytes("def");
+
+  /**
+   * Test "must pass one"
+   * @throws Exception
+   */
+  public void testMPONE() throws Exception {
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new PageFilter(MAX_PAGES));
+    filters.add(new WhileMatchFilter(new PrefixFilter(Bytes.toBytes("yyy"))));
+    Filter filterMPONE =
+        new FilterList(FilterList.Operator.MUST_PASS_ONE, filters);
+    /* The filter must implement all of the steps below:
+     * <ul>
+     * <li>{@link #reset()}</li>
+     * <li>{@link #filterAllRemaining()} -> true means the scan is over; false means keep going</li>
+     * <li>{@link #filterRowKey(byte[],int,int)} -> true to drop this row;
+     * if false, we will also call</li>
+     * <li>{@link #filterKeyValue(org.apache.hadoop.hbase.KeyValue)} -> a ReturnCode deciding
+     * whether to include or drop this key/value</li>
+     * <li>{@link #filterRow()} -> last chance to drop the entire row based on the sequence of
+     * filterKeyValue() calls, e.g. filter a row if it doesn't contain a specified column
+     * </li>
+     * </ul>
+     */
+    filterMPONE.reset();
+    assertFalse(filterMPONE.filterAllRemaining());
+
+    /* Will pass both */
+    byte [] rowkey = Bytes.toBytes("yyyyyyyyy");
+    for (int i = 0; i < MAX_PAGES - 1; i++) {
+      assertFalse(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+      assertFalse(filterMPONE.filterRow());
+      KeyValue kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(i),
+        Bytes.toBytes(i));
+      assertTrue(Filter.ReturnCode.INCLUDE == filterMPONE.filterKeyValue(kv));
+    }
+
+    /* Only pass PageFilter */
+    rowkey = Bytes.toBytes("z");
+    assertFalse(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+    assertFalse(filterMPONE.filterRow());
+    KeyValue kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(0),
+        Bytes.toBytes(0));
+    assertTrue(Filter.ReturnCode.INCLUDE == filterMPONE.filterKeyValue(kv));
+
+    /* PageFilter will fail now, but should pass because we match yyy */
+    rowkey = Bytes.toBytes("yyy");
+    assertFalse(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+    assertFalse(filterMPONE.filterRow());
+    kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(0),
+        Bytes.toBytes(0));
+    assertTrue(Filter.ReturnCode.INCLUDE == filterMPONE.filterKeyValue(kv));
+
+    /* From now on, every row should be filtered out */
+    rowkey = Bytes.toBytes("z");
+    assertTrue(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+    assertTrue(filterMPONE.filterRow());
+    assertTrue(filterMPONE.filterAllRemaining());
+
+  }
+
+  /**
+   * Test "must pass all"
+   * @throws Exception
+   */
+  public void testMPALL() throws Exception {
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new PageFilter(MAX_PAGES));
+    filters.add(new WhileMatchFilter(new PrefixFilter(Bytes.toBytes("yyy"))));
+    Filter filterMPALL =
+      new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);
+    /* The filter must implement all of the steps below:
+     * <ul>
+     * <li>{@link #reset()}</li>
+     * <li>{@link #filterAllRemaining()} -> true means the scan is over; false means keep going</li>
+     * <li>{@link #filterRowKey(byte[],int,int)} -> true to drop this row;
+     * if false, we will also call</li>
+     * <li>{@link #filterKeyValue(org.apache.hadoop.hbase.KeyValue)} -> a ReturnCode deciding
+     * whether to include or drop this key/value</li>
+     * <li>{@link #filterRow()} -> last chance to drop the entire row based on the sequence of
+     * filterKeyValue() calls, e.g. filter a row if it doesn't contain a specified column
+     * </li>
+     * </ul>
+     */
+    filterMPALL.reset();
+    assertFalse(filterMPALL.filterAllRemaining());
+    byte [] rowkey = Bytes.toBytes("yyyyyyyyy");
+    for (int i = 0; i < MAX_PAGES - 1; i++) {
+      assertFalse(filterMPALL.filterRowKey(rowkey, 0, rowkey.length));
+      KeyValue kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(i),
+        Bytes.toBytes(i));
+      assertTrue(Filter.ReturnCode.INCLUDE == filterMPALL.filterKeyValue(kv));
+    }
+    filterMPALL.reset();
+    rowkey = Bytes.toBytes("z");
+    assertTrue(filterMPALL.filterRowKey(rowkey, 0, rowkey.length));
+    // Should fail here; row should be filtered out.
+    KeyValue kv = new KeyValue(rowkey, rowkey, rowkey, rowkey);
+    assertTrue(Filter.ReturnCode.NEXT_ROW == filterMPALL.filterKeyValue(kv));
+
+    // Both filters in the list should be filtering out the row by now
+    assertTrue(filterMPALL.filterRow());
+  }
+
+  /**
+   * Test list ordering
+   * @throws Exception
+   */
+  public void testOrdering() throws Exception {
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new PrefixFilter(Bytes.toBytes("yyy")));
+    filters.add(new PageFilter(MAX_PAGES));
+    Filter filterMPONE =
+        new FilterList(FilterList.Operator.MUST_PASS_ONE, filters);
+    /* The filter must implement all of the steps below:
+     * <ul>
+     * <li>{@link #reset()}</li>
+     * <li>{@link #filterAllRemaining()} -> true means the scan is over; false means keep going</li>
+     * <li>{@link #filterRowKey(byte[],int,int)} -> true to drop this row;
+     * if false, we will also call</li>
+     * <li>{@link #filterKeyValue(org.apache.hadoop.hbase.KeyValue)} -> a ReturnCode deciding
+     * whether to include or drop this key/value</li>
+     * <li>{@link #filterRow()} -> last chance to drop the entire row based on the sequence of
+     * filterKeyValue() calls, e.g. filter a row if it doesn't contain a specified column
+     * </li>
+     * </ul>
+     */
+    filterMPONE.reset();
+    assertFalse(filterMPONE.filterAllRemaining());
+
+    /* We should be able to fill MAX_PAGES without incrementing page counter */
+    byte [] rowkey = Bytes.toBytes("yyyyyyyy");
+    for (int i = 0; i < MAX_PAGES; i++) {
+      assertFalse(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+      KeyValue kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(i),
+          Bytes.toBytes(i));
+        assertTrue(Filter.ReturnCode.INCLUDE == filterMPONE.filterKeyValue(kv));
+      assertFalse(filterMPONE.filterRow());
+    }
+
+    /* Now let's fill the page filter */
+    rowkey = Bytes.toBytes("xxxxxxx");
+    for (int i = 0; i < MAX_PAGES; i++) {
+      assertFalse(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+      KeyValue kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(i),
+          Bytes.toBytes(i));
+        assertTrue(Filter.ReturnCode.INCLUDE == filterMPONE.filterKeyValue(kv));
+      assertFalse(filterMPONE.filterRow());
+    }
+
+    /* We should still be able to include even though page filter is at max */
+    rowkey = Bytes.toBytes("yyy");
+    for (int i = 0; i < MAX_PAGES; i++) {
+      assertFalse(filterMPONE.filterRowKey(rowkey, 0, rowkey.length));
+      KeyValue kv = new KeyValue(rowkey, rowkey, Bytes.toBytes(i),
+          Bytes.toBytes(i));
+        assertTrue(Filter.ReturnCode.INCLUDE == filterMPONE.filterKeyValue(kv));
+      assertFalse(filterMPONE.filterRow());
+    }
+  }
+
+  /**
+   * Test serialization
+   * @throws Exception
+   */
+  public void testSerialization() throws Exception {
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new PageFilter(MAX_PAGES));
+    filters.add(new WhileMatchFilter(new PrefixFilter(Bytes.toBytes("yyy"))));
+    Filter filterMPALL =
+      new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);
+
+    // Decompose filterMPALL to bytes.
+    ByteArrayOutputStream stream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(stream);
+    filterMPALL.write(out);
+    out.close();
+    byte[] buffer = stream.toByteArray();
+
+    // Recompose filterMPALL.
+    DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+    FilterList newFilter = new FilterList();
+    newFilter.readFields(in);
+
+    // TODO: Run TESTS!!!
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java
new file mode 100644
index 0000000..92f6eaf
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests the inclusive stop row filter
+ */
+public class TestInclusiveStopFilter extends TestCase {
+  private final byte [] STOP_ROW = Bytes.toBytes("stop_row");
+  private final byte [] GOOD_ROW = Bytes.toBytes("good_row");
+  private final byte [] PAST_STOP_ROW = Bytes.toBytes("zzzzzz");
+
+  Filter mainFilter;
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+    mainFilter = new InclusiveStopFilter(STOP_ROW);
+  }
+
+  /**
+   * Tests identification of the stop row
+   * @throws Exception
+   */
+  public void testStopRowIdentification() throws Exception {
+    stopRowTests(mainFilter);
+  }
+
+  /**
+   * Tests serialization
+   * @throws Exception
+   */
+  public void testSerialization() throws Exception {
+    // Decompose mainFilter to bytes.
+    ByteArrayOutputStream stream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(stream);
+    mainFilter.write(out);
+    out.close();
+    byte[] buffer = stream.toByteArray();
+
+    // Recompose mainFilter.
+    DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+    Filter newFilter = new InclusiveStopFilter();
+    newFilter.readFields(in);
+
+    // Ensure the serialization preserved the filter by running a full test.
+    stopRowTests(newFilter);
+  }
+
+  private void stopRowTests(Filter filter) throws Exception {
+    assertFalse("Filtering on " + Bytes.toString(GOOD_ROW),
+      filter.filterRowKey(GOOD_ROW, 0, GOOD_ROW.length));
+    assertFalse("Filtering on " + Bytes.toString(STOP_ROW),
+      filter.filterRowKey(STOP_ROW, 0, STOP_ROW.length));
+    assertTrue("Filtering on " + Bytes.toString(PAST_STOP_ROW),
+      filter.filterRowKey(PAST_STOP_ROW, 0, PAST_STOP_ROW.length));
+
+    assertTrue("FilterAllRemaining", filter.filterAllRemaining());
+    assertFalse("FilterNotNull", filter.filterRow());
+
+    assertFalse("Filter a null", filter.filterRowKey(null, 0, 0));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java
new file mode 100644
index 0000000..f47ba90
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java
@@ -0,0 +1,90 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests for the page filter
+ */
+public class TestPageFilter extends TestCase {
+  static final int ROW_LIMIT = 3;
+
+  /**
+   * test page size filter
+   * @throws Exception
+   */
+  public void testPageSize() throws Exception {
+    Filter f = new PageFilter(ROW_LIMIT);
+    pageSizeTests(f);
+  }
+
+  /**
+   * Test filter serialization
+   * @throws Exception
+   */
+  public void testSerialization() throws Exception {
+    Filter f = new PageFilter(ROW_LIMIT);
+    // Decompose mainFilter to bytes.
+    ByteArrayOutputStream stream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(stream);
+    f.write(out);
+    out.close();
+    byte[] buffer = stream.toByteArray();
+    // Recompose mainFilter.
+    DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+    Filter newFilter = new PageFilter();
+    newFilter.readFields(in);
+
+    // Ensure the serialization preserved the filter by running a full test.
+    pageSizeTests(newFilter);
+  }
+
+  private void pageSizeTests(Filter f) throws Exception {
+    testFiltersBeyondPageSize(f, ROW_LIMIT);
+  }
+
+  private void testFiltersBeyondPageSize(final Filter f, final int pageSize) {
+    int count = 0;
+    for (int i = 0; i < (pageSize * 2); i++) {
+      boolean filterOut = f.filterRow();
+
+      if(filterOut) {
+        break;
+      } else {
+        count++;
+      }
+
+      // If at last row, should tell us to skip all remaining
+      if(count == pageSize) {
+        assertTrue(f.filterAllRemaining());
+      } else {
+        assertFalse(f.filterAllRemaining());
+      }
+
+    }
+    assertEquals(pageSize, count);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java
new file mode 100644
index 0000000..50a7d6f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java
@@ -0,0 +1,100 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.filter;
+
+import junit.framework.TestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.UnsupportedEncodingException;
+
+public class TestPrefixFilter extends TestCase {
+  Filter mainFilter;
+  static final char FIRST_CHAR = 'a';
+  static final char LAST_CHAR = 'e';
+  static final String HOST_PREFIX = "org.apache.site-";
+  static byte [] GOOD_BYTES = null;
+
+  static {
+    try {
+      GOOD_BYTES = "abc".getBytes(HConstants.UTF8_ENCODING);
+    } catch (UnsupportedEncodingException e) {
+      fail();
+    }
+  }
+
+  protected void setUp() throws Exception {
+    super.setUp();
+    this.mainFilter = new PrefixFilter(Bytes.toBytes(HOST_PREFIX));
+  }
+
+  public void testPrefixOnRow() throws Exception {
+    prefixRowTests(mainFilter);
+  }
+
+  public void testPrefixOnRowInsideWhileMatchRow() throws Exception {
+    prefixRowTests(new WhileMatchFilter(this.mainFilter), true);
+  }
+
+  public void testSerialization() throws Exception {
+    // Decompose mainFilter to bytes.
+    ByteArrayOutputStream stream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(stream);
+    mainFilter.write(out);
+    out.close();
+    byte[] buffer = stream.toByteArray();
+
+    // Recompose filter.
+    DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer));
+    Filter newFilter = new PrefixFilter();
+    newFilter.readFields(in);
+
+    // Ensure the serialization preserved the filter by running all test.
+    prefixRowTests(newFilter);
+  }
+
+  private void prefixRowTests(Filter filter) throws Exception {
+    prefixRowTests(filter, false);
+  }
+
+  private void prefixRowTests(Filter filter, boolean lastFilterAllRemaining)
+  throws Exception {
+    for (char c = FIRST_CHAR; c <= LAST_CHAR; c++) {
+      byte [] t = createRow(c);
+      assertFalse("Failed with character " + c,
+        filter.filterRowKey(t, 0, t.length));
+      assertFalse(filter.filterAllRemaining());
+    }
+    String yahooSite = "com.yahoo.www";
+    byte [] yahooSiteBytes = Bytes.toBytes(yahooSite);
+    assertTrue("Failed with character " +
+      yahooSite, filter.filterRowKey(yahooSiteBytes, 0, yahooSiteBytes.length));
+    assertEquals(filter.filterAllRemaining(), lastFilterAllRemaining);
+  }
+
+  private byte [] createRow(final char c) {
+    return Bytes.toBytes(HOST_PREFIX + Character.toString(c));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java
new file mode 100644
index 0000000..4a1b576
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java
@@ -0,0 +1,78 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Tests for {@link SingleColumnValueExcludeFilter}. Because this filter
+ * extends {@link SingleColumnValueFilter}, only the added functionality,
+ * the filterKeyValue(KeyValue) method, is tested here.
+ *
+ * @author ferdy
+ *
+ */
+public class TestSingleColumnValueExcludeFilter extends TestCase {
+  private static final byte[] ROW = Bytes.toBytes("test");
+  private static final byte[] COLUMN_FAMILY = Bytes.toBytes("test");
+  private static final byte[] COLUMN_QUALIFIER = Bytes.toBytes("foo");
+  private static final byte[] COLUMN_QUALIFIER_2 = Bytes.toBytes("foo_2");
+  private static final byte[] VAL_1 = Bytes.toBytes("a");
+  private static final byte[] VAL_2 = Bytes.toBytes("ab");
+
+  /**
+   * Test the overridden functionality of filterKeyValue(KeyValue)
+   * @throws Exception
+   */
+  public void testFilterKeyValue() throws Exception {
+    Filter filter = new SingleColumnValueExcludeFilter(COLUMN_FAMILY, COLUMN_QUALIFIER,
+        CompareOp.EQUAL, VAL_1);
+
+    // A 'match' situation
+    KeyValue kv;
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER_2, VAL_1);
+    // INCLUDE expected because test column has not yet passed
+    assertTrue("otherColumn", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_1);
+    // Test column will pass (will match), expect SKIP because the tested column itself is excluded
+    assertTrue("testedMatch", filter.filterKeyValue(kv) == Filter.ReturnCode.SKIP);
+    // Test column has already passed and matched, all subsequent columns are INCLUDE
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER_2, VAL_1);
+    assertTrue("otherColumn", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    assertFalse("allRemainingWhenMatch", filter.filterAllRemaining());
+
+    // A 'mismatch' situation
+    filter.reset();
+    // INCLUDE expected because test column has not yet passed
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER_2, VAL_1);
+    assertTrue("otherColumn", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    // Test column will pass (won't match), expect NEXT_ROW
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_2);
+    assertTrue("testedMismatch", filter.filterKeyValue(kv) == Filter.ReturnCode.NEXT_ROW);
+    // After a mismatch (at least with latestVersionOnly), subsequent columns return NEXT_ROW
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER_2, VAL_1);
+    assertTrue("otherColumn", filter.filterKeyValue(kv) == Filter.ReturnCode.NEXT_ROW);
+  }
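+
+  // Hedged usage sketch (not asserted by this test): client code typically
+  // attaches the filter to a Scan so that matching rows come back without the
+  // tested column itself, e.g.
+  //
+  //   Scan scan = new Scan();
+  //   scan.setFilter(new SingleColumnValueExcludeFilter(COLUMN_FAMILY,
+  //       COLUMN_QUALIFIER, CompareOp.EQUAL, VAL_1));
+  //
+  // The Scan/setFilter pairing is assumed from the client API rather than
+  // exercised here.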
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java
new file mode 100644
index 0000000..677a625
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java
@@ -0,0 +1,171 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.filter;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests the single column value filter.
+ */
+public class TestSingleColumnValueFilter extends TestCase {
+  private static final byte[] ROW = Bytes.toBytes("test");
+  private static final byte[] COLUMN_FAMILY = Bytes.toBytes("test");
+  private static final byte [] COLUMN_QUALIFIER = Bytes.toBytes("foo");
+  private static final byte[] VAL_1 = Bytes.toBytes("a");
+  private static final byte[] VAL_2 = Bytes.toBytes("ab");
+  private static final byte[] VAL_3 = Bytes.toBytes("abc");
+  private static final byte[] VAL_4 = Bytes.toBytes("abcd");
+  private static final byte[] FULLSTRING_1 =
+    Bytes.toBytes("The quick brown fox jumps over the lazy dog.");
+  private static final byte[] FULLSTRING_2 =
+    Bytes.toBytes("The slow grey fox trips over the lazy dog.");
+  private static final String QUICK_SUBSTR = "quick";
+  private static final String QUICK_REGEX = ".+quick.+";
+
+  Filter basicFilter;
+  Filter substrFilter;
+  Filter regexFilter;
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+    basicFilter = basicFilterNew();
+    substrFilter = substrFilterNew();
+    regexFilter = regexFilterNew();
+  }
+
+  private Filter basicFilterNew() {
+    return new SingleColumnValueFilter(COLUMN_FAMILY, COLUMN_QUALIFIER,
+      CompareOp.GREATER_OR_EQUAL, VAL_2);
+  }
+
+  private Filter substrFilterNew() {
+    return new SingleColumnValueFilter(COLUMN_FAMILY, COLUMN_QUALIFIER,
+      CompareOp.EQUAL,
+      new SubstringComparator(QUICK_SUBSTR));
+  }
+
+  private Filter regexFilterNew() {
+    return new SingleColumnValueFilter(COLUMN_FAMILY, COLUMN_QUALIFIER,
+      CompareOp.EQUAL,
+      new RegexStringComparator(QUICK_REGEX));
+  }
+
+  private void basicFilterTests(SingleColumnValueFilter filter)
+      throws Exception {
+    KeyValue kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_2);
+    assertTrue("basicFilter1", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_3);
+    assertTrue("basicFilter2", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_4);
+    assertTrue("basicFilter3", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    assertFalse("basicFilterNotNull", filter.filterRow());
+    filter.reset();
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_1);
+    assertTrue("basicFilter4", filter.filterKeyValue(kv) == Filter.ReturnCode.NEXT_ROW);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_2);
+    assertTrue("basicFilter4", filter.filterKeyValue(kv) == Filter.ReturnCode.NEXT_ROW);
+    assertFalse("basicFilterAllRemaining", filter.filterAllRemaining());
+    assertTrue("basicFilterNotNull", filter.filterRow());
+    filter.reset();
+    filter.setLatestVersionOnly(false);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_1);
+    assertTrue("basicFilter5", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER, VAL_2);
+    assertTrue("basicFilter5", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    assertFalse("basicFilterNotNull", filter.filterRow());
+  }
+
+  private void substrFilterTests(Filter filter)
+      throws Exception {
+    KeyValue kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER,
+      FULLSTRING_1);
+    assertTrue("substrTrue",
+      filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER,
+      FULLSTRING_2);
+    assertTrue("substrFalse", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    assertFalse("substrFilterAllRemaining", filter.filterAllRemaining());
+    assertFalse("substrFilterNotNull", filter.filterRow());
+  }
+
+  private void regexFilterTests(Filter filter)
+      throws Exception {
+    KeyValue kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER,
+      FULLSTRING_1);
+    assertTrue("regexTrue",
+      filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    kv = new KeyValue(ROW, COLUMN_FAMILY, COLUMN_QUALIFIER,
+      FULLSTRING_2);
+    assertTrue("regexFalse", filter.filterKeyValue(kv) == Filter.ReturnCode.INCLUDE);
+    assertFalse("regexFilterAllRemaining", filter.filterAllRemaining());
+    assertFalse("regexFilterNotNull", filter.filterRow());
+  }
+
+  private Filter serializationTest(Filter filter)
+      throws Exception {
+    // Decompose filter to bytes.
+    ByteArrayOutputStream stream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(stream);
+    filter.write(out);
+    out.close();
+    byte[] buffer = stream.toByteArray();
+
+    // Recompose filter.
+    DataInputStream in =
+      new DataInputStream(new ByteArrayInputStream(buffer));
+    Filter newFilter = new SingleColumnValueFilter();
+    newFilter.readFields(in);
+
+    return newFilter;
+  }
+
+  /**
+   * Tests the basic, substring, and regex filters, including their row-skipping behavior.
+   * @throws Exception
+   */
+  public void testStop() throws Exception {
+    basicFilterTests((SingleColumnValueFilter)basicFilter);
+    substrFilterTests(substrFilter);
+    regexFilterTests(regexFilter);
+  }
+
+  /**
+   * Tests serialization
+   * @throws Exception
+   */
+  public void testSerialization() throws Exception {
+    Filter newFilter = serializationTest(basicFilter);
+    basicFilterTests((SingleColumnValueFilter)newFilter);
+    newFilter = serializationTest(substrFilter);
+    substrFilterTests(newFilter);
+    newFilter = serializationTest(regexFilter);
+    regexFilterTests(newFilter);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
new file mode 100644
index 0000000..47f4c39
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
@@ -0,0 +1,141 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertTrue;
+
+
+public class TestHalfStoreFileReader {
+
+  /**
+   * Test the scanner and reseek of a half hfile scanner. The scanner API
+   * demands that seekTo and reseekTo() return < 0 only if the key lies
+   * before the start of the file (leaving the scanner unpositioned). They
+   * return 0 on a perfect match (rare) and > 0 on an imperfect match.
+   *
+   * The latter case is by far the most common, so we should generally be
+   * returning 1; when we do, there may or may not be a 'next' in the
+   * scanner/file.
+   *
+   * A bug in the half file scanner was returning -1 at the end of the bottom
+   * half, which caused the infrastructure above to go null and triggered NPEs
+   * and other problems.  This test reproduces that failure, and also exercises
+   * both the bottom and the top halves of the file while we are at it.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testHalfScanAndReseek() throws IOException {
+    HBaseTestingUtility test_util = new HBaseTestingUtility();
+    String root_dir = HBaseTestingUtility.getTestDir("TestHalfStoreFile").toString();
+    Path p = new Path(root_dir, "test");
+
+    FileSystem fs = FileSystem.get(test_util.getConfiguration());
+
+    HFile.Writer w = new HFile.Writer(fs, p, 1024, "none", KeyValue.KEY_COMPARATOR);
+
+    // write some things.
+    List<KeyValue> items = genSomeKeys();
+    for (KeyValue kv : items) {
+      w.append(kv);
+    }
+    w.close();
+
+    HFile.Reader r = new HFile.Reader(fs, p, null, false);
+    r.loadFileInfo();
+    byte [] midkey = r.midkey();
+    KeyValue midKV = KeyValue.createKeyValueFromKey(midkey);
+    midkey = midKV.getRow();
+
+    //System.out.println("midkey: " + midKV + " or: " + Bytes.toStringBinary(midkey));
+
+    Reference bottom = new Reference(midkey, Reference.Range.bottom);
+    doTestOfScanAndReseek(p, fs, bottom);
+
+    Reference top = new Reference(midkey, Reference.Range.top);
+    doTestOfScanAndReseek(p, fs, top);
+  }
+
+  private void doTestOfScanAndReseek(Path p, FileSystem fs, Reference bottom)
+      throws IOException {
+    final HalfStoreFileReader halfreader =
+        new HalfStoreFileReader(fs, p, null, bottom);
+    halfreader.loadFileInfo();
+    final HFileScanner scanner = halfreader.getScanner(false, false);
+
+    scanner.seekTo();
+    KeyValue curr;
+    do {
+      curr = scanner.getKeyValue();
+      KeyValue reseekKv =
+          getLastOnCol(curr);
+      int ret = scanner.reseekTo(reseekKv.getKey());
+      assertTrue("reseek to returned: " + ret, ret > 0);
+      //System.out.println(curr + ": " + ret);
+    } while (scanner.next());
+
+    int ret = scanner.reseekTo(getLastOnCol(curr).getKey());
+    //System.out.println("Last reseek: " + ret);
+    assertTrue( ret > 0 );
+  }
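+
+  // Hedged recap of the return-code contract exercised above (paraphrased
+  // from the javadoc of testHalfScanAndReseek; not an authoritative spec):
+  //
+  //   int ret = scanner.reseekTo(key);
+  //   // ret < 0  : key lies before the start of the (half) file, scanner unpositioned
+  //   // ret == 0 : perfect match, scanner positioned on the key (rare)
+  //   // ret > 0  : imperfect match; call scanner.next() to see if more data follows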
+
+  private KeyValue getLastOnCol(KeyValue curr) {
+    return KeyValue.createLastOnRow(
+        curr.getBuffer(), curr.getRowOffset(), curr.getRowLength(),
+        curr.getBuffer(), curr.getFamilyOffset(), curr.getFamilyLength(),
+        curr.getBuffer(), curr.getQualifierOffset(), curr.getQualifierLength());
+  }
+
+  static final int SIZE = 1000;
+
+  static byte[] _b(String s) {
+    return Bytes.toBytes(s);
+  }
+
+  List<KeyValue> genSomeKeys() {
+    List<KeyValue> ret = new ArrayList<KeyValue>(SIZE);
+    for (int i = 0 ; i < SIZE; i++) {
+      KeyValue kv =
+          new KeyValue(
+              _b(String.format("row_%04d", i)),
+              _b("family"),
+              _b("qualifier"),
+              1000, // timestamp
+              _b("value"));
+      ret.add(kv);
+    }
+    return ret;
+  }
+
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java
new file mode 100644
index 0000000..dac7de6e9
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHbaseObjectWritable.java
@@ -0,0 +1,183 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io;
+
+import java.io.*;
+import java.util.ArrayList;
+import java.util.List;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterBase;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.io.WritableComparator;
+import org.junit.Assert;
+
+public class TestHbaseObjectWritable extends TestCase {
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    super.tearDown();
+  }
+
+  @SuppressWarnings("boxing")
+  public void testReadObjectDataInputConfiguration() throws IOException {
+    Configuration conf = HBaseConfiguration.create();
+    // Do primitive type
+    final int COUNT = 101;
+    assertTrue(doType(conf, COUNT, int.class).equals(COUNT));
+    // Do array
+    final byte [] testing = "testing".getBytes();
+    byte [] result = (byte [])doType(conf, testing, testing.getClass());
+    assertTrue(WritableComparator.compareBytes(testing, 0, testing.length,
+       result, 0, result.length) == 0);
+    // Do unsupported type.
+    boolean exception = false;
+    try {
+      doType(conf, new File("a"), File.class);
+    } catch (UnsupportedOperationException uoe) {
+      exception = true;
+    }
+    assertTrue(exception);
+    // Try odd types
+    final byte A = 'A';
+    byte [] bytes = new byte[1];
+    bytes[0] = A;
+    Object obj = doType(conf, bytes, byte [].class);
+    assertTrue(((byte [])obj)[0] == A);
+    // Do 'known' Writable type.
+    obj = doType(conf, new Text(""), Text.class);
+    assertTrue(obj instanceof Text);
+    //List.class
+    List<String> list = new ArrayList<String>();
+    list.add("hello");
+    list.add("world");
+    list.add("universe");
+    obj = doType(conf, list, List.class);
+    assertTrue(obj instanceof List);
+    Assert.assertArrayEquals(list.toArray(), ((List)obj).toArray() );
+    //ArrayList.class
+    ArrayList<String> arr = new ArrayList<String>();
+    arr.add("hello");
+    arr.add("world");
+    arr.add("universe");
+    obj = doType(conf,  arr, ArrayList.class);
+    assertTrue(obj instanceof ArrayList);
+    Assert.assertArrayEquals(arr.toArray(), ((ArrayList)obj).toArray());
+    // Check that filters can be serialized
+    obj = doType(conf, new PrefixFilter(HConstants.EMPTY_BYTE_ARRAY),
+      PrefixFilter.class);
+    assertTrue(obj instanceof PrefixFilter);
+  }
+
+  public void testCustomWritable() throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+
+    // test proper serialization of un-encoded custom writables
+    CustomWritable custom = new CustomWritable("test phrase");
+    Object obj = doType(conf, custom, CustomWritable.class);
+    assertTrue(obj instanceof Writable);
+    assertTrue(obj instanceof CustomWritable);
+    assertEquals("test phrase", ((CustomWritable)obj).getValue());
+
+    // test proper serialization of a custom filter
+    CustomFilter filt = new CustomFilter("mykey");
+    FilterList filtlist = new FilterList(FilterList.Operator.MUST_PASS_ALL);
+    filtlist.addFilter(filt);
+    obj = doType(conf, filtlist, FilterList.class);
+    assertTrue(obj instanceof FilterList);
+    assertNotNull(((FilterList)obj).getFilters());
+    assertEquals(1, ((FilterList)obj).getFilters().size());
+    Filter child = ((FilterList)obj).getFilters().get(0);
+    assertTrue(child instanceof CustomFilter);
+    assertEquals("mykey", ((CustomFilter)child).getKey());
+  }
+
+  private Object doType(final Configuration conf, final Object value,
+      final Class<?> clazz)
+  throws IOException {
+    ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+    DataOutputStream out = new DataOutputStream(byteStream);
+    HbaseObjectWritable.writeObject(out, value, clazz, conf);
+    out.close();
+    ByteArrayInputStream bais =
+      new ByteArrayInputStream(byteStream.toByteArray());
+    DataInputStream dis = new DataInputStream(bais);
+    Object product = HbaseObjectWritable.readObject(dis, conf);
+    dis.close();
+    return product;
+  }
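+
+  // Hedged aside (illustrative only, not asserted by this test): the same
+  // round trip applies to any type HbaseObjectWritable can encode, e.g. a
+  // KeyValue, assuming the usual KeyValue(row, family, qualifier, value)
+  // constructor and a fresh Configuration:
+  //
+  //   Configuration conf = HBaseConfiguration.create();
+  //   KeyValue kv = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("f"),
+  //       Bytes.toBytes("q"), Bytes.toBytes("v"));
+  //   KeyValue copy = (KeyValue) doType(conf, kv, KeyValue.class);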
+
+  public static class CustomWritable implements Writable {
+    private String value = null;
+
+    public CustomWritable() {
+    }
+
+    public CustomWritable(String val) {
+      this.value = val;
+    }
+
+    public String getValue() { return value; }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      Text.writeString(out, this.value);
+    }
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      this.value = Text.readString(in);
+    }
+  }
+
+  public static class CustomFilter extends FilterBase {
+    private String key = null;
+
+    public CustomFilter() {
+    }
+
+    public CustomFilter(String key) {
+      this.key = key;
+    }
+
+    public String getKey() { return key; }
+
+    public void write(DataOutput out) throws IOException {
+      Text.writeString(out, this.key);
+    }
+
+    public void readFields(DataInput in) throws IOException {
+      this.key = Text.readString(in);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
new file mode 100644
index 0000000..820ea16
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
@@ -0,0 +1,321 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.TreeMap;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentSkipListMap;
+import java.util.concurrent.CopyOnWriteArrayList;
+import java.util.concurrent.CopyOnWriteArraySet;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.hfile.CachedBlock;
+import org.apache.hadoop.hbase.io.hfile.LruBlockCache;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.MemStore;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+
+/**
+ * Tests the sizing that HeapSize offers and compares it to the size given by
+ * ClassSize.
+ */
+public class TestHeapSize extends TestCase {
+  static final Log LOG = LogFactory.getLog(TestHeapSize.class);
+  // List of classes implementing HeapSize
+  // BatchOperation, BatchUpdate, BlockIndex, Entry, Entry<K,V>, HStoreKey
+  // KeyValue, LruBlockCache, LruHashMap<K,V>, Put, HLogKey
+
+  /**
+   * Test our hard-coded sizing of native java objects
+   */
+  public void testNativeSizes() throws IOException {
+    @SuppressWarnings("rawtypes")
+    Class cl = null;
+    long expected = 0L;
+    long actual = 0L;
+
+    // ArrayList
+    cl = ArrayList.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.ARRAYLIST;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // ByteBuffer
+    cl = ByteBuffer.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.BYTE_BUFFER;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // Integer
+    cl = Integer.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.INTEGER;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // Map.Entry
+    // Interface is public, all others are not.  Hard to size via ClassSize
+//    cl = Map.Entry.class;
+//    expected = ClassSize.estimateBase(cl, false);
+//    actual = ClassSize.MAP_ENTRY;
+//    if(expected != actual) {
+//      ClassSize.estimateBase(cl, true);
+//      assertEquals(expected, actual);
+//    }
+
+    // Object
+    cl = Object.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.OBJECT;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // TreeMap
+    cl = TreeMap.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.TREEMAP;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // String
+    cl = String.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.STRING;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // ConcurrentHashMap
+    cl = ConcurrentHashMap.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.CONCURRENT_HASHMAP;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // ConcurrentSkipListMap
+    cl = ConcurrentSkipListMap.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.CONCURRENT_SKIPLISTMAP;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // ReentrantReadWriteLock
+    cl = ReentrantReadWriteLock.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.REENTRANT_LOCK;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // AtomicLong
+    cl = AtomicLong.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.ATOMIC_LONG;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // AtomicInteger
+    cl = AtomicInteger.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.ATOMIC_INTEGER;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // AtomicBoolean
+    cl = AtomicBoolean.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.ATOMIC_BOOLEAN;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // CopyOnWriteArraySet
+    cl = CopyOnWriteArraySet.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.COPYONWRITE_ARRAYSET;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // CopyOnWriteArrayList
+    cl = CopyOnWriteArrayList.class;
+    expected = ClassSize.estimateBase(cl, false);
+    actual = ClassSize.COPYONWRITE_ARRAYLIST;
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+
+  }
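+
+  // Editorial note (hedged): the pattern above generalizes. For any class C
+  // with a hard-coded constant in ClassSize, compare
+  // ClassSize.estimateBase(C.class, false) with the constant, and re-run
+  // estimateBase(C.class, true) on a mismatch; the boolean argument appears
+  // to enable logging of the per-field breakdown (inferred from its use in
+  // this test, not from any published contract).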
+
+  /**
+   * Tests the classes that implement HeapSize and are a part of 0.20.
+   * Some are not tested here, for example BlockIndex, which is tested in
+   * TestHFile since it is a non-public class.
+   * @throws IOException
+   */
+  public void testSizes() throws IOException {
+    @SuppressWarnings("rawtypes")
+    Class cl = null;
+    long expected = 0L;
+    long actual = 0L;
+
+    //KeyValue
+    cl = KeyValue.class;
+    expected = ClassSize.estimateBase(cl, false);
+    KeyValue kv = new KeyValue();
+    actual = kv.heapSize();
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    //Put
+    cl = Put.class;
+    expected = ClassSize.estimateBase(cl, false);
+    //The actual TreeMap is not included in the above calculation
+    expected += ClassSize.TREEMAP;
+    Put put = new Put(Bytes.toBytes(""));
+    actual = put.heapSize();
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    //LruBlockCache Overhead
+    cl = LruBlockCache.class;
+    actual = LruBlockCache.CACHE_FIXED_OVERHEAD;
+    expected = ClassSize.estimateBase(cl, false);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // CachedBlock Fixed Overhead
+    // We really need "deep" sizing but ClassSize does not do this.
+    // Perhaps we should do all these more in this style....
+    cl = CachedBlock.class;
+    actual = CachedBlock.PER_BLOCK_OVERHEAD;
+    expected = ClassSize.estimateBase(cl, false);
+    expected += ClassSize.estimateBase(String.class, false);
+    expected += ClassSize.estimateBase(ByteBuffer.class, false);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      ClassSize.estimateBase(String.class, true);
+      ClassSize.estimateBase(ByteBuffer.class, true);
+      assertEquals(expected, actual);
+    }
+
+    // MemStore Overhead
+    cl = MemStore.class;
+    actual = MemStore.FIXED_OVERHEAD;
+    expected = ClassSize.estimateBase(cl, false);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // MemStore Deep Overhead
+    actual = MemStore.DEEP_OVERHEAD;
+    expected = ClassSize.estimateBase(cl, false);
+    expected += ClassSize.estimateBase(ReentrantReadWriteLock.class, false);
+    expected += ClassSize.estimateBase(AtomicLong.class, false);
+    expected += ClassSize.estimateBase(ConcurrentSkipListMap.class, false);
+    expected += ClassSize.estimateBase(ConcurrentSkipListMap.class, false);
+    expected += ClassSize.estimateBase(CopyOnWriteArraySet.class, false);
+    expected += ClassSize.estimateBase(CopyOnWriteArrayList.class, false);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      ClassSize.estimateBase(ReentrantReadWriteLock.class, true);
+      ClassSize.estimateBase(AtomicLong.class, true);
+      ClassSize.estimateBase(ConcurrentSkipListMap.class, true);
+      ClassSize.estimateBase(CopyOnWriteArraySet.class, true);
+      ClassSize.estimateBase(CopyOnWriteArrayList.class, true);
+      assertEquals(expected, actual);
+    }
+
+    // Store Overhead
+    cl = Store.class;
+    actual = Store.FIXED_OVERHEAD;
+    expected = ClassSize.estimateBase(cl, false);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // Region Overhead
+    cl = HRegion.class;
+    actual = HRegion.FIXED_OVERHEAD;
+    expected = ClassSize.estimateBase(cl, false);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+
+    // Currently NOT testing Deep Overheads of many of these classes.
+    // Deep overheads cover a vast majority of stuff, but will not be 100%
+    // accurate because it's unclear when we're referencing stuff that's already
+    // accounted for.  But we have satisfied our two core requirements.
+    // Sizing is quite accurate now, and our tests will throw errors if
+    // any of these classes are modified without updating overhead sizes.
+
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java
new file mode 100644
index 0000000..77c4506
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java
@@ -0,0 +1,131 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.io;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+
+public class TestImmutableBytesWritable extends TestCase {
+  public void testHash() throws Exception {
+    assertEquals(
+      new ImmutableBytesWritable(Bytes.toBytes("xxabc"), 2, 3).hashCode(),
+      new ImmutableBytesWritable(Bytes.toBytes("abc")).hashCode());
+    assertEquals(
+      new ImmutableBytesWritable(Bytes.toBytes("xxabcd"), 2, 3).hashCode(),
+      new ImmutableBytesWritable(Bytes.toBytes("abc")).hashCode());
+    assertNotSame(
+      new ImmutableBytesWritable(Bytes.toBytes("xxabc"), 2, 3).hashCode(),
+      new ImmutableBytesWritable(Bytes.toBytes("xxabc"), 2, 2).hashCode());
+  }
+
+  public void testSpecificCompare() {
+    ImmutableBytesWritable ibw1 = new ImmutableBytesWritable(new byte[]{0x0f});
+    ImmutableBytesWritable ibw2 = new ImmutableBytesWritable(new byte[]{0x00, 0x00});
+    ImmutableBytesWritable.Comparator c = new ImmutableBytesWritable.Comparator();
+    assertFalse("ibw1 < ibw2", c.compare( ibw1, ibw2 ) < 0 );
+  }
+
+  public void testComparison() throws Exception {
+    runTests("aa", "b", -1);
+    runTests("aa", "aa", 0);
+    runTests("aa", "ab", -1);
+    runTests("aa", "aaa", -1);
+    runTests("", "", 0);
+    runTests("", "a", -1);
+  }
+
+  private void runTests(String aStr, String bStr, int signum)
+    throws Exception {
+    ImmutableBytesWritable a = new ImmutableBytesWritable(
+      Bytes.toBytes(aStr));
+    ImmutableBytesWritable b = new ImmutableBytesWritable(
+      Bytes.toBytes(bStr));
+
+    doComparisonsOnObjects(a, b, signum);
+    doComparisonsOnRaw(a, b, signum);
+
+    // Tests for when the offset is non-zero
+    a = new ImmutableBytesWritable(Bytes.toBytes("xxx" + aStr),
+                                   3, aStr.length());
+    b = new ImmutableBytesWritable(Bytes.toBytes("yy" + bStr),
+                                   2, bStr.length());
+    doComparisonsOnObjects(a, b, signum);
+    doComparisonsOnRaw(a, b, signum);
+
+    // Tests for when offset is nonzero and length doesn't extend to end
+    a = new ImmutableBytesWritable(Bytes.toBytes("xxx" + aStr + "zzz"),
+                                   3, aStr.length());
+    b = new ImmutableBytesWritable(Bytes.toBytes("yy" + bStr + "aaa"),
+                                   2, bStr.length());
+    doComparisonsOnObjects(a, b, signum);
+    doComparisonsOnRaw(a, b, signum);
+  }
+
+
+  private int signum(int i) {
+    if (i > 0) return 1;
+    if (i == 0) return 0;
+    return -1;
+  }
+
+  private void doComparisonsOnRaw(ImmutableBytesWritable a,
+                                  ImmutableBytesWritable b,
+                                  int expectedSignum)
+    throws IOException {
+    ImmutableBytesWritable.Comparator comparator =
+      new ImmutableBytesWritable.Comparator();
+
+    ByteArrayOutputStream baosA = new ByteArrayOutputStream();
+    ByteArrayOutputStream baosB = new ByteArrayOutputStream();
+
+    a.write(new DataOutputStream(baosA));
+    b.write(new DataOutputStream(baosB));
+
+    assertEquals(
+      "Comparing " + a + " and " + b + " as raw",
+      signum(comparator.compare(baosA.toByteArray(), 0, baosA.size(),
+                                baosB.toByteArray(), 0, baosB.size())),
+      expectedSignum);
+
+    assertEquals(
+      "Comparing " + a + " and " + b + " as raw (inverse)",
+      -signum(comparator.compare(baosB.toByteArray(), 0, baosB.size(),
+                                 baosA.toByteArray(), 0, baosA.size())),
+      expectedSignum);
+  }
+
+  private void doComparisonsOnObjects(ImmutableBytesWritable a,
+                                      ImmutableBytesWritable b,
+                                      int expectedSignum) {
+    ImmutableBytesWritable.Comparator comparator =
+      new ImmutableBytesWritable.Comparator();
+    assertEquals(
+      "Comparing " + a + " and " + b + " as objects",
+      signum(comparator.compare(a, b)), expectedSignum);
+    assertEquals(
+      "Comparing " + a + " and " + b + " as objects (inverse)",
+      -signum(comparator.compare(b, a)), expectedSignum);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/KVGenerator.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/KVGenerator.java
new file mode 100644
index 0000000..b22cb8c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/KVGenerator.java
@@ -0,0 +1,111 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.Random;
+
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.WritableComparator;
+
+/**
+ * Generate random <key, value> pairs.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+class KVGenerator {
+  private final Random random;
+  private final byte[][] dict;
+  private final boolean sorted;
+  private final RandomDistribution.DiscreteRNG keyLenRNG, valLenRNG;
+  private BytesWritable lastKey;
+  private static final int MIN_KEY_LEN = 4;
+  private final byte prefix[] = new byte[MIN_KEY_LEN];
+
+  public KVGenerator(Random random, boolean sorted,
+      RandomDistribution.DiscreteRNG keyLenRNG,
+      RandomDistribution.DiscreteRNG valLenRNG,
+      RandomDistribution.DiscreteRNG wordLenRNG, int dictSize) {
+    this.random = random;
+    dict = new byte[dictSize][];
+    this.sorted = sorted;
+    this.keyLenRNG = keyLenRNG;
+    this.valLenRNG = valLenRNG;
+    for (int i = 0; i < dictSize; ++i) {
+      int wordLen = wordLenRNG.nextInt();
+      dict[i] = new byte[wordLen];
+      random.nextBytes(dict[i]);
+    }
+    lastKey = new BytesWritable();
+    fillKey(lastKey);
+  }
+
+  private void fillKey(BytesWritable o) {
+    int len = keyLenRNG.nextInt();
+    if (len < MIN_KEY_LEN) len = MIN_KEY_LEN;
+    o.setSize(len);
+    int n = MIN_KEY_LEN;
+    while (n < len) {
+      byte[] word = dict[random.nextInt(dict.length)];
+      int l = Math.min(word.length, len - n);
+      System.arraycopy(word, 0, o.get(), n, l);
+      n += l;
+    }
+    if (sorted
+        && WritableComparator.compareBytes(lastKey.get(), MIN_KEY_LEN, lastKey
+            .getSize()
+            - MIN_KEY_LEN, o.get(), MIN_KEY_LEN, o.getSize() - MIN_KEY_LEN) > 0) {
+      incrementPrefix();
+    }
+
+    System.arraycopy(prefix, 0, o.get(), 0, MIN_KEY_LEN);
+    lastKey.set(o);
+  }
+
+  private void fillValue(BytesWritable o) {
+    int len = valLenRNG.nextInt();
+    o.setSize(len);
+    int n = 0;
+    while (n < len) {
+      byte[] word = dict[random.nextInt(dict.length)];
+      int l = Math.min(word.length, len - n);
+      System.arraycopy(word, 0, o.get(), n, l);
+      n += l;
+    }
+  }
+
+  private void incrementPrefix() {
+    for (int i = MIN_KEY_LEN - 1; i >= 0; --i) {
+      ++prefix[i];
+      if (prefix[i] != 0) return;
+    }
+
+    throw new RuntimeException("Prefix overflown");
+  }
+
+  public void next(BytesWritable key, BytesWritable value, boolean dupKey) {
+    if (dupKey) {
+      key.set(lastKey);
+    }
+    else {
+      fillKey(key);
+    }
+    fillValue(value);
+  }
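+
+  // Hedged usage sketch (the parameters below are illustrative only):
+  //
+  //   Random rng = new Random(42L);
+  //   RandomDistribution.DiscreteRNG len = new RandomDistribution.Flat(rng, 10, 50);
+  //   KVGenerator gen = new KVGenerator(rng, true /* sorted */, len, len, len, 100);
+  //   BytesWritable key = new BytesWritable();
+  //   BytesWritable value = new BytesWritable();
+  //   gen.next(key, value, false);   // fresh key
+  //   gen.next(key, value, true);    // duplicate of the previous key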
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java
new file mode 100644
index 0000000..2489029
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java
@@ -0,0 +1,62 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.Random;
+
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.hbase.io.hfile.RandomDistribution.DiscreteRNG;
+
+/*
+ * Generates random keys within a given key range.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+class KeySampler {
+  Random random;
+  int min, max;
+  DiscreteRNG keyLenRNG;
+  private static final int MIN_KEY_LEN = 4;
+
+  public KeySampler(Random random, byte [] first, byte [] last,
+      DiscreteRNG keyLenRNG) {
+    this.random = random;
+    min = keyPrefixToInt(first);
+    max = keyPrefixToInt(last);
+    this.keyLenRNG = keyLenRNG;
+  }
+
+  private int keyPrefixToInt(byte [] key) {
+    byte[] b = key;
+    int o = 0;
+    return (b[o] & 0xff) << 24 | (b[o + 1] & 0xff) << 16
+        | (b[o + 2] & 0xff) << 8 | (b[o + 3] & 0xff);
+  }
+
+  public void next(BytesWritable key) {
+    key.setSize(Math.max(MIN_KEY_LEN, keyLenRNG.nextInt()));
+    random.nextBytes(key.get());
+    int n = random.nextInt(max - min) + min;
+    byte[] b = key.get();
+    b[0] = (byte) (n >> 24);
+    b[1] = (byte) (n >> 16);
+    b[2] = (byte) (n >> 8);
+    b[3] = (byte) n;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/NanoTimer.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/NanoTimer.java
new file mode 100644
index 0000000..a133cb4
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/NanoTimer.java
@@ -0,0 +1,198 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+/**
+ * A nano-second timer.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class NanoTimer {
+  private long last = -1;
+  private boolean started = false;
+  private long cumulate = 0;
+
+  /**
+   * Constructor
+   *
+   * @param start
+   *          Start the timer upon construction.
+   */
+  public NanoTimer(boolean start) {
+    if (start) this.start();
+  }
+
+  /**
+   * Start the timer.
+   *
+   * Note: No effect if timer is already started.
+   */
+  public void start() {
+    if (!this.started) {
+      this.last = System.nanoTime();
+      this.started = true;
+    }
+  }
+
+  /**
+   * Stop the timer.
+   *
+   * Note: No effect if timer is already stopped.
+   */
+  public void stop() {
+    if (this.started) {
+      this.started = false;
+      this.cumulate += System.nanoTime() - this.last;
+    }
+  }
+
+  /**
+   * Read the timer.
+   *
+   * @return the elapsed time in nanoseconds. Note: if the timer has never
+   *         been started, -1 is returned.
+   */
+  public long read() {
+    if (!readable()) return -1;
+
+    return this.cumulate;
+  }
+
+  /**
+   * Reset the timer.
+   */
+  public void reset() {
+    this.last = -1;
+    this.started = false;
+    this.cumulate = 0;
+  }
+
+  /**
+   * Checking whether the timer is started
+   *
+   * @return true if timer is started.
+   */
+  public boolean isStarted() {
+    return this.started;
+  }
+
+  /**
+   * Format the elapsed time as a human-readable string.
+   *
+   * Note: If timer is never started, "ERR" will be returned.
+   */
+  public String toString() {
+    if (!readable()) {
+      return "ERR";
+    }
+
+    return NanoTimer.nanoTimeToString(this.cumulate);
+  }
+
+  /**
+   * A utility method to format a time duration in nanoseconds into a
+   * human-readable string.
+   *
+   * @param t
+   *          Time duration in nano seconds.
+   * @return String representation.
+   */
+  public static String nanoTimeToString(long t) {
+    if (t < 0) return "ERR";
+
+    if (t == 0) return "0";
+
+    if (t < 1000) {
+      return t + "ns";
+    }
+
+    double us = (double) t / 1000;
+    if (us < 1000) {
+      return String.format("%.2fus", us);
+    }
+
+    double ms = us / 1000;
+    if (ms < 1000) {
+      return String.format("%.2fms", ms);
+    }
+
+    double ss = ms / 1000;
+    if (ss < 1000) {
+      return String.format("%.2fs", ss);
+    }
+
+    long mm = (long) ss / 60;
+    ss -= mm * 60;
+    long hh = mm / 60;
+    mm -= hh * 60;
+    long dd = hh / 24;
+    hh -= dd * 24;
+
+    if (dd > 0) {
+      return String.format("%dd%dh", dd, hh);
+    }
+
+    if (hh > 0) {
+      return String.format("%dh%dm", hh, mm);
+    }
+
+    if (mm > 0) {
+      return String.format("%dm%.1fs", mm, ss);
+    }
+
+    return String.format("%.2fs", ss);
+
+    /**
+     * StringBuilder sb = new StringBuilder(); String sep = "";
+     *
+     * if (dd > 0) { String unit = (dd > 1) ? "days" : "day";
+     * sb.append(String.format("%s%d%s", sep, dd, unit)); sep = " "; }
+     *
+     * if (hh > 0) { String unit = (hh > 1) ? "hrs" : "hr";
+     * sb.append(String.format("%s%d%s", sep, hh, unit)); sep = " "; }
+     *
+     * if (mm > 0) { String unit = (mm > 1) ? "mins" : "min";
+     * sb.append(String.format("%s%d%s", sep, mm, unit)); sep = " "; }
+     *
+     * if (ss > 0) { String unit = (ss > 1) ? "secs" : "sec";
+     * sb.append(String.format("%s%.3f%s", sep, ss, unit)); sep = " "; }
+     *
+     * return sb.toString();
+     */
+  }
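+
+  // Worked examples of the formatting rules above (derived from the code;
+  // shown here for quick reference):
+  //   nanoTimeToString(512)                  -> "512ns"
+  //   nanoTimeToString(2500)                 -> "2.50us"
+  //   nanoTimeToString(7300000)              -> "7.30ms"
+  //   nanoTimeToString(90L * 1000000000L)    -> "90.00s"
+  //   nanoTimeToString(3600L * 1000000000L)  -> "1h0m"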
+
+  private boolean readable() {
+    return this.last != -1;
+  }
+
+  /**
+   * Simple tester.
+   *
+   * @param args
+   */
+  public static void main(String[] args) {
+    long i = 7;
+
+    for (int x = 0; x < 20; ++x, i *= 7) {
+      System.out.println(NanoTimer.nanoTimeToString(i));
+    }
+  }
+}
+
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomDistribution.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomDistribution.java
new file mode 100644
index 0000000..7232cad
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomDistribution.java
@@ -0,0 +1,271 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Random;
+
+/**
+ * A class that generates random numbers that follow some distribution.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class RandomDistribution {
+  /**
+   * Interface for discrete (integer) random distributions.
+   */
+  public static interface DiscreteRNG {
+    /**
+     * Get the next random number
+     *
+     * @return the next random number.
+     */
+    public int nextInt();
+  }
+
+  /**
+   * P(i)=1/(max-min)
+   */
+  public static final class Flat implements DiscreteRNG {
+    private final Random random;
+    private final int min;
+    private final int max;
+
+    /**
+     * Generate random integers from min (inclusive) to max (exclusive)
+     * following a uniform distribution.
+     *
+     * @param random
+     *          The basic random number generator.
+     * @param min
+     *          Minimum integer
+     * @param max
+     *          maximum integer (exclusive).
+     *
+     */
+    public Flat(Random random, int min, int max) {
+      if (min >= max) {
+        throw new IllegalArgumentException("Invalid range");
+      }
+      this.random = random;
+      this.min = min;
+      this.max = max;
+    }
+
+    /**
+     * @see DiscreteRNG#nextInt()
+     */
+    @Override
+    public int nextInt() {
+      return random.nextInt(max - min) + min;
+    }
+  }
+
+  /**
+   * Zipf distribution. The ratio of the probabilities of integer i and j is
+   * defined as follows:
+   *
+   * P(i)/P(j)=((j-min+1)/(i-min+1))^sigma.
+   */
+  public static final class Zipf implements DiscreteRNG {
+    private static final double DEFAULT_EPSILON = 0.001;
+    private final Random random;
+    private final ArrayList<Integer> k;
+    private final ArrayList<Double> v;
+
+    /**
+     * Constructor
+     *
+     * @param r
+     *          The random number generator.
+     * @param min
+     *          minimum integer (inclusive)
+     * @param max
+     *          maximum integer (exclusive)
+     * @param sigma
+     *          parameter sigma. (sigma > 1.0)
+     */
+    public Zipf(Random r, int min, int max, double sigma) {
+      this(r, min, max, sigma, DEFAULT_EPSILON);
+    }
+
+    /**
+     * Constructor.
+     *
+     * @param r
+     *          The random number generator.
+     * @param min
+     *          minimum integer (inclusive)
+     * @param max
+     *          maximum integer (exclusive)
+     * @param sigma
+     *          parameter sigma. (sigma > 1.0)
+     * @param epsilon
+     *          Allowable error percentage (0 < epsilon < 1.0).
+     */
+    public Zipf(Random r, int min, int max, double sigma, double epsilon) {
+      if ((max <= min) || (sigma <= 1) || (epsilon <= 0)
+          || (epsilon >= 0.5)) {
+        throw new IllegalArgumentException("Invalid arguments");
+      }
+      random = r;
+      k = new ArrayList<Integer>();
+      v = new ArrayList<Double>();
+
+      double sum = 0;
+      int last = -1;
+      for (int i = min; i < max; ++i) {
+        sum += Math.exp(-sigma * Math.log(i - min + 1));
+        if ((last == -1) || i * (1 - epsilon) > last) {
+          k.add(i);
+          v.add(sum);
+          last = i;
+        }
+      }
+
+      if (last != max - 1) {
+        k.add(max - 1);
+        v.add(sum);
+      }
+
+      v.set(v.size() - 1, 1.0);
+
+      for (int i = v.size() - 2; i >= 0; --i) {
+        v.set(i, v.get(i) / sum);
+      }
+    }
+
+    /**
+     * @see DiscreteRNG#nextInt()
+     */
+    @Override
+    public int nextInt() {
+      double d = random.nextDouble();
+      int idx = Collections.binarySearch(v, d);
+
+      if (idx > 0) {
+        ++idx;
+      }
+      else {
+        idx = -(idx + 1);
+      }
+
+      if (idx >= v.size()) {
+        idx = v.size() - 1;
+      }
+
+      if (idx == 0) {
+        return k.get(0);
+      }
+
+      int ceiling = k.get(idx);
+      int lower = k.get(idx - 1);
+
+      return ceiling - random.nextInt(ceiling - lower);
+    }
+  }
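+
+  // Hedged usage sketch (the concrete bounds and sigma below are illustrative):
+  //
+  //   Random rng = new Random(1234L);
+  //   RandomDistribution.DiscreteRNG flat = new RandomDistribution.Flat(rng, 10, 50);
+  //   RandomDistribution.DiscreteRNG zipf = new RandomDistribution.Zipf(rng, 1, 100, 1.2);
+  //   int uniformDraw = flat.nextInt();   // uniform in [10, 50)
+  //   int skewedDraw  = zipf.nextInt();   // heavily skewed toward the minimum, 1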
+
+  /**
+   * Binomial distribution.
+   *
+   * P(k)=select(n, k)*p^k*(1-p)^(n-k) (k = 0, 1, ..., n)
+   *
+   * P(k)=select(max-min-1, k-min)*p^(k-min)*(1-p)^(max-k-1)  (k = min, ..., max-1)
+   */
+  public static final class Binomial implements DiscreteRNG {
+    private final Random random;
+    private final int min;
+    private final int n;
+    private final double[] v;
+
+    private static double select(int n, int k) {
+      double ret = 1.0;
+      for (int i = k + 1; i <= n; ++i) {
+        ret *= (double) i / (i - k);
+      }
+      return ret;
+    }
+
+    private static double power(double p, int k) {
+      return Math.exp(k * Math.log(p));
+    }
+
+    /**
+     * Generate random integers from min (inclusive) to max (exclusive)
+     * following Binomial distribution.
+     *
+     * @param random
+     *          The basic random number generator.
+     * @param min
+     *          Minimum integer
+     * @param max
+     *          maximum integer (exclusive).
+     * @param p
+     *          parameter.
+     *
+     */
+    public Binomial(Random random, int min, int max, double p) {
+      if (min >= max) {
+        throw new IllegalArgumentException("Invalid range");
+      }
+      this.random = random;
+      this.min = min;
+      this.n = max - min - 1;
+      if (n > 0) {
+        v = new double[n + 1];
+        double sum = 0.0;
+        for (int i = 0; i <= n; ++i) {
+          sum += select(n, i) * power(p, i) * power(1 - p, n - i);
+          v[i] = sum;
+        }
+        for (int i = 0; i <= n; ++i) {
+          v[i] /= sum;
+        }
+      }
+      else {
+        v = null;
+      }
+    }
+
+    /**
+     * @see DiscreteRNG#nextInt()
+     */
+    @Override
+    public int nextInt() {
+      if (v == null) {
+        return min;
+      }
+      double d = random.nextDouble();
+      int idx = Arrays.binarySearch(v, d);
+      if (idx > 0) {
+        ++idx;
+      } else {
+        idx = -(idx + 1);
+      }
+
+      if (idx >= v.length) {
+        idx = v.length - 1;
+      }
+      return idx + min;
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomSeek.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomSeek.java
new file mode 100644
index 0000000..61b20ca
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomSeek.java
@@ -0,0 +1,127 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.BufferedReader;
+import java.io.FileReader;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Random seek test.
+ */
+public class RandomSeek {
+  private static List<String> slurp(String fname) throws IOException {
+    BufferedReader istream = new BufferedReader(new FileReader(fname));
+    String str;
+    List<String> l = new ArrayList<String>();
+    while ( (str=istream.readLine()) != null) {
+      String [] parts = str.split(",");
+      l.add(parts[0] + ":" + parts[1] + ":" + parts[2]);
+    }
+    istream.close();
+    return l;
+  }
+
+  private static String randKey(List<String> keys) {
+    Random r = new Random();
+    //return keys.get(r.nextInt(keys.size()));
+    return "2" + Integer.toString(7+r.nextInt(2)) + Integer.toString(r.nextInt(100));
+    //return new String(r.nextInt(100));
+  }
+
+  public static void main(String [] argv) throws IOException {
+    Configuration conf = new Configuration();
+    conf.setInt("io.file.buffer.size", 64*1024);
+    RawLocalFileSystem rlfs = new RawLocalFileSystem();
+    rlfs.setConf(conf);
+    LocalFileSystem lfs = new LocalFileSystem(rlfs);
+
+    Path path = new Path("/Users/ryan/rfile.big.txt");
+    long start = System.currentTimeMillis();
+    SimpleBlockCache cache = new SimpleBlockCache();
+    //LruBlockCache cache = new LruBlockCache();
+    Reader reader = new HFile.Reader(lfs, path, cache, false);
+    reader.loadFileInfo();
+    System.out.println(reader.trailer);
+    long end = System.currentTimeMillis();
+
+    System.out.println("Index read time: " + (end - start));
+
+    List<String> keys = slurp("/Users/ryan/xaa.50k");
+
+    // Get a scanner that doesn't cache and that uses pread.
+    HFileScanner scanner = reader.getScanner(false, true);
+    int count;
+    long totalBytes = 0;
+    int notFound = 0;
+
+    start = System.nanoTime();
+    for(count = 0; count < 500000; ++count) {
+      String key = randKey(keys);
+      byte [] bkey = Bytes.toBytes(key);
+      int res = scanner.seekTo(bkey);
+      if (res == 0) {
+        ByteBuffer k = scanner.getKey();
+        ByteBuffer v = scanner.getValue();
+        totalBytes += k.limit();
+        totalBytes += v.limit();
+      } else {
+        ++ notFound;
+      }
+      if (res == -1) {
+        scanner.seekTo();
+      }
+      // Scan for another 1000 rows.
+      for (int i = 0; i < 1000; ++i) {
+        if (!scanner.next())
+          break;
+        ByteBuffer k = scanner.getKey();
+        ByteBuffer v = scanner.getValue();
+        totalBytes += k.limit();
+        totalBytes += v.limit();
+      }
+
+      if ( count % 1000 == 0 ) {
+        end = System.nanoTime();
+        System.out.println("Cache block count: " + cache.size() + " dumped: "+ cache.dumps);
+        //System.out.println("Cache size: " + cache.heapSize());
+        double msTime = ((end - start) / 1000000.0);
+        System.out.println("Seeked: "+ count + " in " + msTime + " (ms) "
+            + (1000.0 / msTime ) + " seeks/ms "
+            + (msTime / 1000.0) + " ms/seek");
+        start = System.nanoTime();
+      }
+    }
+    System.out.println("Total bytes: " + totalBytes + " not found: " + notFound);
+  }
+}
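Editorial aside (not part of the original file): the progress block above times each window of 1000 seeks with System.nanoTime() and reports both seeks/ms and ms/seek. A tiny stand-alone sketch of that reporting arithmetic, with hypothetical names (SeekTimingSketch, report):

    public class SeekTimingSketch {
      // Throughput report for a window of seeks, mirroring the arithmetic printed above.
      static String report(long seeks, long elapsedNanos) {
        double msTime = elapsedNanos / 1000000.0;   // window duration in milliseconds
        return String.format("Seeked: %d in %.3f (ms) %.3f seeks/ms %.5f ms/seek",
            seeks, msTime, seeks / msTime, msTime / seeks);
      }

      public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(250);                          // stand-in for 1000 seek operations
        System.out.println(report(1000, System.nanoTime() - start));
      }
    }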
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java
new file mode 100644
index 0000000..ba79c82
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java
@@ -0,0 +1,136 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.nio.ByteBuffer;
+import java.util.LinkedList;
+
+import junit.framework.TestCase;
+
+public class TestCachedBlockQueue extends TestCase {
+
+  public void testQueue() throws Exception {
+
+    CachedBlock cb1 = new CachedBlock(1000, "cb1", 1);
+    CachedBlock cb2 = new CachedBlock(1500, "cb2", 2);
+    CachedBlock cb3 = new CachedBlock(1000, "cb3", 3);
+    CachedBlock cb4 = new CachedBlock(1500, "cb4", 4);
+    CachedBlock cb5 = new CachedBlock(1000, "cb5", 5);
+    CachedBlock cb6 = new CachedBlock(1750, "cb6", 6);
+    CachedBlock cb7 = new CachedBlock(1000, "cb7", 7);
+    CachedBlock cb8 = new CachedBlock(1500, "cb8", 8);
+    CachedBlock cb9 = new CachedBlock(1000, "cb9", 9);
+    CachedBlock cb10 = new CachedBlock(1500, "cb10", 10);
+
+    CachedBlockQueue queue = new CachedBlockQueue(10000,1000);
+
+    queue.add(cb1);
+    queue.add(cb2);
+    queue.add(cb3);
+    queue.add(cb4);
+    queue.add(cb5);
+    queue.add(cb6);
+    queue.add(cb7);
+    queue.add(cb8);
+    queue.add(cb9);
+    queue.add(cb10);
+
+    // We expect cb1 through cb8 to be in the queue
+    long expectedSize = cb1.heapSize() + cb2.heapSize() + cb3.heapSize() +
+      cb4.heapSize() + cb5.heapSize() + cb6.heapSize() + cb7.heapSize() +
+      cb8.heapSize();
+
+    assertEquals(queue.heapSize(), expectedSize);
+
+    LinkedList<org.apache.hadoop.hbase.io.hfile.CachedBlock> blocks =
+      queue.get();
+    assertEquals(blocks.poll().getName(), "cb1");
+    assertEquals(blocks.poll().getName(), "cb2");
+    assertEquals(blocks.poll().getName(), "cb3");
+    assertEquals(blocks.poll().getName(), "cb4");
+    assertEquals(blocks.poll().getName(), "cb5");
+    assertEquals(blocks.poll().getName(), "cb6");
+    assertEquals(blocks.poll().getName(), "cb7");
+    assertEquals(blocks.poll().getName(), "cb8");
+
+  }
+
+  public void testQueueSmallBlockEdgeCase() throws Exception {
+
+    CachedBlock cb1 = new CachedBlock(1000, "cb1", 1);
+    CachedBlock cb2 = new CachedBlock(1500, "cb2", 2);
+    CachedBlock cb3 = new CachedBlock(1000, "cb3", 3);
+    CachedBlock cb4 = new CachedBlock(1500, "cb4", 4);
+    CachedBlock cb5 = new CachedBlock(1000, "cb5", 5);
+    CachedBlock cb6 = new CachedBlock(1750, "cb6", 6);
+    CachedBlock cb7 = new CachedBlock(1000, "cb7", 7);
+    CachedBlock cb8 = new CachedBlock(1500, "cb8", 8);
+    CachedBlock cb9 = new CachedBlock(1000, "cb9", 9);
+    CachedBlock cb10 = new CachedBlock(1500, "cb10", 10);
+
+    CachedBlockQueue queue = new CachedBlockQueue(10000,1000);
+
+    queue.add(cb1);
+    queue.add(cb2);
+    queue.add(cb3);
+    queue.add(cb4);
+    queue.add(cb5);
+    queue.add(cb6);
+    queue.add(cb7);
+    queue.add(cb8);
+    queue.add(cb9);
+    queue.add(cb10);
+
+    CachedBlock cb0 = new CachedBlock(10 + CachedBlock.PER_BLOCK_OVERHEAD, "cb0", 0);
+    queue.add(cb0);
+
+    // This is older so we must include it, but it will not end up kicking
+    // anything out because (heapSize - cb8.heapSize + cb0.heapSize < maxSize)
+    // and we must always maintain heapSize >= maxSize once we achieve it.
+
+    // We expect cb0 through cb8 to be in the queue
+    long expectedSize = cb1.heapSize() + cb2.heapSize() + cb3.heapSize() +
+      cb4.heapSize() + cb5.heapSize() + cb6.heapSize() + cb7.heapSize() +
+      cb8.heapSize() + cb0.heapSize();
+
+    assertEquals(queue.heapSize(), expectedSize);
+
+    LinkedList<org.apache.hadoop.hbase.io.hfile.CachedBlock> blocks = queue.get();
+    assertEquals(blocks.poll().getName(), "cb0");
+    assertEquals(blocks.poll().getName(), "cb1");
+    assertEquals(blocks.poll().getName(), "cb2");
+    assertEquals(blocks.poll().getName(), "cb3");
+    assertEquals(blocks.poll().getName(), "cb4");
+    assertEquals(blocks.poll().getName(), "cb5");
+    assertEquals(blocks.poll().getName(), "cb6");
+    assertEquals(blocks.poll().getName(), "cb7");
+    assertEquals(blocks.poll().getName(), "cb8");
+
+  }
+
+  private static class CachedBlock extends org.apache.hadoop.hbase.io.hfile.CachedBlock
+  {
+    public CachedBlock(long heapSize, String name, long accessTime) {
+      super(name,
+          ByteBuffer.allocate((int)(heapSize - CachedBlock.PER_BLOCK_OVERHEAD)),
+          accessTime,false);
+    }
+  }
+}
\ No newline at end of file
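Editorial aside (not part of the patch): the edge case above hinges on CachedBlockQueue retaining the oldest blocks whose combined heap size stays at or above maxSize, dropping the youngest candidates only while the remainder would still meet that bound. A simplified, self-contained sketch of that policy with a hypothetical Entry type and a JDK PriorityQueue (this is not the actual CachedBlockQueue implementation):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class BoundedCandidateQueueSketch {
      static class Entry {
        final String name; final long heapSize; final long accessTime;
        Entry(String name, long heapSize, long accessTime) {
          this.name = name; this.heapSize = heapSize; this.accessTime = accessTime;
        }
      }

      private final long maxSize;
      private long totalSize = 0;
      // Head of the queue is the YOUNGEST entry (largest access time), i.e. the first to drop.
      private final PriorityQueue<Entry> queue =
          new PriorityQueue<Entry>(16, new Comparator<Entry>() {
            public int compare(Entry a, Entry b) {
              return a.accessTime < b.accessTime ? 1 : (a.accessTime > b.accessTime ? -1 : 0);
            }
          });

      BoundedCandidateQueueSketch(long maxSize) { this.maxSize = maxSize; }

      void add(Entry e) {
        queue.add(e);
        totalSize += e.heapSize;
        // Drop the youngest entries, but only while what remains still covers maxSize.
        while (totalSize - queue.peek().heapSize >= maxSize) {
          totalSize -= queue.poll().heapSize;
        }
      }

      long heapSize() { return totalSize; }
    }

With maxSize = 10000 and the block sizes used in the test, adding the tiny, very old cb0 cannot displace cb8, because the total would then fall below maxSize; that is exactly what testQueueSmallBlockEdgeCase asserts.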
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
new file mode 100644
index 0000000..94aff3d
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
@@ -0,0 +1,303 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Arrays;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValue.KeyComparator;
+import org.apache.hadoop.hbase.io.hfile.HFile.BlockIndex;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.io.hfile.HFile.Writer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.ClassSize;
+import org.apache.hadoop.io.Writable;
+
+/**
+ * Test HFile features.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class TestHFile extends HBaseTestCase {
+  static final Log LOG = LogFactory.getLog(TestHFile.class);
+
+  private static String ROOT_DIR =
+    HBaseTestingUtility.getTestDir("TestHFile").toString();
+  private final int minBlockSize = 512;
+  private static String localFormatter = "%010d";
+
+  /**
+   * Test empty HFile.
+   * Test that all features work reasonably when the HFile has no entries.
+   * @throws IOException
+   */
+  public void testEmptyHFile() throws IOException {
+    Path f = new Path(ROOT_DIR, getName());
+    Writer w = new Writer(this.fs, f);
+    w.close();
+    Reader r = new Reader(fs, f, null, false);
+    r.loadFileInfo();
+    assertNull(r.getFirstKey());
+    assertNull(r.getLastKey());
+  }
+
+  // Write a batch of records into the hfile.
+  private int writeSomeRecords(Writer writer, int start, int n)
+      throws IOException {
+    String value = "value";
+    for (int i = start; i < (start + n); i++) {
+      String key = String.format(localFormatter, Integer.valueOf(i));
+      writer.append(Bytes.toBytes(key), Bytes.toBytes(value + key));
+    }
+    return (start + n);
+  }
+
+  private void readAllRecords(HFileScanner scanner) throws IOException {
+    readAndCheckbytes(scanner, 0, 100);
+  }
+
+  // read the records and check
+  private int readAndCheckbytes(HFileScanner scanner, int start, int n)
+      throws IOException {
+    String value = "value";
+    int i = start;
+    for (; i < (start + n); i++) {
+      ByteBuffer key = scanner.getKey();
+      ByteBuffer val = scanner.getValue();
+      String keyStr = String.format(localFormatter, Integer.valueOf(i));
+      String valStr = value + keyStr;
+      byte [] keyBytes = Bytes.toBytes(key);
+      assertTrue("bytes for keys do not match " + keyStr + " " +
+        Bytes.toString(Bytes.toBytes(key)),
+          Arrays.equals(Bytes.toBytes(keyStr), keyBytes));
+      byte [] valBytes = Bytes.toBytes(val);
+      assertTrue("bytes for vals do not match " + valStr + " " +
+        Bytes.toString(valBytes),
+        Arrays.equals(Bytes.toBytes(valStr), valBytes));
+      if (!scanner.next()) {
+        break;
+      }
+    }
+    assertEquals(i, start + n - 1);
+    return (start + n);
+  }
+
+  private byte[] getSomeKey(int rowId) {
+    return String.format(localFormatter, Integer.valueOf(rowId)).getBytes();
+  }
+
+  private void writeRecords(Writer writer) throws IOException {
+    writeSomeRecords(writer, 0, 100);
+    writer.close();
+  }
+
+  private FSDataOutputStream createFSOutput(Path name) throws IOException {
+    if (fs.exists(name)) fs.delete(name, true);
+    FSDataOutputStream fout = fs.create(name);
+    return fout;
+  }
+
+  /**
+   * Basic write/read round-trip with the given codec.
+   */
+  void basicWithSomeCodec(String codec) throws IOException {
+    Path ncTFile = new Path(ROOT_DIR, "basic.hfile");
+    FSDataOutputStream fout = createFSOutput(ncTFile);
+    Writer writer = new Writer(fout, minBlockSize,
+      Compression.getCompressionAlgorithmByName(codec), null);
+    LOG.info(writer);
+    writeRecords(writer);
+    fout.close();
+    FSDataInputStream fin = fs.open(ncTFile);
+    Reader reader = new Reader(fs.open(ncTFile),
+      fs.getFileStatus(ncTFile).getLen(), null, false);
+    // Load up the index.
+    reader.loadFileInfo();
+    // Get a scanner that caches and that does not use pread.
+    HFileScanner scanner = reader.getScanner(true, false);
+    // Align scanner at start of the file.
+    scanner.seekTo();
+    readAllRecords(scanner);
+    assertTrue("location lookup failed", scanner.seekTo(getSomeKey(50)) == 0);
+    // read the key and see if it matches
+    ByteBuffer readKey = scanner.getKey();
+    assertTrue("seeked key does not match", Arrays.equals(getSomeKey(50),
+      Bytes.toBytes(readKey)));
+
+    scanner.seekTo(new byte[0]);
+    ByteBuffer val1 = scanner.getValue();
+    scanner.seekTo(new byte[0]);
+    ByteBuffer val2 = scanner.getValue();
+    assertTrue(Arrays.equals(Bytes.toBytes(val1), Bytes.toBytes(val2)));
+
+    reader.close();
+    fin.close();
+    fs.delete(ncTFile, true);
+  }
+
+  public void testTFileFeatures() throws IOException {
+    basicWithSomeCodec("none");
+    basicWithSomeCodec("gz");
+  }
+
+  private void writeNumMetablocks(Writer writer, int n) {
+    for (int i = 0; i < n; i++) {
+      writer.appendMetaBlock("HFileMeta" + i, new Writable() {
+        private int val;
+        public Writable setVal(int val) { this.val = val; return this; }
+        
+        @Override
+        public void write(DataOutput out) throws IOException {
+          out.write(("something to test" + val).getBytes());
+        }
+        
+        @Override
+        public void readFields(DataInput in) throws IOException { }
+      }.setVal(i));
+    }
+  }
+
+  private void someTestingWithMetaBlock(Writer writer) {
+    writeNumMetablocks(writer, 10);
+  }
+
+  private void readNumMetablocks(Reader reader, int n) throws IOException {
+    for (int i = 0; i < n; i++) {
+      ByteBuffer actual = reader.getMetaBlock("HFileMeta" + i, false);
+      ByteBuffer expected = 
+        ByteBuffer.wrap(("something to test" + i).getBytes());
+      assertTrue("failed to match metadata", actual.compareTo(expected) == 0);
+    }
+  }
+
+  private void someReadingWithMetaBlock(Reader reader) throws IOException {
+    readNumMetablocks(reader, 10);
+  }
+
+  private void metablocks(final String compress) throws Exception {
+    Path mFile = new Path(ROOT_DIR, "meta.hfile");
+    FSDataOutputStream fout = createFSOutput(mFile);
+    Writer writer = new Writer(fout, minBlockSize,
+      Compression.getCompressionAlgorithmByName(compress), null);
+    someTestingWithMetaBlock(writer);
+    writer.close();
+    fout.close();
+    FSDataInputStream fin = fs.open(mFile);
+    Reader reader = new Reader(fs.open(mFile), this.fs.getFileStatus(mFile)
+        .getLen(), null, false);
+    reader.loadFileInfo();
+    // No data -- this should return false.
+    assertFalse(reader.getScanner(false, false).seekTo());
+    someReadingWithMetaBlock(reader);
+    fs.delete(mFile, true);
+    reader.close();
+    fin.close();
+  }
+
+  // test meta blocks for tfiles
+  public void testMetaBlocks() throws Exception {
+    metablocks("none");
+    metablocks("gz");
+  }
+
+  public void testNullMetaBlocks() throws Exception {
+    Path mFile = new Path(ROOT_DIR, "nometa.hfile");
+    FSDataOutputStream fout = createFSOutput(mFile);
+    Writer writer = new Writer(fout, minBlockSize,
+        Compression.Algorithm.NONE, null);
+    writer.append("foo".getBytes(), "value".getBytes());
+    writer.close();
+    fout.close();
+    Reader reader = new Reader(fs, mFile, null, false);
+    reader.loadFileInfo();
+    assertNull(reader.getMetaBlock("non-existent", false));
+  }
+
+  /**
+   * Make sure the ordinals of our compression algorithms don't change on us.
+   */
+  public void testCompressionOrdinance() {
+    //assertTrue(Compression.Algorithm.LZO.ordinal() == 0);
+    assertTrue(Compression.Algorithm.GZ.ordinal() == 1);
+    assertTrue(Compression.Algorithm.NONE.ordinal() == 2);
+  }
+
+
+  public void testComparator() throws IOException {
+    Path mFile = new Path(ROOT_DIR, "meta.tfile");
+    FSDataOutputStream fout = createFSOutput(mFile);
+    Writer writer = new Writer(fout, minBlockSize, (Compression.Algorithm) null,
+      new KeyComparator() {
+        @Override
+        public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2,
+            int l2) {
+          return -Bytes.compareTo(b1, s1, l1, b2, s2, l2);
+
+        }
+        @Override
+        public int compare(byte[] o1, byte[] o2) {
+          return compare(o1, 0, o1.length, o2, 0, o2.length);
+        }
+      });
+    writer.append("3".getBytes(), "0".getBytes());
+    writer.append("2".getBytes(), "0".getBytes());
+    writer.append("1".getBytes(), "0".getBytes());
+    writer.close();
+  }
+
+  /**
+   * Checks if the HeapSize calculator is within reason
+   */
+  @SuppressWarnings("unchecked")
+  public void testHeapSizeForBlockIndex() throws IOException{
+    Class cl = null;
+    long expected = 0L;
+    long actual = 0L;
+
+    cl = BlockIndex.class;
+    expected = ClassSize.estimateBase(cl, false);
+    BlockIndex bi = new BlockIndex(Bytes.BYTES_RAWCOMPARATOR);
+    actual = bi.heapSize();
+    //Since the arrays in BlockIndex(byte [][] blockKeys, long [] blockOffsets,
+    //int [] blockDataSizes) are all null, they are not going to show up in the
+    //HeapSize calculation, so we need to remove those array costs from expected.
+    expected -= ClassSize.align(3 * ClassSize.ARRAY);
+    if(expected != actual) {
+      ClassSize.estimateBase(cl, true);
+      assertEquals(expected, actual);
+    }
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
new file mode 100644
index 0000000..d99fc1c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
@@ -0,0 +1,384 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.text.DateFormat;
+import java.text.SimpleDateFormat;
+import java.util.Random;
+
+import junit.framework.TestCase;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.hadoop.io.compress.CompressionCodec;
+import org.apache.hadoop.io.compress.GzipCodec;
+
+/**
+ *  Set of long-running tests to measure performance of HFile.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class TestHFilePerformance extends TestCase {
+  private static String ROOT_DIR =
+    HBaseTestingUtility.getTestDir("TestHFilePerformance").toString();
+  private FileSystem fs;
+  private Configuration conf;
+  private long startTimeEpoch;
+  private long finishTimeEpoch;
+  private DateFormat formatter;
+
+  @Override
+  public void setUp() throws IOException {
+    conf = new Configuration();
+    fs = FileSystem.get(conf);
+    formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
+  }
+
+  public void startTime() {
+    startTimeEpoch = System.currentTimeMillis();
+    System.out.println(formatTime() + " Started timing.");
+  }
+
+  public void stopTime() {
+    finishTimeEpoch = System.currentTimeMillis();
+    System.out.println(formatTime() + " Stopped timing.");
+  }
+
+  public long getIntervalMillis() {
+    return finishTimeEpoch - startTimeEpoch;
+  }
+
+  public void printlnWithTimestamp(String message) {
+    System.out.println(formatTime() + "  " +  message);
+  }
+
+  /*
+   * Format milliseconds-since-epoch as a date/time string (yyyy-MM-dd HH:mm:ss).
+   */
+  public String formatTime(long millis){
+    return formatter.format(millis);
+  }
+
+  public String formatTime(){
+    return formatTime(System.currentTimeMillis());
+  }
+
+  private FSDataOutputStream createFSOutput(Path name) throws IOException {
+    if (fs.exists(name))
+      fs.delete(name, true);
+    FSDataOutputStream fout = fs.create(name);
+    return fout;
+  }
+
+  //TODO have multiple ways of generating key/value e.g. dictionary words
+  //TODO to get sample compressible data; for now, 1 out of 3 values is random
+  //     and all keys are random.
+
+  private static class KeyValueGenerator {
+    Random keyRandomizer;
+    Random valueRandomizer;
+    long randomValueRatio = 3; // 1 out of randomValueRatio generated values will be random.
+    long valueSequence = 0 ;
+
+
+    KeyValueGenerator() {
+      keyRandomizer = new Random(0L); //TODO with seed zero
+      valueRandomizer = new Random(1L); //TODO with seed one
+    }
+
+    // Key is always random now.
+    void getKey(byte[] key) {
+      keyRandomizer.nextBytes(key);
+    }
+
+    void getValue(byte[] value) {
+      if (valueSequence % randomValueRatio == 0)
+          valueRandomizer.nextBytes(value);
+      valueSequence++;
+    }
+  }
+
+  /**
+   * Time writing a file of the given type and report throughput.
+   *
+   * @param fileType "HFile" or "SequenceFile"
+   * @param keyLength length of each key in bytes
+   * @param valueLength length of each value in bytes
+   * @param codecName "none", "lzo", "gz"
+   * @param rows number of rows to be written.
+   * @param writeMethod used for HFile only (currently unused; see TODO below).
+   * @param minBlockSize used for HFile only.
+   * @throws IOException
+   */
+   //TODO writeMethod: implement multiple ways of writing e.g. A) known length (no chunk) B) using a buffer and streaming (for many chunks).
+  public void timeWrite(String fileType, int keyLength, int valueLength,
+    String codecName, long rows, String writeMethod, int minBlockSize)
+  throws IOException {
+    System.out.println("File Type: " + fileType);
+    System.out.println("Writing " + fileType + " with codecName: " + codecName);
+    long totalBytesWritten = 0;
+
+
+    //Using separate randomizer for key/value with seeds matching Sequence File.
+    byte[] key = new byte[keyLength];
+    byte[] value = new byte[valueLength];
+    KeyValueGenerator generator = new KeyValueGenerator();
+
+    startTime();
+
+    Path path = new Path(ROOT_DIR, fileType + ".Performance");
+    System.out.println(ROOT_DIR + path.getName());
+    FSDataOutputStream fout =  createFSOutput(path);
+
+    if ("HFile".equals(fileType)){
+        System.out.println("HFile write method: ");
+        HFile.Writer writer =
+          new HFile.Writer(fout, minBlockSize, codecName, null);
+
+        // Writing value in one shot.
+        for (long l=0 ; l<rows ; l++ ) {
+          generator.getKey(key);
+          generator.getValue(value);
+          writer.append(key, value);
+          totalBytesWritten += key.length;
+          totalBytesWritten += value.length;
+         }
+        writer.close();
+    } else if ("SequenceFile".equals(fileType)){
+        CompressionCodec codec = null;
+        if ("gz".equals(codecName))
+          codec = new GzipCodec();
+        else if (!"none".equals(codecName))
+          throw new IOException("Codec not supported.");
+
+        SequenceFile.Writer writer;
+
+        //TODO
+        //JobConf conf = new JobConf();
+
+        if (!"none".equals(codecName))
+          writer = SequenceFile.createWriter(conf, fout, BytesWritable.class,
+            BytesWritable.class, SequenceFile.CompressionType.BLOCK, codec);
+        else
+          writer = SequenceFile.createWriter(conf, fout, BytesWritable.class,
+            BytesWritable.class, SequenceFile.CompressionType.NONE, null);
+
+        BytesWritable keyBsw;
+        BytesWritable valBsw;
+        for (long l=0 ; l<rows ; l++ ) {
+
+           generator.getKey(key);
+           keyBsw = new BytesWritable(key);
+           totalBytesWritten += keyBsw.getSize();
+
+           generator.getValue(value);
+           valBsw = new BytesWritable(value);
+           writer.append(keyBsw, valBsw);
+           totalBytesWritten += valBsw.getSize();
+        }
+
+        writer.close();
+    } else
+       throw new IOException("File Type is not supported");
+
+    fout.close();
+    stopTime();
+
+    printlnWithTimestamp("Data written: ");
+    printlnWithTimestamp("  rate  = " +
+      totalBytesWritten / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+    printlnWithTimestamp("  total = " + totalBytesWritten + "B");
+
+    printlnWithTimestamp("File written: ");
+    printlnWithTimestamp("  rate  = " +
+      fs.getFileStatus(path).getLen() / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+    printlnWithTimestamp("  total = " + fs.getFileStatus(path).getLen() + "B");
+  }
+
+  public void timeReading(String fileType, int keyLength, int valueLength,
+      long rows, int method) throws IOException {
+    System.out.println("Reading file of type: " + fileType);
+    Path path = new Path(ROOT_DIR, fileType + ".Performance");
+    System.out.println("Input file size: " + fs.getFileStatus(path).getLen());
+    long totalBytesRead = 0;
+
+
+    ByteBuffer val;
+
+    ByteBuffer key;
+
+    startTime();
+    FSDataInputStream fin = fs.open(path);
+
+    if ("HFile".equals(fileType)){
+        HFile.Reader reader = new HFile.Reader(fs.open(path),
+          fs.getFileStatus(path).getLen(), null, false);
+        reader.loadFileInfo();
+        switch (method) {
+
+          case 0:
+          case 1:
+          default:
+            {
+              HFileScanner scanner = reader.getScanner(false, false);
+              scanner.seekTo();
+              for (long l=0 ; l<rows ; l++ ) {
+                key = scanner.getKey();
+                val = scanner.getValue();
+                totalBytesRead += key.limit() + val.limit();
+                scanner.next();
+              }
+            }
+            break;
+        }
+    } else if("SequenceFile".equals(fileType)){
+
+        SequenceFile.Reader reader;
+        reader = new SequenceFile.Reader(fs, path, new Configuration());
+
+        if (reader.getCompressionCodec() != null) {
+            printlnWithTimestamp("Compression codec class: " + reader.getCompressionCodec().getClass());
+        } else
+            printlnWithTimestamp("Compression codec class: " + "none");
+
+        BytesWritable keyBsw = new BytesWritable();
+        BytesWritable valBsw = new BytesWritable();
+
+        for (long l=0 ; l<rows ; l++ ) {
+          reader.next(keyBsw, valBsw);
+          totalBytesRead += keyBsw.getSize() + valBsw.getSize();
+        }
+        reader.close();
+
+        //TODO make a tests for other types of SequenceFile reading scenarios
+
+    } else {
+        throw new IOException("File Type not supported.");
+    }
+
+
+    //printlnWithTimestamp("Closing reader");
+    fin.close();
+    stopTime();
+    //printlnWithTimestamp("Finished close");
+
+    printlnWithTimestamp("Finished in " + getIntervalMillis() + "ms");
+    printlnWithTimestamp("Data read: ");
+    printlnWithTimestamp("  rate  = " +
+      totalBytesRead / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+    printlnWithTimestamp("  total = " + totalBytesRead + "B");
+
+    printlnWithTimestamp("File read: ");
+    printlnWithTimestamp("  rate  = " +
+      fs.getFileStatus(path).getLen() / getIntervalMillis() * 1000 / 1024 / 1024 + "MB/s");
+    printlnWithTimestamp("  total = " + fs.getFileStatus(path).getLen() + "B");
+
+    //TODO uncomment this for final committing so test files is removed.
+    //fs.delete(path, true);
+  }
+
+  public void testRunComparisons() throws IOException {
+
+    int keyLength = 100; // 100B
+    int valueLength = 5*1024; // 5KB
+    int minBlockSize = 10*1024*1024; // 10MB
+    int rows = 10000;
+
+    System.out.println("****************************** Sequence File *****************************");
+
+    timeWrite("SequenceFile", keyLength, valueLength, "none", rows, null, minBlockSize);
+    System.out.println("\n+++++++\n");
+    timeReading("SequenceFile", keyLength, valueLength, rows, -1);
+
+    System.out.println("");
+    System.out.println("----------------------");
+    System.out.println("");
+
+    /* DISABLED LZO
+    timeWrite("SequenceFile", keyLength, valueLength, "lzo", rows, null, minBlockSize);
+    System.out.println("\n+++++++\n");
+    timeReading("SequenceFile", keyLength, valueLength, rows, -1);
+
+    System.out.println("");
+    System.out.println("----------------------");
+    System.out.println("");
+    */
+
+    // SequenceFile gzip needs the native hadoop libs, so skip gracefully if unavailable.
+    try {
+      timeWrite("SequenceFile", keyLength, valueLength, "gz", rows, null,
+        minBlockSize);
+      System.out.println("\n+++++++\n");
+      timeReading("SequenceFile", keyLength, valueLength, rows, -1);
+    } catch (IllegalArgumentException e) {
+      System.out.println("Skipping sequencefile gz: " + e.getMessage());
+    }
+
+
+    System.out.println("\n\n\n");
+    System.out.println("****************************** HFile *****************************");
+
+    timeWrite("HFile", keyLength, valueLength, "none", rows, null, minBlockSize);
+    System.out.println("\n+++++++\n");
+    timeReading("HFile", keyLength, valueLength, rows, 0 );
+
+    System.out.println("");
+    System.out.println("----------------------");
+    System.out.println("");
+/* DISABLED LZO
+    timeWrite("HFile", keyLength, valueLength, "lzo", rows, null, minBlockSize);
+    System.out.println("\n+++++++\n");
+    timeReading("HFile", keyLength, valueLength, rows, 0 );
+    System.out.println("\n+++++++\n");
+    timeReading("HFile", keyLength, valueLength, rows, 1 );
+    System.out.println("\n+++++++\n");
+    timeReading("HFile", keyLength, valueLength, rows, 2 );
+
+    System.out.println("");
+    System.out.println("----------------------");
+    System.out.println("");
+*/
+    timeWrite("HFile", keyLength, valueLength, "gz", rows, null, minBlockSize);
+    System.out.println("\n+++++++\n");
+    timeReading("HFile", keyLength, valueLength, rows, 0 );
+
+    System.out.println("\n\n\n\nNotes: ");
+    System.out.println(" * Timing includes open/closing of files.");
+    System.out.println(" * Timing includes reading both Key and Value");
+    System.out.println(" * Data is generated as random bytes. Other methods e.g. using " +
+            "dictionary with care for distributation of words is under development.");
+    System.out.println(" * Timing of write currently, includes random value/key generations. " +
+            "Which is the same for Sequence File and HFile. Another possibility is to generate " +
+            "test data beforehand");
+    System.out.println(" * We need to mitigate cache effect on benchmark. We can apply several " +
+            "ideas, for next step we do a large dummy read between benchmark read to dismantle " +
+            "caching of data. Renaming of file may be helpful. We can have a loop that reads with" +
+            " the same method several times and flood cache every time and average it to get a" +
+            " better number.");
+  }
+}
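Editorial aside (not part of the original test): the rate lines printed above compute, e.g., totalBytesWritten / getIntervalMillis() * 1000 / 1024 / 1024 entirely in integer arithmetic, which truncates at every step and can report 0MB/s for small or fast runs. A hedged alternative that keeps the arithmetic in floating point, using hypothetical names (ThroughputSketch, mbPerSec):

    public class ThroughputSketch {
      // Megabytes per second for `bytes` transferred in `millis` milliseconds.
      static double mbPerSec(long bytes, long millis) {
        if (millis <= 0) {
          return 0.0; // avoid division by zero on sub-millisecond intervals
        }
        return (bytes / 1024.0 / 1024.0) / (millis / 1000.0);
      }

      public static void main(String[] args) {
        // 50MB in 1.5 seconds -> ~33.33 MB/s
        System.out.printf("rate = %.2f MB/s%n", mbPerSec(50L * 1024 * 1024, 1500L));
      }
    }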
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
new file mode 100644
index 0000000..307e642
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
@@ -0,0 +1,496 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.Random;
+import java.util.StringTokenizer;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.io.hfile.HFile.Reader;
+import org.apache.hadoop.hbase.io.hfile.HFile.Writer;
+import org.apache.hadoop.io.BytesWritable;
+
+/**
+ * test the performance for seek.
+ * <p>
+ * Copied from
+ * <a href="https://issues.apache.org/jira/browse/HADOOP-3315">hadoop-3315 tfile</a>.
+ * Remove after tfile is committed and use the tfile version of this class
+ * instead.</p>
+ */
+public class TestHFileSeek extends TestCase {
+  private MyOptions options;
+  private Configuration conf;
+  private Path path;
+  private FileSystem fs;
+  private NanoTimer timer;
+  private Random rng;
+  private RandomDistribution.DiscreteRNG keyLenGen;
+  private KVGenerator kvGen;
+
+  @Override
+  public void setUp() throws IOException {
+    if (options == null) {
+      options = new MyOptions(new String[0]);
+    }
+
+    conf = new Configuration();
+    conf.setInt("tfile.fs.input.buffer.size", options.fsInputBufferSize);
+    conf.setInt("tfile.fs.output.buffer.size", options.fsOutputBufferSize);
+    path = new Path(new Path(options.rootDir), options.file);
+    fs = path.getFileSystem(conf);
+    timer = new NanoTimer(false);
+    rng = new Random(options.seed);
+    keyLenGen =
+        new RandomDistribution.Zipf(new Random(rng.nextLong()),
+            options.minKeyLen, options.maxKeyLen, 1.2);
+    RandomDistribution.DiscreteRNG valLenGen =
+        new RandomDistribution.Flat(new Random(rng.nextLong()),
+            options.minValLength, options.maxValLength);
+    RandomDistribution.DiscreteRNG wordLenGen =
+        new RandomDistribution.Flat(new Random(rng.nextLong()),
+            options.minWordLen, options.maxWordLen);
+    kvGen =
+        new KVGenerator(rng, true, keyLenGen, valLenGen, wordLenGen,
+            options.dictSize);
+  }
+
+  @Override
+  public void tearDown() {
+    try {
+      fs.close();
+    }
+    catch (Exception e) {
+      // Nothing
+    }
+  }
+
+  private static FSDataOutputStream createFSOutput(Path name, FileSystem fs)
+    throws IOException {
+    if (fs.exists(name)) {
+      fs.delete(name, true);
+    }
+    FSDataOutputStream fout = fs.create(name);
+    return fout;
+  }
+
+  private void createTFile() throws IOException {
+    long totalBytes = 0;
+    FSDataOutputStream fout = createFSOutput(path, fs);
+    try {
+      Writer writer =
+          new Writer(fout, options.minBlockSize, options.compress, null);
+      try {
+        BytesWritable key = new BytesWritable();
+        BytesWritable val = new BytesWritable();
+        timer.start();
+        for (long i = 0; true; ++i) {
+          if (i % 1000 == 0) { // test the size for every 1000 rows.
+            if (fs.getFileStatus(path).getLen() >= options.fileSize) {
+              break;
+            }
+          }
+          kvGen.next(key, val, false);
+          byte [] k = new byte [key.getLength()];
+          System.arraycopy(key.getBytes(), 0, k, 0, key.getLength());
+          byte [] v = new byte [val.getLength()];
+          System.arraycopy(val.getBytes(), 0, v, 0, val.getLength());
+          writer.append(k, v);
+          totalBytes += key.getLength();
+          totalBytes += val.getLength();
+        }
+        timer.stop();
+      }
+      finally {
+        writer.close();
+      }
+    }
+    finally {
+      fout.close();
+    }
+    double duration = (double)timer.read()/1000; // in us.
+    long fsize = fs.getFileStatus(path).getLen();
+
+    System.out.printf(
+        "time: %s...uncompressed: %.2fMB...raw thrpt: %.2fMB/s\n",
+        timer.toString(), (double) totalBytes / 1024 / 1024, totalBytes
+            / duration);
+    System.out.printf("time: %s...file size: %.2fMB...disk thrpt: %.2fMB/s\n",
+        timer.toString(), (double) fsize / 1024 / 1024, fsize / duration);
+  }
+
+  public void seekTFile() throws IOException {
+    int miss = 0;
+    long totalBytes = 0;
+    FSDataInputStream fsdis = fs.open(path);
+    Reader reader =
+      new Reader(fsdis, fs.getFileStatus(path).getLen(), null, false);
+    reader.loadFileInfo();
+    KeySampler kSampler =
+        new KeySampler(rng, reader.getFirstKey(), reader.getLastKey(),
+            keyLenGen);
+    HFileScanner scanner = reader.getScanner(false, false);
+    BytesWritable key = new BytesWritable();
+    timer.reset();
+    timer.start();
+    for (int i = 0; i < options.seekCount; ++i) {
+      kSampler.next(key);
+      byte [] k = new byte [key.getLength()];
+      System.arraycopy(key.getBytes(), 0, k, 0, key.getLength());
+      if (scanner.seekTo(k) >= 0) {
+        ByteBuffer bbkey = scanner.getKey();
+        ByteBuffer bbval = scanner.getValue();
+        totalBytes += bbkey.limit();
+        totalBytes += bbval.limit();
+      }
+      else {
+        ++miss;
+      }
+    }
+    timer.stop();
+    System.out.printf(
+        "time: %s...avg seek: %s...%d hit...%d miss...avg I/O size: %.2fKB\n",
+        timer.toString(), NanoTimer.nanoTimeToString(timer.read()
+            / options.seekCount), options.seekCount - miss, miss,
+        (double) totalBytes / 1024 / (options.seekCount - miss));
+
+  }
+
+  public void testSeeks() throws IOException {
+    if (options.doCreate()) {
+      createTFile();
+    }
+
+    if (options.doRead()) {
+      seekTFile();
+    }
+
+    if (options.doCreate()) {
+      fs.delete(path, true);
+    }
+  }
+
+  private static class IntegerRange {
+    private final int from, to;
+
+    public IntegerRange(int from, int to) {
+      this.from = from;
+      this.to = to;
+    }
+
+    public static IntegerRange parse(String s) throws ParseException {
+      StringTokenizer st = new StringTokenizer(s, " \t,");
+      if (st.countTokens() != 2) {
+        throw new ParseException("Bad integer specification: " + s);
+      }
+      int from = Integer.parseInt(st.nextToken());
+      int to = Integer.parseInt(st.nextToken());
+      return new IntegerRange(from, to);
+    }
+
+    public int from() {
+      return from;
+    }
+
+    public int to() {
+      return to;
+    }
+  }
+
+  private static class MyOptions {
+    // hard coded constants
+    int dictSize = 1000;
+    int minWordLen = 5;
+    int maxWordLen = 20;
+
+    String rootDir =
+      HBaseTestingUtility.getTestDir("TestTFileSeek").toString();
+    String file = "TestTFileSeek";
+    // String compress = "lzo"; DISABLED
+    String compress = "none";
+    int minKeyLen = 10;
+    int maxKeyLen = 50;
+    int minValLength = 1024;
+    int maxValLength = 2 * 1024;
+    int minBlockSize = 1 * 1024 * 1024;
+    int fsOutputBufferSize = 1;
+    int fsInputBufferSize = 0;
+    // Default writing 10MB.
+    long fileSize = 10 * 1024 * 1024;
+    long seekCount = 1000;
+    long seed;
+
+    static final int OP_CREATE = 1;
+    static final int OP_READ = 2;
+    int op = OP_CREATE | OP_READ;
+
+    boolean proceed = false;
+
+    public MyOptions(String[] args) {
+      seed = System.nanoTime();
+
+      try {
+        Options opts = buildOptions();
+        CommandLineParser parser = new GnuParser();
+        CommandLine line = parser.parse(opts, args, true);
+        processOptions(line, opts);
+        validateOptions();
+      }
+      catch (ParseException e) {
+        System.out.println(e.getMessage());
+        System.out.println("Try \"--help\" option for details.");
+        setStopProceed();
+      }
+    }
+
+    public boolean proceed() {
+      return proceed;
+    }
+
+    private Options buildOptions() {
+      Option compress =
+          OptionBuilder.withLongOpt("compress").withArgName("[none|lzo|gz]")
+              .hasArg().withDescription("compression scheme").create('c');
+
+      Option fileSize =
+          OptionBuilder.withLongOpt("file-size").withArgName("size-in-MB")
+              .hasArg().withDescription("target size of the file (in MB).")
+              .create('s');
+
+      Option fsInputBufferSz =
+          OptionBuilder.withLongOpt("fs-input-buffer").withArgName("size")
+              .hasArg().withDescription(
+                  "size of the file system input buffer (in bytes).").create(
+                  'i');
+
+      Option fsOutputBufferSize =
+          OptionBuilder.withLongOpt("fs-output-buffer").withArgName("size")
+              .hasArg().withDescription(
+                  "size of the file system output buffer (in bytes).").create(
+                  'o');
+
+      Option keyLen =
+          OptionBuilder
+              .withLongOpt("key-length")
+              .withArgName("min,max")
+              .hasArg()
+              .withDescription(
+                  "the length range of the key (in bytes)")
+              .create('k');
+
+      Option valueLen =
+          OptionBuilder
+              .withLongOpt("value-length")
+              .withArgName("min,max")
+              .hasArg()
+              .withDescription(
+                  "the length range of the value (in bytes)")
+              .create('v');
+
+      Option blockSz =
+          OptionBuilder.withLongOpt("block").withArgName("size-in-KB").hasArg()
+              .withDescription("minimum block size (in KB)").create('b');
+
+      Option operation =
+          OptionBuilder.withLongOpt("operation").withArgName("r|w|rw").hasArg()
+              .withDescription(
+                  "action: seek-only, create-only, seek-after-create").create(
+                  'x');
+
+      Option rootDir =
+          OptionBuilder.withLongOpt("root-dir").withArgName("path").hasArg()
+              .withDescription(
+                  "specify root directory where files will be created.")
+              .create('r');
+
+      Option file =
+          OptionBuilder.withLongOpt("file").withArgName("name").hasArg()
+              .withDescription("specify the file name to be created or read.")
+              .create('f');
+
+      Option seekCount =
+          OptionBuilder
+              .withLongOpt("seek")
+              .withArgName("count")
+              .hasArg()
+              .withDescription(
+                  "specify how many seek operations we perform (requires -x r or -x rw.")
+              .create('n');
+
+      Option help =
+          OptionBuilder.withLongOpt("help").hasArg(false).withDescription(
+              "show this screen").create("h");
+
+      return new Options().addOption(compress).addOption(fileSize).addOption(
+          fsInputBufferSz).addOption(fsOutputBufferSize).addOption(keyLen)
+          .addOption(blockSz).addOption(rootDir).addOption(valueLen).addOption(
+              operation).addOption(seekCount).addOption(file).addOption(help);
+
+    }
+
+    private void processOptions(CommandLine line, Options opts)
+        throws ParseException {
+      // --help -h and --version -V must be processed first.
+      if (line.hasOption('h')) {
+        HelpFormatter formatter = new HelpFormatter();
+        System.out.println("TFile and SeqFile benchmark.");
+        System.out.println();
+        formatter.printHelp(100,
+            "java ... TestTFileSeqFileComparison [options]",
+            "\nSupported options:", opts, "");
+        return;
+      }
+
+      if (line.hasOption('c')) {
+        compress = line.getOptionValue('c');
+      }
+
+      if (line.hasOption('d')) {
+        dictSize = Integer.parseInt(line.getOptionValue('d'));
+      }
+
+      if (line.hasOption('s')) {
+        fileSize = Long.parseLong(line.getOptionValue('s')) * 1024 * 1024;
+      }
+
+      if (line.hasOption('i')) {
+        fsInputBufferSize = Integer.parseInt(line.getOptionValue('i'));
+      }
+
+      if (line.hasOption('o')) {
+        fsOutputBufferSize = Integer.parseInt(line.getOptionValue('o'));
+      }
+
+      if (line.hasOption('n')) {
+        seekCount = Integer.parseInt(line.getOptionValue('n'));
+      }
+
+      if (line.hasOption('k')) {
+        IntegerRange ir = IntegerRange.parse(line.getOptionValue('k'));
+        minKeyLen = ir.from();
+        maxKeyLen = ir.to();
+      }
+
+      if (line.hasOption('v')) {
+        IntegerRange ir = IntegerRange.parse(line.getOptionValue('v'));
+        minValLength = ir.from();
+        maxValLength = ir.to();
+      }
+
+      if (line.hasOption('b')) {
+        minBlockSize = Integer.parseInt(line.getOptionValue('b')) * 1024;
+      }
+
+      if (line.hasOption('r')) {
+        rootDir = line.getOptionValue('r');
+      }
+
+      if (line.hasOption('f')) {
+        file = line.getOptionValue('f');
+      }
+
+      if (line.hasOption('S')) {
+        seed = Long.parseLong(line.getOptionValue('S'));
+      }
+
+      if (line.hasOption('x')) {
+        String strOp = line.getOptionValue('x');
+        if (strOp.equals("r")) {
+          op = OP_READ;
+        }
+        else if (strOp.equals("w")) {
+          op = OP_CREATE;
+        }
+        else if (strOp.equals("rw")) {
+          op = OP_CREATE | OP_READ;
+        }
+        else {
+          throw new ParseException("Unknown action specifier: " + strOp);
+        }
+      }
+
+      proceed = true;
+    }
+
+    private void validateOptions() throws ParseException {
+      if (!compress.equals("none") && !compress.equals("lzo")
+          && !compress.equals("gz")) {
+        throw new ParseException("Unknown compression scheme: " + compress);
+      }
+
+      if (minKeyLen >= maxKeyLen) {
+        throw new ParseException(
+            "Max key length must be greater than min key length.");
+      }
+
+      if (minValLength >= maxValLength) {
+        throw new ParseException(
+            "Max value length must be greater than min value length.");
+      }
+
+      if (minWordLen >= maxWordLen) {
+        throw new ParseException(
+            "Max word length must be greater than min word length.");
+      }
+      return;
+    }
+
+    private void setStopProceed() {
+      proceed = false;
+    }
+
+    public boolean doCreate() {
+      return (op & OP_CREATE) != 0;
+    }
+
+    public boolean doRead() {
+      return (op & OP_READ) != 0;
+    }
+  }
+
+  public static void main(String[] argv) throws IOException {
+    TestHFileSeek testCase = new TestHFileSeek();
+    MyOptions options = new MyOptions(argv);
+
+    if (!options.proceed) {
+      return;
+    }
+
+    testCase.options = options;
+    testCase.setUp();
+    testCase.testSeeks();
+    testCase.tearDown();
+  }
+}
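For illustration only (not part of the patch): the benchmark can be driven through its main() using the flags defined in buildOptions(); the parameter values below are arbitrary examples, not recommended settings.

    public class RunTestHFileSeek {
      public static void main(String[] args) throws java.io.IOException {
        // Create a ~20MB gz-compressed file with 10-50 byte keys, 1-2KB values and
        // 64KB blocks, then perform 5000 seeks against it ("rw" = create then read).
        org.apache.hadoop.hbase.io.hfile.TestHFileSeek.main(new String[] {
            "-x", "rw",          // operation: r | w | rw
            "-c", "gz",          // compression scheme: none | lzo | gz
            "-s", "20",          // target file size in MB
            "-k", "10,50",       // key length range in bytes
            "-v", "1024,2048",   // value length range in bytes
            "-b", "64",          // minimum block size in KB
            "-n", "5000"         // number of seeks (needs -x r or -x rw)
        });
      }
    }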
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
new file mode 100644
index 0000000..2713dc1
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
@@ -0,0 +1,529 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.nio.ByteBuffer;
+import java.util.Random;
+
+import org.apache.hadoop.hbase.io.HeapSize;
+import org.apache.hadoop.hbase.util.ClassSize;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests the concurrent LruBlockCache.<p>
+ *
+ * Tests will ensure it grows and shrinks in size properly,
+ * evictions run when they're supposed to and do what they should,
+ * and that cached blocks are accessible when expected to be.
+ */
+public class TestLruBlockCache extends TestCase {
+
+  public void testBackgroundEvictionThread() throws Exception {
+
+    long maxSize = 100000;
+    long blockSize = calculateBlockSizeDefault(maxSize, 9); // room for 9, will evict
+
+    LruBlockCache cache = new LruBlockCache(maxSize,blockSize);
+
+    Block [] blocks = generateFixedBlocks(10, blockSize, "block");
+
+    // Add all the blocks
+    for(Block block : blocks) {
+      cache.cacheBlock(block.blockName, block.buf);
+    }
+
+    // Let the eviction run
+    int n = 0;
+    while(cache.getEvictionCount() == 0) {
+      System.out.println("sleep");
+      Thread.sleep(1000);
+      assertTrue(n++ < 2);
+    }
+    System.out.println("Background Evictions run: " + cache.getEvictionCount());
+
+    // A single eviction run should have occurred
+    assertEquals(cache.getEvictionCount(), 1);
+  }
+
+  public void testCacheSimple() throws Exception {
+
+    long maxSize = 1000000;
+    long blockSize = calculateBlockSizeDefault(maxSize, 101);
+
+    LruBlockCache cache = new LruBlockCache(maxSize, blockSize);
+
+    Block [] blocks = generateRandomBlocks(100, blockSize);
+
+    long expectedCacheSize = cache.heapSize();
+
+    // Confirm empty
+    for(Block block : blocks) {
+      assertTrue(cache.getBlock(block.blockName, true) == null);
+    }
+
+    // Add blocks
+    for(Block block : blocks) {
+      cache.cacheBlock(block.blockName, block.buf);
+      expectedCacheSize += block.heapSize();
+    }
+
+    // Verify correctly calculated cache heap size
+    assertEquals(expectedCacheSize, cache.heapSize());
+
+    // Check if all blocks are properly cached and retrieved
+    for(Block block : blocks) {
+      ByteBuffer buf = cache.getBlock(block.blockName, true);
+      assertTrue(buf != null);
+      assertEquals(buf.capacity(), block.buf.capacity());
+    }
+
+    // Re-add same blocks and ensure nothing has changed
+    for(Block block : blocks) {
+      try {
+        cache.cacheBlock(block.blockName, block.buf);
+        assertTrue("Cache should not allow re-caching a block", false);
+      } catch(RuntimeException re) {
+        // expected
+      }
+    }
+
+    // Verify correctly calculated cache heap size
+    assertEquals(expectedCacheSize, cache.heapSize());
+
+    // Check if all blocks are properly cached and retrieved
+    for(Block block : blocks) {
+      ByteBuffer buf = cache.getBlock(block.blockName, true);
+      assertTrue(buf != null);
+      assertEquals(buf.capacity(), block.buf.capacity());
+    }
+
+    // Expect no evictions
+    assertEquals(0, cache.getEvictionCount());
+    Thread t = new LruBlockCache.StatisticsThread(cache);
+    t.start();
+    t.join();
+  }
+
+  public void testCacheEvictionSimple() throws Exception {
+
+    long maxSize = 100000;
+    long blockSize = calculateBlockSizeDefault(maxSize, 10);
+
+    LruBlockCache cache = new LruBlockCache(maxSize,blockSize,false);
+
+    Block [] blocks = generateFixedBlocks(10, blockSize, "block");
+
+    long expectedCacheSize = cache.heapSize();
+
+    // Add all the blocks
+    for(Block block : blocks) {
+      cache.cacheBlock(block.blockName, block.buf);
+      expectedCacheSize += block.heapSize();
+    }
+
+    // A single eviction run should have occurred
+    assertEquals(1, cache.getEvictionCount());
+
+    // Our expected size overruns acceptable limit
+    assertTrue(expectedCacheSize >
+      (maxSize * LruBlockCache.DEFAULT_ACCEPTABLE_FACTOR));
+
+    // But the cache did not grow beyond max
+    assertTrue(cache.heapSize() < maxSize);
+
+    // And is still below the acceptable limit
+    assertTrue(cache.heapSize() <
+        (maxSize * LruBlockCache.DEFAULT_ACCEPTABLE_FACTOR));
+
+    // All blocks except block 0 and 1 should be in the cache
+    assertTrue(cache.getBlock(blocks[0].blockName, true) == null);
+    assertTrue(cache.getBlock(blocks[1].blockName, true) == null);
+    for(int i=2;i<blocks.length;i++) {
+      assertEquals(cache.getBlock(blocks[i].blockName, true),
+          blocks[i].buf);
+    }
+  }
+
+  public void testCacheEvictionTwoPriorities() throws Exception {
+
+    long maxSize = 100000;
+    long blockSize = calculateBlockSizeDefault(maxSize, 10);
+
+    LruBlockCache cache = new LruBlockCache(maxSize,blockSize,false);
+
+    Block [] singleBlocks = generateFixedBlocks(5, 10000, "single");
+    Block [] multiBlocks = generateFixedBlocks(5, 10000, "multi");
+
+    long expectedCacheSize = cache.heapSize();
+
+    // Add and get the multi blocks
+    for(Block block : multiBlocks) {
+      cache.cacheBlock(block.blockName, block.buf);
+      expectedCacheSize += block.heapSize();
+      assertEquals(cache.getBlock(block.blockName, true), block.buf);
+    }
+
+    // Add the single blocks (no get)
+    for(Block block : singleBlocks) {
+      cache.cacheBlock(block.blockName, block.buf);
+      expectedCacheSize += block.heapSize();
+    }
+
+    // A single eviction run should have occurred
+    assertEquals(cache.getEvictionCount(), 1);
+
+    // We expect two entries evicted
+    assertEquals(cache.getEvictedCount(), 2);
+
+    // Our expected size overruns acceptable limit
+    assertTrue(expectedCacheSize >
+      (maxSize * LruBlockCache.DEFAULT_ACCEPTABLE_FACTOR));
+
+    // But the cache did not grow beyond max
+    assertTrue(cache.heapSize() <= maxSize);
+
+    // And is now below the acceptable limit
+    assertTrue(cache.heapSize() <=
+        (maxSize * LruBlockCache.DEFAULT_ACCEPTABLE_FACTOR));
+
+    // We expect fairness across the two priorities.
+    // This test makes multi go barely over its limit, in-memory
+    // empty, and the rest in single.  One single eviction and one
+    // multi eviction expected.
+    assertNull(cache.getBlock(singleBlocks[0].blockName, true));
+    assertNull(cache.getBlock(multiBlocks[0].blockName, true));
+
+    // And all others to be cached
+    for(int i=1;i<4;i++) {
+      assertEquals(singleBlocks[i].buf,
+          cache.getBlock(singleBlocks[i].blockName, true));
+      assertEquals(multiBlocks[i].buf,
+          cache.getBlock(multiBlocks[i].blockName, true));
+    }
+  }
+
+  public void testCacheEvictionThreePriorities() throws Exception {
+
+    long maxSize = 100000;
+    long blockSize = calculateBlockSize(maxSize, 10);
+
+    LruBlockCache cache = new LruBlockCache(maxSize, blockSize, false,
+        (int)Math.ceil(1.2*maxSize/blockSize),
+        LruBlockCache.DEFAULT_LOAD_FACTOR,
+        LruBlockCache.DEFAULT_CONCURRENCY_LEVEL,
+        0.98f, // min
+        0.99f, // acceptable
+        0.33f, // single
+        0.33f, // multi
+        0.34f);// memory
+
+
+    Block [] singleBlocks = generateFixedBlocks(5, blockSize, "single");
+    Block [] multiBlocks = generateFixedBlocks(5, blockSize, "multi");
+    Block [] memoryBlocks = generateFixedBlocks(5, blockSize, "memory");
+
+    long expectedCacheSize = cache.heapSize();
+
+    // Add 3 blocks from each priority
+    for(int i=0;i<3;i++) {
+
+      // Just add single blocks
+      cache.cacheBlock(singleBlocks[i].blockName, singleBlocks[i].buf);
+      expectedCacheSize += singleBlocks[i].heapSize();
+
+      // Add and get multi blocks
+      cache.cacheBlock(multiBlocks[i].blockName, multiBlocks[i].buf);
+      expectedCacheSize += multiBlocks[i].heapSize();
+      cache.getBlock(multiBlocks[i].blockName, true);
+
+      // Add memory blocks as such
+      cache.cacheBlock(memoryBlocks[i].blockName, memoryBlocks[i].buf, true);
+      expectedCacheSize += memoryBlocks[i].heapSize();
+
+    }
+
+    // Do not expect any evictions yet
+    assertEquals(0, cache.getEvictionCount());
+
+    // Verify cache size
+    assertEquals(expectedCacheSize, cache.heapSize());
+
+    // Insert a single block, oldest single should be evicted
+    cache.cacheBlock(singleBlocks[3].blockName, singleBlocks[3].buf);
+
+    // Single eviction, one thing evicted
+    assertEquals(1, cache.getEvictionCount());
+    assertEquals(1, cache.getEvictedCount());
+
+    // Verify oldest single block is the one evicted
+    assertEquals(null, cache.getBlock(singleBlocks[0].blockName, true));
+
+    // Change the oldest remaining single block to a multi
+    cache.getBlock(singleBlocks[1].blockName, true);
+
+    // Insert another single block
+    cache.cacheBlock(singleBlocks[4].blockName, singleBlocks[4].buf);
+
+    // Two evictions, two evicted.
+    assertEquals(2, cache.getEvictionCount());
+    assertEquals(2, cache.getEvictedCount());
+
+    // Oldest multi block should be evicted now
+    assertEquals(null, cache.getBlock(multiBlocks[0].blockName, true));
+
+    // Insert another memory block
+    cache.cacheBlock(memoryBlocks[3].blockName, memoryBlocks[3].buf, true);
+
+    // Three evictions, three evicted.
+    assertEquals(3, cache.getEvictionCount());
+    assertEquals(3, cache.getEvictedCount());
+
+    // Oldest memory block should be evicted now
+    assertEquals(null, cache.getBlock(memoryBlocks[0].blockName, true));
+
+    // Add a block that is twice as big (should force two evictions)
+    Block [] bigBlocks = generateFixedBlocks(3, blockSize*3, "big");
+    cache.cacheBlock(bigBlocks[0].blockName, bigBlocks[0].buf);
+
+    // Four evictions, six evicted (inserted block 3X size, expect +3 evicted)
+    assertEquals(4, cache.getEvictionCount());
+    assertEquals(6, cache.getEvictedCount());
+
+    // Expect three remaining singles to be evicted
+    assertEquals(null, cache.getBlock(singleBlocks[2].blockName, true));
+    assertEquals(null, cache.getBlock(singleBlocks[3].blockName, true));
+    assertEquals(null, cache.getBlock(singleBlocks[4].blockName, true));
+
+    // Make the big block a multi block
+    cache.getBlock(bigBlocks[0].blockName, true);
+
+    // Cache another single big block
+    cache.cacheBlock(bigBlocks[1].blockName, bigBlocks[1].buf);
+
+    // Five evictions, nine evicted (3 new)
+    assertEquals(5, cache.getEvictionCount());
+    assertEquals(9, cache.getEvictedCount());
+
+    // Expect three remaining multis to be evicted
+    assertEquals(null, cache.getBlock(singleBlocks[1].blockName, true));
+    assertEquals(null, cache.getBlock(multiBlocks[1].blockName, true));
+    assertEquals(null, cache.getBlock(multiBlocks[2].blockName, true));
+
+    // Cache a big memory block
+    cache.cacheBlock(bigBlocks[2].blockName, bigBlocks[2].buf, true);
+
+    // Six evictions, twelve evicted (3 new)
+    assertEquals(6, cache.getEvictionCount());
+    assertEquals(12, cache.getEvictedCount());
+
+    // Expect three remaining in-memory to be evicted
+    assertEquals(null, cache.getBlock(memoryBlocks[1].blockName, true));
+    assertEquals(null, cache.getBlock(memoryBlocks[2].blockName, true));
+    assertEquals(null, cache.getBlock(memoryBlocks[3].blockName, true));
+  }
+
+  // test scan resistance
+  public void testScanResistance() throws Exception {
+
+    long maxSize = 100000;
+    long blockSize = calculateBlockSize(maxSize, 10);
+
+    LruBlockCache cache = new LruBlockCache(maxSize, blockSize, false,
+        (int)Math.ceil(1.2*maxSize/blockSize),
+        LruBlockCache.DEFAULT_LOAD_FACTOR,
+        LruBlockCache.DEFAULT_CONCURRENCY_LEVEL,
+        0.66f, // min
+        0.99f, // acceptable
+        0.33f, // single
+        0.33f, // multi
+        0.34f);// memory
+
+    Block [] singleBlocks = generateFixedBlocks(20, blockSize, "single");
+    Block [] multiBlocks = generateFixedBlocks(5, blockSize, "multi");
+
+    // Add 5 multi blocks
+    for(Block block : multiBlocks) {
+      cache.cacheBlock(block.blockName, block.buf);
+      cache.getBlock(block.blockName, true);
+    }
+
+    // Add 5 single blocks
+    for(int i=0;i<5;i++) {
+      cache.cacheBlock(singleBlocks[i].blockName, singleBlocks[i].buf);
+    }
+
+    // An eviction ran
+    assertEquals(1, cache.getEvictionCount());
+
+    // To drop down to 2/3 capacity, we'll need to evict 4 blocks
+    assertEquals(4, cache.getEvictedCount());
+
+    // Should have been taken off equally from single and multi
+    assertEquals(null, cache.getBlock(singleBlocks[0].blockName, true));
+    assertEquals(null, cache.getBlock(singleBlocks[1].blockName, true));
+    assertEquals(null, cache.getBlock(multiBlocks[0].blockName, true));
+    assertEquals(null, cache.getBlock(multiBlocks[1].blockName, true));
+
+    // Let's keep "scanning" by adding single blocks.  From here on we only
+    // expect evictions from the single bucket.
+
+    // Every time we reach 10 total blocks (every 4 inserts) we get 4 single
+    // blocks evicted.  Inserting 13 blocks should yield 3 more evictions and
+    // 12 more evicted.
+
+    for(int i=5;i<18;i++) {
+      cache.cacheBlock(singleBlocks[i].blockName, singleBlocks[i].buf);
+    }
+
+    // 4 total evictions, 16 total evicted
+    assertEquals(4, cache.getEvictionCount());
+    assertEquals(16, cache.getEvictedCount());
+
+    // Should now have 7 total blocks
+    assertEquals(7, cache.size());
+
+  }
+
+  // test setMaxSize
+  public void testResizeBlockCache() throws Exception {
+
+    long maxSize = 300000;
+    long blockSize = calculateBlockSize(maxSize, 31);
+
+    LruBlockCache cache = new LruBlockCache(maxSize, blockSize, false,
+        (int)Math.ceil(1.2*maxSize/blockSize),
+        LruBlockCache.DEFAULT_LOAD_FACTOR,
+        LruBlockCache.DEFAULT_CONCURRENCY_LEVEL,
+        0.98f, // min
+        0.99f, // acceptable
+        0.33f, // single
+        0.33f, // multi
+        0.34f);// memory
+
+    Block [] singleBlocks = generateFixedBlocks(10, blockSize, "single");
+    Block [] multiBlocks = generateFixedBlocks(10, blockSize, "multi");
+    Block [] memoryBlocks = generateFixedBlocks(10, blockSize, "memory");
+
+    // Add all blocks from all priorities
+    for(int i=0;i<10;i++) {
+
+      // Just add single blocks
+      cache.cacheBlock(singleBlocks[i].blockName, singleBlocks[i].buf);
+
+      // Add and get multi blocks
+      cache.cacheBlock(multiBlocks[i].blockName, multiBlocks[i].buf);
+      cache.getBlock(multiBlocks[i].blockName, true);
+
+      // Add memory blocks as such
+      cache.cacheBlock(memoryBlocks[i].blockName, memoryBlocks[i].buf, true);
+    }
+
+    // Do not expect any evictions yet
+    assertEquals(0, cache.getEvictionCount());
+
+    // Resize down to half of the original capacity
+    cache.setMaxSize((long)(maxSize * 0.5f));
+
+    // Should have run a single eviction
+    assertEquals(1, cache.getEvictionCount());
+
+    // And we expect 1/2 of the blocks to be evicted
+    assertEquals(15, cache.getEvictedCount());
+
+    // And the oldest 5 blocks from each category should be gone
+    for(int i=0;i<5;i++) {
+      assertEquals(null, cache.getBlock(singleBlocks[i].blockName, true));
+      assertEquals(null, cache.getBlock(multiBlocks[i].blockName, true));
+      assertEquals(null, cache.getBlock(memoryBlocks[i].blockName, true));
+    }
+
+    // And the newest 5 blocks should still be accessible
+    for(int i=5;i<10;i++) {
+      assertEquals(singleBlocks[i].buf, cache.getBlock(singleBlocks[i].blockName, true));
+      assertEquals(multiBlocks[i].buf, cache.getBlock(multiBlocks[i].blockName, true));
+      assertEquals(memoryBlocks[i].buf, cache.getBlock(memoryBlocks[i].blockName, true));
+    }
+  }
+
+  private Block [] generateFixedBlocks(int numBlocks, int size, String pfx) {
+    Block [] blocks = new Block[numBlocks];
+    for(int i=0;i<numBlocks;i++) {
+      blocks[i] = new Block(pfx + i, size);
+    }
+    return blocks;
+  }
+
+  private Block [] generateFixedBlocks(int numBlocks, long size, String pfx) {
+    return generateFixedBlocks(numBlocks, (int)size, pfx);
+  }
+
+  private Block [] generateRandomBlocks(int numBlocks, long maxSize) {
+    Block [] blocks = new Block[numBlocks];
+    Random r = new Random();
+    for(int i=0;i<numBlocks;i++) {
+      blocks[i] = new Block("block" + i, r.nextInt((int)maxSize)+1);
+    }
+    return blocks;
+  }
+
+  private long calculateBlockSize(long maxSize, int numBlocks) {
+    long roughBlockSize = maxSize / numBlocks;
+    int numEntries = (int)Math.ceil((1.2)*maxSize/roughBlockSize);
+    long totalOverhead = LruBlockCache.CACHE_FIXED_OVERHEAD +
+        ClassSize.CONCURRENT_HASHMAP +
+        (numEntries * ClassSize.CONCURRENT_HASHMAP_ENTRY) +
+        (LruBlockCache.DEFAULT_CONCURRENCY_LEVEL * ClassSize.CONCURRENT_HASHMAP_SEGMENT);
+    long negateBlockSize = (long)(totalOverhead/numEntries);
+    negateBlockSize += CachedBlock.PER_BLOCK_OVERHEAD;
+    return ClassSize.align((long)Math.floor((roughBlockSize - negateBlockSize)*0.99f));
+  }
+
+  private long calculateBlockSizeDefault(long maxSize, int numBlocks) {
+    long roughBlockSize = maxSize / numBlocks;
+    int numEntries = (int)Math.ceil((1.2)*maxSize/roughBlockSize);
+    long totalOverhead = LruBlockCache.CACHE_FIXED_OVERHEAD +
+        ClassSize.CONCURRENT_HASHMAP +
+        (numEntries * ClassSize.CONCURRENT_HASHMAP_ENTRY) +
+        (LruBlockCache.DEFAULT_CONCURRENCY_LEVEL * ClassSize.CONCURRENT_HASHMAP_SEGMENT);
+    long negateBlockSize = totalOverhead / numEntries;
+    negateBlockSize += CachedBlock.PER_BLOCK_OVERHEAD;
+    return ClassSize.align((long)Math.floor((roughBlockSize - negateBlockSize)*
+        LruBlockCache.DEFAULT_ACCEPTABLE_FACTOR));
+  }
+
+  private static class Block implements HeapSize {
+    String blockName;
+    ByteBuffer buf;
+
+    Block(String blockName, int size) {
+      this.blockName = blockName;
+      this.buf = ByteBuffer.allocate(size);
+    }
+
+    public long heapSize() {
+      return CachedBlock.PER_BLOCK_OVERHEAD +
+      ClassSize.align(blockName.length()) +
+      ClassSize.align(buf.capacity());
+    }
+  }
+}
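
The tests above drive LruBlockCache entirely through its public surface: cacheBlock, getBlock, heapSize, setMaxSize, and the eviction counters. As a quick reference, a minimal standalone sketch of that surface follows; it is not part of the patch, and the class name and literal sizes are illustrative assumptions.

    import java.nio.ByteBuffer;

    import org.apache.hadoop.hbase.io.hfile.LruBlockCache;

    public class LruBlockCacheSketch {
      public static void main(String[] args) {
        long maxSize = 100000;   // overall heap budget for the cache (illustrative)
        long blockSize = 10000;  // rough per-block size used for internal sizing (illustrative)
        // Third argument false as in the tests above: no background eviction thread.
        LruBlockCache cache = new LruBlockCache(maxSize, blockSize, false);

        // Cache a handful of blocks keyed by name; re-caching the same name throws.
        for (int i = 0; i < 5; i++) {
          cache.cacheBlock("block" + i, ByteBuffer.allocate((int) blockSize));
        }

        // A read promotes the block from the single- to the multi-access priority.
        ByteBuffer hit = cache.getBlock("block0", true);

        System.out.println("hit=" + (hit != null)
            + " heapSize=" + cache.heapSize()
            + " evictionRuns=" + cache.getEvictionCount()
            + " blocksEvicted=" + cache.getEvictedCount());
      }
    }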
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
new file mode 100644
index 0000000..1eb1cb6
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
@@ -0,0 +1,88 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+import static org.junit.Assert.*;
+
+/**
+ * Test {@link HFileScanner#reseekTo(byte[])}
+ */
+public class TestReseekTo {
+
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  @Test
+  public void testReseekTo() throws Exception {
+
+    Path ncTFile = new Path(HBaseTestingUtility.getTestDir(), "basic.hfile");
+    FSDataOutputStream fout = TEST_UTIL.getTestFileSystem().create(ncTFile);
+    HFile.Writer writer = new HFile.Writer(fout, 4000, "none", null);
+    int numberOfKeys = 1000;
+
+    String valueString = "Value";
+
+    List<Integer> keyList = new ArrayList<Integer>();
+    List<String> valueList = new ArrayList<String>();
+
+    for (int key = 0; key < numberOfKeys; key++) {
+      String value = valueString + key;
+      keyList.add(key);
+      valueList.add(value);
+      writer.append(Bytes.toBytes(key), Bytes.toBytes(value));
+    }
+    writer.close();
+    fout.close();
+
+    HFile.Reader reader = new HFile.Reader(TEST_UTIL.getTestFileSystem(),
+        ncTFile, null, false);
+    reader.loadFileInfo();
+    HFileScanner scanner = reader.getScanner(false, true);
+
+    scanner.seekTo();
+    for (int i = 0; i < keyList.size(); i++) {
+      Integer key = keyList.get(i);
+      String value = valueList.get(i);
+      long start = System.nanoTime();
+      scanner.seekTo(Bytes.toBytes(key));
+      System.out.println("Seek Finished in: " + (System.nanoTime() - start)/1000 + " micro s");
+      assertEquals(value, scanner.getValueString());
+    }
+
+    scanner.seekTo();
+    for (int i = 0; i < keyList.size(); i += 10) {
+      Integer key = keyList.get(i);
+      String value = valueList.get(i);
+      long start = System.nanoTime();
+      scanner.reseekTo(Bytes.toBytes(key));
+      System.out.println("Reseek Finished in: " + (System.nanoTime() - start)/1000 + " micro s");
+      assertEquals(value, scanner.getValueString());
+    }
+  }
+
+}
\ No newline at end of file
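
A condensed sketch of the read path the test above times: an absolute seekTo followed by a forward-only reseekTo, which is intended to be the cheaper call when keys are requested in increasing order. It is not part of the patch; the helper name and the integer keys 100 and 200 are illustrative assumptions, and the hfile is assumed to have been written with integer keys as in testReseekTo.

    import java.io.IOException;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.io.hfile.HFile;
    import org.apache.hadoop.hbase.io.hfile.HFileScanner;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReseekSketch {
      // Reads a value back from an existing hfile written with integer keys.
      static String readWithReseek(FileSystem fs, Path path) throws IOException {
        HFile.Reader reader = new HFile.Reader(fs, path, null, false);
        reader.loadFileInfo();
        HFileScanner scanner = reader.getScanner(false, true);
        scanner.seekTo();                      // position at the first key
        scanner.seekTo(Bytes.toBytes(100));    // absolute seek, may move backwards
        scanner.reseekTo(Bytes.toBytes(200));  // forward-only seek from the current position
        return scanner.getValueString();
      }
    }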
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
new file mode 100644
index 0000000..d2ba71f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
@@ -0,0 +1,123 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import java.io.IOException;
+
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test {@link HFileScanner#seekTo(byte[])} and its variants.
+ */
+public class TestSeekTo extends HBaseTestCase {
+
+  Path makeNewFile() throws IOException {
+    Path ncTFile = new Path(this.testDir, "basic.hfile");
+    FSDataOutputStream fout = this.fs.create(ncTFile);
+    HFile.Writer writer = new HFile.Writer(fout, 40, "none", null);
+    // 4 bytes * 3 * 2 for each key/value +
+    // 3 for keys, 15 for values = 42 (woot)
+    writer.append(Bytes.toBytes("c"), Bytes.toBytes("value"));
+    writer.append(Bytes.toBytes("e"), Bytes.toBytes("value"));
+    writer.append(Bytes.toBytes("g"), Bytes.toBytes("value"));
+    // block transition
+    writer.append(Bytes.toBytes("i"), Bytes.toBytes("value"));
+    writer.append(Bytes.toBytes("k"), Bytes.toBytes("value"));
+    writer.close();
+    fout.close();
+    return ncTFile;
+  }
+  public void testSeekBefore() throws Exception {
+    Path p = makeNewFile();
+    HFile.Reader reader = new HFile.Reader(fs, p, null, false);
+    reader.loadFileInfo();
+    HFileScanner scanner = reader.getScanner(false, true);
+    assertEquals(false, scanner.seekBefore(Bytes.toBytes("a")));
+
+    assertEquals(false, scanner.seekBefore(Bytes.toBytes("c")));
+
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("d")));
+    assertEquals("c", scanner.getKeyString());
+
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("e")));
+    assertEquals("c", scanner.getKeyString());
+
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("f")));
+    assertEquals("e", scanner.getKeyString());
+
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("g")));
+    assertEquals("e", scanner.getKeyString());
+
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("h")));
+    assertEquals("g", scanner.getKeyString());
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("i")));
+    assertEquals("g", scanner.getKeyString());
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("j")));
+    assertEquals("i", scanner.getKeyString());
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("k")));
+    assertEquals("i", scanner.getKeyString());
+    assertEquals(true, scanner.seekBefore(Bytes.toBytes("l")));
+    assertEquals("k", scanner.getKeyString());
+  }
+
+  public void testSeekTo() throws Exception {
+    Path p = makeNewFile();
+    HFile.Reader reader = new HFile.Reader(fs, p, null, false);
+    reader.loadFileInfo();
+    assertEquals(2, reader.blockIndex.count);
+    HFileScanner scanner = reader.getScanner(false, true);
+    // lies before the start of the file.
+    assertEquals(-1, scanner.seekTo(Bytes.toBytes("a")));
+
+    assertEquals(1, scanner.seekTo(Bytes.toBytes("d")));
+    assertEquals("c", scanner.getKeyString());
+
+    // Across a block boundary now.
+    assertEquals(1, scanner.seekTo(Bytes.toBytes("h")));
+    assertEquals("g", scanner.getKeyString());
+
+    assertEquals(1, scanner.seekTo(Bytes.toBytes("l")));
+    assertEquals("k", scanner.getKeyString());
+  }
+
+  public void testBlockContainingKey() throws Exception {
+    Path p = makeNewFile();
+    HFile.Reader reader = new HFile.Reader(fs, p, null, false);
+    reader.loadFileInfo();
+    System.out.println(reader.blockIndex.toString());
+    // falls before the start of the file.
+    assertEquals(-1, reader.blockIndex.blockContainingKey(Bytes.toBytes("a"), 0, 1));
+    assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("c"), 0, 1));
+    assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("d"), 0, 1));
+    assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("e"), 0, 1));
+    assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("g"), 0, 1));
+    assertEquals(0, reader.blockIndex.blockContainingKey(Bytes.toBytes("h"), 0, 1));
+    assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("i"), 0, 1));
+    assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("j"), 0, 1));
+    assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("k"), 0, 1));
+    assertEquals(1, reader.blockIndex.blockContainingKey(Bytes.toBytes("l"), 0, 1));
+  }
+}
\ No newline at end of file
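
For reference, a small sketch interpreting the seekTo return codes the assertions above rely on: -1 when the key sorts before the file's first key, 1 when there is no exact match and the scanner lands on the preceding key, and (per the scanner's contract, as understood here) 0 on an exact match. It is not part of the patch; the helper name is an illustrative assumption.

    import java.io.IOException;

    import org.apache.hadoop.hbase.io.hfile.HFileScanner;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SeekToSketch {
      // Interprets the seekTo return codes the assertions above depend on.
      static void describeSeek(HFileScanner scanner, String key) throws IOException {
        int rc = scanner.seekTo(Bytes.toBytes(key));
        if (rc == -1) {
          System.out.println(key + " sorts before the first key in the file");
        } else if (rc == 0) {
          System.out.println("exact match; positioned on " + scanner.getKeyString());
        } else {
          // rc == 1: no exact match; positioned on the greatest key before the requested one
          System.out.println("positioned on preceding key " + scanner.getKeyString());
        }
      }
    }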
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java
new file mode 100644
index 0000000..5a5c3c6
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java
@@ -0,0 +1,244 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapred;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hbase.*;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapReduceBase;
+import org.apache.hadoop.mapred.MiniMRCluster;
+import org.apache.hadoop.mapred.OutputCollector;
+import org.apache.hadoop.mapred.Reporter;
+
+/**
+ * Test Map/Reduce job over HBase tables. The map/reduce process we're testing
+ * on our tables is simple - take every row in the table, reverse the value of
+ * a particular cell, and write it back to the table.
+ */
+public class TestTableMapReduce extends MultiRegionTable {
+  private static final Log LOG =
+    LogFactory.getLog(TestTableMapReduce.class.getName());
+
+  static final String MULTI_REGION_TABLE_NAME = "mrtest";
+  static final byte[] INPUT_FAMILY = Bytes.toBytes("contents");
+  static final byte[] OUTPUT_FAMILY = Bytes.toBytes("text");
+
+  private static final byte [][] columns = new byte [][] {
+    INPUT_FAMILY,
+    OUTPUT_FAMILY
+  };
+
+  /** constructor */
+  public TestTableMapReduce() {
+    super(Bytes.toString(INPUT_FAMILY));
+    desc = new HTableDescriptor(MULTI_REGION_TABLE_NAME);
+    desc.addFamily(new HColumnDescriptor(INPUT_FAMILY));
+    desc.addFamily(new HColumnDescriptor(OUTPUT_FAMILY));
+  }
+
+  /**
+   * Pass the given key and the processed record on to the reducer.
+   */
+  public static class ProcessContentsMapper
+  extends MapReduceBase
+  implements TableMap<ImmutableBytesWritable, Put> {
+    /**
+     * Pass the key, and reversed value to reduce
+     * @param key
+     * @param value
+     * @param output
+     * @param reporter
+     * @throws IOException
+     */
+    public void map(ImmutableBytesWritable key, Result value,
+      OutputCollector<ImmutableBytesWritable, Put> output,
+      Reporter reporter)
+    throws IOException {
+      if (value.size() != 1) {
+        throw new IOException("There should only be one input column");
+      }
+      Map<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+        cf = value.getMap();
+      if(!cf.containsKey(INPUT_FAMILY)) {
+        throw new IOException("Wrong input columns. Missing: '" +
+          Bytes.toString(INPUT_FAMILY) + "'.");
+      }
+
+      // Get the original value and reverse it
+
+      String originalValue = new String(value.getValue(INPUT_FAMILY, null),
+        HConstants.UTF8_ENCODING);
+      StringBuilder newValue = new StringBuilder(originalValue);
+      newValue.reverse();
+
+      // Now set the value to be collected
+
+      Put outval = new Put(key.get());
+      outval.add(OUTPUT_FAMILY, null, Bytes.toBytes(newValue.toString()));
+      output.collect(key, outval);
+    }
+  }
+
+  /**
+   * Test a map/reduce against a multi-region table
+   * @throws IOException
+   */
+  public void testMultiRegionTable() throws IOException {
+    runTestOnTable(new HTable(conf, MULTI_REGION_TABLE_NAME));
+  }
+
+  private void runTestOnTable(HTable table) throws IOException {
+    MiniMRCluster mrCluster = new MiniMRCluster(2, fs.getUri().toString(), 1);
+
+    JobConf jobConf = null;
+    try {
+      LOG.info("Before map/reduce startup");
+      jobConf = new JobConf(conf, TestTableMapReduce.class);
+      jobConf.setJobName("process column contents");
+      jobConf.setNumReduceTasks(1);
+      TableMapReduceUtil.initTableMapJob(Bytes.toString(table.getTableName()),
+        Bytes.toString(INPUT_FAMILY), ProcessContentsMapper.class,
+        ImmutableBytesWritable.class, Put.class, jobConf);
+      TableMapReduceUtil.initTableReduceJob(Bytes.toString(table.getTableName()),
+        IdentityTableReduce.class, jobConf);
+
+      LOG.info("Started " + Bytes.toString(table.getTableName()));
+      JobClient.runJob(jobConf);
+      LOG.info("After map/reduce completion");
+
+      // verify map-reduce results
+      verify(Bytes.toString(table.getTableName()));
+    } finally {
+      mrCluster.shutdown();
+      if (jobConf != null) {
+        FileUtil.fullyDelete(new File(jobConf.get("hadoop.tmp.dir")));
+      }
+    }
+  }
+
+  private void verify(String tableName) throws IOException {
+    HTable table = new HTable(conf, tableName);
+    boolean verified = false;
+    long pause = conf.getLong("hbase.client.pause", 5 * 1000);
+    int numRetries = conf.getInt("hbase.client.retries.number", 5);
+    for (int i = 0; i < numRetries; i++) {
+      try {
+        LOG.info("Verification attempt #" + i);
+        verifyAttempt(table);
+        verified = true;
+        break;
+      } catch (NullPointerException e) {
+        // If here, a cell was empty.  Presume it's because updates came in
+        // after the scanner had been opened.  Wait a while and retry.
+        LOG.debug("Verification attempt failed: " + e.getMessage());
+      }
+      try {
+        Thread.sleep(pause);
+      } catch (InterruptedException e) {
+        // continue
+      }
+    }
+    assertTrue(verified);
+  }
+
+  /**
+   * Looks at every value of the mapreduce output and verifies that indeed
+   * the values have been reversed.
+   * @param table Table to scan.
+   * @throws IOException
+   * @throws NullPointerException if we failed to find a cell value
+   */
+  private void verifyAttempt(final HTable table) throws IOException, NullPointerException {
+    Scan scan = new Scan();
+    scan.addColumns(columns);
+    ResultScanner scanner = table.getScanner(scan);
+    try {
+      for (Result r : scanner) {
+        if (LOG.isDebugEnabled()) {
+          if (r.size() > 2 ) {
+            throw new IOException("Too many results, expected 2 got " +
+              r.size());
+          }
+        }
+        byte[] firstValue = null;
+        byte[] secondValue = null;
+        int count = 0;
+         for(KeyValue kv : r.list()) {
+          if (count == 0) {
+            firstValue = kv.getValue();
+          }
+          if (count == 1) {
+            secondValue = kv.getValue();
+          }
+          count++;
+          if (count == 2) {
+            break;
+          }
+        }
+
+
+        String first = "";
+        if (firstValue == null) {
+          throw new NullPointerException(Bytes.toString(r.getRow()) +
+            ": first value is null");
+        }
+        first = new String(firstValue, HConstants.UTF8_ENCODING);
+
+        String second = "";
+        if (secondValue == null) {
+          throw new NullPointerException(Bytes.toString(r.getRow()) +
+            ": second value is null");
+        }
+        byte[] secondReversed = new byte[secondValue.length];
+        for (int i = 0, j = secondValue.length - 1; j >= 0; j--, i++) {
+          secondReversed[i] = secondValue[j];
+        }
+        second = new String(secondReversed, HConstants.UTF8_ENCODING);
+
+        if (first.compareTo(second) != 0) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("second key is not the reverse of first. row=" +
+                r.getRow() + ", first value=" + first + ", second value=" +
+                second);
+          }
+          fail();
+        }
+      }
+    } finally {
+      scanner.close();
+    }
+  }
+}
\ No newline at end of file
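
The cluster setup above obscures the actual job wiring, so here is a condensed sketch of what runTestOnTable configures: a table scan feeding ProcessContentsMapper, followed by the identity reducer that writes the reversed values back as Puts. It is not part of the patch; it assumes the mrtest table with its contents family already exists and omits the MiniMRCluster bookkeeping.

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapred.IdentityTableReduce;
    import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapred.TestTableMapReduce;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class ReverseJobSketch {
      // Scan "mrtest"/"contents", reverse each value in the mapper, and write
      // the result back through the identity table reducer.
      static void run(JobConf jobConf) throws IOException {
        jobConf.setJobName("process column contents");
        jobConf.setNumReduceTasks(1);
        TableMapReduceUtil.initTableMapJob("mrtest", "contents",
            TestTableMapReduce.ProcessContentsMapper.class,
            ImmutableBytesWritable.class, Put.class, jobConf);
        TableMapReduceUtil.initTableReduceJob("mrtest",
            IdentityTableReduce.class, jobConf);
        JobClient.runJob(jobConf);
      }
    }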
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/NMapInputFormat.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/NMapInputFormat.java
new file mode 100644
index 0000000..563ee57
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/NMapInputFormat.java
@@ -0,0 +1,127 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputFormat;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+
+/**
+ * Input format that creates as many map tasks as configured in
+ * mapred.map.tasks, each provided with a single NullWritable key/value
+ * record. This can be useful for mappers that don't have any real
+ * input (e.g. when the mapper is simply producing random data as
+ * output).
+ */
+public class NMapInputFormat extends InputFormat<NullWritable, NullWritable> {
+
+  @Override
+  public RecordReader<NullWritable, NullWritable> createRecordReader(
+      InputSplit split,
+      TaskAttemptContext tac) throws IOException, InterruptedException {
+    return new SingleRecordReader<NullWritable, NullWritable>(
+        NullWritable.get(), NullWritable.get());
+  }
+
+  @Override
+  public List<InputSplit> getSplits(JobContext context) throws IOException,
+      InterruptedException {
+    int count = context.getConfiguration().getInt("mapred.map.tasks", 1);
+    List<InputSplit> splits = new ArrayList<InputSplit>(count);
+    for (int i = 0; i < count; i++) {
+      splits.add(new NullInputSplit());
+    }
+    return splits;
+  }
+
+  private static class NullInputSplit extends InputSplit implements Writable {
+    @Override
+    public long getLength() throws IOException, InterruptedException {
+      return 0;
+    }
+
+    @Override
+    public String[] getLocations() throws IOException, InterruptedException {
+      return new String[] {};
+    }
+
+    @Override
+    public void readFields(DataInput in) throws IOException {
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+    }
+  }
+  
+  private static class SingleRecordReader<K, V>
+    extends RecordReader<K, V> {
+    
+    private final K key;
+    private final V value;
+    boolean providedKey = false;
+
+    SingleRecordReader(K key, V value) {
+      this.key = key;
+      this.value = value;
+    }
+
+    @Override
+    public void close() {
+    }
+
+    @Override
+    public K getCurrentKey() {
+      return key;
+    }
+
+    @Override
+    public V getCurrentValue(){
+      return value;
+    }
+
+    @Override
+    public float getProgress() {
+      return 0;
+    }
+
+    @Override
+    public void initialize(InputSplit split, TaskAttemptContext tac) {
+    }
+
+    @Override
+    public boolean nextKeyValue() {
+      if (providedKey) return false;
+      providedKey = true;
+      return true;
+    }
+    
+  }
+}
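
A minimal sketch of how NMapInputFormat might be wired into a job: the number of synthetic splits is taken from mapred.map.tasks, and each map task receives exactly one NullWritable record. It is not part of the patch; the job name is an illustrative assumption and the stock identity Mapper stands in for a real data-generating mapper.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.mapreduce.NMapInputFormat;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class NMapSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("mapred.map.tasks", 10);        // NMapInputFormat creates one split per task
        Job job = new Job(conf, "no-input-job");    // hypothetical job name
        job.setInputFormatClass(NMapInputFormat.class);
        job.setMapperClass(Mapper.class);           // identity mapper; a real job would generate data here
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(NullWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }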
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
new file mode 100644
index 0000000..c5d56cc
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
@@ -0,0 +1,370 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotSame;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.PerformanceEvaluation;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskAttemptID;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+/**
+ * Simple test for {@link KeyValueSortReducer} and {@link HFileOutputFormat}.
+ * Sets up and runs a mapreduce job that writes hfile output.
+ * Creates a few inner classes to implement splits and an input format that
+ * emits keys and values like those of {@link PerformanceEvaluation}.  Creates
+ * as many splits as there are configured "mapred.map.tasks" map tasks.
+ */
+public class TestHFileOutputFormat  {
+  private final static int ROWSPERSPLIT = 1024;
+
+  private static final byte[] FAMILY_NAME = PerformanceEvaluation.FAMILY_NAME;
+  private static final byte[] TABLE_NAME = Bytes.toBytes("TestTable");
+  
+  private HBaseTestingUtility util = new HBaseTestingUtility();
+  
+  private static Log LOG = LogFactory.getLog(TestHFileOutputFormat.class);
+  
+  /**
+   * Simple mapper that makes KeyValue output.
+   */
+  static class RandomKVGeneratingMapper
+  extends Mapper<NullWritable, NullWritable,
+                 ImmutableBytesWritable, KeyValue> {
+    
+    private int keyLength;
+    private static final int KEYLEN_DEFAULT=10;
+    private static final String KEYLEN_CONF="randomkv.key.length";
+
+    private int valLength;
+    private static final int VALLEN_DEFAULT=10;
+    private static final String VALLEN_CONF="randomkv.val.length";
+    
+    @Override
+    protected void setup(Context context) throws IOException,
+        InterruptedException {
+      super.setup(context);
+      
+      Configuration conf = context.getConfiguration();
+      keyLength = conf.getInt(KEYLEN_CONF, KEYLEN_DEFAULT);
+      valLength = conf.getInt(VALLEN_CONF, VALLEN_DEFAULT);
+    }
+
+    protected void map(
+        NullWritable n1, NullWritable n2,
+        Mapper<NullWritable, NullWritable,
+               ImmutableBytesWritable,KeyValue>.Context context)
+        throws java.io.IOException ,InterruptedException
+    {
+
+      byte keyBytes[] = new byte[keyLength];
+      byte valBytes[] = new byte[valLength];
+      
+      int taskId = context.getTaskAttemptID().getTaskID().getId();
+      assert taskId < Byte.MAX_VALUE : "Unit tests don't support > 127 tasks!";
+
+      Random random = new Random();
+      for (int i = 0; i < ROWSPERSPLIT; i++) {
+
+        random.nextBytes(keyBytes);
+        // Ensure that unique tasks generate unique keys
+        keyBytes[keyLength - 1] = (byte)(taskId & 0xFF);
+        random.nextBytes(valBytes);
+        ImmutableBytesWritable key = new ImmutableBytesWritable(keyBytes);
+
+        KeyValue kv = new KeyValue(keyBytes, PerformanceEvaluation.FAMILY_NAME,
+            PerformanceEvaluation.QUALIFIER_NAME, valBytes);
+        context.write(key, kv);
+      }
+    }
+  }
+
+  @Before
+  public void cleanupDir() throws IOException {
+    util.cleanupTestDir();
+  }
+  
+  
+  private void setupRandomGeneratorMapper(Job job) {
+    job.setInputFormatClass(NMapInputFormat.class);
+    job.setMapperClass(RandomKVGeneratingMapper.class);
+    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
+    job.setMapOutputValueClass(KeyValue.class);
+  }
+
+  /**
+   * Test that {@link HFileOutputFormat} RecordWriter amends timestamps if
+   * passed a keyvalue whose timestamp is {@link HConstants#LATEST_TIMESTAMP}.
+   * @see <a href="https://issues.apache.org/jira/browse/HBASE-2615">HBASE-2615</a>
+   */
+  @Test
+  public void test_LATEST_TIMESTAMP_isReplaced()
+  throws IOException, InterruptedException {
+    Configuration conf = new Configuration(this.util.getConfiguration());
+    RecordWriter<ImmutableBytesWritable, KeyValue> writer = null;
+    TaskAttemptContext context = null;
+    Path dir =
+      HBaseTestingUtility.getTestDir("test_LATEST_TIMESTAMP_isReplaced");
+    try {
+      Job job = new Job(conf);
+      FileOutputFormat.setOutputPath(job, dir);
+      context = new TaskAttemptContext(job.getConfiguration(),
+        new TaskAttemptID());
+      HFileOutputFormat hof = new HFileOutputFormat();
+      writer = hof.getRecordWriter(context);
+      final byte [] b = Bytes.toBytes("b");
+
+      // Test 1.  Pass a KV that has a ts of LATEST_TIMESTAMP.  It should be
+      // changed by call to write.  Check all in kv is same but ts.
+      KeyValue kv = new KeyValue(b, b, b);
+      KeyValue original = kv.clone();
+      writer.write(new ImmutableBytesWritable(), kv);
+      assertFalse(original.equals(kv));
+      assertTrue(Bytes.equals(original.getRow(), kv.getRow()));
+      assertTrue(original.matchingColumn(kv.getFamily(), kv.getQualifier()));
+      assertNotSame(original.getTimestamp(), kv.getTimestamp());
+      assertNotSame(HConstants.LATEST_TIMESTAMP, kv.getTimestamp());
+
+      // Test 2. Now test passing a kv that has explicit ts.  It should not be
+      // changed by call to record write.
+      kv = new KeyValue(b, b, b, kv.getTimestamp() - 1, b);
+      original = kv.clone();
+      writer.write(new ImmutableBytesWritable(), kv);
+      assertTrue(original.equals(kv));
+    } finally {
+      if (writer != null && context != null) writer.close(context);
+      dir.getFileSystem(conf).delete(dir, true);
+    }
+  }
+
+  /**
+   * Run small MR job.
+   */
+  @Test
+  public void testWritingPEData() throws Exception {
+    Configuration conf = util.getConfiguration();
+    Path testDir = HBaseTestingUtility.getTestDir("testWritingPEData");
+    FileSystem fs = testDir.getFileSystem(conf);
+    
+    // Set down this value or we OOME in eclipse.
+    conf.setInt("io.sort.mb", 20);
+    // Write a few files.
+    conf.setLong("hbase.hregion.max.filesize", 64 * 1024);
+    
+    Job job = new Job(conf, "testWritingPEData");
+    setupRandomGeneratorMapper(job);
+    // This partitioner doesn't work well for number keys, but we use it
+    // anyway just to demonstrate how to configure it.
+    byte[] startKey = new byte[RandomKVGeneratingMapper.KEYLEN_DEFAULT];
+    byte[] endKey = new byte[RandomKVGeneratingMapper.KEYLEN_DEFAULT];
+    
+    Arrays.fill(startKey, (byte)0);
+    Arrays.fill(endKey, (byte)0xff);
+    
+    job.setPartitionerClass(SimpleTotalOrderPartitioner.class);
+    // Set start and end rows for partitioner.
+    SimpleTotalOrderPartitioner.setStartKey(job.getConfiguration(), startKey);
+    SimpleTotalOrderPartitioner.setEndKey(job.getConfiguration(), endKey);
+    job.setReducerClass(KeyValueSortReducer.class);
+    job.setOutputFormatClass(HFileOutputFormat.class);
+    job.setNumReduceTasks(4);
+    
+    FileOutputFormat.setOutputPath(job, testDir);
+    assertTrue(job.waitForCompletion(false));
+    FileStatus [] files = fs.listStatus(testDir);
+    assertTrue(files.length > 0);
+  }
+  
+  @Test
+  public void testJobConfiguration() throws Exception {
+    Job job = new Job();
+    HTable table = Mockito.mock(HTable.class);
+    byte[][] mockKeys = new byte[][] {
+        HConstants.EMPTY_BYTE_ARRAY,
+        Bytes.toBytes("aaa"),
+        Bytes.toBytes("ggg"),
+        Bytes.toBytes("zzz")
+    };
+    Mockito.doReturn(mockKeys).when(table).getStartKeys();
+    
+    HFileOutputFormat.configureIncrementalLoad(job, table);
+    assertEquals(4, job.getNumReduceTasks());
+  }
+  
+  private byte [][] generateRandomStartKeys(int numKeys) {
+    Random random = new Random();
+    byte[][] ret = new byte[numKeys][];
+    // first region start key is always empty
+    ret[0] = HConstants.EMPTY_BYTE_ARRAY;
+    for (int i = 1; i < numKeys; i++) {
+      ret[i] = PerformanceEvaluation.generateValue(random);
+    }
+    return ret;
+  }
+
+  @Test
+  public void testMRIncrementalLoad() throws Exception {
+    doIncrementalLoadTest(false);
+  }
+
+  @Test
+  public void testMRIncrementalLoadWithSplit() throws Exception {
+    doIncrementalLoadTest(true);
+  }
+  
+  private void doIncrementalLoadTest(
+      boolean shouldChangeRegions) throws Exception {
+    Configuration conf = util.getConfiguration();
+    Path testDir = HBaseTestingUtility.getTestDir("testLocalMRIncrementalLoad");
+    byte[][] startKeys = generateRandomStartKeys(5);
+    
+    try {
+      util.startMiniCluster();
+      HBaseAdmin admin = new HBaseAdmin(conf);
+      HTable table = util.createTable(TABLE_NAME, FAMILY_NAME);
+      int numRegions = util.createMultiRegions(
+          util.getConfiguration(), table, FAMILY_NAME,
+          startKeys);
+      assertEquals("Should make 5 regions",
+          numRegions, 5);
+      assertEquals("Should start with empty table",
+          0, util.countRows(table));
+
+      // Generate the bulk load files
+      util.startMiniMapReduceCluster();
+      runIncrementalPELoad(conf, table, testDir);
+      // This doesn't write into the table, just makes files
+      assertEquals("HFOF should not touch actual table",
+          0, util.countRows(table));
+  
+      if (shouldChangeRegions) {
+        LOG.info("Changing regions in table");
+        admin.disableTable(table.getTableName());
+        while(util.getMiniHBaseCluster().getMaster().getAssignmentManager().
+            isRegionsInTransition()) {
+          Threads.sleep(1000);
+          LOG.info("Waiting on table to finish disabling");
+        }
+        byte[][] newStartKeys = generateRandomStartKeys(15);
+        util.createMultiRegions(util.getConfiguration(),
+            table, FAMILY_NAME, newStartKeys);
+        admin.enableTable(table.getTableName());
+        while (table.getRegionsInfo().size() != 15 ||
+            !admin.isTableAvailable(table.getTableName())) {
+          Thread.sleep(1000);
+          LOG.info("Waiting for new region assignment to happen");
+        }
+      }
+      
+      // Perform the actual load
+      new LoadIncrementalHFiles(conf).doBulkLoad(testDir, table);
+      
+      // Ensure data shows up
+      int expectedRows = conf.getInt("mapred.map.tasks", 1) * ROWSPERSPLIT;
+      assertEquals("LoadIncrementalHFiles should put expected data in table",
+          expectedRows, util.countRows(table));
+      String tableDigestBefore = util.checksumRows(table);
+            
+      // Cause regions to reopen
+      admin.disableTable(TABLE_NAME);
+      while (!admin.isTableDisabled(TABLE_NAME)) {
+        Thread.sleep(1000);
+        LOG.info("Waiting for table to disable"); 
+      }
+      admin.enableTable(TABLE_NAME);
+      util.waitTableAvailable(TABLE_NAME, 30000);
+      assertEquals("Data should remain after reopening of regions",
+          tableDigestBefore, util.checksumRows(table));
+    } finally {
+      util.shutdownMiniMapReduceCluster();
+      util.shutdownMiniCluster();
+    }
+  }
+
+  private void runIncrementalPELoad(
+      Configuration conf, HTable table, Path outDir)
+  throws Exception {
+    Job job = new Job(conf, "testLocalMRIncrementalLoad");
+    setupRandomGeneratorMapper(job);
+    HFileOutputFormat.configureIncrementalLoad(job, table);
+    FileOutputFormat.setOutputPath(job, outDir);
+    
+    assertEquals(table.getRegionsInfo().size(),
+        job.getNumReduceTasks());
+    
+    assertTrue(job.waitForCompletion(true));
+  }
+  
+  public static void main(String args[]) throws Exception {
+    new TestHFileOutputFormat().manualTest(args);
+  }
+  
+  public void manualTest(String args[]) throws Exception {
+    Configuration conf = HBaseConfiguration.create();    
+    util = new HBaseTestingUtility(conf);
+    if ("newtable".equals(args[0])) {
+      byte[] tname = args[1].getBytes();
+      HTable table = util.createTable(tname, FAMILY_NAME);
+      HBaseAdmin admin = new HBaseAdmin(conf);
+      admin.disableTable(tname);
+      util.createMultiRegions(conf, table, FAMILY_NAME,
+          generateRandomStartKeys(5));
+      admin.enableTable(tname);
+    } else if ("incremental".equals(args[0])) {
+      byte[] tname = args[1].getBytes();
+      HTable table = new HTable(conf, tname);
+      Path outDir = new Path("incremental-out");
+      runIncrementalPELoad(conf, table, outDir);
+    } else {
+      throw new RuntimeException(
+          "usage: TestHFileOutputFormat newtable | incremental");
+    }
+  }
+}
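
The incremental-load tests above boil down to a two-phase flow: run a job whose output is routed through HFileOutputFormat.configureIncrementalLoad (which, among other settings, matches the number of reduce tasks to the table's regions), then hand the resulting directory to LoadIncrementalHFiles. A condensed sketch of that flow, mirroring runIncrementalPELoad and doIncrementalLoadTest, follows; it is not part of the patch, and the job name and the mapper parameter are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.hbase.mapreduce.NMapInputFormat;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class BulkLoadSketch {
      // Phase 1: run a job whose mapper emits (ImmutableBytesWritable, KeyValue)
      // pairs into HFiles laid out to match the table's regions.
      // Phase 2: hand the output directory to LoadIncrementalHFiles.
      static void bulkLoad(Configuration conf, HTable table, Path outDir,
          Class<? extends Mapper> mapperClass) throws Exception {
        Job job = new Job(conf, "bulk-load-prepare");     // hypothetical job name
        job.setInputFormatClass(NMapInputFormat.class);   // or any input that drives the mapper
        job.setMapperClass(mapperClass);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(KeyValue.class);
        HFileOutputFormat.configureIncrementalLoad(job, table);  // one reduce task per region
        FileOutputFormat.setOutputPath(job, outDir);
        if (!job.waitForCompletion(true)) {
          throw new RuntimeException("HFile generation failed");
        }
        new LoadIncrementalHFiles(conf).doBulkLoad(outDir, table);
      }
    }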
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
new file mode 100644
index 0000000..c6f3603
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java
@@ -0,0 +1,128 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.util.ArrayList;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser;
+import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.BadTsvLineException;
+import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.ParsedLine;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+import com.google.common.base.Joiner;
+import com.google.common.base.Splitter;
+import com.google.common.collect.Iterables;
+
+import static org.junit.Assert.*;
+
+public class TestImportTsv {
+  @Test
+  public void testTsvParserSpecParsing() {
+    TsvParser parser;
+
+    parser = new TsvParser("HBASE_ROW_KEY", "\t");
+    assertNull(parser.getFamily(0));
+    assertNull(parser.getQualifier(0));
+    assertEquals(0, parser.getRowKeyColumnIndex());
+
+    parser = new TsvParser("HBASE_ROW_KEY,col1:scol1", "\t");
+    assertNull(parser.getFamily(0));
+    assertNull(parser.getQualifier(0));
+    assertBytesEquals(Bytes.toBytes("col1"), parser.getFamily(1));
+    assertBytesEquals(Bytes.toBytes("scol1"), parser.getQualifier(1));
+    assertEquals(0, parser.getRowKeyColumnIndex());
+
+    parser = new TsvParser("HBASE_ROW_KEY,col1:scol1,col1:scol2", "\t");
+    assertNull(parser.getFamily(0));
+    assertNull(parser.getQualifier(0));
+    assertBytesEquals(Bytes.toBytes("col1"), parser.getFamily(1));
+    assertBytesEquals(Bytes.toBytes("scol1"), parser.getQualifier(1));
+    assertBytesEquals(Bytes.toBytes("col1"), parser.getFamily(2));
+    assertBytesEquals(Bytes.toBytes("scol2"), parser.getQualifier(2));
+    assertEquals(0, parser.getRowKeyColumnIndex());
+  }
+
+  @Test
+  public void testTsvParser() throws BadTsvLineException {
+    TsvParser parser = new TsvParser("col_a,col_b:qual,HBASE_ROW_KEY,col_d", "\t");
+    assertBytesEquals(Bytes.toBytes("col_a"), parser.getFamily(0));
+    assertBytesEquals(HConstants.EMPTY_BYTE_ARRAY, parser.getQualifier(0));
+    assertBytesEquals(Bytes.toBytes("col_b"), parser.getFamily(1));
+    assertBytesEquals(Bytes.toBytes("qual"), parser.getQualifier(1));
+    assertNull(parser.getFamily(2));
+    assertNull(parser.getQualifier(2));
+    assertEquals(2, parser.getRowKeyColumnIndex());
+    
+    byte[] line = Bytes.toBytes("val_a\tval_b\tval_c\tval_d");
+    ParsedLine parsed = parser.parse(line, line.length);
+    checkParsing(parsed, Splitter.on("\t").split(Bytes.toString(line)));
+  }
+
+  private void checkParsing(ParsedLine parsed, Iterable<String> expected) {
+    ArrayList<String> parsedCols = new ArrayList<String>();
+    for (int i = 0; i < parsed.getColumnCount(); i++) {
+      parsedCols.add(Bytes.toString(
+          parsed.getLineBytes(),
+          parsed.getColumnOffset(i),
+          parsed.getColumnLength(i)));
+    }
+    if (!Iterables.elementsEqual(parsedCols, expected)) {
+      fail("Expected: " + Joiner.on(",").join(expected) + "\n" + 
+          "Got:" + Joiner.on(",").join(parsedCols));
+    }
+  }
+  
+  private void assertBytesEquals(byte[] a, byte[] b) {
+    assertEquals(Bytes.toStringBinary(a), Bytes.toStringBinary(b));
+  }
+
+  /**
+   * Test cases that throw BadTsvLineException
+   */
+  @Test(expected=BadTsvLineException.class)
+  public void testTsvParserBadTsvLineExcessiveColumns() throws BadTsvLineException {
+    TsvParser parser = new TsvParser("HBASE_ROW_KEY,col_a", "\t");
+    byte[] line = Bytes.toBytes("val_a\tval_b\tval_c");
+    ParsedLine parsed = parser.parse(line, line.length);
+  }
+
+  @Test(expected=BadTsvLineException.class)
+  public void testTsvParserBadTsvLineZeroColumn() throws BadTsvLineException {
+    TsvParser parser = new TsvParser("HBASE_ROW_KEY,col_a", "\t");
+    byte[] line = Bytes.toBytes("");
+    ParsedLine parsed = parser.parse(line, line.length);
+  }
+
+  @Test(expected=BadTsvLineException.class)
+  public void testTsvParserBadTsvLineOnlyKey() throws BadTsvLineException {
+    TsvParser parser = new TsvParser("HBASE_ROW_KEY,col_a", "\t");
+    byte[] line = Bytes.toBytes("key_only");
+    ParsedLine parsed = parser.parse(line, line.length);
+  }
+
+  @Test(expected=BadTsvLineException.class)
+  public void testTsvParserBadTsvLineNoRowKey() throws BadTsvLineException {
+    TsvParser parser = new TsvParser("col_a,HBASE_ROW_KEY", "\t");
+    byte[] line = Bytes.toBytes("only_cola_data_and_no_row_key");
+    ParsedLine parsed = parser.parse(line, line.length);
+  }
+}
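
For reference, a small sketch of the TsvParser column-spec convention the tests above exercise: one spec entry per TSV column, with HBASE_ROW_KEY marking the row key and family:qualifier pairs for the rest. It is not part of the patch; the spec string "d:c1"/"d:c2" and the sample line are illustrative assumptions, and the sketch declares the hbase.mapreduce package since TsvParser may not be visible outside it.

    package org.apache.hadoop.hbase.mapreduce;

    import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser;
    import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.BadTsvLineException;
    import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.ParsedLine;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TsvParserSketch {
      public static void main(String[] args) throws BadTsvLineException {
        // One spec entry per TSV column; HBASE_ROW_KEY marks the row key column.
        TsvParser parser = new TsvParser("HBASE_ROW_KEY,d:c1,d:c2", "\t");

        byte[] line = Bytes.toBytes("row1\tvalue1\tvalue2");
        ParsedLine parsed = parser.parse(line, line.length);

        for (int i = 0; i < parsed.getColumnCount(); i++) {
          String cell = Bytes.toString(parsed.getLineBytes(),
              parsed.getColumnOffset(i), parsed.getColumnLength(i));
          if (i == parser.getRowKeyColumnIndex()) {
            System.out.println("row key = " + cell);
          } else {
            System.out.println(Bytes.toString(parser.getFamily(i)) + ":"
                + Bytes.toString(parser.getQualifier(i)) + " = " + cell);
          }
        }
      }
    }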
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
new file mode 100644
index 0000000..1f0eb94
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
@@ -0,0 +1,189 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+import static org.junit.Assert.*;
+
+/**
+ * Test cases for the "load" half of the HFileOutputFormat bulk load
+ * functionality. These tests run faster than the full MR cluster
+ * tests in TestHFileOutputFormat.
+ */
+public class TestLoadIncrementalHFiles {
+
+  private static final byte[] TABLE = Bytes.toBytes("mytable");
+  private static final byte[] QUALIFIER = Bytes.toBytes("myqual");
+  private static final byte[] FAMILY = Bytes.toBytes("myfam");
+
+  private static final byte[][] SPLIT_KEYS = new byte[][] {
+    Bytes.toBytes("ddd"),
+    Bytes.toBytes("ppp")
+  };
+
+  public static int BLOCKSIZE = 64*1024;
+  public static String COMPRESSION =
+    Compression.Algorithm.NONE.getName();
+
+  private HBaseTestingUtility util = new HBaseTestingUtility();
+
+  /**
+   * Test case that creates some regions and loads
+   * HFiles that fit snugly inside those regions
+   */
+  @Test
+  public void testSimpleLoad() throws Exception {
+    runTest("testSimpleLoad",
+        new byte[][][] {
+          new byte[][]{ Bytes.toBytes("aaaa"), Bytes.toBytes("cccc") },
+          new byte[][]{ Bytes.toBytes("ddd"), Bytes.toBytes("ooo") },
+    });
+  }
+
+  /**
+   * Test case that creates some regions and loads
+   * HFiles that cross the boundaries of those regions
+   */
+  @Test
+  public void testRegionCrossingLoad() throws Exception {
+    runTest("testRegionCrossingLoad",
+        new byte[][][] {
+          new byte[][]{ Bytes.toBytes("aaaa"), Bytes.toBytes("eee") },
+          new byte[][]{ Bytes.toBytes("fff"), Bytes.toBytes("zzz") },
+    });
+  }
+
+  private void runTest(String testName, byte[][][] hfileRanges)
+  throws Exception {
+    Path dir = HBaseTestingUtility.getTestDir(testName);
+    FileSystem fs = util.getTestFileSystem();
+    dir = dir.makeQualified(fs);
+    Path familyDir = new Path(dir, Bytes.toString(FAMILY));
+
+    int hfileIdx = 0;
+    for (byte[][] range : hfileRanges) {
+      byte[] from = range[0];
+      byte[] to = range[1];
+      createHFile(fs, new Path(familyDir, "hfile_" + hfileIdx++),
+          FAMILY, QUALIFIER, from, to, 1000);
+    }
+    int expectedRows = hfileIdx * 1000;
+
+    util.startMiniCluster();
+    try {
+      HBaseAdmin admin = new HBaseAdmin(util.getConfiguration());
+      HTableDescriptor htd = new HTableDescriptor(TABLE);
+      htd.addFamily(new HColumnDescriptor(FAMILY));
+      admin.createTable(htd, SPLIT_KEYS);
+
+      HTable table = new HTable(util.getConfiguration(), TABLE);
+      util.waitTableAvailable(TABLE, 30000);
+      LoadIncrementalHFiles loader = new LoadIncrementalHFiles(
+          util.getConfiguration());
+      loader.doBulkLoad(dir, table);
+
+      assertEquals(expectedRows, util.countRows(table));
+    } finally {
+      util.shutdownMiniCluster();
+    }
+  }
+
+  @Test
+  public void testSplitStoreFile() throws IOException {
+    Path dir = HBaseTestingUtility.getTestDir("testSplitHFile");
+    FileSystem fs = util.getTestFileSystem();
+    Path testIn = new Path(dir, "testhfile");
+    HColumnDescriptor familyDesc = new HColumnDescriptor(FAMILY);
+    createHFile(fs, testIn, FAMILY, QUALIFIER,
+        Bytes.toBytes("aaa"), Bytes.toBytes("zzz"), 1000);
+
+    Path bottomOut = new Path(dir, "bottom.out");
+    Path topOut = new Path(dir, "top.out");
+
+    LoadIncrementalHFiles.splitStoreFile(
+        util.getConfiguration(), testIn,
+        familyDesc, Bytes.toBytes("ggg"),
+        bottomOut,
+        topOut);
+
+    int rowCount = verifyHFile(bottomOut);
+    rowCount += verifyHFile(topOut);
+    assertEquals(1000, rowCount);
+  }
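+
+  // Illustrative note (not part of the original test): splitStoreFile is the
+  // same primitive the loader is expected to rely on when an input HFile spans
+  // a region boundary, as exercised indirectly by testRegionCrossingLoad above.
+  // Here the split at "ggg" yields a bottom half covering rows below "ggg" and
+  // a top half covering "ggg" and above; verifyHFile() confirms the two halves
+  // together still hold all 1000 rows.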
+
+  private int verifyHFile(Path p) throws IOException {
+    Configuration conf = util.getConfiguration();
+    HFile.Reader reader = new HFile.Reader(
+        p.getFileSystem(conf), p, null, false);
+    reader.loadFileInfo();
+    HFileScanner scanner = reader.getScanner(false, false);
+    scanner.seekTo();
+    int count = 0;
+    do {
+      count++;
+    } while (scanner.next());
+    assertTrue(count > 0);
+    return count;
+  }
+
+  /**
+   * Create an HFile with the given number of rows between a given
+   * start key and end key.
+   * TODO put me in an HFileTestUtil or something?
+   */
+  static void createHFile(
+      FileSystem fs, Path path,
+      byte[] family, byte[] qualifier,
+      byte[] startKey, byte[] endKey, int numRows) throws IOException
+  {
+    HFile.Writer writer = new HFile.Writer(fs, path, BLOCKSIZE, COMPRESSION,
+        KeyValue.KEY_COMPARATOR);
+    long now = System.currentTimeMillis();
+    try {
+      // subtract 2 since iterateOnSplits doesn't include boundary keys
+      for (byte[] key : Bytes.iterateOnSplits(startKey, endKey, numRows-2)) {
+        KeyValue kv = new KeyValue(key, family, qualifier, now, key);
+        writer.append(kv);
+      }
+    } finally {
+      writer.close();
+    }
+  }
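+
+  // A minimal usage sketch for createHFile (illustrative only; the path and
+  // row count below are made up for the example and not used by any test):
+  //
+  //   FileSystem fs = util.getTestFileSystem();
+  //   Path familyDir = new Path("/tmp/bulk", Bytes.toString(FAMILY)); // hypothetical
+  //   createHFile(fs, new Path(familyDir, "hfile_0"),
+  //       FAMILY, QUALIFIER, Bytes.toBytes("aaa"), Bytes.toBytes("mmm"), 500);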
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java
new file mode 100644
index 0000000..a69600e
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java
@@ -0,0 +1,68 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Test of simple partitioner.
+ */
+public class TestSimpleTotalOrderPartitioner extends HBaseTestCase {
+  public void testSplit() throws Exception {
+    String start = "a";
+    String end = "{";
+    SimpleTotalOrderPartitioner<byte []> p =
+      new SimpleTotalOrderPartitioner<byte []>();
+    this.conf.set(SimpleTotalOrderPartitioner.START, start);
+    this.conf.set(SimpleTotalOrderPartitioner.END, end);
+    p.setConf(this.conf);
+    ImmutableBytesWritable c = new ImmutableBytesWritable(Bytes.toBytes("c"));
+    // If one reduce, partition should be 0.
+    int partition = p.getPartition(c, HConstants.EMPTY_BYTE_ARRAY, 1);
+    assertEquals(0, partition);
+    // If two reduces, partition should be 0.
+    partition = p.getPartition(c, HConstants.EMPTY_BYTE_ARRAY, 2);
+    assertEquals(0, partition);
+    // Divide in 3.
+    partition = p.getPartition(c, HConstants.EMPTY_BYTE_ARRAY, 3);
+    assertEquals(0, partition);
+    ImmutableBytesWritable q = new ImmutableBytesWritable(Bytes.toBytes("q"));
+    partition = p.getPartition(q, HConstants.EMPTY_BYTE_ARRAY, 2);
+    assertEquals(1, partition);
+    partition = p.getPartition(q, HConstants.EMPTY_BYTE_ARRAY, 3);
+    assertEquals(2, partition);
+    // What about the start and end keys?
+    ImmutableBytesWritable startBytes =
+      new ImmutableBytesWritable(Bytes.toBytes(start));
+    partition = p.getPartition(startBytes, HConstants.EMPTY_BYTE_ARRAY, 2);
+    assertEquals(0, partition);
+    partition = p.getPartition(startBytes, HConstants.EMPTY_BYTE_ARRAY, 3);
+    assertEquals(0, partition);
+    ImmutableBytesWritable endBytes =
+      new ImmutableBytesWritable(Bytes.toBytes("z"));
+    partition = p.getPartition(endBytes, HConstants.EMPTY_BYTE_ARRAY, 2);
+    assertEquals(1, partition);
+    partition = p.getPartition(endBytes, HConstants.EMPTY_BYTE_ARRAY, 3);
+    assertEquals(2, partition);
+  }
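+
+  // Rough intuition behind the assertions above (a sketch of the expected
+  // behavior, not the partitioner's exact arithmetic): the configured
+  // [start, end) keyspace is divided into numPartitions roughly equal slices,
+  // so with start="a" and end="{", key "c" falls in the first slice for 1, 2
+  // or 3 reduces, while "q" and "z" fall in the last slice when there are 3.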
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan.java
new file mode 100644
index 0000000..cf53671
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan.java
@@ -0,0 +1,358 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Reducer;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Tests various scan start and stop row scenarios. Each range is set on a
+ * Scan and then verified in a MapReduce job to confirm the boundaries are
+ * handed over and honored correctly.
+ */
+public class TestTableInputFormatScan {
+
+  static final Log LOG = LogFactory.getLog(TestTableInputFormatScan.class);
+  static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  static final byte[] TABLE_NAME = Bytes.toBytes("scantest");
+  static final byte[] INPUT_FAMILY = Bytes.toBytes("contents");
+  static final String KEY_STARTROW = "startRow";
+  static final String KEY_LASTROW = "stpRow";
+
+  private static HTable table = null;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    // switch TIF to log at DEBUG level
+    TEST_UTIL.enableDebug(TableInputFormat.class);
+    TEST_UTIL.enableDebug(TableInputFormatBase.class);
+    // start mini hbase cluster
+    TEST_UTIL.startMiniCluster(3);
+    // create and fill table
+    table = TEST_UTIL.createTable(TABLE_NAME, INPUT_FAMILY);
+    TEST_UTIL.createMultiRegions(table, INPUT_FAMILY);
+    TEST_UTIL.loadTable(table, INPUT_FAMILY);
+    // start MR cluster
+    TEST_UTIL.startMiniMapReduceCluster();
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniMapReduceCluster();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    // nothing
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @After
+  public void tearDown() throws Exception {
+    Configuration c = TEST_UTIL.getConfiguration();
+    FileUtil.fullyDelete(new File(c.get("hadoop.tmp.dir")));
+  }
+
+  /**
+   * Pass the key and value to reduce.
+   */
+  public static class ScanMapper
+  extends TableMapper<ImmutableBytesWritable, ImmutableBytesWritable> {
+
+    /**
+     * Pass the key and value to reduce.
+     *
+     * @param key  The key, here "aaa", "aab" etc.
+     * @param value  The value is the same as the key.
+     * @param context  The task context.
+     * @throws IOException When reading the rows fails.
+     */
+    @Override
+    public void map(ImmutableBytesWritable key, Result value,
+      Context context)
+    throws IOException, InterruptedException {
+      if (value.size() != 1) {
+        throw new IOException("There should only be one input column");
+      }
+      Map<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+        cf = value.getMap();
+      if(!cf.containsKey(INPUT_FAMILY)) {
+        throw new IOException("Wrong input columns. Missing: '" +
+          Bytes.toString(INPUT_FAMILY) + "'.");
+      }
+      String val = Bytes.toStringBinary(value.getValue(INPUT_FAMILY, null));
+      LOG.info("map: key -> " + Bytes.toStringBinary(key.get()) +
+        ", value -> " + val);
+      context.write(key, key);
+    }
+
+  }
+
+  /**
+   * Checks the last and first key seen against the scanner boundaries.
+   */
+  public static class ScanReducer
+  extends Reducer<ImmutableBytesWritable, ImmutableBytesWritable,
+                  NullWritable, NullWritable> {
+
+    private String first = null;
+    private String last = null;
+
+    protected void reduce(ImmutableBytesWritable key,
+        Iterable<ImmutableBytesWritable> values, Context context)
+    throws IOException ,InterruptedException {
+      int count = 0;
+      for (ImmutableBytesWritable value : values) {
+        String val = Bytes.toStringBinary(value.get());
+        LOG.info("reduce: key[" + count + "] -> " +
+          Bytes.toStringBinary(key.get()) + ", value -> " + val);
+        if (first == null) first = val;
+        last = val;
+        count++;
+      }
+    }
+
+    protected void cleanup(Context context)
+    throws IOException, InterruptedException {
+      Configuration c = context.getConfiguration();
+      String startRow = c.get(KEY_STARTROW);
+      String lastRow = c.get(KEY_LASTROW);
+      LOG.info("cleanup: first -> \"" + first + "\", start row -> \"" + startRow + "\"");
+      LOG.info("cleanup: last -> \"" + last + "\", last row -> \"" + lastRow + "\"");
+      if (startRow != null && startRow.length() > 0) {
+        assertEquals(startRow, first);
+      }
+      if (lastRow != null && lastRow.length() > 0) {
+        assertEquals(lastRow, last);
+      }
+    }
+
+  }
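+
+  // Note (illustrative): the assertions in ScanReducer.cleanup() only hold
+  // because testScan() below runs the job with a single reduce task, so one
+  // reducer instance sees every key and its "first"/"last" fields end up
+  // being the global scan boundaries.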
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanEmptyToEmpty()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan(null, null, null);
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanEmptyToAPP()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan(null, "app", "apo");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanEmptyToBBA()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan(null, "bba", "baz");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanEmptyToBBB()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan(null, "bbb", "bba");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanEmptyToOPP()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan(null, "opp", "opo");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanOBBToOPP()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan("obb", "opp", "opo");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanOBBToQPP()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan("obb", "qpp", "qpo");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanOPPToEmpty()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan("opp", null, "zzz");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanYYXToEmpty()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan("yyx", null, "zzz");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanYYYToEmpty()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan("yyy", null, "zzz");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  @Test
+  public void testScanYZYToEmpty()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    testScan("yzy", null, "zzz");
+  }
+
+  /**
+   * Tests a MR scan using specific start and stop rows.
+   *
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  private void testScan(String start, String stop, String last)
+  throws IOException, InterruptedException, ClassNotFoundException {
+    String jobName = "Scan" + (start != null ? start.toUpperCase() : "Empty") +
+    "To" + (stop != null ? stop.toUpperCase() : "Empty");
+    LOG.info("Before map/reduce startup - job " + jobName);
+    Configuration c = new Configuration(TEST_UTIL.getConfiguration());
+    Scan scan = new Scan();
+    scan.addFamily(INPUT_FAMILY);
+    if (start != null) {
+      scan.setStartRow(Bytes.toBytes(start));
+    }
+    c.set(KEY_STARTROW, start != null ? start : "");
+    if (stop != null) {
+      scan.setStopRow(Bytes.toBytes(stop));
+    }
+    c.set(KEY_LASTROW, last != null ? last : "");
+    LOG.info("scan before: " + scan);
+    Job job = new Job(c, jobName);
+    TableMapReduceUtil.initTableMapperJob(
+      Bytes.toString(TABLE_NAME), scan, ScanMapper.class,
+      ImmutableBytesWritable.class, ImmutableBytesWritable.class, job);
+    job.setReducerClass(ScanReducer.class);
+    job.setNumReduceTasks(1); // one to get final "first" and "last" key
+    FileOutputFormat.setOutputPath(job, new Path(job.getJobName()));
+    LOG.info("Started " + job.getJobName());
+    job.waitForCompletion(true);
+    assertTrue(job.isComplete());
+    LOG.info("After map/reduce completion - job " + jobName);
+  }
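+
+  // Sketch of what initTableMapperJob is relied on to do here (hedged summary,
+  // not a restatement of its implementation): it records the table name and a
+  // serialized form of the Scan in the job configuration so that
+  // TableInputFormat can rebuild the same Scan, including its start and stop
+  // rows, on the cluster side.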
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
new file mode 100644
index 0000000..624f4a8
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java
@@ -0,0 +1,262 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.Map;
+import java.util.NavigableMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MultiRegionTable;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.mapred.MiniMRCluster;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+
+/**
+ * Test Map/Reduce job over HBase tables. The map/reduce process we're testing
+ * on our tables is simple - take every row in the table, reverse the value of
+ * a particular cell, and write it back to the table.
+ */
+public class TestTableMapReduce extends MultiRegionTable {
+  private static final Log LOG = LogFactory.getLog(TestTableMapReduce.class);
+  static final String MULTI_REGION_TABLE_NAME = "mrtest";
+  static final byte[] INPUT_FAMILY = Bytes.toBytes("contents");
+  static final byte[] OUTPUT_FAMILY = Bytes.toBytes("text");
+
+  /** constructor */
+  public TestTableMapReduce() {
+    super(Bytes.toString(INPUT_FAMILY));
+    desc = new HTableDescriptor(MULTI_REGION_TABLE_NAME);
+    desc.addFamily(new HColumnDescriptor(INPUT_FAMILY));
+    desc.addFamily(new HColumnDescriptor(OUTPUT_FAMILY));
+  }
+
+  /**
+   * Pass the given key and processed record to reduce.
+   */
+  public static class ProcessContentsMapper
+  extends TableMapper<ImmutableBytesWritable, Put> {
+
+    /**
+     * Pass the key and reversed value to reduce.
+     *
+     * @param key
+     * @param value
+     * @param context
+     * @throws IOException
+     */
+    public void map(ImmutableBytesWritable key, Result value,
+      Context context)
+    throws IOException, InterruptedException {
+      if (value.size() != 1) {
+        throw new IOException("There should only be one input column");
+      }
+      Map<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>
+        cf = value.getMap();
+      if(!cf.containsKey(INPUT_FAMILY)) {
+        throw new IOException("Wrong input columns. Missing: '" +
+          Bytes.toString(INPUT_FAMILY) + "'.");
+      }
+
+      // Get the original value and reverse it
+      String originalValue = new String(value.getValue(INPUT_FAMILY, null),
+        HConstants.UTF8_ENCODING);
+      StringBuilder newValue = new StringBuilder(originalValue);
+      newValue.reverse();
+      // Now set the value to be collected
+      Put outval = new Put(key.get());
+      outval.add(OUTPUT_FAMILY, null, Bytes.toBytes(newValue.toString()));
+      context.write(key, outval);
+    }
+  }
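+
+  // For example (illustrative only): a row whose INPUT_FAMILY cell holds
+  // "abc" gets a new OUTPUT_FAMILY cell holding "cba"; verifyAttempt() below
+  // re-reads both families and checks exactly this relationship.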
+
+  /**
+   * Test a map/reduce against a multi-region table
+   * @throws IOException
+   * @throws ClassNotFoundException
+   * @throws InterruptedException
+   */
+  public void testMultiRegionTable()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    runTestOnTable(new HTable(new Configuration(conf), MULTI_REGION_TABLE_NAME));
+  }
+
+  private void runTestOnTable(HTable table)
+  throws IOException, InterruptedException, ClassNotFoundException {
+    MiniMRCluster mrCluster = new MiniMRCluster(2, fs.getUri().toString(), 1);
+    Job job = null;
+    try {
+      LOG.info("Before map/reduce startup");
+      job = new Job(table.getConfiguration(), "process column contents");
+      job.setNumReduceTasks(1);
+      Scan scan = new Scan();
+      scan.addFamily(INPUT_FAMILY);
+      TableMapReduceUtil.initTableMapperJob(
+        Bytes.toString(table.getTableName()), scan,
+        ProcessContentsMapper.class, ImmutableBytesWritable.class,
+        Put.class, job);
+      TableMapReduceUtil.initTableReducerJob(
+        Bytes.toString(table.getTableName()),
+        IdentityTableReducer.class, job);
+      FileOutputFormat.setOutputPath(job, new Path("test"));
+      LOG.info("Started " + Bytes.toString(table.getTableName()));
+      job.waitForCompletion(true);
+      LOG.info("After map/reduce completion");
+
+      // verify map-reduce results
+      verify(Bytes.toString(table.getTableName()));
+    } finally {
+      mrCluster.shutdown();
+      if (job != null) {
+        FileUtil.fullyDelete(
+          new File(job.getConfiguration().get("hadoop.tmp.dir")));
+      }
+    }
+  }
+
+  private void verify(String tableName) throws IOException {
+    HTable table = new HTable(new Configuration(conf), tableName);
+    boolean verified = false;
+    long pause = conf.getLong("hbase.client.pause", 5 * 1000);
+    int numRetries = conf.getInt("hbase.client.retries.number", 5);
+    for (int i = 0; i < numRetries; i++) {
+      try {
+        LOG.info("Verification attempt #" + i);
+        verifyAttempt(table);
+        verified = true;
+        break;
+      } catch (NullPointerException e) {
+        // If here, a cell was empty.  Presume it's because updates came in
+        // after the scanner had been opened.  Wait a while and retry.
+        LOG.debug("Verification attempt failed: " + e.getMessage());
+      }
+      try {
+        Thread.sleep(pause);
+      } catch (InterruptedException e) {
+        // continue
+      }
+    }
+    assertTrue(verified);
+  }
+
+  /**
+   * Looks at every value of the mapreduce output and verifies that indeed
+   * the values have been reversed.
+   *
+   * @param table Table to scan.
+   * @throws IOException
+   * @throws NullPointerException if we failed to find a cell value
+   */
+  private void verifyAttempt(final HTable table) throws IOException, NullPointerException {
+    Scan scan = new Scan();
+    scan.addFamily(INPUT_FAMILY);
+    scan.addFamily(OUTPUT_FAMILY);
+    ResultScanner scanner = table.getScanner(scan);
+    try {
+      for (Result r : scanner) {
+        if (LOG.isDebugEnabled()) {
+          if (r.size() > 2 ) {
+            throw new IOException("Too many results, expected 2 got " +
+              r.size());
+          }
+        }
+        byte[] firstValue = null;
+        byte[] secondValue = null;
+        int count = 0;
+        for(KeyValue kv : r.list()) {
+          if (count == 0) {
+            firstValue = kv.getValue();
+          }
+          if (count == 1) {
+            secondValue = kv.getValue();
+          }
+          count++;
+          if (count == 2) {
+            break;
+          }
+        }
+
+        String first = "";
+        if (firstValue == null) {
+          throw new NullPointerException(Bytes.toString(r.getRow()) +
+            ": first value is null");
+        }
+        first = new String(firstValue, HConstants.UTF8_ENCODING);
+
+        String second = "";
+        if (secondValue == null) {
+          throw new NullPointerException(Bytes.toString(r.getRow()) +
+            ": second value is null");
+        }
+        byte[] secondReversed = new byte[secondValue.length];
+        for (int i = 0, j = secondValue.length - 1; j >= 0; j--, i++) {
+          secondReversed[i] = secondValue[j];
+        }
+        second = new String(secondReversed, HConstants.UTF8_ENCODING);
+
+        if (first.compareTo(second) != 0) {
+          if (LOG.isDebugEnabled()) {
+            LOG.debug("second key is not the reverse of first. row=" +
+                Bytes.toStringBinary(r.getRow()) + ", first value=" + first +
+                ", second value=" + second);
+          }
+          fail();
+        }
+      }
+    } finally {
+      scanner.close();
+    }
+  }
+
+  /**
+   * Test that we add tmpjars correctly including the ZK jar.
+   */
+  public void testAddDependencyJars() throws Exception {
+    Job job = new Job();
+    TableMapReduceUtil.addDependencyJars(job);
+    String tmpjars = job.getConfiguration().get("tmpjars");
+
+    System.err.println("tmpjars: " + tmpjars);
+    assertTrue(tmpjars.contains("zookeeper"));
+    assertFalse(tmpjars.contains("guava"));
+
+    System.err.println("appending guava jar");
+    TableMapReduceUtil.addDependencyJars(job.getConfiguration(), 
+        com.google.common.base.Function.class);
+    tmpjars = job.getConfiguration().get("tmpjars");
+    assertTrue(tmpjars.contains("guava"));
+  }
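+
+  // Hedged note on the assertions above: addDependencyJars is expected to
+  // locate the jar that contains each given class and append it to the job's
+  // "tmpjars" setting, which is why passing com.google.common.base.Function
+  // (a Guava class) makes a guava jar appear in tmpjars.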
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java
new file mode 100644
index 0000000..a772360
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java
@@ -0,0 +1,199 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mapreduce;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.TreeMap;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configurable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.io.MapWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.mapred.MiniMRCluster;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
+
+public class TestTimeRangeMapRed extends HBaseClusterTestCase {
+
+  private final static Log log = LogFactory.getLog(TestTimeRangeMapRed.class);
+
+  private static final byte [] KEY = Bytes.toBytes("row1");
+  private static final NavigableMap<Long, Boolean> TIMESTAMP =
+    new TreeMap<Long, Boolean>();
+  static {
+    TIMESTAMP.put((long)1245620000, false);
+    TIMESTAMP.put((long)1245620005, true); // include
+    TIMESTAMP.put((long)1245620010, true); // include
+    TIMESTAMP.put((long)1245620055, true); // include
+    TIMESTAMP.put((long)1245620100, true); // include
+    TIMESTAMP.put((long)1245620150, false);
+    TIMESTAMP.put((long)1245620250, false);
+  }
+  static final long MINSTAMP = 1245620005;
+  static final long MAXSTAMP = 1245620100 + 1; // maxStamp itself is excluded, so increment it.
+
+  static final byte[] TABLE_NAME = Bytes.toBytes("table123");
+  static final byte[] FAMILY_NAME = Bytes.toBytes("text");
+  static final byte[] COLUMN_NAME = Bytes.toBytes("input");
+
+  protected HTableDescriptor desc;
+  protected HTable table;
+
+  public TestTimeRangeMapRed() {
+    super();
+    System.setProperty("hadoop.log.dir", conf.get("hadoop.log.dir"));
+    conf.set("mapred.output.dir", conf.get("hadoop.tmp.dir"));
+    this.setOpenMetaTable(true);
+  }
+
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+    desc = new HTableDescriptor(TABLE_NAME);
+    HColumnDescriptor col = new HColumnDescriptor(FAMILY_NAME);
+    col.setMaxVersions(Integer.MAX_VALUE);
+    desc.addFamily(col);
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    admin.createTable(desc);
+    table = new HTable(conf, desc.getName());
+  }
+
+  private static class ProcessTimeRangeMapper
+  extends TableMapper<ImmutableBytesWritable, MapWritable>
+  implements Configurable {
+
+    private Configuration conf = null;
+    private HTable table = null;
+
+    @Override
+    public void map(ImmutableBytesWritable key, Result result,
+        Context context)
+    throws IOException {
+      List<Long> tsList = new ArrayList<Long>();
+      for (KeyValue kv : result.sorted()) {
+        tsList.add(kv.getTimestamp());
+      }
+
+      for (Long ts : tsList) {
+        Put put = new Put(key.get());
+        put.add(FAMILY_NAME, COLUMN_NAME, ts, Bytes.toBytes(true));
+        table.put(put);
+      }
+      table.flushCommits();
+    }
+
+    @Override
+    public Configuration getConf() {
+      return conf;
+    }
+
+    @Override
+    public void setConf(Configuration configuration) {
+      this.conf = configuration;
+      try {
+        table = new HTable(HBaseConfiguration.create(conf), TABLE_NAME);
+      } catch (IOException e) {
+        e.printStackTrace();
+      }
+    }
+
+  }
+
+  public void testTimeRangeMapRed()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    prepareTest();
+    runTestOnTable();
+    verify();
+  }
+
+  private void prepareTest() throws IOException {
+    for (Map.Entry<Long, Boolean> entry : TIMESTAMP.entrySet()) {
+      Put put = new Put(KEY);
+      put.add(FAMILY_NAME, COLUMN_NAME, entry.getKey(), Bytes.toBytes(false));
+      table.put(put);
+    }
+    table.flushCommits();
+  }
+
+  private void runTestOnTable()
+  throws IOException, InterruptedException, ClassNotFoundException {
+    MiniMRCluster mrCluster = new MiniMRCluster(2, fs.getUri().toString(), 1);
+    Job job = null;
+    try {
+      job = new Job(conf, "test123");
+      job.setOutputFormatClass(NullOutputFormat.class);
+      job.setNumReduceTasks(0);
+      Scan scan = new Scan();
+      scan.addColumn(FAMILY_NAME, COLUMN_NAME);
+      scan.setTimeRange(MINSTAMP, MAXSTAMP);
+      scan.setMaxVersions();
+      TableMapReduceUtil.initTableMapperJob(Bytes.toString(TABLE_NAME),
+        scan, ProcessTimeRangeMapper.class, Text.class, Text.class, job);
+      job.waitForCompletion(true);
+    } catch (IOException e) {
+      // TODO Auto-generated catch block
+      e.printStackTrace();
+    } finally {
+      mrCluster.shutdown();
+      if (job != null) {
+        FileUtil.fullyDelete(
+          new File(job.getConfiguration().get("hadoop.tmp.dir")));
+      }
+    }
+  }
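+
+  // Note (illustrative): Scan.setTimeRange(min, max) is treated here as a
+  // half-open interval [min, max), which is why MAXSTAMP above is the largest
+  // included timestamp plus one; only the four TIMESTAMP entries marked
+  // "include" fall inside [MINSTAMP, MAXSTAMP).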
+
+  private void verify() throws IOException {
+    Scan scan = new Scan();
+    scan.addColumn(FAMILY_NAME, COLUMN_NAME);
+    scan.setMaxVersions(1);
+    ResultScanner scanner = table.getScanner(scan);
+    for (Result r: scanner) {
+      for (KeyValue kv : r.sorted()) {
+        log.debug(Bytes.toString(r.getRow()) + "\t" + Bytes.toString(kv.getFamily())
+            + "\t" + Bytes.toString(kv.getQualifier())
+            + "\t" + kv.getTimestamp() + "\t" + Bytes.toBoolean(kv.getValue()));
+        assertEquals(TIMESTAMP.get(kv.getTimestamp()), (Boolean)Bytes.toBoolean(kv.getValue()));
+      }
+    }
+    scanner.close();
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/BROKE_FIX_TestKillingServersFromMaster.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/BROKE_FIX_TestKillingServersFromMaster.java
new file mode 100644
index 0000000..21b76fa
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/BROKE_FIX_TestKillingServersFromMaster.java
@@ -0,0 +1,103 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.YouAreDeadException;
+import org.apache.hadoop.hbase.MiniHBaseCluster.MiniHBaseClusterRegionServer;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+public class BROKE_FIX_TestKillingServersFromMaster {
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static MiniHBaseCluster cluster;
+
+  @BeforeClass
+  public static void beforeAllTests() throws Exception {
+    TEST_UTIL.startMiniCluster(2);
+    cluster = TEST_UTIL.getHBaseCluster();
+  }
+
+  @AfterClass
+  public static void afterAllTests() throws IOException {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Before
+  public void setup() throws IOException {
+    TEST_UTIL.ensureSomeRegionServersAvailable(2);
+  }
+
+  /**
+   * Test that a region server that reports with the wrong start code
+   * gets shut down.
+   * See HBASE-2613.
+   * @throws Exception
+   */
+  @Ignore @Test (timeout=180000)
+  public void testRsReportsWrongStartCode() throws Exception {
+    MiniHBaseClusterRegionServer firstServer =
+      (MiniHBaseClusterRegionServer)cluster.getRegionServer(0);
+    HServerInfo hsi = firstServer.getServerInfo();
+    // This constructor creates a new startcode
+    firstServer.setHServerInfo(new HServerInfo(hsi.getServerAddress(),
+      hsi.getInfoPort(), hsi.getHostname()));
+    cluster.waitOnRegionServer(0);
+    assertEquals(1, cluster.getLiveRegionServerThreads().size());
+  }
+
+  /**
+   * Test that a region server that reports with the wrong address
+   * gets shut down.
+   * See HBASE-2613.
+   * @throws Exception
+   */
+  @Ignore @Test (timeout=180000)
+  public void testRsReportsWrongAddress() throws Exception {
+    MiniHBaseClusterRegionServer firstServer =
+      (MiniHBaseClusterRegionServer)cluster.getRegionServer(0);
+    firstServer.getHServerInfo().setServerAddress(
+      new HServerAddress("0.0.0.0", 60010));
+    cluster.waitOnRegionServer(0);
+    assertEquals(1, cluster.getLiveRegionServerThreads().size());
+  }
+
+  /**
+   * Send a YouAreDeadException to the region server and expect it to shut down.
+   * See HBASE-2691.
+   * @throws Exception
+   */
+  @Ignore @Test (timeout=180000)
+  public void testSendYouAreDead() throws Exception {
+    cluster.addExceptionToSendRegionServer(0, new YouAreDeadException("bam!"));
+    cluster.waitOnRegionServer(0);
+    assertEquals(1, cluster.getLiveRegionServerThreads().size());
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/OOMEHMaster.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/OOMEHMaster.java
new file mode 100644
index 0000000..bf5ed03
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/OOMEHMaster.java
@@ -0,0 +1,58 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HMsg;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.zookeeper.KeeperException;
+
+/**
+ * An HMaster that runs out of memory.
+ * Every time a region server reports in, add to the retained heap of memory.
+ * Needs to be started manually, as in
+ * <code>${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.master.OOMEHMaster start</code>.
+ */
+public class OOMEHMaster extends HMaster {
+  private List<byte []> retainer = new ArrayList<byte[]>();
+
+  public OOMEHMaster(HBaseConfiguration conf)
+  throws IOException, KeeperException, InterruptedException {
+    super(conf);
+  }
+
+  @Override
+  public HMsg[] regionServerReport(HServerInfo serverInfo, HMsg[] msgs,
+    HRegionInfo[] mostLoadedRegions)
+  throws IOException {
+    // Retain 1M.
+    this.retainer.add(new byte [1024 * 1024]);
+    return super.regionServerReport(serverInfo, msgs, mostLoadedRegions);
+  }
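+
+  // Back-of-the-envelope sketch (an assumption, not a measurement): at 1 MB
+  // retained per report, a master started with a 1 GB heap would exhaust
+  // memory after on the order of a thousand region server reports.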
+
+  public static void main(String[] args) throws Exception {
+    new HMasterCommandLine(OOMEHMaster.class).doMain(args);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java
new file mode 100644
index 0000000..1a19941
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java
@@ -0,0 +1,270 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.concurrent.Semaphore;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test the {@link ActiveMasterManager}.
+ */
+public class TestActiveMasterManager {
+  private final static Log LOG = LogFactory.getLog(TestActiveMasterManager.class);
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniZKCluster();
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniZKCluster();
+  }
+
+  @Test public void testRestartMaster() throws IOException, KeeperException {
+    ZooKeeperWatcher zk = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+      "testActiveMasterManagerFromZK", null);
+    ZKUtil.createAndFailSilent(zk, zk.baseZNode);
+    try {
+      ZKUtil.deleteNode(zk, zk.masterAddressZNode);
+    } catch(KeeperException.NoNodeException nne) {}
+
+    // Create the master node with a dummy address
+    HServerAddress master = new HServerAddress("localhost", 1);
+    // Should not have a master yet
+    DummyMaster dummyMaster = new DummyMaster();
+    ActiveMasterManager activeMasterManager = new ActiveMasterManager(zk,
+      master, dummyMaster);
+    zk.registerListener(activeMasterManager);
+    assertFalse(activeMasterManager.clusterHasActiveMaster.get());
+
+    // First test becoming the active master uninterrupted
+    activeMasterManager.blockUntilBecomingActiveMaster();
+    assertTrue(activeMasterManager.clusterHasActiveMaster.get());
+    assertMaster(zk, master);
+
+    // Now pretend master restart
+    DummyMaster secondDummyMaster = new DummyMaster();
+    ActiveMasterManager secondActiveMasterManager = new ActiveMasterManager(zk,
+      master, secondDummyMaster);
+    zk.registerListener(secondActiveMasterManager);
+    assertFalse(secondActiveMasterManager.clusterHasActiveMaster.get());
+    activeMasterManager.blockUntilBecomingActiveMaster();
+    assertTrue(activeMasterManager.clusterHasActiveMaster.get());
+    assertMaster(zk, master);
+  }
+
+  /**
+   * Unit test that uses ZooKeeper but does not use the master-side methods;
+   * it acts directly on ZK instead.
+   * @throws Exception
+   */
+  @Test
+  public void testActiveMasterManagerFromZK() throws Exception {
+    ZooKeeperWatcher zk = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+      "testActiveMasterManagerFromZK", null);
+    ZKUtil.createAndFailSilent(zk, zk.baseZNode);
+    try {
+      ZKUtil.deleteNode(zk, zk.masterAddressZNode);
+    } catch(KeeperException.NoNodeException nne) {}
+
+    // Create the master node with a dummy address
+    HServerAddress firstMasterAddress = new HServerAddress("localhost", 1);
+    HServerAddress secondMasterAddress = new HServerAddress("localhost", 2);
+
+    // Should not have a master yet
+    DummyMaster ms1 = new DummyMaster();
+    ActiveMasterManager activeMasterManager = new ActiveMasterManager(zk,
+      firstMasterAddress, ms1);
+    zk.registerListener(activeMasterManager);
+    assertFalse(activeMasterManager.clusterHasActiveMaster.get());
+
+    // First test becoming the active master uninterrupted
+    activeMasterManager.blockUntilBecomingActiveMaster();
+    assertTrue(activeMasterManager.clusterHasActiveMaster.get());
+    assertMaster(zk, firstMasterAddress);
+
+    // New manager will now try to become the active master in another thread
+    WaitToBeMasterThread t = new WaitToBeMasterThread(zk, secondMasterAddress);
+    zk.registerListener(t.manager);
+    t.start();
+    // Wait for the second manager to figure out there is another active master
+    // Wait for 1 second at most
+    int sleeps = 0;
+    while(!t.manager.clusterHasActiveMaster.get() && sleeps < 100) {
+      Thread.sleep(10);
+      sleeps++;
+    }
+
+    // Both should see that there is an active master
+    assertTrue(activeMasterManager.clusterHasActiveMaster.get());
+    assertTrue(t.manager.clusterHasActiveMaster.get());
+    // But secondary one should not be the active master
+    assertFalse(t.isActiveMaster);
+
+    // Close the first server and delete its master node
+    ms1.stop("stopping first server");
+
+    // Use a listener to capture when the node is actually deleted
+    NodeDeletionListener listener = new NodeDeletionListener(zk, zk.masterAddressZNode);
+    zk.registerListener(listener);
+
+    LOG.info("Deleting master node");
+    ZKUtil.deleteNode(zk, zk.masterAddressZNode);
+
+    // Wait for the node to be deleted
+    LOG.info("Waiting for active master manager to be notified");
+    listener.waitForDeletion();
+    LOG.info("Master node deleted");
+
+    // Now we expect the second manager to notice the deletion and become the active master
+    // Wait for 1 second at most
+    sleeps = 0;
+    while(!t.isActiveMaster && sleeps < 100) {
+      Thread.sleep(10);
+      sleeps++;
+    }
+    LOG.debug("Slept " + sleeps + " times");
+
+    assertTrue(t.manager.clusterHasActiveMaster.get());
+    assertTrue(t.isActiveMaster);
+
+    LOG.info("Deleting master node");
+    ZKUtil.deleteNode(zk, zk.masterAddressZNode);
+  }
+
+  /**
+   * Assert there is an active master and that it has the specified address.
+   * @param zk
+   * @param expectedAddress
+   * @throws KeeperException
+   */
+  private void assertMaster(ZooKeeperWatcher zk,
+      HServerAddress expectedAddress) throws KeeperException {
+    HServerAddress readAddress = ZKUtil.getDataAsAddress(zk, zk.masterAddressZNode);
+    assertNotNull(readAddress);
+    assertTrue(expectedAddress.equals(readAddress));
+  }
+
+  public static class WaitToBeMasterThread extends Thread {
+
+    ActiveMasterManager manager;
+    boolean isActiveMaster;
+
+    public WaitToBeMasterThread(ZooKeeperWatcher zk,
+        HServerAddress address) {
+      this.manager = new ActiveMasterManager(zk, address,
+          new DummyMaster());
+      isActiveMaster = false;
+    }
+
+    @Override
+    public void run() {
+      manager.blockUntilBecomingActiveMaster();
+      LOG.info("Second master has become the active master!");
+      isActiveMaster = true;
+    }
+  }
+
+  public static class NodeDeletionListener extends ZooKeeperListener {
+    private static final Log LOG = LogFactory.getLog(NodeDeletionListener.class);
+
+    private Semaphore lock;
+    private String node;
+
+    public NodeDeletionListener(ZooKeeperWatcher watcher, String node) {
+      super(watcher);
+      lock = new Semaphore(0);
+      this.node = node;
+    }
+
+    @Override
+    public void nodeDeleted(String path) {
+      if(path.equals(node)) {
+        LOG.debug("nodeDeleted(" + path + ")");
+        lock.release();
+      }
+    }
+
+    public void waitForDeletion() throws InterruptedException {
+      lock.acquire();
+    }
+  }
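+
+  // Minimal usage sketch for NodeDeletionListener (mirrors the flow in
+  // testActiveMasterManagerFromZK above; illustrative only):
+  //
+  //   NodeDeletionListener l = new NodeDeletionListener(zk, zk.masterAddressZNode);
+  //   zk.registerListener(l);
+  //   ZKUtil.deleteNode(zk, zk.masterAddressZNode);
+  //   l.waitForDeletion();   // blocks until nodeDeleted() fires for that path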
+
+  /**
+   * Dummy Master Implementation.
+   */
+  public static class DummyMaster implements Server {
+    private volatile boolean stopped;
+
+    @Override
+    public void abort(final String msg, final Throwable t) {}
+
+    @Override
+    public Configuration getConfiguration() {
+      return null;
+    }
+
+    @Override
+    public ZooKeeperWatcher getZooKeeper() {
+      return null;
+    }
+
+    @Override
+    public String getServerName() {
+      return null;
+    }
+
+    @Override
+    public boolean isStopped() {
+      return this.stopped;
+    }
+
+    @Override
+    public void stop(String why) {
+      this.stopped = true;
+    }
+
+    @Override
+    public CatalogTracker getCatalogTracker() {
+      return null;
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java
new file mode 100644
index 0000000..5be8daa
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java
@@ -0,0 +1,223 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.executor.ExecutorService;
+import org.apache.hadoop.hbase.io.Reference;
+import org.apache.hadoop.hbase.ipc.HRegionInterface;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+public class TestCatalogJanitor {
+
+  /**
+   * Pseudo server for the tests below.
+   */
+  class MockServer implements Server {
+    private final Configuration c;
+    private final CatalogTracker ct;
+
+    MockServer(final HBaseTestingUtility htu)
+    throws NotAllMetaRegionsOnlineException, IOException {
+      this.c = htu.getConfiguration();
+      // Set hbase.rootdir into test dir.
+      FileSystem fs = FileSystem.get(this.c);
+      Path rootdir =
+        fs.makeQualified(HBaseTestingUtility.getTestDir(HConstants.HBASE_DIR));
+      this.c.set(HConstants.HBASE_DIR, rootdir.toString());
+      this.ct = Mockito.mock(CatalogTracker.class);
+      HRegionInterface hri = Mockito.mock(HRegionInterface.class);
+      Mockito.when(ct.waitForMetaServerConnectionDefault()).thenReturn(hri);
+    }
+
+    @Override
+    public CatalogTracker getCatalogTracker() {
+      return this.ct;
+    }
+
+    @Override
+    public Configuration getConfiguration() {
+      return this.c;
+    }
+
+    @Override
+    public String getServerName() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public ZooKeeperWatcher getZooKeeper() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public void abort(String why, Throwable e) {
+      // TODO Auto-generated method stub
+    }
+
+    @Override
+    public boolean isStopped() {
+      // TODO Auto-generated method stub
+      return false;
+    }
+
+    @Override
+    public void stop(String why) {
+      // TODO Auto-generated method stub
+    }
+
+  }
+
+  /**
+   * Mock MasterServices for tests below.
+   */
+  class MockMasterServices implements MasterServices {
+    private final MasterFileSystem mfs;
+
+    MockMasterServices(final Server server) throws IOException {
+      this.mfs = new MasterFileSystem(server, null);
+    }
+
+    @Override
+    public void checkTableModifiable(byte[] tableName) throws IOException {
+      // TODO Auto-generated method stub
+    }
+
+    @Override
+    public AssignmentManager getAssignmentManager() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public ExecutorService getExecutorService() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public MasterFileSystem getMasterFileSystem() {
+      return this.mfs;
+    }
+
+    @Override
+    public ServerManager getServerManager() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+  }
+
+  @Test
+  public void testGetHRegionInfo() throws IOException {
+    assertNull(CatalogJanitor.getHRegionInfo(new Result()));
+    List<KeyValue> kvs = new ArrayList<KeyValue>();
+    Result r = new Result(kvs);
+    assertNull(CatalogJanitor.getHRegionInfo(r));
+    byte [] f = HConstants.CATALOG_FAMILY;
+    // Make a key value that doesn't have the expected qualifier.
+    kvs.add(new KeyValue(HConstants.EMPTY_BYTE_ARRAY, f,
+      HConstants.SERVER_QUALIFIER, f));
+    r = new Result(kvs);
+    assertNull(CatalogJanitor.getHRegionInfo(r));
+    // Make a key that does not have a regioninfo value.
+    kvs.add(new KeyValue(HConstants.EMPTY_BYTE_ARRAY, f,
+      HConstants.REGIONINFO_QUALIFIER, f));
+    HRegionInfo hri = CatalogJanitor.getHRegionInfo(new Result(kvs));
+    assertNull(hri);
+    // OK, give it what it expects
+    kvs.clear();
+    kvs.add(new KeyValue(HConstants.EMPTY_BYTE_ARRAY, f,
+      HConstants.REGIONINFO_QUALIFIER,
+      Writables.getBytes(HRegionInfo.FIRST_META_REGIONINFO)));
+    hri = CatalogJanitor.getHRegionInfo(new Result(kvs));
+    assertNotNull(hri);
+    assertTrue(hri.equals(HRegionInfo.FIRST_META_REGIONINFO));
+  }
+
+  @Test
+  public void testCleanParent() throws IOException {
+    HBaseTestingUtility htu = new HBaseTestingUtility();
+    Server server = new MockServer(htu);
+    MasterServices services = new MockMasterServices(server);
+    CatalogJanitor janitor = new CatalogJanitor(server, services);
+    // Create regions.
+    HTableDescriptor htd = new HTableDescriptor("table");
+    htd.addFamily(new HColumnDescriptor("family"));
+    HRegionInfo parent =
+      new HRegionInfo(htd, Bytes.toBytes("aaa"), Bytes.toBytes("eee"));
+    HRegionInfo splita =
+      new HRegionInfo(htd, Bytes.toBytes("aaa"), Bytes.toBytes("ccc"));
+    HRegionInfo splitb =
+      new HRegionInfo(htd, Bytes.toBytes("ccc"), Bytes.toBytes("eee"));
+    // Test that when both daughter regions are in place, that we do not
+    // remove the parent.
+    List<KeyValue> kvs = new ArrayList<KeyValue>();
+    kvs.add(new KeyValue(parent.getRegionName(), HConstants.CATALOG_FAMILY,
+      HConstants.SPLITA_QUALIFIER, Writables.getBytes(splita)));
+    kvs.add(new KeyValue(parent.getRegionName(), HConstants.CATALOG_FAMILY,
+      HConstants.SPLITB_QUALIFIER, Writables.getBytes(splitb)));
+    Result r = new Result(kvs);
+    // Add a reference under splitA directory so we don't clear out the parent.
+    Path rootdir = services.getMasterFileSystem().getRootDir();
+    Path tabledir =
+      HTableDescriptor.getTableDir(rootdir, htd.getName());
+    Path storedir = Store.getStoreHomedir(tabledir, splita.getEncodedName(),
+      htd.getColumnFamilies()[0].getName());
+    Reference ref = new Reference(Bytes.toBytes("ccc"), Reference.Range.top);
+    long now = System.currentTimeMillis();
+    // Reference name has this format: StoreFile#REF_NAME_PARSER
+    Path p = new Path(storedir, Long.toString(now) + "." + parent.getEncodedName());
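+    // The suffix after the dot is the encoded name of the referenced (parent)
+    // region; cleanParent should refuse to remove the parent while such a
+    // reference file still sits under one of the daughters' store directories.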
+    FileSystem fs = services.getMasterFileSystem().getFileSystem();
+    ref.write(fs, p);
+    assertFalse(janitor.cleanParent(parent, r));
+    // Remove the reference file and try again.
+    assertTrue(fs.delete(p, true));
+    assertTrue(janitor.cleanParent(parent, r));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
new file mode 100644
index 0000000..915cdf6
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java
@@ -0,0 +1,96 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import junit.framework.Assert;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ClockOutOfSyncException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.Test;
+
+
+public class TestClockSkewDetection {
+  private static final Log LOG =
+    LogFactory.getLog(TestClockSkewDetection.class);
+
+  @Test
+  public void testClockSkewDetection() throws Exception {
+    final Configuration conf = HBaseConfiguration.create();
+    ServerManager sm = new ServerManager(new Server() {
+      @Override
+      public CatalogTracker getCatalogTracker() {
+        return null;
+      }
+
+      @Override
+      public Configuration getConfiguration() {
+        return conf;
+      }
+
+      @Override
+      public String getServerName() {
+        return null;
+      }
+
+      @Override
+      public ZooKeeperWatcher getZooKeeper() {
+        return null;
+      }
+
+      @Override
+      public void abort(String why, Throwable e) {}
+
+      @Override
+      public boolean isStopped() {
+        return false;
+      }
+
+      @Override
+      public void stop(String why) {
+      }}, null, null);
+
+    LOG.debug("regionServerStartup 1");
+    HServerInfo hsi1 = new HServerInfo(new HServerAddress("example.org:1234"),
+        System.currentTimeMillis(), -1, "example.com");
+    sm.regionServerStartup(hsi1, System.currentTimeMillis());
+
+    long maxSkew = 30000;
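+    // The second regionserver below reports a startup time 2 * maxSkew (60 s)
+    // behind the master's clock, which should exceed the allowed skew and make
+    // regionServerStartup throw ClockOutOfSyncException.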
+
+    try {
+      LOG.debug("regionServerStartup 2");
+      HServerInfo hsi2 = new HServerInfo(new HServerAddress("example.org:1235"),
+        System.currentTimeMillis(), -1, "example.com");
+      sm.regionServerStartup(hsi2, System.currentTimeMillis() - maxSkew * 2);
+      Assert.assertTrue("HMaster should have thrown an ClockOutOfSyncException "
+          + "but didn't.", false);
+    } catch(ClockOutOfSyncException e) {
+      //we want an exception
+      LOG.info("Recieved expected exception: "+e);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java
new file mode 100644
index 0000000..252159c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import org.junit.Test;
+
+
+public class TestDeadServer {
+  @Test public void testIsDead() {
+    DeadServer ds = new DeadServer(2);
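+    // DeadServer is bounded at 2 entries here, so adding a third dead server
+    // later in the test should evict the oldest one (hostname123).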
+    final String hostname123 = "127.0.0.1,123,3";
+    assertFalse(ds.isDeadServer(hostname123, false));
+    assertFalse(ds.isDeadServer(hostname123, true));
+    ds.add(hostname123);
+    assertTrue(ds.isDeadServer(hostname123, false));
+    assertFalse(ds.isDeadServer("127.0.0.1:1", true));
+    assertFalse(ds.isDeadServer("127.0.0.1:1234", true));
+    assertTrue(ds.isDeadServer("127.0.0.1:123", true));
+    assertTrue(ds.areDeadServersInProgress());
+    ds.finish(hostname123);
+    assertFalse(ds.areDeadServersInProgress());
+    final String hostname1234 = "127.0.0.2,1234,4";
+    ds.add(hostname1234);
+    assertTrue(ds.isDeadServer(hostname123, false));
+    assertTrue(ds.isDeadServer(hostname1234, false));
+    assertTrue(ds.areDeadServersInProgress());
+    ds.finish(hostname1234);
+    assertFalse(ds.areDeadServersInProgress());
+    final String hostname12345 = "127.0.0.2,12345,4";
+    ds.add(hostname12345);
+    // hostname123 should now be evicted
+    assertFalse(ds.isDeadServer(hostname123, false));
+    // but others should still be dead
+    assertTrue(ds.isDeadServer(hostname1234, false));
+    assertTrue(ds.isDeadServer(hostname12345, false));
+    assertTrue(ds.areDeadServersInProgress());
+    ds.finish(hostname12345);
+    assertFalse(ds.areDeadServersInProgress());
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java
new file mode 100644
index 0000000..388d407
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.ipc.HBaseRPC;
+import org.apache.hadoop.hbase.ipc.HBaseRPCProtocolVersion;
+import org.apache.hadoop.hbase.ipc.HMasterInterface;
+import org.apache.hadoop.ipc.RemoteException;
+import org.junit.Test;
+
+public class TestHMasterRPCException {
+
+  @Test
+  public void testRPCException() throws Exception {
+    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+    TEST_UTIL.startMiniZKCluster();
+    Configuration conf = TEST_UTIL.getConfiguration();
+    conf.set(HConstants.MASTER_PORT, "0");
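+    // Port 0 lets the master bind a free ephemeral port for this test.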
+
+    HMaster hm = new HMaster(conf);
+
+    HServerAddress hma = hm.getMasterAddress();
+    try {
+      HMasterInterface inf =
+          (HMasterInterface) HBaseRPC.getProxy(
+              HMasterInterface.class,  HBaseRPCProtocolVersion.versionID,
+              hma.getInetSocketAddress(), conf, 100);
+      inf.isMasterRunning();
+      fail();
+    } catch (RemoteException ex) {
+      assertTrue(ex.getMessage().startsWith("org.apache.hadoop.hbase.ipc.ServerNotRunningException: Server is not running yet"));
+    } catch (Throwable t) {
+      fail("Unexpected throwable: " + t);
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestLoadBalancer.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestLoadBalancer.java
new file mode 100644
index 0000000..ca2a4bc
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestLoadBalancer.java
@@ -0,0 +1,455 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.Random;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.master.LoadBalancer.RegionPlan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestLoadBalancer {
+  private static final Log LOG = LogFactory.getLog(TestLoadBalancer.class);
+
+  private static LoadBalancer loadBalancer;
+
+  private static Random rand;
+
+  @BeforeClass
+  public static void beforeAllTests() throws Exception {
+    loadBalancer = new LoadBalancer();
+    rand = new Random();
+  }
+
+  // int[testnum][servernumber] -> numregions
+  int [][] clusterStateMocks = new int [][] {
+      // 1 node
+      new int [] { 0 },
+      new int [] { 1 },
+      new int [] { 10 },
+      // 2 node
+      new int [] { 0, 0 },
+      new int [] { 2, 0 },
+      new int [] { 2, 1 },
+      new int [] { 2, 2 },
+      new int [] { 2, 3 },
+      new int [] { 2, 4 },
+      new int [] { 1, 1 },
+      new int [] { 0, 1 },
+      new int [] { 10, 1 },
+      new int [] { 14, 1432 },
+      new int [] { 47, 53 },
+      // 3 node
+      new int [] { 0, 1, 2 },
+      new int [] { 1, 2, 3 },
+      new int [] { 0, 2, 2 },
+      new int [] { 0, 3, 0 },
+      new int [] { 0, 4, 0 },
+      new int [] { 20, 20, 0 },
+      // 4 node
+      new int [] { 0, 1, 2, 3 },
+      new int [] { 4, 0, 0, 0 },
+      new int [] { 5, 0, 0, 0 },
+      new int [] { 6, 6, 0, 0 },
+      new int [] { 6, 2, 0, 0 },
+      new int [] { 6, 1, 0, 0 },
+      new int [] { 6, 0, 0, 0 },
+      new int [] { 4, 4, 4, 7 },
+      new int [] { 4, 4, 4, 8 },
+      new int [] { 0, 0, 0, 7 },
+      // 5 node
+      new int [] { 1, 1, 1, 1, 4 },
+      // more nodes
+      new int [] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 },
+      new int [] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 10 },
+      new int [] { 6, 6, 5, 6, 6, 6, 6, 6, 6, 1 },
+      new int [] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 54 },
+      new int [] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 55 },
+      new int [] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 56 },
+      new int [] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 16 },
+      new int [] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 8 },
+      new int [] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 9 },
+      new int [] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 10 },
+      new int [] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 123 },
+      new int [] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 155 },
+      new int [] { 0, 0, 144, 1, 1, 1, 1, 1123, 133, 138, 12, 1444 },
+      new int [] { 0, 0, 144, 1, 0, 4, 1, 1123, 133, 138, 12, 1444 },
+      new int [] { 1538, 1392, 1561, 1557, 1535, 1553, 1385, 1542, 1619 }
+  };
+
+  int [][] regionsAndServersMocks = new int [][] {
+      // { num regions, num servers }
+      new int [] { 0, 0 },
+      new int [] { 0, 1 },
+      new int [] { 1, 1 },
+      new int [] { 2, 1 },
+      new int [] { 10, 1 },
+      new int [] { 1, 2 },
+      new int [] { 2, 2 },
+      new int [] { 3, 2 },
+      new int [] { 1, 3 },
+      new int [] { 2, 3 },
+      new int [] { 3, 3 },
+      new int [] { 25, 3 },
+      new int [] { 2, 10 },
+      new int [] { 2, 100 },
+      new int [] { 12, 10 },
+      new int [] { 12, 100 },
+  };
+
+  /**
+   * Test the load balancing algorithm.
+   *
+   * Invariant is that all servers should be hosting either
+   * floor(average) or ceiling(average) regions.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testBalanceCluster() throws Exception {
+
+    for(int [] mockCluster : clusterStateMocks) {
+      Map<HServerInfo,List<HRegionInfo>> servers = mockClusterServers(mockCluster);
+      LOG.info("Mock Cluster : " + printMock(servers) + " " + printStats(servers));
+      List<RegionPlan> plans = loadBalancer.balanceCluster(servers);
+      List<HServerInfo> balancedCluster = reconcile(servers, plans);
+      LOG.info("Mock Balance : " + printMock(balancedCluster));
+      assertClusterAsBalanced(balancedCluster);
+      for(Map.Entry<HServerInfo, List<HRegionInfo>> entry : servers.entrySet()) {
+        returnRegions(entry.getValue());
+        returnServer(entry.getKey());
+      }
+    }
+
+  }
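+  // Worked example of the invariant: the { 47, 53 } mock is 100 regions on
+  // 2 servers, avg = 50.0, so both servers must end at exactly 50 regions;
+  // { 2, 3 } gives avg = 2.5, so each server must hold floor(2.5) = 2 or
+  // ceil(2.5) = 3 after balancing.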
+
+  /**
+   * Invariant is that all servers have between floor(avg) and ceiling(avg)
+   * regions.
+   */
+  public void assertClusterAsBalanced(List<HServerInfo> servers) {
+    int numServers = servers.size();
+    int numRegions = 0;
+    int maxRegions = 0;
+    int minRegions = Integer.MAX_VALUE;
+    for(HServerInfo server : servers) {
+      int nr = server.getLoad().getNumberOfRegions();
+      if(nr > maxRegions) {
+        maxRegions = nr;
+      }
+      if(nr < minRegions) {
+        minRegions = nr;
+      }
+      numRegions += nr;
+    }
+    if(maxRegions - minRegions < 2) {
+      // less than 2 between max and min, can't balance
+      return;
+    }
+    int min = numRegions / numServers;
+    int max = numRegions % numServers == 0 ? min : min + 1;
+
+    for(HServerInfo server : servers) {
+      assertTrue(server.getLoad().getNumberOfRegions() <= max);
+      assertTrue(server.getLoad().getNumberOfRegions() >= min);
+    }
+  }
+
+  /**
+   * Tests immediate assignment.
+   *
+   * Invariant is that all regions have an assignment.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testImmediateAssignment() throws Exception {
+    for(int [] mock : regionsAndServersMocks) {
+      LOG.debug("testImmediateAssignment with " + mock[0] + " regions and " + mock[1] + " servers");
+      List<HRegionInfo> regions = randomRegions(mock[0]);
+      List<HServerInfo> servers = randomServers(mock[1], 0);
+      Map<HRegionInfo,HServerInfo> assignments =
+        LoadBalancer.immediateAssignment(regions, servers);
+      assertImmediateAssignment(regions, servers, assignments);
+      returnRegions(regions);
+      returnServers(servers);
+    }
+  }
+
+  /**
+   * All regions have an assignment.
+   * @param regions
+   * @param servers
+   * @param assignments
+   */
+  private void assertImmediateAssignment(List<HRegionInfo> regions,
+      List<HServerInfo> servers, Map<HRegionInfo,HServerInfo> assignments) {
+    for(HRegionInfo region : regions) {
+      assertTrue(assignments.containsKey(region));
+    }
+  }
+
+  /**
+   * Tests the bulk assignment used during cluster startup.
+   *
+   * Round-robin.  Should yield a balanced cluster, so the same invariant as the
+   * load balancer holds: all servers hold either floor(avg) or ceiling(avg) regions.
+   *
+   * @throws Exception
+   */
+  @Test
+  public void testBulkAssignment() throws Exception {
+    for(int [] mock : regionsAndServersMocks) {
+      LOG.debug("testBulkAssignment with " + mock[0] + " regions and " + mock[1] + " servers");
+      List<HRegionInfo> regions = randomRegions(mock[0]);
+      List<HServerInfo> servers = randomServers(mock[1], 0);
+      Map<HServerInfo,List<HRegionInfo>> assignments =
+        LoadBalancer.roundRobinAssignment(regions, servers);
+      float average = (float)regions.size()/servers.size();
+      int min = (int)Math.floor(average);
+      int max = (int)Math.ceil(average);
+      if(assignments != null && !assignments.isEmpty()) {
+        for(List<HRegionInfo> regionList : assignments.values()) {
+          assertTrue(regionList.size() == min || regionList.size() == max);
+        }
+      }
+      returnRegions(regions);
+      returnServers(servers);
+    }
+  }
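+  // Worked example: for the { 25, 3 } entry in regionsAndServersMocks the
+  // average is 25/3 (about 8.33), so every per-server list handed back by
+  // roundRobinAssignment must contain either 8 or 9 regions.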
+
+  /**
+   * Test the cluster startup bulk assignment which attempts to retain
+   * assignment info.
+   * @throws Exception
+   */
+  @Test
+  public void testRetainAssignment() throws Exception {
+    // Test simple case where all same servers are there
+    List<HServerInfo> servers = randomServers(10, 10);
+    List<HRegionInfo> regions = randomRegions(100);
+    Map<HRegionInfo, HServerAddress> existing =
+      new TreeMap<HRegionInfo, HServerAddress>();
+    for (int i=0;i<regions.size();i++) {
+      existing.put(regions.get(i),
+          servers.get(i % servers.size()).getServerAddress());
+    }
+    Map<HServerInfo, List<HRegionInfo>> assignment =
+      LoadBalancer.retainAssignment(existing, servers);
+    assertRetainedAssignment(existing, servers, assignment);
+
+    // Include two new servers that were not there before
+    List<HServerInfo> servers2 = new ArrayList<HServerInfo>(servers);
+    servers2.add(randomServer(10));
+    servers2.add(randomServer(10));
+    assignment = LoadBalancer.retainAssignment(existing, servers2);
+    assertRetainedAssignment(existing, servers2, assignment);
+
+    // Remove two of the servers that were previously there
+    List<HServerInfo> servers3 = new ArrayList<HServerInfo>(servers);
+    servers3.remove(servers3.size()-1);
+    servers3.remove(servers3.size()-2);
+    assignment = LoadBalancer.retainAssignment(existing, servers3);
+    assertRetainedAssignment(existing, servers3, assignment);
+  }
+
+  /**
+   * Asserts a valid retained assignment plan.
+   * <p>
+   * Must meet the following conditions:
+   * <ul>
+   *   <li>Every input region has an assignment, and to an online server
+   *   <li>If a region had an existing assignment to a server with the same
+   *       address as a currently online server, it will be assigned to it
+   * </ul>
+   * @param existing
+   * @param servers
+   * @param assignment
+   */
+  private void assertRetainedAssignment(
+      Map<HRegionInfo, HServerAddress> existing, List<HServerInfo> servers,
+      Map<HServerInfo, List<HRegionInfo>> assignment) {
+    // Verify condition 1, every region assigned, and to online server
+    Set<HServerInfo> onlineServerSet = new TreeSet<HServerInfo>(servers);
+    Set<HRegionInfo> assignedRegions = new TreeSet<HRegionInfo>();
+    for (Map.Entry<HServerInfo, List<HRegionInfo>> a : assignment.entrySet()) {
+      assertTrue("Region assigned to server that was not listed as online",
+          onlineServerSet.contains(a.getKey()));
+      for (HRegionInfo r : a.getValue()) assignedRegions.add(r);
+    }
+    assertEquals(existing.size(), assignedRegions.size());
+
+    // Verify condition 2, if server had existing assignment, must have same
+    Set<HServerAddress> onlineAddresses = new TreeSet<HServerAddress>();
+    for (HServerInfo s : servers) onlineAddresses.add(s.getServerAddress());
+    for (Map.Entry<HServerInfo, List<HRegionInfo>> a : assignment.entrySet()) {
+      for (HRegionInfo r : a.getValue()) {
+        HServerAddress address = existing.get(r);
+        if (address != null && onlineAddresses.contains(address)) {
+          assertTrue(a.getKey().getServerAddress().equals(address));
+        }
+      }
+    }
+  }
+
+  private String printStats(Map<HServerInfo, List<HRegionInfo>> servers) {
+    int numServers = servers.size();
+    int totalRegions = 0;
+    for(HServerInfo server : servers.keySet()) {
+      totalRegions += server.getLoad().getNumberOfRegions();
+    }
+    float average = (float)totalRegions / numServers;
+    int max = (int)Math.ceil(average);
+    int min = (int)Math.floor(average);
+    return "[srvr=" + numServers + " rgns=" + totalRegions + " avg=" + average + " max=" + max + " min=" + min + "]";
+  }
+
+  private String printMock(Map<HServerInfo, List<HRegionInfo>> servers) {
+    return printMock(Arrays.asList(servers.keySet().toArray(new HServerInfo[servers.size()])));
+  }
+
+  private String printMock(List<HServerInfo> balancedCluster) {
+    SortedSet<HServerInfo> sorted = new TreeSet<HServerInfo>(balancedCluster);
+    HServerInfo [] arr = sorted.toArray(new HServerInfo[sorted.size()]);
+    StringBuilder sb = new StringBuilder(sorted.size() * 4 + 4);
+    sb.append("{ ");
+    for(int i=0;i<arr.length;i++) {
+      if(i != 0) {
+        sb.append(" , ");
+      }
+      sb.append(arr[i].getLoad().getNumberOfRegions());
+    }
+    sb.append(" }");
+    return sb.toString();
+  }
+
+  /**
+   * This assumes the RegionPlan HSI instances are the same ones in the map, so
+   * strictly the map would not need to be passed in, but doing so keeps the
+   * call site clearer.
+   * @param servers
+   * @param plans
+   * @return the servers, with their loads adjusted for the given plans
+   */
+  private List<HServerInfo> reconcile(
+      Map<HServerInfo, List<HRegionInfo>> servers, List<RegionPlan> plans) {
+    if(plans != null) {
+      for(RegionPlan plan : plans) {
+        plan.getSource().getLoad().setNumberOfRegions(
+            plan.getSource().getLoad().getNumberOfRegions() - 1);
+        plan.getDestination().getLoad().setNumberOfRegions(
+            plan.getDestination().getLoad().getNumberOfRegions() + 1);
+      }
+    }
+    return Arrays.asList(servers.keySet().toArray(new HServerInfo[servers.size()]));
+  }
+
+  private Map<HServerInfo, List<HRegionInfo>> mockClusterServers(
+      int [] mockCluster) {
+    int numServers = mockCluster.length;
+    Map<HServerInfo,List<HRegionInfo>> servers =
+      new TreeMap<HServerInfo,List<HRegionInfo>>();
+    for(int i=0;i<numServers;i++) {
+      int numRegions = mockCluster[i];
+      HServerInfo server = randomServer(numRegions);
+      List<HRegionInfo> regions = randomRegions(numRegions);
+      servers.put(server, regions);
+    }
+    return servers;
+  }
+
+  private Queue<HRegionInfo> regionQueue = new LinkedList<HRegionInfo>();
+
+  private List<HRegionInfo> randomRegions(int numRegions) {
+    List<HRegionInfo> regions = new ArrayList<HRegionInfo>(numRegions);
+    byte [] start = new byte[16];
+    byte [] end = new byte[16];
+    rand.nextBytes(start);
+    rand.nextBytes(end);
+    for(int i=0;i<numRegions;i++) {
+      if(!regionQueue.isEmpty()) {
+        regions.add(regionQueue.poll());
+        continue;
+      }
+      Bytes.putInt(start, 0, numRegions << 1);
+      Bytes.putInt(end, 0, (numRegions << 1) + 1);
+      HRegionInfo hri = new HRegionInfo(
+          new HTableDescriptor(Bytes.toBytes("table")), start, end);
+      regions.add(hri);
+    }
+    return regions;
+  }
+
+  private void returnRegions(List<HRegionInfo> regions) {
+    regionQueue.addAll(regions);
+  }
+
+  private Queue<HServerInfo> serverQueue = new LinkedList<HServerInfo>();
+
+  private HServerInfo randomServer(int numRegions) {
+    if(!serverQueue.isEmpty()) {
+      HServerInfo server = this.serverQueue.poll();
+      server.getLoad().setNumberOfRegions(numRegions);
+      return server;
+    }
+    String host = "127.0.0.1";
+    int port = rand.nextInt(60000);
+    long startCode = rand.nextLong();
+    HServerInfo hsi =
+      new HServerInfo(new HServerAddress(host, port), startCode, port, host);
+    hsi.getLoad().setNumberOfRegions(numRegions);
+    return hsi;
+  }
+
+  private List<HServerInfo> randomServers(int numServers, int numRegionsPerServer) {
+    List<HServerInfo> servers = new ArrayList<HServerInfo>(numServers);
+    for(int i=0;i<numServers;i++) {
+      servers.add(randomServer(numRegionsPerServer));
+    }
+    return servers;
+  }
+
+  private void returnServer(HServerInfo server) {
+    serverQueue.add(server);
+  }
+
+  private void returnServers(List<HServerInfo> servers) {
+    serverQueue.addAll(servers);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestLogsCleaner.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestLogsCleaner.java
new file mode 100644
index 0000000..7c52b8c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestLogsCleaner.java
@@ -0,0 +1,171 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertEquals;
+
+import java.io.IOException;
+import java.net.URLEncoder;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.replication.ReplicationZookeeper;
+import org.apache.hadoop.hbase.replication.regionserver.Replication;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestLogsCleaner {
+
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniZKCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniZKCluster();
+  }
+
+  @Test
+  public void testLogCleaning() throws Exception{
+    Configuration conf = TEST_UTIL.getConfiguration();
+    conf.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    Replication.decorateMasterConfiguration(conf);
+    Server server = new DummyServer();
+    ReplicationZookeeper zkHelper =
+        new ReplicationZookeeper(server, new AtomicBoolean(true));
+
+    Path oldLogDir = new Path(HBaseTestingUtility.getTestDir(),
+        HConstants.HREGION_OLDLOGDIR_NAME);
+    String fakeMachineName = URLEncoder.encode(server.getServerName(), "UTF8");
+
+    FileSystem fs = FileSystem.get(conf);
+    LogCleaner cleaner  = new LogCleaner(1000, server, conf, fs, oldLogDir);
+
+    // Create 2 invalid files, 1 "recent" file, 1 very new file and 30 old files
+    long now = System.currentTimeMillis();
+    fs.delete(oldLogDir, true);
+    fs.mkdirs(oldLogDir);
+    // Case 1: 2 invalid files, which would be deleted directly
+    fs.createNewFile(new Path(oldLogDir, "a"));
+    fs.createNewFile(new Path(oldLogDir, fakeMachineName + "." + "a"));
+    // Case 2: 1 "recent" file, not even deletable for the first log cleaner
+    // (TimeToLiveLogCleaner), so we are not going down the chain
+    fs.createNewFile(new Path(oldLogDir, fakeMachineName + "." + now));
+    System.out.println("Now is: " + now);
+    for (int i = 0; i < 30; i++) {
+      // Case 3: old files which would be deletable for the first log cleaner
+      // (TimeToLiveLogCleaner), and also for the second (ReplicationLogCleaner)
+      Path fileName = new Path(oldLogDir, fakeMachineName + "." +
+          (now - 6000000 - i) );
+      fs.createNewFile(fileName);
+      // Case 4: put 3 old log files in ZK indicating that they are scheduled
+      // for replication so these files would pass the first log cleaner
+      // (TimeToLiveLogCleaner) but would be rejected by the second
+      // (ReplicationLogCleaner)
+      if (i % (30/3) == 0) {
+        zkHelper.addLogToList(fileName.getName(), fakeMachineName);
+        System.out.println("Replication log file: " + fileName);
+      }
+    }
+    for (FileStatus stat : fs.listStatus(oldLogDir)) {
+      System.out.println(stat.getPath().toString());
+    }
+
+    // Case 2: 1 newer file, not even deletable for the first log cleaner
+    // (TimeToLiveLogCleaner), so we are not going down the chain
+    fs.createNewFile(new Path(oldLogDir, fakeMachineName + "." + (now + 10000) ));
+
+    assertEquals(34, fs.listStatus(oldLogDir).length);
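+    // Bookkeeping: 2 invalid + 1 "recent" + 30 old + 1 newer = 34 files.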
+
+    // This will take care of 20 old log files (default max we can delete)
+    cleaner.chore();
+
+    assertEquals(14, fs.listStatus(oldLogDir).length);
+
+    // We will delete all remaining log files which are not scheduled for
+    // replication and those that are invalid
+    cleaner.chore();
+
+    // We end up with the "recent" file, the newer one, and the 3 old log
+    // files that are scheduled for replication
+    assertEquals(5, fs.listStatus(oldLogDir).length);
+
+    for (FileStatus file : fs.listStatus(oldLogDir)) {
+      System.out.println("Kept log files: " + file.getPath().getName());
+    }
+  }
+
+  static class DummyServer implements Server {
+
+    @Override
+    public Configuration getConfiguration() {
+      return TEST_UTIL.getConfiguration();
+    }
+
+    @Override
+    public ZooKeeperWatcher getZooKeeper() {
+      try {
+        return new ZooKeeperWatcher(getConfiguration(), "dummy server", this);
+      } catch (IOException e) {
+        e.printStackTrace();
+      }
+      return null;
+    }
+
+    @Override
+    public CatalogTracker getCatalogTracker() {
+      return null;
+    }
+
+    @Override
+    public String getServerName() {
+      return "regionserver,60020,000000";
+    }
+
+    @Override
+    public void abort(String why, Throwable e) {}
+
+    @Override
+    public void stop(String why) {}
+
+    @Override
+    public boolean isStopped() {
+      return false;
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java
new file mode 100644
index 0000000..c4ea83f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java
@@ -0,0 +1,148 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.executor.EventHandler.EventHandlerListener;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import com.google.common.base.Joiner;
+
+import static org.junit.Assert.*;
+
+public class TestMaster {
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final Log LOG = LogFactory.getLog(TestMaster.class);
+  private static final byte[] TABLENAME = Bytes.toBytes("TestMaster");
+  private static final byte[] FAMILYNAME = Bytes.toBytes("fam");
+
+  @BeforeClass
+  public static void beforeAllTests() throws Exception {
+    // Start a cluster with a single regionserver.
+    TEST_UTIL.startMiniCluster(1);
+  }
+
+  @AfterClass
+  public static void afterAllTests() throws IOException {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testMasterOpsWhileSplitting() throws Exception {
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    HMaster m = cluster.getMaster();
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+
+    TEST_UTIL.createTable(TABLENAME, FAMILYNAME);
+    TEST_UTIL.loadTable(new HTable(TEST_UTIL.getConfiguration(), TABLENAME),
+      FAMILYNAME);
+
+    List<Pair<HRegionInfo, HServerAddress>> tableRegions =
+      MetaReader.getTableRegionsAndLocations(m.getCatalogTracker(),
+          Bytes.toString(TABLENAME));
+    LOG.info("Regions after load: " + Joiner.on(',').join(tableRegions));
+    assertEquals(1, tableRegions.size());
+    assertArrayEquals(HConstants.EMPTY_START_ROW,
+        tableRegions.get(0).getFirst().getStartKey());
+    assertArrayEquals(HConstants.EMPTY_END_ROW,
+        tableRegions.get(0).getFirst().getEndKey());
+
+    // Now trigger a split and stop when the split is in progress
+
+    CountDownLatch aboutToOpen = new CountDownLatch(1);
+    CountDownLatch proceed = new CountDownLatch(1);
+    RegionOpenListener list = new RegionOpenListener(aboutToOpen, proceed);
+    cluster.getMaster().executorService.
+      registerListener(EventType.RS_ZK_REGION_OPENED, list);
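+    // The listener counts down aboutToOpen once the RS_ZK_REGION_OPENED event
+    // has been processed and then blocks on proceed, keeping the split "in
+    // progress" while the assertions below run against META.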
+
+    LOG.info("Splitting table");
+    admin.split(TABLENAME);
+    LOG.info("Waiting for split result to be about to open");
+    aboutToOpen.await(60, TimeUnit.SECONDS);
+    try {
+      LOG.info("Making sure we can call getTableRegions while opening");
+      tableRegions = MetaReader.getTableRegionsAndLocations(
+          m.getCatalogTracker(), Bytes.toString(TABLENAME));
+
+      LOG.info("Regions: " + Joiner.on(',').join(tableRegions));
+      // We have three regions because one is split-in-progress
+      assertEquals(3, tableRegions.size());
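+      // That is the offlined parent plus its two daughters; the parent stays
+      // listed in META until the split completes and is cleaned up.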
+      LOG.info("Making sure we can call getTableRegionClosest while opening");
+      Pair<HRegionInfo,HServerAddress> pair =
+        m.getTableRegionForRow(TABLENAME, Bytes.toBytes("cde"));
+      LOG.info("Result is: " + pair);
+      Pair<HRegionInfo, HServerAddress> tableRegionFromName =
+        MetaReader.getRegion(m.getCatalogTracker(),
+            pair.getFirst().getRegionName());
+      assertEquals(tableRegionFromName.getFirst(), pair.getFirst());
+    } finally {
+      proceed.countDown();
+    }
+  }
+
+  static class RegionOpenListener implements EventHandlerListener {
+    CountDownLatch aboutToOpen, proceed;
+
+    public RegionOpenListener(CountDownLatch aboutToOpen, CountDownLatch proceed)
+    {
+      this.aboutToOpen = aboutToOpen;
+      this.proceed = proceed;
+    }
+
+    @Override
+    public void afterProcess(EventHandler event) {
+      if (event.getEventType() != EventType.RS_ZK_REGION_OPENED) {
+        return;
+      }
+      try {
+        aboutToOpen.countDown();
+        proceed.await(60, TimeUnit.SECONDS);
+      } catch (InterruptedException ie) {
+        throw new RuntimeException(ie);
+      }
+      return;
+    }
+
+    @Override
+    public void beforeProcess(EventHandler event) {
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
new file mode 100644
index 0000000..838f3bf
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
@@ -0,0 +1,896 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.executor.RegionTransitionData;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.master.AssignmentManager.RegionState;
+import org.apache.hadoop.hbase.master.LoadBalancer.RegionPlan;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.hadoop.hbase.zookeeper.ZKTable;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.Test;
+
+public class TestMasterFailover {
+  private static final Log LOG = LogFactory.getLog(TestMasterFailover.class);
+
+  /**
+   * Simple test of master failover.
+   * <p>
+   * Starts with three masters.  Kills a backup master.  Then kills the active
+   * master.  Ensures the final master becomes active and we can still contact
+   * the cluster.
+   * @throws Exception
+   */
+  @Test (timeout=180000)
+  public void testSimpleMasterFailover() throws Exception {
+
+    final int NUM_MASTERS = 3;
+    final int NUM_RS = 3;
+
+    // Create config to use for this cluster
+    Configuration conf = HBaseConfiguration.create();
+    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 3);
+    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 3);
+
+    // Start the cluster
+    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf);
+    TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS);
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+
+    // get all the master threads
+    List<MasterThread> masterThreads = cluster.getMasterThreads();
+
+    // wait for each to come online
+    for (MasterThread mt : masterThreads) {
+      assertTrue(mt.isAlive());
+    }
+
+    // verify only one is the active master and we have right number
+    int numActive = 0;
+    int activeIndex = -1;
+    String activeName = null;
+    for (int i = 0; i < masterThreads.size(); i++) {
+      if (masterThreads.get(i).getMaster().isActiveMaster()) {
+        numActive++;
+        activeIndex = i;
+        activeName = masterThreads.get(i).getMaster().getServerName();
+      }
+    }
+    assertEquals(1, numActive);
+    assertEquals(NUM_MASTERS, masterThreads.size());
+
+    // attempt to stop one of the inactive masters
+    LOG.debug("\n\nStopping a backup master\n");
+    int backupIndex = (activeIndex == 0 ? 1 : activeIndex - 1);
+    cluster.stopMaster(backupIndex, false);
+    cluster.waitOnMaster(backupIndex);
+
+    // verify still one active master and it's the same
+    for (int i = 0; i < masterThreads.size(); i++) {
+      if (masterThreads.get(i).getMaster().isActiveMaster()) {
+        assertTrue(activeName.equals(
+            masterThreads.get(i).getMaster().getServerName()));
+        activeIndex = i;
+      }
+    }
+    assertEquals(1, numActive);
+    assertEquals(2, masterThreads.size());
+
+    // kill the active master
+    LOG.debug("\n\nStopping the active master\n");
+    cluster.stopMaster(activeIndex, false);
+    cluster.waitOnMaster(activeIndex);
+
+    // wait for an active master to show up and be ready
+    assertTrue(cluster.waitForActiveAndReadyMaster());
+
+    LOG.debug("\n\nVerifying backup master is now active\n");
+    // should only have one master now
+    assertEquals(1, masterThreads.size());
+    // and it should be active
+    assertTrue(masterThreads.get(0).getMaster().isActiveMaster());
+
+    // Stop the cluster
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * Complex test of master failover that tests as many permutations of the
+   * different possible states that regions in transition could be in within ZK.
+   * <p>
+   * This tests the proper handling of these states by the failed-over master
+   * and includes a thorough testing of the timeout code as well.
+   * <p>
+   * Starts with a single master and three regionservers.
+   * <p>
+   * Creates two tables, enabledTable and disabledTable, each with multiple
+   * regions.  The disabledTable is then disabled.
+   * <p>
+   * After reaching steady-state, the master is killed.  We then mock several
+   * states in ZK.
+   * <p>
+   * After mocking them, we will startup a new master which should become the
+   * active master and also detect that it is a failover.  The primary test
+   * passing condition will be that all regions of the enabled table are
+   * assigned and all the regions of the disabled table are not assigned.
+   * <p>
+   * The different scenarios to be tested are below:
+   * <p>
+   * <b>ZK State:  OFFLINE</b>
+   * <p>A node can get into OFFLINE state if</p>
+   * <ul>
+   * <li>An RS fails to open a region, so it reverts the state back to OFFLINE
+   * <li>The Master is assigning the region to a RS before it sends RPC
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Master has assigned an enabled region but RS failed so a region is
+   *     not assigned anywhere and is sitting in ZK as OFFLINE</li>
+   * <li>This seems to cover both cases?</li>
+   * </ul>
+   * <p>
+   * <b>ZK State:  CLOSING</b>
+   * <p>A node can get into CLOSING state if</p>
+   * <ul>
+   * <li>An RS has begun to close a region
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Region of enabled table was being closed but did not complete
+   * <li>Region of disabled table was being closed but did not complete
+   * </ul>
+   * <p>
+   * <b>ZK State:  CLOSED</b>
+   * <p>A node can get into CLOSED state if</p>
+   * <ul>
+   * <li>An RS has completed closing a region but not acknowledged by master yet
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Region of a table that should be enabled was closed on an RS
+   * <li>Region of a table that should be disabled was closed on an RS
+   * </ul>
+   * <p>
+   * <b>ZK State:  OPENING</b>
+   * <p>A node can get into OPENING state if</p>
+   * <ul>
+   * <li>An RS has begun to open a region
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>RS was opening a region of enabled table but never finishes
+   * </ul>
+   * <p>
+   * <b>ZK State:  OPENED</b>
+   * <p>A node can get into OPENED state if</p>
+   * <ul>
+   * <li>An RS has finished opening a region but not acknowledged by master yet
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Region of a table that should be enabled was opened on an RS
+   * <li>Region of a table that should be disabled was opened on an RS
+   * </ul>
+   * @throws Exception
+   */
+  @Test (timeout=180000)
+  public void testMasterFailoverWithMockedRIT() throws Exception {
+
+    final int NUM_MASTERS = 1;
+    final int NUM_RS = 3;
+
+    // Create config to use for this cluster
+    Configuration conf = HBaseConfiguration.create();
+    // Need to drop the timeout much lower
+    conf.setInt("hbase.master.assignment.timeoutmonitor.period", 2000);
+    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 4000);
+    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 3);
+    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 3);
+
+    // Start the cluster
+    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf);
+    TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS);
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    log("Cluster started");
+
+    // Create a ZKW to use in the test
+    ZooKeeperWatcher zkw = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+      "unittest", new Abortable() {
+        @Override
+        public void abort(String why, Throwable e) {
+          throw new RuntimeException("Fatal ZK error, why=" + why, e);
+        }
+    });
+
+    // get all the master threads
+    List<MasterThread> masterThreads = cluster.getMasterThreads();
+    assertEquals(1, masterThreads.size());
+
+    // only one master thread, let's wait for it to be initialized
+    assertTrue(cluster.waitForActiveAndReadyMaster());
+    HMaster master = masterThreads.get(0).getMaster();
+    assertTrue(master.isActiveMaster());
+    assertTrue(master.isInitialized());
+
+    // disable load balancing on this master
+    master.balanceSwitch(false);
+
+    // create two tables in META, each with 10 regions
+    byte [] FAMILY = Bytes.toBytes("family");
+    byte [][] SPLIT_KEYS = new byte [][] {
+        new byte[0], Bytes.toBytes("aaa"), Bytes.toBytes("bbb"),
+        Bytes.toBytes("ccc"), Bytes.toBytes("ddd"), Bytes.toBytes("eee"),
+        Bytes.toBytes("fff"), Bytes.toBytes("ggg"), Bytes.toBytes("hhh"),
+        Bytes.toBytes("iii"), Bytes.toBytes("jjj")
+    };
+
+    byte [] enabledTable = Bytes.toBytes("enabledTable");
+    HTableDescriptor htdEnabled = new HTableDescriptor(enabledTable);
+    htdEnabled.addFamily(new HColumnDescriptor(FAMILY));
+    List<HRegionInfo> enabledRegions = TEST_UTIL.createMultiRegionsInMeta(
+        TEST_UTIL.getConfiguration(), htdEnabled, SPLIT_KEYS);
+
+    byte [] disabledTable = Bytes.toBytes("disabledTable");
+    HTableDescriptor htdDisabled = new HTableDescriptor(disabledTable);
+    htdDisabled.addFamily(new HColumnDescriptor(FAMILY));
+    List<HRegionInfo> disabledRegions = TEST_UTIL.createMultiRegionsInMeta(
+        TEST_UTIL.getConfiguration(), htdDisabled, SPLIT_KEYS);
+
+    log("Regions in META have been created");
+
+    // at this point we only expect 2 regions to be assigned out (catalogs)
+    assertEquals(2, cluster.countServedRegions());
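+    // (The two catalog regions are -ROOT- and .META.)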
+
+    // Let's just assign everything to first RS
+    HRegionServer hrs = cluster.getRegionServer(0);
+    String serverName = hrs.getServerName();
+    HServerInfo hsiAlive = hrs.getServerInfo();
+
+    // we'll need some regions to already be assigned out properly on live RS
+    List<HRegionInfo> enabledAndAssignedRegions = new ArrayList<HRegionInfo>();
+    enabledAndAssignedRegions.add(enabledRegions.remove(0));
+    enabledAndAssignedRegions.add(enabledRegions.remove(0));
+    List<HRegionInfo> disabledAndAssignedRegions = new ArrayList<HRegionInfo>();
+    disabledAndAssignedRegions.add(disabledRegions.remove(0));
+    disabledAndAssignedRegions.add(disabledRegions.remove(0));
+
+    // now actually assign them
+    for (HRegionInfo hri : enabledAndAssignedRegions) {
+      master.assignmentManager.regionPlans.put(hri.getEncodedName(),
+          new RegionPlan(hri, null, hsiAlive));
+      master.assignRegion(hri);
+    }
+    for (HRegionInfo hri : disabledAndAssignedRegions) {
+      master.assignmentManager.regionPlans.put(hri.getEncodedName(),
+          new RegionPlan(hri, null, hsiAlive));
+      master.assignRegion(hri);
+    }
+
+    // wait for no more RIT
+    log("Waiting for assignment to finish");
+    ZKAssign.blockUntilNoRIT(zkw);
+    log("Assignment completed");
+
+    // Stop the master
+    log("Aborting master");
+    cluster.abortMaster(0);
+    cluster.waitOnMaster(0);
+    log("Master has aborted");
+
+    /*
+     * Now, let's start mocking up some weird states as described in the method
+     * javadoc.
+     */
+
+    List<HRegionInfo> regionsThatShouldBeOnline = new ArrayList<HRegionInfo>();
+    List<HRegionInfo> regionsThatShouldBeOffline = new ArrayList<HRegionInfo>();
+
+    log("Beginning to mock scenarios");
+
+    // Disable the disabledTable in ZK
+    ZKTable zktable = new ZKTable(zkw);
+    zktable.setDisabledTable(Bytes.toString(disabledTable));
+
+    /*
+     *  ZK = OFFLINE
+     */
+
+    // Region that should be assigned but is not and is in ZK as OFFLINE
+    HRegionInfo region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, serverName);
+
+    /*
+     * ZK = CLOSING
+     */
+
+//    Disabled test of CLOSING.  This case is invalid after HBASE-3181.
+//    How can an RS stop a CLOSING w/o deleting the node?  If it did ever fail
+//    and left the node in CLOSING, the RS would have aborted and we'd process
+//    these regions in server shutdown
+//
+//    // Region of enabled table being closed but not complete
+//    // Region is already assigned, don't say anything to RS but set ZK closing
+//    region = enabledAndAssignedRegions.remove(0);
+//    regionsThatShouldBeOnline.add(region);
+//    ZKAssign.createNodeClosing(zkw, region, serverName);
+//
+//    // Region of disabled table being closed but not complete
+//    // Region is already assigned, don't say anything to RS but set ZK closing
+//    region = disabledAndAssignedRegions.remove(0);
+//    regionsThatShouldBeOffline.add(region);
+//    ZKAssign.createNodeClosing(zkw, region, serverName);
+
+    /*
+     * ZK = CLOSED
+     */
+
+    // Region of enabled table closed but not ack
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    int version = ZKAssign.createNodeClosing(zkw, region, serverName);
+    ZKAssign.transitionNodeClosed(zkw, region, serverName, version);
+
+    // Region of disabled table closed but not ack
+    region = disabledRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    version = ZKAssign.createNodeClosing(zkw, region, serverName);
+    ZKAssign.transitionNodeClosed(zkw, region, serverName, version);
+
+    /*
+     * ZK = OPENING
+     */
+
+    // RS was opening a region of enabled table but never finishes
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, serverName);
+    ZKAssign.transitionNodeOpening(zkw, region, serverName);
+
+    /*
+     * ZK = OPENED
+     */
+
+    // Region of enabled table was opened on RS
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, serverName);
+    hrs.openRegion(region);
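+    // Poll the region's unassigned znode until the RS has transitioned it to
+    // OPENED; with no master running, nothing else consumes the event.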
+    while (true) {
+      RegionTransitionData rtd = ZKAssign.getData(zkw, region.getEncodedName());
+      if (rtd != null && rtd.getEventType() == EventType.RS_ZK_REGION_OPENED) {
+        break;
+      }
+      Thread.sleep(100);
+    }
+
+    // Region of disabled table was opened on RS
+    region = disabledRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, serverName);
+    hrs.openRegion(region);
+    while (true) {
+      RegionTransitionData rtd = ZKAssign.getData(zkw, region.getEncodedName());
+      if (rtd != null && rtd.getEventType() == EventType.RS_ZK_REGION_OPENED) {
+        break;
+      }
+      Thread.sleep(100);
+    }
+
+    /*
+     * ZK = NONE
+     */
+
+    /*
+     * DONE MOCKING
+     */
+
+    log("Done mocking data up in ZK");
+
+    // Start up a new master
+    log("Starting up a new master");
+    master = cluster.startMaster().getMaster();
+    log("Waiting for master to be ready");
+    cluster.waitForActiveAndReadyMaster();
+    log("Master is ready");
+
+    // Failover should be completed, now wait for no RIT
+    log("Waiting for no more RIT");
+    ZKAssign.blockUntilNoRIT(zkw);
+    log("No more RIT in ZK, now doing final test verification");
+
+    // Grab all the regions that are online across RSs
+    Set<HRegionInfo> onlineRegions = new TreeSet<HRegionInfo>();
+    for (JVMClusterUtil.RegionServerThread rst :
+      cluster.getRegionServerThreads()) {
+      onlineRegions.addAll(rst.getRegionServer().getOnlineRegions());
+    }
+
+    // Now, everything that should be online should be online
+    for (HRegionInfo hri : regionsThatShouldBeOnline) {
+      assertTrue(onlineRegions.contains(hri));
+    }
+
+    // Everything that should be offline should not be online
+    for (HRegionInfo hri : regionsThatShouldBeOffline) {
+      assertFalse(onlineRegions.contains(hri));
+    }
+
+    log("Done with verification, all passed, shutting down cluster");
+
+    // Done, shutdown the cluster
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+
+  /**
+   * Complex test of master failover that tests as many permutations of the
+   * different possible states that regions in transition could be in within ZK
+   * pointing to an RS that has died while no master is around to process it.
+   * <p>
+   * This tests the proper handling of these states by the failed-over master
+   * and includes a thorough testing of the timeout code as well.
+   * <p>
+   * Starts with a single master and two regionservers.
+   * <p>
+   * Creates two tables, enabledTable and disabledTable, each split into
+   * multiple regions.  The disabledTable is then disabled.
+   * <p>
+   * After reaching steady-state, the master is killed.  We then mock several
+   * states in ZK, and one of the RSs is killed.
+   * <p>
+   * After mocking them and killing an RS, we will start up a new master, which
+   * should become the active master and also detect that it is a failover.  The
+   * primary test passing condition will be that all regions of the enabled
+   * table are assigned and all the regions of the disabled table are not
+   * assigned.
+   * <p>
+   * The different scenarios to be tested are below:
+   * <p>
+   * <b>ZK State:  CLOSING</b>
+   * <p>A node can get into CLOSING state if</p>
+   * <ul>
+   * <li>An RS has begun to close a region
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Region was being closed but the RS died before finishing the close
+   * </ul>
+   * <b>ZK State:  OPENED</b>
+   * <p>A node can get into OPENED state if</p>
+   * <ul>
+   * <li>An RS has finished opening a region but not acknowledged by master yet
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Region of a table that should be enabled was opened by a now-dead RS
+   * <li>Region of a table that should be disabled was opened by a now-dead RS
+   * </ul>
+   * <p>
+   * <b>ZK State:  NONE</b>
+   * <p>A region could not have a transition node if</p>
+   * <ul>
+   * <li>The server hosting the region died and no master processed it
+   * </ul>
+   * <p>We will mock the scenarios</p>
+   * <ul>
+   * <li>Region of enabled table was on a dead RS that was not yet processed
+   * <li>Region of disabled table was on a dead RS that was not yet processed
+   * </ul>
+   * @throws Exception
+   */
+  @Test (timeout=180000)
+  public void testMasterFailoverWithMockedRITOnDeadRS() throws Exception {
+
+    final int NUM_MASTERS = 1;
+    final int NUM_RS = 2;
+
+    // Create config to use for this cluster
+    Configuration conf = HBaseConfiguration.create();
+    // Need to drop the timeout much lower
+    conf.setInt("hbase.master.assignment.timeoutmonitor.period", 2000);
+    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 4000);
+    conf.setInt("hbase.master.wait.on.regionservers.mintostart", 1);
+    conf.setInt("hbase.master.wait.on.regionservers.maxtostart", 2);
+
+    // Create and start the cluster
+    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf);
+    TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS);
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    log("Cluster started");
+
+    // Create a ZKW to use in the test
+    ZooKeeperWatcher zkw = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+        "unittest", new Abortable() {
+          @Override
+          public void abort(String why, Throwable e) {
+            LOG.error("Fatal ZK Error: " + why, e);
+            org.junit.Assert.fail("Fatal ZK error");
+          }
+    });
+
+    // get all the master threads
+    List<MasterThread> masterThreads = cluster.getMasterThreads();
+    assertEquals(1, masterThreads.size());
+
+    // only one master thread, let's wait for it to be initialized
+    assertTrue(cluster.waitForActiveAndReadyMaster());
+    HMaster master = masterThreads.get(0).getMaster();
+    assertTrue(master.isActiveMaster());
+    assertTrue(master.isInitialized());
+
+    // disable load balancing on this master
+    master.balanceSwitch(false);
+
+    // create two tables in META, each with 10 regions
+    byte [] FAMILY = Bytes.toBytes("family");
+    byte [][] SPLIT_KEYS = new byte [][] {
+        new byte[0], Bytes.toBytes("aaa"), Bytes.toBytes("bbb"),
+        Bytes.toBytes("ccc"), Bytes.toBytes("ddd"), Bytes.toBytes("eee"),
+        Bytes.toBytes("fff"), Bytes.toBytes("ggg"), Bytes.toBytes("hhh"),
+        Bytes.toBytes("iii"), Bytes.toBytes("jjj")
+    };
+
+    byte [] enabledTable = Bytes.toBytes("enabledTable");
+    HTableDescriptor htdEnabled = new HTableDescriptor(enabledTable);
+    htdEnabled.addFamily(new HColumnDescriptor(FAMILY));
+    List<HRegionInfo> enabledRegions = TEST_UTIL.createMultiRegionsInMeta(
+        TEST_UTIL.getConfiguration(), htdEnabled, SPLIT_KEYS);
+
+    byte [] disabledTable = Bytes.toBytes("disabledTable");
+    HTableDescriptor htdDisabled = new HTableDescriptor(disabledTable);
+    htdDisabled.addFamily(new HColumnDescriptor(FAMILY));
+    List<HRegionInfo> disabledRegions = TEST_UTIL.createMultiRegionsInMeta(
+        TEST_UTIL.getConfiguration(), htdDisabled, SPLIT_KEYS);
+
+    log("Regions in META have been created");
+
+    // at this point we only expect 2 regions to be assigned out (catalogs)
+    assertEquals(2, cluster.countServedRegions());
+
+    // The first RS will stay online
+    HRegionServer hrs = cluster.getRegionServer(0);
+    HServerInfo hsiAlive = hrs.getServerInfo();
+
+    // The second RS is going to be hard-killed
+    HRegionServer hrsDead = cluster.getRegionServer(1);
+    String deadServerName = hrsDead.getServerName();
+    HServerInfo hsiDead = hrsDead.getServerInfo();
+
+    // we'll need some regions to already be assigned out properly on live RS
+    List<HRegionInfo> enabledAndAssignedRegions = new ArrayList<HRegionInfo>();
+    enabledAndAssignedRegions.add(enabledRegions.remove(0));
+    enabledAndAssignedRegions.add(enabledRegions.remove(0));
+    List<HRegionInfo> disabledAndAssignedRegions = new ArrayList<HRegionInfo>();
+    disabledAndAssignedRegions.add(disabledRegions.remove(0));
+    disabledAndAssignedRegions.add(disabledRegions.remove(0));
+
+    // now actually assign them
+    for (HRegionInfo hri : enabledAndAssignedRegions) {
+      master.assignmentManager.regionPlans.put(hri.getEncodedName(),
+          new RegionPlan(hri, null, hsiAlive));
+      master.assignRegion(hri);
+    }
+    for (HRegionInfo hri : disabledAndAssignedRegions) {
+      master.assignmentManager.regionPlans.put(hri.getEncodedName(),
+          new RegionPlan(hri, null, hsiAlive));
+      master.assignRegion(hri);
+    }
+
+    // we also need regions assigned out on the dead server
+    List<HRegionInfo> enabledAndOnDeadRegions = new ArrayList<HRegionInfo>();
+    enabledAndOnDeadRegions.add(enabledRegions.remove(0));
+    enabledAndOnDeadRegions.add(enabledRegions.remove(0));
+    List<HRegionInfo> disabledAndOnDeadRegions = new ArrayList<HRegionInfo>();
+    disabledAndOnDeadRegions.add(disabledRegions.remove(0));
+    disabledAndOnDeadRegions.add(disabledRegions.remove(0));
+
+    // set region plan to server to be killed and trigger assign
+    for (HRegionInfo hri : enabledAndOnDeadRegions) {
+      master.assignmentManager.regionPlans.put(hri.getEncodedName(),
+          new RegionPlan(hri, null, hsiDead));
+      master.assignRegion(hri);
+    }
+    for (HRegionInfo hri : disabledAndOnDeadRegions) {
+      master.assignmentManager.regionPlans.put(hri.getEncodedName(),
+          new RegionPlan(hri, null, hsiDead));
+      master.assignRegion(hri);
+    }
+
+    // wait for no more RIT
+    log("Waiting for assignment to finish");
+    ZKAssign.blockUntilNoRIT(zkw);
+    log("Assignment completed");
+
+    // Stop the master
+    log("Aborting master");
+    cluster.abortMaster(0);
+    cluster.waitOnMaster(0);
+    log("Master has aborted");
+
+    /*
+     * Now, let's start mocking up some weird states as described in the method
+     * javadoc.
+     */
+
+    List<HRegionInfo> regionsThatShouldBeOnline = new ArrayList<HRegionInfo>();
+    List<HRegionInfo> regionsThatShouldBeOffline = new ArrayList<HRegionInfo>();
+
+    log("Beginning to mock scenarios");
+
+    // Disable the disabledTable in ZK
+    ZKTable zktable = new ZKTable(zkw);
+    zktable.setDisabledTable(Bytes.toString(disabledTable));
+
+    /*
+     * ZK = CLOSING
+     */
+
+    // Region of enabled table being closed on dead RS but not finished
+    HRegionInfo region = enabledAndOnDeadRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeClosing(zkw, region, deadServerName);
+    LOG.debug("\n\nRegion of enabled table was CLOSING on dead RS\n" +
+        region + "\n\n");
+
+    // Region of disabled table being closed on dead RS but not finished
+    region = disabledAndOnDeadRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    ZKAssign.createNodeClosing(zkw, region, deadServerName);
+    LOG.debug("\n\nRegion of disabled table was CLOSING on dead RS\n" +
+        region + "\n\n");
+
+    /*
+     * ZK = CLOSED
+     */
+
+    // Region of enabled on dead server gets closed but not ack'd by master
+    region = enabledAndOnDeadRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    int version = ZKAssign.createNodeClosing(zkw, region, deadServerName);
+    ZKAssign.transitionNodeClosed(zkw, region, deadServerName, version);
+    LOG.debug("\n\nRegion of enabled table was CLOSED on dead RS\n" +
+        region + "\n\n");
+
+    // Region of disabled on dead server gets closed but not ack'd by master
+    region = disabledAndOnDeadRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    version = ZKAssign.createNodeClosing(zkw, region, deadServerName);
+    ZKAssign.transitionNodeClosed(zkw, region, deadServerName, version);
+    LOG.debug("\n\nRegion of disabled table was CLOSED on dead RS\n" +
+        region + "\n\n");
+
+    /*
+     * ZK = OPENING
+     */
+
+    // RS was opening a region of enabled table then died
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, deadServerName);
+    ZKAssign.transitionNodeOpening(zkw, region, deadServerName);
+    LOG.debug("\n\nRegion of enabled table was OPENING on dead RS\n" +
+        region + "\n\n");
+
+    // RS was opening a region of disabled table then died
+    region = disabledRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, deadServerName);
+    ZKAssign.transitionNodeOpening(zkw, region, deadServerName);
+    LOG.debug("\n\nRegion of disabled table was OPENING on dead RS\n" +
+        region + "\n\n");
+
+    /*
+     * ZK = OPENED
+     */
+
+    // Region of enabled table was opened on dead RS
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, deadServerName);
+    hrsDead.openRegion(region);
+    while (true) {
+      RegionTransitionData rtd = ZKAssign.getData(zkw, region.getEncodedName());
+      if (rtd != null && rtd.getEventType() == EventType.RS_ZK_REGION_OPENED) {
+        break;
+      }
+      Thread.sleep(100);
+    }
+    LOG.debug("\n\nRegion of enabled table was OPENED on dead RS\n" +
+        region + "\n\n");
+
+    // Region of disabled table was opened on dead RS
+    region = disabledRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, deadServerName);
+    hrsDead.openRegion(region);
+    while (true) {
+      RegionTransitionData rtd = ZKAssign.getData(zkw, region.getEncodedName());
+      if (rtd != null && rtd.getEventType() == EventType.RS_ZK_REGION_OPENED) {
+        break;
+      }
+      Thread.sleep(100);
+    }
+    LOG.debug("\n\nRegion of disabled table was OPENED on dead RS\n" +
+        region + "\n\n");
+
+    /*
+     * ZK = NONE
+     */
+
+    // Region of enabled table was open at steady-state on dead RS
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, deadServerName);
+    hrsDead.openRegion(region);
+    while (true) {
+      RegionTransitionData rtd = ZKAssign.getData(zkw, region.getEncodedName());
+      if (rtd != null && rtd.getEventType() == EventType.RS_ZK_REGION_OPENED) {
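+        // Once the RS reports OPENED, delete the transition node so the region
+        // looks like a steady-state open region on the dead RS (no znode at all).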
+        ZKAssign.deleteOpenedNode(zkw, region.getEncodedName());
+        break;
+      }
+      Thread.sleep(100);
+    }
+    LOG.debug("\n\nRegion of enabled table was open at steady-state on dead RS"
+        + "\n" + region + "\n\n");
+
+    // Region of disabled table was open at steady-state on dead RS
+    region = disabledRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    ZKAssign.createNodeOffline(zkw, region, deadServerName);
+    hrsDead.openRegion(region);
+    while (true) {
+      RegionTransitionData rtd = ZKAssign.getData(zkw, region.getEncodedName());
+      if (rtd != null && rtd.getEventType() == EventType.RS_ZK_REGION_OPENED) {
+        ZKAssign.deleteOpenedNode(zkw, region.getEncodedName());
+        break;
+      }
+      Thread.sleep(100);
+    }
+    LOG.debug("\n\nRegion of disabled table was open at steady-state on dead RS"
+        + "\n" + region + "\n\n");
+
+    /*
+     * DONE MOCKING
+     */
+
+    log("Done mocking data up in ZK");
+
+    // Kill the RS that had a hard death
+    log("Killing RS " + deadServerName);
+    hrsDead.abort("Killing for unit test");
+    log("RS " + deadServerName + " killed");
+
+    // Start up a new master
+    log("Starting up a new master");
+    master = cluster.startMaster().getMaster();
+    log("Waiting for master to be ready");
+    cluster.waitForActiveAndReadyMaster();
+    log("Master is ready");
+
+    // Let's add some weird states to master in-memory state
+
+    // After HBASE-3181, we need to have some ZK state if we're PENDING_OPEN
+    // b/c it is impossible for us to get into this state w/o a zk node
+    // this is not true of PENDING_CLOSE
+
+    // PENDING_OPEN and enabled
+    region = enabledRegions.remove(0);
+    regionsThatShouldBeOnline.add(region);
+    master.assignmentManager.regionsInTransition.put(region.getEncodedName(),
+        new RegionState(region, RegionState.State.PENDING_OPEN, 0));
+    ZKAssign.createNodeOffline(zkw, region, master.getServerName());
+    // PENDING_OPEN and disabled
+    region = disabledRegions.remove(0);
+    regionsThatShouldBeOffline.add(region);
+    master.assignmentManager.regionsInTransition.put(region.getEncodedName(),
+        new RegionState(region, RegionState.State.PENDING_OPEN, 0));
+    ZKAssign.createNodeOffline(zkw, region, master.getServerName());
+    // This test is bad.  It puts up a PENDING_CLOSE but doesn't say what
+    // server we were PENDING_CLOSE against -- i.e. an entry in
+    // AssignmentManager#regions.  W/o a server, we NPE trying to resend close.
+    // In the past, there was wonky logic that had us reassign the region if no server
+    // at tail of the unassign.  This was removed.  Commenting out for now.
+    // TODO: Remove completely.
+    /*
+    // PENDING_CLOSE and enabled
+    region = enabledRegions.remove(0);
+    LOG.info("Setting PENDING_CLOSE enabled " + region.getEncodedName());
+    regionsThatShouldBeOnline.add(region);
+    master.assignmentManager.regionsInTransition.put(region.getEncodedName(),
+      new RegionState(region, RegionState.State.PENDING_CLOSE, 0));
+    // PENDING_CLOSE and disabled
+    region = disabledRegions.remove(0);
+    LOG.info("Setting PENDING_CLOSE disabled " + region.getEncodedName());
+    regionsThatShouldBeOffline.add(region);
+    master.assignmentManager.regionsInTransition.put(region.getEncodedName(),
+      new RegionState(region, RegionState.State.PENDING_CLOSE, 0));
+      */
+
+    // Failover should be completed, now wait for no RIT
+    log("Waiting for no more RIT");
+    ZKAssign.blockUntilNoRIT(zkw);
+    log("No more RIT in ZK");
+    long now = System.currentTimeMillis();
+    final long maxTime = 120000;
+    boolean done = master.assignmentManager.waitUntilNoRegionsInTransition(maxTime);
+    if (!done) {
+      LOG.info("rit=" + master.assignmentManager.getRegionsInTransition());
+    }
+    long elapsed = System.currentTimeMillis() - now;
+    assertTrue("Elapsed=" + elapsed + ", maxTime=" + maxTime + ", done=" + done,
+      elapsed < maxTime);
+    log("No more RIT in RIT map, doing final test verification");
+
+    // Grab all the regions that are online across RSs
+    Set<HRegionInfo> onlineRegions = new TreeSet<HRegionInfo>();
+    for (JVMClusterUtil.RegionServerThread rst :
+      cluster.getRegionServerThreads()) {
+      onlineRegions.addAll(rst.getRegionServer().getOnlineRegions());
+    }
+
+    // Now, everything that should be online should be online
+    for (HRegionInfo hri : regionsThatShouldBeOnline) {
+      assertTrue("region=" + hri.getRegionNameAsString(), onlineRegions.contains(hri));
+    }
+
+    // Everything that should be offline should not be online
+    for (HRegionInfo hri : regionsThatShouldBeOffline) {
+      assertFalse(onlineRegions.contains(hri));
+    }
+
+    log("Done with verification, all passed, shutting down cluster");
+
+    // Done, shutdown the cluster
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  // TODO: Next test to add is with testing permutations of the RIT or the RS
+  //       killed are hosting ROOT and META regions.
+
+  private void log(String string) {
+    LOG.info("\n\n" + string + " \n\n");
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java
new file mode 100644
index 0000000..172e380
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java
@@ -0,0 +1,534 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+/**
+ * Test transitions of state across the master.  Sets up the cluster once and
+ * then runs a couple of tests.
+ */
+public class TestMasterTransitions {
+  private static final Log LOG = LogFactory.getLog(TestMasterTransitions.class);
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final String TABLENAME = "master_transitions";
+  private static final byte [][] FAMILIES = new byte [][] {Bytes.toBytes("a"),
+    Bytes.toBytes("b"), Bytes.toBytes("c")};
+
+  /**
+   * Start up a mini cluster and put a small table of many empty regions into it.
+   * @throws Exception
+   */
+  @BeforeClass public static void beforeAllTests() throws Exception {
+    TEST_UTIL.getConfiguration().setBoolean("dfs.support.append", true);
+    TEST_UTIL.startMiniCluster(2);
+    // Create a table of three families.  This will assign a region.
+    TEST_UTIL.createTable(Bytes.toBytes(TABLENAME), FAMILIES);
+    HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME);
+    int countOfRegions = TEST_UTIL.createMultiRegions(t, getTestFamily());
+    TEST_UTIL.waitUntilAllRegionsAssigned(countOfRegions);
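+    // addToEachStartKey writes a row at each region's start key (and asserts the
+    // expected region count), giving later checks a known row per region.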
+    addToEachStartKey(countOfRegions);
+  }
+
+  @AfterClass public static void afterAllTests() throws IOException {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Before public void setup() throws IOException {
+    TEST_UTIL.ensureSomeRegionServersAvailable(2);
+  }
+
+  /**
+   * Listener for regionserver events testing hbase-2428 (Infinite loop of
+   * region closes if META region is offline).  In particular, listen
+   * for the close of the 'metaServer' and when it comes in, requeue it with a
+   * delay as though there were an issue processing the shutdown.  As part of
+   * the requeuing, send over a close of a region on 'otherServer' so it comes
+   * into a master that has its meta region marked as offline.
+   */
+  /*
+  static class HBase2428Listener implements RegionServerOperationListener {
+    // Map of what we've delayed so we don't do repeated delays.
+    private final Set<RegionServerOperation> postponed =
+      new CopyOnWriteArraySet<RegionServerOperation>();
+    private boolean done = false;
+    private boolean metaShutdownReceived = false;
+    private final HServerAddress metaAddress;
+    private final MiniHBaseCluster cluster;
+    private final int otherServerIndex;
+    private final HRegionInfo hri;
+    private int closeCount = 0;
+    static final int SERVER_DURATION = 3 * 1000;
+    static final int CLOSE_DURATION = 1 * 1000;
+ 
+    HBase2428Listener(final MiniHBaseCluster c, final HServerAddress metaAddress,
+        final HRegionInfo closingHRI, final int otherServerIndex) {
+      this.cluster = c;
+      this.metaAddress = metaAddress;
+      this.hri = closingHRI;
+      this.otherServerIndex = otherServerIndex;
+    }
+
+    @Override
+    public boolean process(final RegionServerOperation op) throws IOException {
+      // If a regionserver shuts down and it's the meta server, then we want to
+      // delay the processing of the shutdown and send off a close of a region on
+      // the 'otherServer'.
+      boolean result = true;
+      if (op instanceof ProcessServerShutdown) {
+        ProcessServerShutdown pss = (ProcessServerShutdown)op;
+        if (pss.getDeadServerAddress().equals(this.metaAddress)) {
+          // Don't postpone more than once.
+          if (!this.postponed.contains(pss)) {
+            // Close some region.
+            this.cluster.addMessageToSendRegionServer(this.otherServerIndex,
+              new HMsg(HMsg.Type.MSG_REGION_CLOSE, hri,
+              Bytes.toBytes("Forcing close in test")));
+            this.postponed.add(pss);
+            // Put off the processing of the regionserver shutdown processing.
+            pss.setDelay(SERVER_DURATION);
+            this.metaShutdownReceived = true;
+            // Return false.  This will add this op to the delayed queue.
+            result = false;
+          }
+        }
+      } else {
+        // Have the close run frequently.
+        if (isWantedCloseOperation(op) != null) {
+          op.setDelay(CLOSE_DURATION);
+          // Count how many times it comes through here.
+          this.closeCount++;
+        }
+      }
+      return result;
+    }
+
+    public void processed(final RegionServerOperation op) {
+      if (isWantedCloseOperation(op) != null) return;
+      this.done = true;
+    }
+*/
+    /*
+     * @param op
+     * @return Null if not the wanted ProcessRegionClose, else <code>op</code>
+     * cast as a ProcessRegionClose.
+     */
+  /*
+    private ProcessRegionClose isWantedCloseOperation(final RegionServerOperation op) {
+      // Count every time we get a close operation.
+      if (op instanceof ProcessRegionClose) {
+        ProcessRegionClose c = (ProcessRegionClose)op;
+        if (c.regionInfo.equals(hri)) {
+          return c;
+        }
+      }
+      return null;
+    }
+
+    boolean isDone() {
+      return this.done;
+    }
+
+    boolean isMetaShutdownReceived() {
+      return metaShutdownReceived;
+    }
+
+    int getCloseCount() {
+      return this.closeCount;
+    }
+
+    @Override
+    public boolean process(HServerInfo serverInfo, HMsg incomingMsg) {
+      return true;
+    }
+  }
+*/
+  /**
+   * In 2428, the meta region has just been set offline and then a close comes
+   * in.
+   * @see <a href="https://issues.apache.org/jira/browse/HBASE-2428">HBASE-2428</a> 
+   */
+  @Ignore @Test  (timeout=300000) public void testRegionCloseWhenNoMetaHBase2428()
+  throws Exception {
+    /*
+    LOG.info("Running testRegionCloseWhenNoMetaHBase2428");
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    final HMaster master = cluster.getMaster();
+    int metaIndex = cluster.getServerWithMeta();
+    // Figure the index of the server that is not serving the .META.
+    int otherServerIndex = -1;
+    for (int i = 0; i < cluster.getRegionServerThreads().size(); i++) {
+      if (i == metaIndex) continue;
+      otherServerIndex = i;
+      break;
+    }
+    final HRegionServer otherServer = cluster.getRegionServer(otherServerIndex);
+    final HRegionServer metaHRS = cluster.getRegionServer(metaIndex);
+
+    // Get a region out on the otherServer.
+    final HRegionInfo hri =
+      otherServer.getOnlineRegions().iterator().next().getRegionInfo();
+ 
+    // Add our RegionServerOperationsListener
+    HBase2428Listener listener = new HBase2428Listener(cluster,
+      metaHRS.getHServerInfo().getServerAddress(), hri, otherServerIndex);
+    master.getRegionServerOperationQueue().
+      registerRegionServerOperationListener(listener);
+    try {
+      // Now close the server carrying meta.
+      cluster.abortRegionServer(metaIndex);
+
+      // First wait on receipt of meta server shutdown message.
+      while(!listener.metaShutdownReceived) Threads.sleep(100);
+      while(!listener.isDone()) Threads.sleep(10);
+      // We should not have retried the close more times than it took for the
+      // server shutdown message to exit the delay queue and get processed
+      // (Multiply by two to add in some slop in case of GC or something).
+      assertTrue(listener.getCloseCount() > 1);
+      assertTrue(listener.getCloseCount() <
+        ((HBase2428Listener.SERVER_DURATION/HBase2428Listener.CLOSE_DURATION) * 2));
+
+      // Assert the closed region came back online
+      assertRegionIsBackOnline(hri);
+    } finally {
+      master.getRegionServerOperationQueue().
+        unregisterRegionServerOperationListener(listener);
+    }
+    */
+  }
+
+  /**
+   * Test adding in a new server before old one on same host+port is dead.
+   * Make the test more onerous by having the server under test carry the meta.
+   * If there is confusion between the old and new servers, meta purportedly never
+   * comes back.  Test that meta gets redeployed.
+   */
+  @Ignore @Test (timeout=300000) public void testAddingServerBeforeOldIsDead2413()
+  throws IOException {
+    /*
+    LOG.info("Running testAddingServerBeforeOldIsDead2413");
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    int count = count();
+    int metaIndex = cluster.getServerWithMeta();
+    MiniHBaseClusterRegionServer metaHRS =
+      (MiniHBaseClusterRegionServer)cluster.getRegionServer(metaIndex);
+    int port = metaHRS.getServerInfo().getServerAddress().getPort();
+    Configuration c = TEST_UTIL.getConfiguration();
+    String oldPort = c.get(HConstants.REGIONSERVER_PORT, "0");
+    try {
+      LOG.info("KILLED=" + metaHRS);
+      metaHRS.kill();
+      c.set(HConstants.REGIONSERVER_PORT, Integer.toString(port));
+      // Try and start new regionserver.  It might clash with the old
+      // regionserver port so keep trying to get past the BindException.
+      HRegionServer hrs = null;
+      while (true) {
+        try {
+          hrs = cluster.startRegionServer().getRegionServer();
+          break;
+        } catch (IOException e) {
+          if (e.getCause() != null && e.getCause() instanceof InvocationTargetException) {
+            InvocationTargetException ee = (InvocationTargetException)e.getCause();
+            if (ee.getCause() != null && ee.getCause() instanceof BindException) {
+              LOG.info("BindException; retrying: " + e.toString());
+            }
+          }
+        }
+      }
+      LOG.info("STARTED=" + hrs);
+      // Wait until it's been given at least 3 regions before we go on to try
+      // and count rows in table.
+      while (hrs.getOnlineRegions().size() < 3) Threads.sleep(100);
+      LOG.info(hrs.toString() + " has " + hrs.getOnlineRegions().size() +
+        " regions");
+      assertEquals(count, count());
+    } finally {
+      c.set(HConstants.REGIONSERVER_PORT, oldPort);
+    }
+    */
+  }
+
+  /**
+   * HBase2482 is about outstanding region openings.  If any are outstanding
+   * when a regionserver goes down, then they'll never deploy.  They'll be
+   * stuck in the regions-in-transition list forever.  This listener looks
+   * for a region opening HMsg and, if it's from the server passed on construction,
+   * then we kill that server.  It also looks out for a close message on the victim
+   * server because that signifies the start of the fireworks.
+   */
+  /*
+  static class HBase2482Listener implements RegionServerOperationListener {
+    private final HRegionServer victim;
+    private boolean abortSent = false;
+    // We closed regions on new server.
+    private volatile boolean closed = false;
+    // Copy of regions on new server
+    private final Collection<HRegion> copyOfOnlineRegions;
+    // This is the region that was in transition on the server we aborted. Test
+    // passes if this region comes back online successfully.
+    private HRegionInfo regionToFind;
+
+    HBase2482Listener(final HRegionServer victim) {
+      this.victim = victim;
+      // Copy regions currently open on this server so I can notice when
+      // there is a close.
+      this.copyOfOnlineRegions =
+        this.victim.getCopyOfOnlineRegionsSortedBySize().values();
+    }
+ 
+    @Override
+    public boolean process(HServerInfo serverInfo, HMsg incomingMsg) {
+      if (!victim.getServerInfo().equals(serverInfo) ||
+          this.abortSent || !this.closed) {
+        return true;
+      }
+      if (!incomingMsg.isType(HMsg.Type.MSG_REPORT_PROCESS_OPEN)) return true;
+      // Save the region that is in transition so can test later it came back.
+      this.regionToFind = incomingMsg.getRegionInfo();
+      String msg = "ABORTING " + this.victim + " because got a " +
+        HMsg.Type.MSG_REPORT_PROCESS_OPEN + " on this server for " +
+        incomingMsg.getRegionInfo().getRegionNameAsString();
+      this.victim.abort(msg);
+      this.abortSent = true;
+      return true;
+    }
+
+    @Override
+    public boolean process(RegionServerOperation op) throws IOException {
+      return true;
+    }
+
+    @Override
+    public void processed(RegionServerOperation op) {
+      if (this.closed || !(op instanceof ProcessRegionClose)) return;
+      ProcessRegionClose close = (ProcessRegionClose)op;
+      for (HRegion r: this.copyOfOnlineRegions) {
+        if (r.getRegionInfo().equals(close.regionInfo)) {
+          // We've closed one of the regions that was on the victim server.
+          // Now can start testing for when all regions are back online again
+          LOG.info("Found close of " +
+            r.getRegionInfo().getRegionNameAsString() +
+            "; setting close happened flag");
+          this.closed = true;
+          break;
+        }
+      }
+    }
+  }
+*/
+  /**
+   * In 2482, an RS with an opening region on it dies.  Said region is then
+   * stuck in the master's regions-in-transition list and never leaves it.  This
+   * test works by bringing up a new regionserver and waiting for the load
+   * balancer to give it some regions.  Then we close all regions on the new
+   * server.  After sending all the close messages, we send the new regionserver
+   * the special blocking message so it cannot process any more messages.
+   * Meanwhile, reopening of the just-closed regions is backed up on the new
+   * server.  As soon as the master gets an opening region from the new
+   * regionserver, we kill it.  We then wait for all regions to come back
+   * online.  If the bug is fixed, this should happen as soon as the processing
+   * of the killed server is done.
+   * @see <a href="https://issues.apache.org/jira/browse/HBASE-2482">HBASE-2482</a> 
+   */
+  @Ignore @Test (timeout=300000) public void testKillRSWithOpeningRegion2482()
+  throws Exception {
+    /*
+    LOG.info("Running testKillRSWithOpeningRegion2482");
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    if (cluster.getLiveRegionServerThreads().size() < 2) {
+      // Need at least two servers.
+      cluster.startRegionServer();
+    }
+    // Count how many regions are online.  They need to be all back online for
+    // this test to succeed.
+    int countOfMetaRegions = countOfMetaRegions();
+    // Add a listener on the server.
+    HMaster m = cluster.getMaster();
+    // Start new regionserver.
+    MiniHBaseClusterRegionServer hrs =
+      (MiniHBaseClusterRegionServer)cluster.startRegionServer().getRegionServer();
+    LOG.info("Started new regionserver: " + hrs.toString());
+    // Wait until has some regions before proceeding.  Balancer will give it some.
+    int minimumRegions =
+      countOfMetaRegions/(cluster.getRegionServerThreads().size() * 2);
+    while (hrs.getOnlineRegions().size() < minimumRegions) Threads.sleep(100);
+    // Set the listener only after some regions have been opened on new server.
+    HBase2482Listener listener = new HBase2482Listener(hrs);
+    m.getRegionServerOperationQueue().
+      registerRegionServerOperationListener(listener);
+    try {
+      // Go close all non-catalog regions on this new server
+      closeAllNonCatalogRegions(cluster, hrs);
+      // After all closes, add blocking message before the region opens start to
+      // come in.
+      cluster.addMessageToSendRegionServer(hrs,
+        new HMsg(HMsg.Type.TESTING_BLOCK_REGIONSERVER));
+      // Wait till one of the above close messages has an effect before we start
+      // wait on all regions back online.
+      while (!listener.closed) Threads.sleep(100);
+      LOG.info("Past close");
+      // Make sure the abort server message was sent.
+      while(!listener.abortSent) Threads.sleep(100);
+      LOG.info("Past abort send; waiting on all regions to redeploy");
+      // Now wait for regions to come back online.
+      assertRegionIsBackOnline(listener.regionToFind);
+    } finally {
+      m.getRegionServerOperationQueue().
+        unregisterRegionServerOperationListener(listener);
+    }
+    */
+  }
+
+  /*
+   * @return Count of all non-catalog regions on the designated server
+   */
+/*
+  private int closeAllNonCatalogRegions(final MiniHBaseCluster cluster,
+    final MiniHBaseCluster.MiniHBaseClusterRegionServer hrs)
+  throws IOException {
+    int countOfRegions = 0;
+    for (HRegion r: hrs.getOnlineRegions()) {
+      if (r.getRegionInfo().isMetaRegion()) continue;
+      cluster.addMessageToSendRegionServer(hrs,
+        new HMsg(HMsg.Type.MSG_REGION_CLOSE, r.getRegionInfo()));
+      LOG.info("Sent close of " + r.getRegionInfo().getRegionNameAsString() +
+        " on " + hrs.toString());
+      countOfRegions++;
+    }
+    return countOfRegions;
+  }
+
+  private void assertRegionIsBackOnline(final HRegionInfo hri)
+  throws IOException {
+    // Region should have an entry in its startkey because of addRowToEachRegion.
+    byte [] row = getStartKey(hri);
+    HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME);
+    Get g =  new Get(row);
+    assertTrue((t.get(g)).size() > 0);
+  }
+
+  /*
+   * @return Count of regions in meta table.
+   * @throws IOException
+   */
+  /*
+  private static int countOfMetaRegions()
+  throws IOException {
+    HTable meta = new HTable(TEST_UTIL.getConfiguration(),
+      HConstants.META_TABLE_NAME);
+    int rows = 0;
+    Scan scan = new Scan();
+    scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+    ResultScanner s = meta.getScanner(scan);
+    for (Result r = null; (r = s.next()) != null;) {
+      byte [] b =
+        r.getValue(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+      if (b == null || b.length <= 0) break;
+      rows++;
+    }
+    s.close();
+    return rows;
+  }
+*/
+  /*
+   * Add to each of the regions in .META. a value.  Key is the startrow of the
+   * region (except it's 'aaa' for the first region).  Actual value is the row name.
+   * @param expected
+   * @return
+   * @throws IOException
+   */
+  private static int addToEachStartKey(final int expected) throws IOException {
+    HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME);
+    HTable meta = new HTable(TEST_UTIL.getConfiguration(),
+        HConstants.META_TABLE_NAME);
+    int rows = 0;
+    Scan scan = new Scan();
+    scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    ResultScanner s = meta.getScanner(scan);
+    for (Result r = null; (r = s.next()) != null;) {
+      byte [] b =
+        r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+      if (b == null || b.length <= 0) break;
+      HRegionInfo hri = Writables.getHRegionInfo(b);
+      // If the start key is empty, use 'aaa' as the row.
+      byte [] row = getStartKey(hri);
+      Put p = new Put(row);
+      p.add(getTestFamily(), getTestQualifier(), row);
+      t.put(p);
+      rows++;
+    }
+    s.close();
+    Assert.assertEquals(expected, rows);
+    return rows;
+  }
+
+  /*
+   * @return Count of rows in TABLENAME
+   * @throws IOException
+   */
+  private static int count() throws IOException {
+    HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME);
+    int rows = 0;
+    Scan scan = new Scan();
+    ResultScanner s = t.getScanner(scan);
+    for (Result r = null; (r = s.next()) != null;) {
+      rows++;
+    }
+    s.close();
+    LOG.info("Counted=" + rows);
+    return rows;
+  }
+
+  /*
+   * @param hri
+   * @return Start key for hri (if start key is '', return 'aaa').
+   */
+  private static byte [] getStartKey(final HRegionInfo hri) {
+    return Bytes.equals(HConstants.EMPTY_START_ROW, hri.getStartKey())?
+        Bytes.toBytes("aaa"): hri.getStartKey();
+  }
+
+  private static byte [] getTestFamily() {
+    return FAMILIES[0];
+  }
+
+  private static byte [] getTestQualifier() {
+    return getTestFamily();
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java
new file mode 100644
index 0000000..dff6c1b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java
@@ -0,0 +1,135 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.TableExistsException;
+import org.apache.hadoop.hbase.client.MetaScanner;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+
+public class TestRestartCluster {
+  private static final Log LOG = LogFactory.getLog(TestRestartCluster.class);
+  private static HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static ZooKeeperWatcher zooKeeper;
+  private static final byte[] TABLENAME = Bytes.toBytes("master_transitions");
+  private static final byte [][] FAMILIES = new byte [][] {Bytes.toBytes("a")};
+
+
+  private static final byte [][] TABLES = new byte[][] {
+      Bytes.toBytes("restartTableOne"),
+      Bytes.toBytes("restartTableTwo"),
+      Bytes.toBytes("restartTableThree")
+  };
+  private static final byte [] FAMILY = Bytes.toBytes("family");
+
+  @Before public void setup() throws Exception {
+  }
+
+  @After public void teardown() throws IOException {
+    UTIL.shutdownMiniCluster();
+  }
+
+  @Test (timeout=300000) public void testRestartClusterAfterKill()
+  throws Exception {
+    UTIL.startMiniZKCluster();
+    zooKeeper = new ZooKeeperWatcher(UTIL.getConfiguration(), "cluster1", null);
+
+    // create the unassigned znode dir and put up OFFLINE nodes for ROOT and META
+    String unassignedZNode = zooKeeper.assignmentZNode;
+    ZKUtil.createAndFailSilent(zooKeeper, unassignedZNode);
+
+    ZKAssign.createNodeOffline(zooKeeper, HRegionInfo.ROOT_REGIONINFO,
+      HMaster.MASTER);
+
+    ZKAssign.createNodeOffline(zooKeeper, HRegionInfo.FIRST_META_REGIONINFO,
+      HMaster.MASTER);
+
+    LOG.debug("Created UNASSIGNED zNode for ROOT and META regions in state " +
+        EventType.M_ZK_REGION_OFFLINE);
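+    // These OFFLINE nodes mimic unassigned znodes left behind by a killed
+    // cluster; the master started below must clean them up during startup.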
+
+    // start the HB cluster
+    LOG.info("Starting HBase cluster...");
+    UTIL.startMiniCluster(2);
+
+    UTIL.createTable(TABLENAME, FAMILIES);
+    LOG.info("Created a table, waiting for table to be available...");
+    UTIL.waitTableAvailable(TABLENAME, 60*1000);
+
+    LOG.info("Master deleted unassigned region and started up successfully.");
+  }
+
+  @Test (timeout=300000)
+  public void testClusterRestart() throws Exception {
+    UTIL.startMiniCluster(3);
+    LOG.info("\n\nCreating tables");
+    for(byte [] TABLE : TABLES) {
+      UTIL.createTable(TABLE, FAMILY);
+      UTIL.waitTableAvailable(TABLE, 30000);
+    }
+    List<HRegionInfo> allRegions =
+      MetaScanner.listAllRegions(UTIL.getConfiguration());
+    assertEquals(3, allRegions.size());
+
+    LOG.info("\n\nShutting down cluster");
+    UTIL.getHBaseCluster().shutdown();
+    UTIL.getHBaseCluster().join();
+
+    LOG.info("\n\nSleeping a bit");
+    Thread.sleep(2000);
+
+    LOG.info("\n\nStarting cluster the second time");
+    UTIL.restartHBaseCluster(3);
+
+    // Need to use a new 'Configuration' so we make a new HConnection.
+    // Otherwise we're reusing an HConnection that has gone stale because
+    // the shutdown of the cluster also called shut of the connection.
+    allRegions = MetaScanner.
+      listAllRegions(new Configuration(UTIL.getConfiguration()));
+    assertEquals(3, allRegions.size());
+
+    LOG.info("\n\nWaiting for tables to be available");
+    for(byte [] TABLE: TABLES) {
+      try {
+        UTIL.createTable(TABLE, FAMILY);
+        assertTrue("Able to create table that should already exist", false);
+      } catch(TableExistsException tee) {
+        LOG.info("Table already exists as expected");
+      }
+      UTIL.waitTableAvailable(TABLE, 30000);
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java
new file mode 100644
index 0000000..6089ae6
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java
@@ -0,0 +1,397 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.*;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.Set;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread;
+import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread;
+import org.apache.hadoop.hbase.zookeeper.ZKAssign;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.zookeeper.KeeperException;
+import org.junit.Test;
+
+/**
+ * Tests the restarting of everything as done during rolling restarts.
+ */
+public class TestRollingRestart {
+  private static final Log LOG = LogFactory.getLog(TestRollingRestart.class);
+
+  @Test
+  public void testBasicRollingRestart() throws Exception {
+
+    // Start a cluster with 2 masters and 3 regionservers (a fourth RS is added later)
+    final int NUM_MASTERS = 2;
+    final int NUM_RS = 3;
+    final int NUM_REGIONS_TO_CREATE = 20;
+
+    int expectedNumRS = 3;
+
+    // Start the cluster
+    log("Starting cluster");
+    Configuration conf = HBaseConfiguration.create();
+    conf.setInt("hbase.master.assignment.timeoutmonitor.period", 2000);
+    conf.setInt("hbase.master.assignment.timeoutmonitor.timeout", 5000);
+    HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf);
+    TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS);
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    log("Waiting for active/ready master");
+    cluster.waitForActiveAndReadyMaster();
+    ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testRollingRestart",
+        null);
+    HMaster master = cluster.getMaster();
+
+    // Create a table with regions
+    byte [] table = Bytes.toBytes("tableRestart");
+    byte [] family = Bytes.toBytes("family");
+    log("Creating table with " + NUM_REGIONS_TO_CREATE + " regions");
+    HTable ht = TEST_UTIL.createTable(table, family);
+    int numRegions = TEST_UTIL.createMultiRegions(conf, ht, family,
+        NUM_REGIONS_TO_CREATE);
+    numRegions += 2; // catalogs
+    log("Waiting for no more RIT\n");
+    blockUntilNoRIT(zkw, master);
+    log("Disabling table\n");
+    TEST_UTIL.getHBaseAdmin().disableTable(table);
+    log("Waiting for no more RIT\n");
+    blockUntilNoRIT(zkw, master);
+    NavigableSet<String> regions = getAllOnlineRegions(cluster);
+    log("Verifying only catalog regions are assigned\n");
+    if (regions.size() != 2) {
+      for (String oregion : regions) log("Region still online: " + oregion);
+    }
+    assertEquals(2, regions.size());
+    log("Enabling table\n");
+    TEST_UTIL.getHBaseAdmin().enableTable(table);
+    log("Waiting for no more RIT\n");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster\n");
+    regions = getAllOnlineRegions(cluster);
+    assertRegionsAssigned(cluster, regions);
+    assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+
+    // Add a new regionserver
+    log("Adding a fourth RS");
+    RegionServerThread restarted = cluster.startRegionServer();
+    expectedNumRS++;
+    restarted.waitForServerOnline();
+    log("Additional RS is online");
+    log("Waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+    assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+
+    // Master Restarts
+    List<MasterThread> masterThreads = cluster.getMasterThreads();
+    MasterThread activeMaster = null;
+    MasterThread backupMaster = null;
+    assertEquals(2, masterThreads.size());
+    if (masterThreads.get(0).getMaster().isActiveMaster()) {
+      activeMaster = masterThreads.get(0);
+      backupMaster = masterThreads.get(1);
+    } else {
+      activeMaster = masterThreads.get(1);
+      backupMaster = masterThreads.get(0);
+    }
+
+    // Bring down the backup master
+    log("Stopping backup master\n\n");
+    backupMaster.getMaster().stop("Stop of backup during rolling restart");
+    cluster.hbaseCluster.waitOnMaster(backupMaster);
+
+    // Bring down the primary master
+    log("Stopping primary master\n\n");
+    activeMaster.getMaster().stop("Stop of active during rolling restart");
+    cluster.hbaseCluster.waitOnMaster(activeMaster);
+
+    // Start primary master
+    log("Restarting primary master\n\n");
+    activeMaster = cluster.startMaster();
+    cluster.waitForActiveAndReadyMaster();
+    master = activeMaster.getMaster();
+
+    // Start backup master
+    log("Restarting backup master\n\n");
+    backupMaster = cluster.startMaster();
+
+    assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+
+    // RegionServer Restarts
+
+    // Bring them down, one at a time, waiting between each to complete
+    List<RegionServerThread> regionServers =
+      cluster.getLiveRegionServerThreads();
+    int num = 1;
+    int total = regionServers.size();
+    for (RegionServerThread rst : regionServers) {
+      String serverName = rst.getRegionServer().getServerName();
+      log("Stopping region server " + num + " of " + total + " [ " +
+          serverName + "]");
+      rst.getRegionServer().stop("Stopping RS during rolling restart");
+      cluster.hbaseCluster.waitOnRegionServer(rst);
+      log("Waiting for RS shutdown to be handled by master");
+      waitForRSShutdownToStartAndFinish(activeMaster, serverName);
+      log("RS shutdown done, waiting for no more RIT");
+      blockUntilNoRIT(zkw, master);
+      log("Verifying there are " + numRegions + " assigned on cluster");
+      assertRegionsAssigned(cluster, regions);
+      expectedNumRS--;
+      assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+      log("Restarting region server " + num + " of " + total);
+      restarted = cluster.startRegionServer();
+      restarted.waitForServerOnline();
+      expectedNumRS++;
+      log("Region server " + num + " is back online");
+      log("Waiting for no more RIT");
+      blockUntilNoRIT(zkw, master);
+      log("Verifying there are " + numRegions + " assigned on cluster");
+      assertRegionsAssigned(cluster, regions);
+      assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+      num++;
+    }
+    Thread.sleep(2000);
+    assertRegionsAssigned(cluster, regions);
+
+    // Bring the RS hosting ROOT down and the RS hosting META down at once
+    RegionServerThread rootServer = getServerHostingRoot(cluster);
+    RegionServerThread metaServer = getServerHostingMeta(cluster);
+    if (rootServer == metaServer) {
+      log("ROOT and META on the same server so killing another random server");
+      int i=0;
+      while (rootServer == metaServer) {
+        metaServer = cluster.getRegionServerThreads().get(i);
+        i++;
+      }
+    }
+    log("Stopping server hosting ROOT");
+    rootServer.getRegionServer().stop("Stopping ROOT server");
+    log("Stopping server hosting META #1");
+    metaServer.getRegionServer().stop("Stopping META server");
+    cluster.hbaseCluster.waitOnRegionServer(rootServer);
+    log("Root server down");
+    cluster.hbaseCluster.waitOnRegionServer(metaServer);
+    log("Meta server down #1");
+    expectedNumRS -= 2;
+    log("Waiting for meta server #1 RS shutdown to be handled by master");
+    waitForRSShutdownToStartAndFinish(activeMaster,
+        metaServer.getRegionServer().getServerName());
+    log("Waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+    assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+
+    // Kill off the server hosting META again
+    metaServer = getServerHostingMeta(cluster);
+    log("Stopping server hosting META #2");
+    metaServer.getRegionServer().stop("Stopping META server");
+    cluster.hbaseCluster.waitOnRegionServer(metaServer);
+    log("Meta server down");
+    expectedNumRS--;
+    log("Waiting for RS shutdown to be handled by master");
+    waitForRSShutdownToStartAndFinish(activeMaster,
+        metaServer.getRegionServer().getServerName());
+    log("RS shutdown done, waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+    assertEquals(expectedNumRS, cluster.getRegionServerThreads().size());
+
+    // Start 3 RS again
+    cluster.startRegionServer().waitForServerOnline();
+    cluster.startRegionServer().waitForServerOnline();
+    cluster.startRegionServer().waitForServerOnline();
+    Thread.sleep(1000);
+    log("Waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+    // Shutdown server hosting META
+    metaServer = getServerHostingMeta(cluster);
+    log("Stopping server hosting META (1 of 3)");
+    metaServer.getRegionServer().stop("Stopping META server");
+    cluster.hbaseCluster.waitOnRegionServer(metaServer);
+    log("Meta server down (1 of 3)");
+    log("Waiting for RS shutdown to be handled by master");
+    waitForRSShutdownToStartAndFinish(activeMaster,
+        metaServer.getRegionServer().getServerName());
+    log("RS shutdown done, waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+
+    // Shutdown server hosting META again
+    metaServer = getServerHostingMeta(cluster);
+    log("Stopping server hosting META (2 of 3)");
+    metaServer.getRegionServer().stop("Stopping META server");
+    cluster.hbaseCluster.waitOnRegionServer(metaServer);
+    log("Meta server down (2 of 3)");
+    log("Waiting for RS shutdown to be handled by master");
+    waitForRSShutdownToStartAndFinish(activeMaster,
+        metaServer.getRegionServer().getServerName());
+    log("RS shutdown done, waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+
+    // Shutdown server hosting META again
+    metaServer = getServerHostingMeta(cluster);
+    log("Stopping server hosting META (3 of 3)");
+    metaServer.getRegionServer().stop("Stopping META server");
+    cluster.hbaseCluster.waitOnRegionServer(metaServer);
+    log("Meta server down (3 of 3)");
+    log("Waiting for RS shutdown to be handled by master");
+    waitForRSShutdownToStartAndFinish(activeMaster,
+        metaServer.getRegionServer().getServerName());
+    log("RS shutdown done, waiting for no more RIT");
+    blockUntilNoRIT(zkw, master);
+    log("Verifying there are " + numRegions + " assigned on cluster");
+    assertRegionsAssigned(cluster, regions);
+
+    if (cluster.getRegionServerThreads().size() != 1) {
+      log("Online regionservers:");
+      for (RegionServerThread rst : cluster.getRegionServerThreads()) {
+        log("RS: " + rst.getRegionServer().getServerName());
+      }
+    }
+    assertEquals(1, cluster.getRegionServerThreads().size());
+
+
+    // TODO: Bring random 3 of 4 RS down at the same time
+
+
+    // Stop the cluster
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private void blockUntilNoRIT(ZooKeeperWatcher zkw, HMaster master)
+  throws KeeperException, InterruptedException {
+    ZKAssign.blockUntilNoRIT(zkw);
+    master.assignmentManager.waitUntilNoRegionsInTransition(60000);
+  }
+
+  private void waitForRSShutdownToStartAndFinish(MasterThread activeMaster,
+      String serverName) throws InterruptedException {
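+    // The master handles a dead RS in two phases: it first adds the server to
+    // its dead-server list, then runs shutdown processing (log splitting and
+    // region reassignment).  Wait for both phases to complete below.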
+    ServerManager sm = activeMaster.getMaster().getServerManager();
+    // First wait for it to be in dead list
+    while (!sm.getDeadServers().contains(serverName)) {
+      log("Waiting for [" + serverName + "] to be listed as dead in master");
+      Thread.sleep(1);
+    }
+    log("Server [" + serverName + "] marked as dead, waiting for it to " +
+        "finish dead processing");
+    while (sm.areDeadServersInProgress()) {
+      log("Server [" + serverName + "] still being processed, waiting");
+      Thread.sleep(100);
+    }
+    log("Server [" + serverName + "] done with server shutdown processing");
+  }
+
+  private void log(String msg) {
+    LOG.debug("\n\nTRR: " + msg + "\n");
+  }
+
+  private RegionServerThread getServerHostingMeta(MiniHBaseCluster cluster) {
+    return getServerHosting(cluster, HRegionInfo.FIRST_META_REGIONINFO);
+  }
+
+  private RegionServerThread getServerHostingRoot(MiniHBaseCluster cluster) {
+    return getServerHosting(cluster, HRegionInfo.ROOT_REGIONINFO);
+  }
+
+  private RegionServerThread getServerHosting(MiniHBaseCluster cluster,
+      HRegionInfo region) {
+    for (RegionServerThread rst : cluster.getRegionServerThreads()) {
+      if (rst.getRegionServer().getOnlineRegions().contains(region)) {
+        return rst;
+      }
+    }
+    return null;
+  }
+
+  private void assertRegionsAssigned(MiniHBaseCluster cluster,
+      Set<String> expectedRegions) {
+    int numFound = 0;
+    for (RegionServerThread rst : cluster.getLiveRegionServerThreads()) {
+      numFound += rst.getRegionServer().getNumberOfOnlineRegions();
+    }
+    if (expectedRegions.size() > numFound) {
+      log("Expected to find " + expectedRegions.size() + " but only found"
+          + " " + numFound);
+      NavigableSet<String> foundRegions = getAllOnlineRegions(cluster);
+      for (String region : expectedRegions) {
+        if (!foundRegions.contains(region)) {
+          log("Missing region: " + region);
+        }
+      }
+      assertEquals(expectedRegions.size(), numFound);
+    } else if (expectedRegions.size() < numFound) {
+      int doubled = numFound - expectedRegions.size();
+      log("Expected to find " + expectedRegions.size() + " but found"
+          + " " + numFound + " (" + doubled + " double assignments?)");
+      NavigableSet<String> doubleRegions = getDoubleAssignedRegions(cluster);
+      for (String region : doubleRegions) {
+        log("Region is double assigned: " + region);
+      }
+      assertEquals(expectedRegions.size(), numFound);
+    } else {
+      log("Success!  Found expected number of " + numFound + " regions");
+    }
+  }
+
+  private NavigableSet<String> getAllOnlineRegions(MiniHBaseCluster cluster) {
+    NavigableSet<String> online = new TreeSet<String>();
+    for (RegionServerThread rst : cluster.getLiveRegionServerThreads()) {
+      for (HRegionInfo region : rst.getRegionServer().getOnlineRegions()) {
+        online.add(region.getRegionNameAsString());
+      }
+    }
+    return online;
+  }
+
+  private NavigableSet<String> getDoubleAssignedRegions(
+      MiniHBaseCluster cluster) {
+    NavigableSet<String> online = new TreeSet<String>();
+    NavigableSet<String> doubled = new TreeSet<String>();
+    for (RegionServerThread rst : cluster.getLiveRegionServerThreads()) {
+      for (HRegionInfo region : rst.getRegionServer().getOnlineRegions()) {
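+        // Set.add() returns false if the region name was already seen on
+        // another region server, i.e. the region is double assigned.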
+        if(!online.add(region.getRegionNameAsString())) {
+          doubled.add(region.getRegionNameAsString());
+        }
+      }
+    }
+    return doubled;
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java
new file mode 100644
index 0000000..86db194
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java
@@ -0,0 +1,330 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.executor.EventHandler;
+import org.apache.hadoop.hbase.executor.EventHandler.EventHandlerListener;
+import org.apache.hadoop.hbase.executor.EventHandler.EventType;
+import org.apache.hadoop.hbase.master.handler.TotesHRegionInfo;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.hbase.util.Writables;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test open and close of regions using zk.
+ */
+public class TestZKBasedOpenCloseRegion {
+  private static final Log LOG = LogFactory.getLog(TestZKBasedOpenCloseRegion.class);
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final String TABLENAME = "TestZKBasedOpenCloseRegion";
+  private static final byte [][] FAMILIES = new byte [][] {Bytes.toBytes("a"),
+    Bytes.toBytes("b"), Bytes.toBytes("c")};
+
+  @BeforeClass public static void beforeAllTests() throws Exception {
+    Configuration c = TEST_UTIL.getConfiguration();
+    c.setBoolean("dfs.support.append", true);
+    c.setInt("hbase.regionserver.info.port", 0);
+    TEST_UTIL.startMiniCluster(2);
+    TEST_UTIL.createTable(Bytes.toBytes(TABLENAME), FAMILIES);
+    HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME);
+    int countOfRegions = TEST_UTIL.createMultiRegions(t, getTestFamily());
+    waitUntilAllRegionsAssigned(countOfRegions);
+    addToEachStartKey(countOfRegions);
+  }
+
+  @AfterClass public static void afterAllTests() throws IOException {
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Before public void setup() throws IOException {
+    if (TEST_UTIL.getHBaseCluster().getLiveRegionServerThreads().size() < 2) {
+      // Need at least two servers.
+      LOG.info("Started new server=" +
+        TEST_UTIL.getHBaseCluster().startRegionServer());
+
+    }
+  }
+
+  /**
+   * Test we reopen a region once closed.
+   * @throws Exception
+   */
+  @Test (timeout=300000) public void testReOpenRegion()
+  throws Exception {
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    LOG.info("Number of region servers = " +
+      cluster.getLiveRegionServerThreads().size());
+
+    int rsIdx = 0;
+    HRegionServer regionServer =
+      TEST_UTIL.getHBaseCluster().getRegionServer(rsIdx);
+    HRegionInfo hri = getNonMetaRegion(regionServer.getOnlineRegions());
+    LOG.debug("Asking RS to close region " + hri.getRegionNameAsString());
+
+    AtomicBoolean closeEventProcessed = new AtomicBoolean(false);
+    AtomicBoolean reopenEventProcessed = new AtomicBoolean(false);
+
+    EventHandlerListener closeListener =
+      new ReopenEventListener(hri.getRegionNameAsString(),
+          closeEventProcessed, EventType.RS_ZK_REGION_CLOSED);
+    cluster.getMaster().executorService.
+      registerListener(EventType.RS_ZK_REGION_CLOSED, closeListener);
+
+    EventHandlerListener openListener =
+      new ReopenEventListener(hri.getRegionNameAsString(),
+          reopenEventProcessed, EventType.RS_ZK_REGION_OPENED);
+    cluster.getMaster().executorService.
+      registerListener(EventType.RS_ZK_REGION_OPENED, openListener);
+
+    LOG.info("Unassign " + hri.getRegionNameAsString());
+    cluster.getMaster().assignmentManager.unassign(hri);
+
+    while (!closeEventProcessed.get()) {
+      Threads.sleep(100);
+    }
+
+    while (!reopenEventProcessed.get()) {
+      Threads.sleep(100);
+    }
+
+    LOG.info("Done with testReOpenRegion");
+  }
+
+  private HRegionInfo getNonMetaRegion(final Collection<HRegionInfo> regions) {
+    HRegionInfo hri = null;
+    for (HRegionInfo i: regions) {
+      LOG.info(i.getRegionNameAsString());
+      if (!i.isMetaRegion()) {
+        hri = i;
+        break;
+      }
+    }
+    return hri;
+  }
+
+  public static class ReopenEventListener implements EventHandlerListener {
+    private static final Log LOG = LogFactory.getLog(ReopenEventListener.class);
+    String regionName;
+    AtomicBoolean eventProcessed;
+    EventType eventType;
+
+    public ReopenEventListener(String regionName,
+        AtomicBoolean eventProcessed, EventType eventType) {
+      this.regionName = regionName;
+      this.eventProcessed = eventProcessed;
+      this.eventType = eventType;
+    }
+
+    @Override
+    public void beforeProcess(EventHandler event) {
+      if(event.getEventType() == eventType) {
+        LOG.info("Received " + eventType + " and beginning to process it");
+      }
+    }
+
+    @Override
+    public void afterProcess(EventHandler event) {
+      LOG.info("afterProcess(" + event + ")");
+      if(event.getEventType() == eventType) {
+        LOG.info("Finished processing " + eventType);
+        String regionName = "";
+        if (eventType == EventType.RS_ZK_REGION_OPENED ||
+            eventType == EventType.RS_ZK_REGION_CLOSED) {
+          // Both event types carry the region info via TotesHRegionInfo.
+          TotesHRegionInfo hriCarrier = (TotesHRegionInfo)event;
+          regionName = hriCarrier.getHRegionInfo().getRegionNameAsString();
+        }
+        if(this.regionName.equals(regionName)) {
+          eventProcessed.set(true);
+        }
+        synchronized(eventProcessed) {
+          eventProcessed.notifyAll();
+        }
+      }
+    }
+  }
+
+  @Test (timeout=300000) public void testCloseRegion()
+  throws Exception {
+    LOG.info("Running testCloseRegion");
+    MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+    LOG.info("Number of region servers = " + cluster.getLiveRegionServerThreads().size());
+
+    int rsIdx = 0;
+    HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(rsIdx);
+    HRegionInfo hri = getNonMetaRegion(regionServer.getOnlineRegions());
+    LOG.debug("Asking RS to close region " + hri.getRegionNameAsString());
+
+    AtomicBoolean closeEventProcessed = new AtomicBoolean(false);
+    EventHandlerListener listener =
+      new CloseRegionEventListener(hri.getRegionNameAsString(),
+          closeEventProcessed);
+    cluster.getMaster().executorService.registerListener(EventType.RS_ZK_REGION_CLOSED, listener);
+
+    cluster.getMaster().assignmentManager.unassign(hri);
+
+    while (!closeEventProcessed.get()) {
+      Threads.sleep(100);
+    }
+    LOG.info("Done with testCloseRegion");
+  }
+
+  public static class CloseRegionEventListener implements EventHandlerListener {
+    private static final Log LOG = LogFactory.getLog(CloseRegionEventListener.class);
+    String regionToClose;
+    AtomicBoolean closeEventProcessed;
+
+    public CloseRegionEventListener(String regionToClose,
+        AtomicBoolean closeEventProcessed) {
+      this.regionToClose = regionToClose;
+      this.closeEventProcessed = closeEventProcessed;
+    }
+
+    @Override
+    public void afterProcess(EventHandler event) {
+      LOG.info("afterProcess(" + event + ")");
+      if(event.getEventType() == EventType.RS_ZK_REGION_CLOSED) {
+        LOG.info("Finished processing CLOSE REGION");
+        TotesHRegionInfo hriCarrier = (TotesHRegionInfo)event;
+        if (regionToClose.equals(hriCarrier.getHRegionInfo().getRegionNameAsString())) {
+          LOG.info("Setting closeEventProcessed flag");
+          closeEventProcessed.set(true);
+        } else {
+          LOG.info("Region to close didn't match");
+        }
+      }
+    }
+
+    @Override
+    public void beforeProcess(EventHandler event) {
+      if(event.getEventType() == EventType.M_RS_CLOSE_REGION) {
+        LOG.info("Received CLOSE RPC and beginning to process it");
+      }
+    }
+  }
+
+  private static void waitUntilAllRegionsAssigned(final int countOfRegions)
+  throws IOException {
+    HTable meta = new HTable(TEST_UTIL.getConfiguration(),
+      HConstants.META_TABLE_NAME);
+    while (true) {
+      int rows = 0;
+      Scan scan = new Scan();
+      scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+      ResultScanner s = meta.getScanner(scan);
+      for (Result r = null; (r = s.next()) != null;) {
+        byte [] b =
+          r.getValue(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER);
+        if (b == null || b.length <= 0) {
+          break;
+        }
+        rows++;
+      }
+      s.close();
+      // If I get to here and all rows have a Server, then all have been assigned.
+      if (rows == countOfRegions) {
+        break;
+      }
+      LOG.info("Found=" + rows);
+      Threads.sleep(1000);
+    }
+  }
+
+  /*
+   * Add to each of the regions in .META. a value.  Key is the startrow of the
+   * region (except it's 'aaa' for the first region).  Actual value is the row name.
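+   * For example (illustration only): a region whose start key is 'bbb' gets a
+   * Put with row 'bbb' and value 'bbb' in the test family, while the first
+   * region (empty start key) gets row 'aaa' instead.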
+   * @param expected
+   * @return
+   * @throws IOException
+   */
+  private static int addToEachStartKey(final int expected) throws IOException {
+    HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME);
+    HTable meta = new HTable(TEST_UTIL.getConfiguration(),
+        HConstants.META_TABLE_NAME);
+    int rows = 0;
+    Scan scan = new Scan();
+    scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    ResultScanner s = meta.getScanner(scan);
+    for (Result r = null; (r = s.next()) != null;) {
+      byte [] b =
+        r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+      if (b == null || b.length <= 0) {
+        break;
+      }
+      HRegionInfo hri = Writables.getHRegionInfo(b);
+      // If start key, add 'aaa'.
+      byte [] row = getStartKey(hri);
+      Put p = new Put(row);
+      p.add(getTestFamily(), getTestQualifier(), row);
+      t.put(p);
+      rows++;
+    }
+    s.close();
+    Assert.assertEquals(expected, rows);
+    return rows;
+  }
+
+  private static byte [] getStartKey(final HRegionInfo hri) {
+    return Bytes.equals(HConstants.EMPTY_START_ROW, hri.getStartKey())?
+        Bytes.toBytes("aaa"): hri.getStartKey();
+  }
+
+  private static byte [] getTestFamily() {
+    return FAMILIES[0];
+  }
+
+  private static byte [] getTestQualifier() {
+    return getTestFamily();
+  }
+
+  public static void main(String args[]) throws Exception {
+    TestZKBasedOpenCloseRegion.beforeAllTests();
+
+    TestZKBasedOpenCloseRegion test = new TestZKBasedOpenCloseRegion();
+    test.setup();
+    test.testCloseRegion();
+
+    TestZKBasedOpenCloseRegion.afterAllTests();
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/metrics/TestMetricsMBeanBase.java b/0.90/src/test/java/org/apache/hadoop/hbase/metrics/TestMetricsMBeanBase.java
new file mode 100644
index 0000000..cd939cf
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/metrics/TestMetricsMBeanBase.java
@@ -0,0 +1,121 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.metrics;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import javax.management.MBeanAttributeInfo;
+import javax.management.MBeanInfo;
+
+import org.apache.hadoop.metrics.MetricsContext;
+import org.apache.hadoop.metrics.MetricsRecord;
+import org.apache.hadoop.metrics.MetricsUtil;
+import org.apache.hadoop.metrics.util.MetricsIntValue;
+import org.apache.hadoop.metrics.util.MetricsRegistry;
+import org.apache.hadoop.metrics.util.MetricsTimeVaryingRate;
+
+import junit.framework.TestCase;
+
+public class TestMetricsMBeanBase extends TestCase {
+
+  private class TestStatistics extends MetricsMBeanBase {
+    public TestStatistics(MetricsRegistry registry) {
+      super(registry, "TestStatistics");
+    }
+  }
+
+  private MetricsRegistry registry;
+  private MetricsRecord metricsRecord;
+  private TestStatistics stats;
+  private MetricsRate metricsRate;
+  private MetricsIntValue intValue;
+  private MetricsTimeVaryingRate varyRate;
+
+  public void setUp() {
+    this.registry = new MetricsRegistry();
+    this.metricsRate = new MetricsRate("metricsRate", registry, "test");
+    this.intValue = new MetricsIntValue("intValue", registry, "test");
+    this.varyRate = new MetricsTimeVaryingRate("varyRate", registry, "test");
+    this.stats = new TestStatistics(registry);
+    MetricsContext context = MetricsUtil.getContext("hbase");
+    this.metricsRecord = MetricsUtil.createRecord(context, "test");
+    this.metricsRecord.setTag("TestStatistics", "test");
+    //context.registerUpdater(this);
+
+  }
+
+  public void tearDown() {
+
+  }
+
+  public void testGetAttribute() throws Exception {
+    this.metricsRate.inc(2);
+    this.metricsRate.pushMetric(this.metricsRecord);
+    this.intValue.set(5);
+    this.intValue.pushMetric(this.metricsRecord);
+    this.varyRate.inc(10);
+    this.varyRate.inc(50);
+    this.varyRate.pushMetric(this.metricsRecord);
+
+
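+    // Two varyRate ops of 10 and 50 give min=10, max=50, avg=30, numOps=2.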
+    assertEquals( 2.0, (Float)this.stats.getAttribute("metricsRate"), 0.001 );
+    assertEquals( 5, this.stats.getAttribute("intValue") );
+    assertEquals( 10L, this.stats.getAttribute("varyRateMinTime") );
+    assertEquals( 50L, this.stats.getAttribute("varyRateMaxTime") );
+    assertEquals( 30L, this.stats.getAttribute("varyRateAvgTime") );
+    assertEquals( 2, this.stats.getAttribute("varyRateNumOps") );
+  }
+
+  public void testGetMBeanInfo() {
+    MBeanInfo info = this.stats.getMBeanInfo();
+    MBeanAttributeInfo[] attributes = info.getAttributes();
+    assertEquals( 6, attributes.length );
+
+    Map<String,MBeanAttributeInfo> attributeByName =
+        new HashMap<String,MBeanAttributeInfo>(attributes.length);
+    for (MBeanAttributeInfo attr : attributes)
+      attributeByName.put(attr.getName(), attr);
+
+    assertAttribute( attributeByName.get("metricsRate"),
+        "metricsRate", "java.lang.Float", "test");
+    assertAttribute( attributeByName.get("intValue"),
+        "intValue", "java.lang.Integer", "test");
+    assertAttribute( attributeByName.get("varyRateMinTime"),
+        "varyRateMinTime", "java.lang.Long", "test");
+    assertAttribute( attributeByName.get("varyRateMaxTime"),
+        "varyRateMaxTime", "java.lang.Long", "test");
+    assertAttribute( attributeByName.get("varyRateAvgTime"),
+        "varyRateAvgTime", "java.lang.Long", "test");
+    assertAttribute( attributeByName.get("varyRateNumOps"),
+        "varyRateNumOps", "java.lang.Integer", "test");
+  }
+
+  protected void assertAttribute(MBeanAttributeInfo attr, String name,
+      String type, String description) {
+
+    assertEquals(attr.getName(), name);
+    assertEquals(attr.getType(), type);
+    assertEquals(attr.getDescription(), description);
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/DisabledTestRegionServerExit.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/DisabledTestRegionServerExit.java
new file mode 100644
index 0000000..5b8b464
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/DisabledTestRegionServerExit.java
@@ -0,0 +1,211 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.LocalHBaseCluster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+
+/**
+ * Tests region server failover when a region server exits both cleanly and
+ * when it aborts.
+ */
+public class DisabledTestRegionServerExit extends HBaseClusterTestCase {
+  final Log LOG = LogFactory.getLog(this.getClass().getName());
+  HTable table;
+
+  /** constructor */
+  public DisabledTestRegionServerExit() {
+    super(2);
+    conf.setInt("ipc.client.connect.max.retries", 5); // reduce ipc retries
+    conf.setInt("ipc.client.timeout", 10000);         // and ipc timeout
+    conf.setInt("hbase.client.pause", 10000);         // increase client timeout
+    conf.setInt("hbase.client.retries.number", 10);   // increase HBase retries
+  }
+
+  /**
+   * Test abort of region server.
+   * @throws IOException
+   */
+  public void testAbort() throws IOException {
+    // When the META table can be opened, the region servers are running
+    new HTable(conf, HConstants.META_TABLE_NAME);
+    // Create table and add a row.
+    final String tableName = getName();
+    byte [] row = createTableAndAddRow(tableName);
+    // Start up a new region server to take over serving of root and meta
+    // after we shut down the current meta/root host.
+    this.cluster.startRegionServer();
+    // Now abort the meta region server and wait for it to go down and come back
+    stopOrAbortMetaRegionServer(true);
+    // Verify that everything is back up.
+    LOG.info("Starting up the verification thread for " + getName());
+    Thread t = startVerificationThread(tableName, row);
+    t.start();
+    threadDumpingJoin(t);
+  }
+
+  /**
+   * Test clean exit of region server.
+   * Test is flaky up on hudson.  Needs work.
+   * @throws IOException
+   */
+  public void testCleanExit() throws IOException {
+    // When the META table can be opened, the region servers are running
+    new HTable(this.conf, HConstants.META_TABLE_NAME);
+    // Create table and add a row.
+    final String tableName = getName();
+    byte [] row = createTableAndAddRow(tableName);
+    // Start up a new region server to take over serving of root and meta
+    // after we shut down the current meta/root host.
+    this.cluster.startRegionServer();
+    // Now stop the meta region server cleanly and wait for meta to be reassigned
+    stopOrAbortMetaRegionServer(false);
+    // Verify that everything is back up.
+    LOG.info("Starting up the verification thread for " + getName());
+    Thread t = startVerificationThread(tableName, row);
+    t.start();
+    threadDumpingJoin(t);
+  }
+
+  private byte [] createTableAndAddRow(final String tableName)
+  throws IOException {
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    HBaseAdmin admin = new HBaseAdmin(conf);
+    admin.createTable(desc);
+    // put some values in the table
+    this.table = new HTable(conf, tableName);
+    byte [] row = Bytes.toBytes("row1");
+    Put put = new Put(row);
+    put.add(HConstants.CATALOG_FAMILY, null, Bytes.toBytes(tableName));
+    table.put(put);
+    return row;
+  }
+
+  /*
+   * Stop the region server serving the meta region and wait for the meta region
+   * to get reassigned. This is always the most problematic case.
+   *
+   * @param abort set to true if region server should be aborted, if false it
+   * is just shut down.
+   */
+  private void stopOrAbortMetaRegionServer(boolean abort) {
+    List<JVMClusterUtil.RegionServerThread> regionThreads =
+      cluster.getRegionServerThreads();
+
+    int server = -1;
+    for (int i = 0; i < regionThreads.size() && server == -1; i++) {
+      HRegionServer s = regionThreads.get(i).getRegionServer();
+      Collection<HRegion> regions = s.getOnlineRegionsLocalContext();
+      for (HRegion r : regions) {
+        if (Bytes.equals(r.getTableDesc().getName(),
+            HConstants.META_TABLE_NAME)) {
+          server = i;
+        }
+      }
+    }
+    if (server == -1) {
+      LOG.fatal("could not find region server serving meta region");
+      fail();
+    }
+    if (abort) {
+      this.cluster.abortRegionServer(server);
+
+    } else {
+      this.cluster.stopRegionServer(server);
+    }
+    LOG.info(this.cluster.waitOnRegionServer(server) + " has been " +
+        (abort ? "aborted" : "shut down"));
+  }
+
+  /*
+   * Run verification in a thread so I can concurrently run a thread-dumper
+   * while we're waiting (because in this test sometimes the meta scanner
+   * looks to be stuck).
+   * @param tableName Name of table to find.
+   * @param row Row we expect to find.
+   * @return Verification thread.  Caller needs to call start on it.
+   */
+  private Thread startVerificationThread(final String tableName,
+      final byte [] row) {
+    Runnable runnable = new Runnable() {
+      public void run() {
+        try {
+          // Now try to open a scanner on the meta table. Should stall until
+          // meta server comes back up.
+          HTable t = new HTable(conf, HConstants.META_TABLE_NAME);
+          Scan scan = new Scan();
+          scan.addFamily(HConstants.CATALOG_FAMILY);
+
+          ResultScanner s = t.getScanner(scan);
+          s.close();
+
+        } catch (IOException e) {
+          LOG.fatal("could not re-open meta table because", e);
+          fail();
+        }
+        ResultScanner scanner = null;
+        try {
+          // Verify that the client can find the data after the region has moved
+          // to a different server
+          Scan scan = new Scan();
+          scan.addFamily(HConstants.CATALOG_FAMILY);
+
+          scanner = table.getScanner(scan);
+          LOG.info("Obtained scanner " + scanner);
+          for (Result r : scanner) {
+            assertTrue(Bytes.equals(r.getRow(), row));
+            assertEquals(1, r.size());
+            byte[] bytes = r.value();
+            assertNotNull(bytes);
+            assertTrue(tableName.equals(Bytes.toString(bytes)));
+          }
+          LOG.info("Success!");
+        } catch (Exception e) {
+          e.printStackTrace();
+          fail();
+        } finally {
+          if (scanner != null) {
+            LOG.info("Closing scanner " + scanner);
+            scanner.close();
+          }
+        }
+      }
+    };
+    return new Thread(runnable);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/KeyValueScanFixture.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/KeyValueScanFixture.java
new file mode 100644
index 0000000..5c90326
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/KeyValueScanFixture.java
@@ -0,0 +1,111 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
+import org.apache.hadoop.hbase.KeyValue;
+
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.Collections;
+import java.util.List;
+
+/**
+ * A fixture that implements and presents a KeyValueScanner.
+ * It takes a list of key/values which are then sorted according
+ * to the provided comparator, and the whole thing then pretends
+ * to be a store file scanner.
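+ * <p>
+ * A minimal usage sketch (the key/values shown are hypothetical):
+ * <pre>
+ *   KeyValue kv = KeyValueTestUtil.create("row1", "fam", "qual", 1, "value");
+ *   KeyValueScanner scanner =
+ *       new KeyValueScanFixture(KeyValue.COMPARATOR, kv);
+ *   scanner.seek(KeyValue.createFirstOnRow(Bytes.toBytes("row1")));
+ *   KeyValue found = scanner.next();
+ * </pre>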
+ */
+public class KeyValueScanFixture implements KeyValueScanner {
+  ArrayList<KeyValue> data;
+  Iterator<KeyValue> iter = null;
+  KeyValue current = null;
+  KeyValue.KVComparator comparator;
+
+  public KeyValueScanFixture(KeyValue.KVComparator comparator,
+                             KeyValue... incData) {
+    this.comparator = comparator;
+
+    data = new ArrayList<KeyValue>(incData.length);
+    for( int i = 0; i < incData.length ; ++i) {
+      data.add(incData[i]);
+    }
+    Collections.sort(data, this.comparator);
+  }
+
+  public static List<KeyValueScanner> scanFixture(KeyValue[] ... kvArrays) {
+    ArrayList<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
+    for (KeyValue [] kvs : kvArrays) {
+      scanners.add(new KeyValueScanFixture(KeyValue.COMPARATOR, kvs));
+    }
+    return scanners;
+  }
+
+
+  @Override
+  public KeyValue peek() {
+    return this.current;
+  }
+
+  @Override
+  public KeyValue next() {
+    KeyValue res = current;
+
+    if (iter.hasNext())
+      current = iter.next();
+    else
+      current = null;
+    return res;
+  }
+
+  @Override
+  public boolean seek(KeyValue key) {
+    // start at beginning.
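+    // Linear scan forward until we hit the first KeyValue that sorts at or
+    // after the requested key.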
+    iter = data.iterator();
+    int cmp;
+    KeyValue kv = null;
+    do {
+      if (!iter.hasNext()) {
+        current = null;
+        return false;
+      }
+      kv = iter.next();
+      cmp = comparator.compare(key, kv);
+    } while (cmp > 0);
+    current = kv;
+    return true;
+  }
+
+  @Override
+  public boolean reseek(KeyValue key) {
+    return seek(key);
+  }
+
+  @Override
+  public void close() {
+    // noop.
+  }
+
+  @Override
+  public long getSequenceID() {
+    return 0;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java
new file mode 100644
index 0000000..cac2989
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java
@@ -0,0 +1,55 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Put;
+
+/**
+ * A region server that will OOME.
+ * Every time {@link #put(byte[], Put)} is called, we keep around a reference
+ * to the batch.  Use this class to test OOME extremes.
+ * Needs to be started manually as in
+ * <code>${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.regionserver.OOMERegionServer start</code>.
+ */
+public class OOMERegionServer extends HRegionServer {
+  private List<Put> retainer = new ArrayList<Put>();
+
+  public OOMERegionServer(HBaseConfiguration conf) throws IOException, InterruptedException {
+    super(conf);
+  }
+
+  public void put(byte [] regionName, Put put)
+  throws IOException {
+    super.put(regionName, put);
+    for (int i = 0; i < 30; i++) {
+      // Add the batch update 30 times to bring on the OOME faster.
+      this.retainer.add(put);
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    new HRegionServerCommandLine(OOMERegionServer.class).doMain(args);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java
new file mode 100644
index 0000000..e2f4507
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java
@@ -0,0 +1,289 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValueTestUtil;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+public class TestColumnSeeking {
+
+  private final static HBaseTestingUtility TEST_UTIL =
+      new HBaseTestingUtility();
+
+  static final Log LOG = LogFactory.getLog(TestColumnSeeking.class);
+
+  @SuppressWarnings("unchecked")
+  @Test
+  public void testDuplicateVersions() throws IOException {
+    String family = "Family";
+    byte[] familyBytes = Bytes.toBytes("Family");
+    String table = "TestDuplicateVersions";
+
+    HColumnDescriptor hcd =
+        new HColumnDescriptor(familyBytes, 1000,
+            HColumnDescriptor.DEFAULT_COMPRESSION,
+            HColumnDescriptor.DEFAULT_IN_MEMORY,
+            HColumnDescriptor.DEFAULT_BLOCKCACHE,
+            HColumnDescriptor.DEFAULT_TTL,
+            HColumnDescriptor.DEFAULT_BLOOMFILTER);
+    HTableDescriptor htd = new HTableDescriptor(table);
+    htd.addFamily(hcd);
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    HRegion region =
+        HRegion.createHRegion(info, HBaseTestingUtility.getTestDir(), TEST_UTIL
+            .getConfiguration());
+
+    List<String> rows = generateRandomWords(10, "row");
+    List<String> allColumns = generateRandomWords(10, "column");
+    List<String> values = generateRandomWords(100, "value");
+
+    long maxTimestamp = 2;
+    double selectPercent = 0.5;
+    int numberOfTests = 5;
+    double flushPercentage = 0.2;
+    double minorPercentage = 0.2;
+    double majorPercentage = 0.2;
+    double putPercentage = 0.2;
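+    // Each candidate KeyValue is written with probability putPercentage; after
+    // each row we randomly flush, minor compact or major compact so the data
+    // ends up spread across the memstore and multiple store files.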
+
+    HashMap<String, KeyValue> allKVMap = new HashMap<String, KeyValue>();
+
+    HashMap<String, KeyValue>[] kvMaps = new HashMap[numberOfTests];
+    ArrayList<String>[] columnLists = new ArrayList[numberOfTests];
+
+    for (int i = 0; i < numberOfTests; i++) {
+      kvMaps[i] = new HashMap<String, KeyValue>();
+      columnLists[i] = new ArrayList<String>();
+      for (String column : allColumns) {
+        if (Math.random() < selectPercent) {
+          columnLists[i].add(column);
+        }
+      }
+    }
+
+    for (String value : values) {
+      for (String row : rows) {
+        Put p = new Put(Bytes.toBytes(row));
+        for (String column : allColumns) {
+          for (long timestamp = 1; timestamp <= maxTimestamp; timestamp++) {
+            KeyValue kv =
+                KeyValueTestUtil.create(row, family, column, timestamp, value);
+            if (Math.random() < putPercentage) {
+              p.add(kv);
+              allKVMap.put(kv.getKeyString(), kv);
+              for (int i = 0; i < numberOfTests; i++) {
+                if (columnLists[i].contains(column)) {
+                  kvMaps[i].put(kv.getKeyString(), kv);
+                }
+              }
+            }
+          }
+        }
+        region.put(p);
+        if (Math.random() < flushPercentage) {
+          LOG.info("Flushing... ");
+          region.flushcache();
+        }
+
+        if (Math.random() < minorPercentage) {
+          LOG.info("Minor compacting... ");
+          region.compactStores(false);
+        }
+
+        if (Math.random() < majorPercentage) {
+          LOG.info("Major compacting... ");
+          region.compactStores(true);
+        }
+      }
+    }
+
+    for (int i = 0; i < numberOfTests + 1; i++) {
+      Collection<KeyValue> kvSet;
+      Scan scan = new Scan();
+      scan.setMaxVersions();
+      if (i < numberOfTests) {
+        kvSet = kvMaps[i].values();
+        for (String column : columnLists[i]) {
+          scan.addColumn(familyBytes, Bytes.toBytes(column));
+        }
+        LOG.info("ExplicitColumns scanner");
+        LOG.info("Columns: " + columnLists[i].size() + "  Keys: "
+            + kvSet.size());
+      } else {
+        kvSet = allKVMap.values();
+        LOG.info("Wildcard scanner");
+        LOG.info("Columns: " + allColumns.size() + "  Keys: " + kvSet.size());
+
+      }
+      InternalScanner scanner = region.getScanner(scan);
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      while (scanner.next(results))
+        ;
+      assertEquals(kvSet.size(), results.size());
+      assertTrue(results.containsAll(kvSet));
+    }
+  }
+
+  @SuppressWarnings("unchecked")
+  @Test
+  public void testReseeking() throws IOException {
+    String family = "Family";
+    byte[] familyBytes = Bytes.toBytes("Family");
+    String table = "TestSingleVersions";
+
+    HTableDescriptor htd = new HTableDescriptor(table);
+    htd.addFamily(new HColumnDescriptor(family));
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    HRegion region =
+        HRegion.createHRegion(info, HBaseTestingUtility.getTestDir(), TEST_UTIL
+            .getConfiguration());
+
+    List<String> rows = generateRandomWords(10, "row");
+    List<String> allColumns = generateRandomWords(100, "column");
+
+    long maxTimestamp = 2;
+    double selectPercent = 0.5;
+    int numberOfTests = 5;
+    double flushPercentage = 0.2;
+    double minorPercentage = 0.2;
+    double majorPercentage = 0.2;
+    double putPercentage = 0.2;
+
+    HashMap<String, KeyValue> allKVMap = new HashMap<String, KeyValue>();
+
+    HashMap<String, KeyValue>[] kvMaps = new HashMap[numberOfTests];
+    ArrayList<String>[] columnLists = new ArrayList[numberOfTests];
+    String valueString = "Value";
+
+    for (int i = 0; i < numberOfTests; i++) {
+      kvMaps[i] = new HashMap<String, KeyValue>();
+      columnLists[i] = new ArrayList<String>();
+      for (String column : allColumns) {
+        if (Math.random() < selectPercent) {
+          columnLists[i].add(column);
+        }
+      }
+    }
+
+    for (String row : rows) {
+      Put p = new Put(Bytes.toBytes(row));
+      for (String column : allColumns) {
+        for (long timestamp = 1; timestamp <= maxTimestamp; timestamp++) {
+          KeyValue kv =
+              KeyValueTestUtil.create(row, family, column, timestamp,
+                  valueString);
+          if (Math.random() < putPercentage) {
+            p.add(kv);
+            allKVMap.put(kv.getKeyString(), kv);
+            for (int i = 0; i < numberOfTests; i++) {
+              if (columnLists[i].contains(column)) {
+                kvMaps[i].put(kv.getKeyString(), kv);
+              }
+            }
+          }
+
+        }
+      }
+      region.put(p);
+      if (Math.random() < flushPercentage) {
+        LOG.info("Flushing... ");
+        region.flushcache();
+      }
+
+      if (Math.random() < minorPercentage) {
+        LOG.info("Minor compacting... ");
+        region.compactStores(false);
+      }
+
+      if (Math.random() < majorPercentage) {
+        LOG.info("Major compacting... ");
+        region.compactStores(true);
+      }
+    }
+
+    for (int i = 0; i < numberOfTests + 1; i++) {
+      Collection<KeyValue> kvSet;
+      Scan scan = new Scan();
+      scan.setMaxVersions();
+      if (i < numberOfTests) {
+        kvSet = kvMaps[i].values();
+        for (String column : columnLists[i]) {
+          scan.addColumn(familyBytes, Bytes.toBytes(column));
+        }
+        LOG.info("ExplicitColumns scanner");
+        LOG.info("Columns: " + columnLists[i].size() + "  Keys: "
+            + kvSet.size());
+      } else {
+        kvSet = allKVMap.values();
+        LOG.info("Wildcard scanner");
+        LOG.info("Columns: " + allColumns.size() + "  Keys: " + kvSet.size());
+
+      }
+      InternalScanner scanner = region.getScanner(scan);
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      while (scanner.next(results))
+        ;
+      assertEquals(kvSet.size(), results.size());
+      assertTrue(results.containsAll(kvSet));
+    }
+  }
+
+  List<String> generateRandomWords(int numberOfWords, String suffix) {
+    Set<String> wordSet = new HashSet<String>();
+    for (int i = 0; i < numberOfWords; i++) {
+      int lengthOfWords = (int) (Math.random() * 5) + 1;
+      char[] wordChar = new char[lengthOfWords];
+      for (int j = 0; j < wordChar.length; j++) {
+        wordChar[j] = (char) (Math.random() * 26 + 97);
+      }
+      String word;
+      if (suffix == null) {
+        word = new String(wordChar);
+      } else {
+        word = new String(wordChar) + suffix;
+      }
+      wordSet.add(word);
+    }
+    List<String> wordList = new ArrayList<String>(wordSet);
+    return wordList;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
new file mode 100644
index 0000000..04a2d13
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
@@ -0,0 +1,460 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import static org.mockito.Mockito.doAnswer;
+import static org.mockito.Mockito.spy;
+
+
+/**
+ * Test compactions
+ */
+public class TestCompaction extends HBaseTestCase {
+  static final Log LOG = LogFactory.getLog(TestCompaction.class.getName());
+  private HRegion r = null;
+  private Path compactionDir = null;
+  private Path regionCompactionDir = null;
+  private static final byte [] COLUMN_FAMILY = fam1;
+  private final byte [] STARTROW = Bytes.toBytes(START_KEY);
+  private static final byte [] COLUMN_FAMILY_TEXT = COLUMN_FAMILY;
+  private int compactionThreshold;
+  private byte[] firstRowBytes, secondRowBytes, thirdRowBytes;
+  final private byte[] col1, col2;
+
+  private MiniDFSCluster cluster;
+
+  /** constructor */
+  public TestCompaction() throws Exception {
+    super();
+
+    // Set cache flush size to 1MB
+    conf.setInt("hbase.hregion.memstore.flush.size", 1024*1024);
+    conf.setInt("hbase.hregion.memstore.block.multiplier", 100);
+    this.cluster = null;
+    compactionThreshold = conf.getInt("hbase.hstore.compactionThreshold", 3);
+
+    firstRowBytes = START_KEY.getBytes(HConstants.UTF8_ENCODING);
+    secondRowBytes = START_KEY.getBytes(HConstants.UTF8_ENCODING);
+    // Increment the least significant character so we get to next row.
+    secondRowBytes[START_KEY_BYTES.length - 1]++;
+    thirdRowBytes = START_KEY.getBytes(HConstants.UTF8_ENCODING);
+    thirdRowBytes[START_KEY_BYTES.length - 1]++;
+    thirdRowBytes[START_KEY_BYTES.length - 1]++;
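+    // Assuming START_KEY is 'aaa' (the HBaseTestCase default), this yields
+    // rows 'aaa', 'aab' and 'aac' respectively.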
+    col1 = "column1".getBytes(HConstants.UTF8_ENCODING);
+    col2 = "column2".getBytes(HConstants.UTF8_ENCODING);
+  }
+
+  @Override
+  public void setUp() throws Exception {
+    this.cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+    // Make the hbase rootdir match the minidfs we just spun up
+    this.conf.set(HConstants.HBASE_DIR,
+      this.cluster.getFileSystem().getHomeDirectory().toString());
+    super.setUp();
+    HTableDescriptor htd = createTableDescriptor(getName());
+    this.r = createNewHRegion(htd, null, null);
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+    HLog hlog = r.getLog();
+    this.r.close();
+    hlog.closeAndDelete();
+    if (this.cluster != null) {
+      shutdownDfs(cluster);
+    }
+    super.tearDown();
+  }
+
+  /**
+   * Test that on a major compaction, if all cells are expired or deleted, then
+   * we'll end up with no product.  Make sure scanner over region returns
+   * right answer in this case - and that it just basically works.
+   * @throws IOException
+   */
+  public void testMajorCompactingToNoOutput() throws IOException {
+    createStoreFile(r);
+    for (int i = 0; i < compactionThreshold; i++) {
+      createStoreFile(r);
+    }
+    // Now delete everything.
+    InternalScanner s = r.getScanner(new Scan());
+    do {
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      boolean result = s.next(results);
+      r.delete(new Delete(results.get(0).getRow()), null, false);
+      if (!result) break;
+    } while(true);
+    // Flush
+    r.flushcache();
+    // Major compact.
+    r.compactStores(true);
+    s = r.getScanner(new Scan());
+    int counter = 0;
+    do {
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      boolean result = s.next(results);
+      if (!result) break;
+      counter++;
+    } while(true);
+    assertEquals(0, counter);
+  }
+
+  /**
+   * Run major compaction and memstore flushes.
+   * Assert deletes get cleaned up.
+   * @throws Exception
+   */
+  public void testMajorCompaction() throws Exception {
+    createStoreFile(r);
+    for (int i = 0; i < compactionThreshold; i++) {
+      createStoreFile(r);
+    }
+    // Add more content.
+    addContent(new HRegionIncommon(r), Bytes.toString(COLUMN_FAMILY));
+
+    // Now there are about 5 versions of each column.
+    // By default only 3 (MAXVERSIONS) versions are allowed per column.
+    //
+    // Assert == 3 when we ask for versions.
+    Result result = r.get(new Get(STARTROW).addFamily(COLUMN_FAMILY_TEXT).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+
+    r.flushcache();
+    r.compactStores(true);
+
+    // look at the second row
+    // Increment the least significant character so we get to next row.
+    byte [] secondRowBytes = START_KEY.getBytes(HConstants.UTF8_ENCODING);
+    secondRowBytes[START_KEY_BYTES.length - 1]++;
+
+    // Always 3 versions if that is what max versions is.
+    result = r.get(new Get(secondRowBytes).addFamily(COLUMN_FAMILY_TEXT).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+
+    // Now add deletes to memstore and then flush it.
+    // That will put us over
+    // the compaction threshold of 3 store files.  Compacting these store files
+    // should result in a compacted store file that has no references to the
+    // deleted row.
+    Delete delete = new Delete(secondRowBytes, System.currentTimeMillis(), null);
+    byte [][] famAndQf = {COLUMN_FAMILY, null};
+    delete.deleteFamily(famAndQf[0]);
+    r.delete(delete, null, true);
+
+    // Assert deleted.
+    result = r.get(new Get(secondRowBytes).addFamily(COLUMN_FAMILY_TEXT).setMaxVersions(100), null );
+    assertTrue("Second row should have been deleted", result.isEmpty());
+
+    r.flushcache();
+
+    result = r.get(new Get(secondRowBytes).addFamily(COLUMN_FAMILY_TEXT).setMaxVersions(100), null );
+    assertTrue("Second row should have been deleted", result.isEmpty());
+
+    // Add a bit of data and flush.  Start adding at 'bbb'.
+    createSmallerStoreFile(this.r);
+    r.flushcache();
+    // Assert that the second row is still deleted.
+    result = r.get(new Get(secondRowBytes).addFamily(COLUMN_FAMILY_TEXT).setMaxVersions(100), null );
+    assertTrue("Second row should still be deleted", result.isEmpty());
+
+    // Force major compaction.
+    r.compactStores(true);
+    assertEquals(1, r.getStore(COLUMN_FAMILY_TEXT).getStorefiles().size());
+
+    result = r.get(new Get(secondRowBytes).addFamily(COLUMN_FAMILY_TEXT).setMaxVersions(100), null );
+    assertTrue("Second row should still be deleted", result.isEmpty());
+
+    // Make sure the store files do have some 'aaa' keys in them -- exactly 3.
+    // Also, that compacted store files do not have any secondRowBytes because
+    // they were deleted.
+    verifyCounts(3,0);
+
+    // Multiple versions allowed for an entry, so the delete isn't enough
+    // Lower TTL and expire to ensure that all our entries have been wiped
+    final int ttlInSeconds = 1;
+    for (Store store: this.r.stores.values()) {
+      store.ttl = ttlInSeconds * 1000;
+    }
+    Thread.sleep(ttlInSeconds * 1000);
+
+    r.compactStores(true);
+    int count = count();
+    assertTrue("Should not see anything after TTL has expired", count == 0);
+  }
+
+  public void testMinorCompactionWithDeleteRow() throws Exception {
+    Delete deleteRow = new Delete(secondRowBytes);
+    testMinorCompactionWithDelete(deleteRow);
+  }
+  public void testMinorCompactionWithDeleteColumn1() throws Exception {
+    Delete dc = new Delete(secondRowBytes);
+    /* delete all timestamps in the column */
+    dc.deleteColumns(fam2, col2);
+    testMinorCompactionWithDelete(dc);
+  }
+  public void testMinorCompactionWithDeleteColumn2() throws Exception {
+    Delete dc = new Delete(secondRowBytes);
+    dc.deleteColumn(fam2, col2);
+    /* compactionThreshold is 3. The table has 4 versions: 0, 1, 2, and 3.
+     * We only delete the latest version. One might expect to see only
+     * versions 1 and 2. HBase differs, and gives us 0, 1 and 2.
+     * This is okay as well: since no compaction was done before the
+     * delete, version 0 seems to stick around.
+     */
+    //testMinorCompactionWithDelete(dc, 2);
+    testMinorCompactionWithDelete(dc, 3);
+  }
+  public void testMinorCompactionWithDeleteColumnFamily() throws Exception {
+    Delete deleteCF = new Delete(secondRowBytes);
+    deleteCF.deleteFamily(fam2);
+    testMinorCompactionWithDelete(deleteCF);
+  }
+  public void testMinorCompactionWithDeleteVersion1() throws Exception {
+    Delete deleteVersion = new Delete(secondRowBytes);
+    deleteVersion.deleteColumns(fam2, col2, 2);
+    /* compactionThreshold is 3. The table has 4 versions: 0, 1, 2, and 3.
+     * We delete versions 0 ... 2. So, we still have one remaining.
+     */
+    testMinorCompactionWithDelete(deleteVersion, 1);
+  }
+  public void testMinorCompactionWithDeleteVersion2() throws Exception {
+    Delete deleteVersion = new Delete(secondRowBytes);
+    deleteVersion.deleteColumn(fam2, col2, 1);
+    /*
+     * The table has 4 versions: 0, 1, 2, and 3.
+     * Version 0 does not count (it is beyond the 3 allowed versions).
+     * We delete version 1.
+     * That should leave 2 versions remaining.
+     */
+    testMinorCompactionWithDelete(deleteVersion, 2);
+  }
+
+  /*
+   * A helper function to test the minor compaction algorithm. We check that
+   * the delete markers are left behind. Takes a Delete as an argument, which
+   * can be any kind of delete (row, column, column family, etc.) that
+   * essentially deletes row2 and column2. row1 and column1 should remain
+   * untouched.
+   */
+  private void testMinorCompactionWithDelete(Delete delete) throws Exception {
+    testMinorCompactionWithDelete(delete, 0);
+  }
+  private void testMinorCompactionWithDelete(Delete delete, int expectedResultsAfterDelete) throws Exception {
+    HRegionIncommon loader = new HRegionIncommon(r);
+    for (int i = 0; i < compactionThreshold + 1; i++) {
+      addContent(loader, Bytes.toString(fam1), Bytes.toString(col1), firstRowBytes, thirdRowBytes, i);
+      addContent(loader, Bytes.toString(fam1), Bytes.toString(col2), firstRowBytes, thirdRowBytes, i);
+      addContent(loader, Bytes.toString(fam2), Bytes.toString(col1), firstRowBytes, thirdRowBytes, i);
+      addContent(loader, Bytes.toString(fam2), Bytes.toString(col2), firstRowBytes, thirdRowBytes, i);
+      r.flushcache();
+    }
+
+    Result result = r.get(new Get(firstRowBytes).addColumn(fam1, col1).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+    result = r.get(new Get(secondRowBytes).addColumn(fam2, col2).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+
+    // Now add deletes to memstore and then flush it.  That will put us over
+    // the compaction threshold of 3 store files.  Compacting these store files
+    // should result in a compacted store file that has no references to the
+    // deleted row.
+    r.delete(delete, null, true);
+
+    // Make sure that we have only deleted family2 from secondRowBytes
+    result = r.get(new Get(secondRowBytes).addColumn(fam2, col2).setMaxVersions(100), null);
+    assertEquals(expectedResultsAfterDelete, result.size());
+    // but we still have firstrow
+    result = r.get(new Get(firstRowBytes).addColumn(fam1, col1).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+
+    r.flushcache();
+    // should not change anything.
+    // Let us check again
+
+    // Make sure that we have only deleted family2 from secondRowBytes
+    result = r.get(new Get(secondRowBytes).addColumn(fam2, col2).setMaxVersions(100), null);
+    assertEquals(expectedResultsAfterDelete, result.size());
+    // but we still have firstrow
+    result = r.get(new Get(firstRowBytes).addColumn(fam1, col1).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+
+    // do a compaction
+    Store store2 = this.r.stores.get(fam2);
+    int numFiles1 = store2.getStorefiles().size();
+    assertTrue("Was expecting to see 4 store files", numFiles1 > compactionThreshold); // > 3
+    store2.compactRecent(compactionThreshold);   // = 3
+    int numFiles2 = store2.getStorefiles().size();
+    // Check that we did compact
+    assertTrue("Number of store files should go down", numFiles1 > numFiles2);
+    // Check that it was a minor compaction.
+    assertTrue("Was not supposed to be a major compaction", numFiles2 > 1);
+
+    // Make sure that we have only deleted family2 from secondRowBytes
+    result = r.get(new Get(secondRowBytes).addColumn(fam2, col2).setMaxVersions(100), null);
+    assertEquals(expectedResultsAfterDelete, result.size());
+    // but we still have firstrow
+    result = r.get(new Get(firstRowBytes).addColumn(fam1, col1).setMaxVersions(100), null);
+    assertEquals(compactionThreshold, result.size());
+  }
+
+  private void verifyCounts(int countRow1, int countRow2) throws Exception {
+    int count1 = 0;
+    int count2 = 0;
+    for (StoreFile f: this.r.stores.get(COLUMN_FAMILY_TEXT).getStorefiles()) {
+      HFileScanner scanner = f.getReader().getScanner(false, false);
+      scanner.seekTo();
+      do {
+        byte [] row = scanner.getKeyValue().getRow();
+        if (Bytes.equals(row, STARTROW)) {
+          count1++;
+        } else if(Bytes.equals(row, secondRowBytes)) {
+          count2++;
+        }
+      } while(scanner.next());
+    }
+    assertEquals(countRow1,count1);
+    assertEquals(countRow2,count2);
+  }
+
+  /**
+   * Verify that you can stop a long-running compaction
+   * (used during RS shutdown)
+   * @throws Exception
+   */
+  public void testInterruptCompaction() throws Exception {
+    assertEquals(0, count());
+
+    // lower the number of bytes written between close checks for this test
+    int origWI = Store.closeCheckInterval;
+    Store.closeCheckInterval = 10*1000; // 10 KB
+
+    try {
+      // Create a few store files totaling over 15KB (past the 10KB close-check interval)
+      int jmax = (int) Math.ceil(15.0/compactionThreshold);
+      byte [] pad = new byte[1000]; // 1 KB chunk
+      for (int i = 0; i < compactionThreshold; i++) {
+        HRegionIncommon loader = new HRegionIncommon(r);
+        Put p = new Put(Bytes.add(STARTROW, Bytes.toBytes(i)));
+        for (int j = 0; j < jmax; j++) {
+          p.add(COLUMN_FAMILY, Bytes.toBytes(j), pad);
+        }
+        addContent(loader, Bytes.toString(COLUMN_FAMILY));
+        loader.put(p);
+        loader.flushcache();
+      }
+
+      HRegion spyR = spy(r);
+      doAnswer(new Answer() {
+        public Object answer(InvocationOnMock invocation) throws Throwable {
+          r.writestate.writesEnabled = false;
+          return invocation.callRealMethod();
+        }
+      }).when(spyR).doRegionCompactionPrep();
+
+      // force a minor compaction, but not before requesting a stop
+      spyR.compactStores();
+
+      // ensure that the compaction stopped, all old files are intact,
+      Store s = r.stores.get(COLUMN_FAMILY);
+      assertEquals(compactionThreshold, s.getStorefilesCount());
+      assertTrue(s.getStorefilesSize() > 15*1000);
+      // and no new store files persisted past compactStores()
+      FileStatus[] ls = cluster.getFileSystem().listStatus(r.getTmpDir());
+      assertEquals(0, ls.length);
+
+    } finally {
+      // don't mess up future tests
+      r.writestate.writesEnabled = true;
+      Store.closeCheckInterval = origWI;
+
+      // Delete all Store data now that we are done with it
+      for (int i = 0; i < compactionThreshold; i++) {
+        Delete delete = new Delete(Bytes.add(STARTROW, Bytes.toBytes(i)));
+        byte [][] famAndQf = {COLUMN_FAMILY, null};
+        delete.deleteFamily(famAndQf[0]);
+        r.delete(delete, null, true);
+      }
+      r.flushcache();
+
+      // Multiple versions allowed for an entry, so the delete isn't enough
+      // Lower TTL and expire to ensure that all our entries have been wiped
+      final int ttlInSeconds = 1;
+      for (Store store: this.r.stores.values()) {
+        store.ttl = ttlInSeconds * 1000;
+      }
+      Thread.sleep(ttlInSeconds * 1000);
+
+      r.compactStores(true);
+      assertEquals(0, count());
+    }
+  }
+
+  private int count() throws IOException {
+    int count = 0;
+    for (StoreFile f: this.r.stores.
+        get(COLUMN_FAMILY_TEXT).getStorefiles()) {
+      HFileScanner scanner = f.getReader().getScanner(false, false);
+      if (!scanner.seekTo()) {
+        continue;
+      }
+      do {
+        count++;
+      } while(scanner.next());
+    }
+    return count;
+  }
+
+  private void createStoreFile(final HRegion region) throws IOException {
+    HRegionIncommon loader = new HRegionIncommon(region);
+    addContent(loader, Bytes.toString(COLUMN_FAMILY));
+    loader.flushcache();
+  }
+
+  private void createSmallerStoreFile(final HRegion region) throws IOException {
+    HRegionIncommon loader = new HRegionIncommon(region);
+    addContent(loader, Bytes.toString(COLUMN_FAMILY), Bytes.toBytes("bbb"), null);
+    loader.flushcache();
+  }
+}
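For orientation: the TestCompaction constructor above derives its second and third row keys by bumping the last byte of the start key. A minimal standalone sketch of that row-key trick, offered only as an illustration (it is not part of this patch and assumes nothing beyond the Bytes utility and plain Java):

    import java.util.Arrays;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NextRowKeySketch {
      /** Return a copy of 'row' with its last byte incremented, giving a sibling row key. */
      static byte[] bumpLastByte(byte[] row) {
        byte[] next = Arrays.copyOf(row, row.length);
        next[next.length - 1]++;   // e.g. "aaa" -> "aab"
        return next;
      }

      public static void main(String[] args) {
        byte[] first = Bytes.toBytes("aaa");
        byte[] second = bumpLastByte(first);   // "aab"
        byte[] third = bumpLastByte(second);   // "aac"
        System.out.println(Bytes.toString(second) + " " + Bytes.toString(third));
      }
    }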
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
new file mode 100644
index 0000000..10b7f96
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
@@ -0,0 +1,192 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.TreeSet;
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+public class TestExplicitColumnTracker extends HBaseTestCase {
+  private boolean PRINT = false;
+
+  private final byte[] col1 = Bytes.toBytes("col1");
+  private final byte[] col2 = Bytes.toBytes("col2");
+  private final byte[] col3 = Bytes.toBytes("col3");
+  private final byte[] col4 = Bytes.toBytes("col4");
+  private final byte[] col5 = Bytes.toBytes("col5");
+
+  private void runTest(int maxVersions,
+                       TreeSet<byte[]> trackColumns,
+                       List<byte[]> scannerColumns,
+                       List<MatchCode> expected) {
+    ColumnTracker exp = new ExplicitColumnTracker(
+      trackColumns, maxVersions);
+
+
+    //Initialize result
+    List<ScanQueryMatcher.MatchCode> result = new ArrayList<ScanQueryMatcher.MatchCode>();
+
+    long timestamp = 0;
+    //"Match"
+    for(byte [] col : scannerColumns){
+      result.add(exp.checkColumn(col, 0, col.length, ++timestamp));
+    }
+
+    assertEquals(expected.size(), result.size());
+    for(int i=0; i< expected.size(); i++){
+      assertEquals(expected.get(i), result.get(i));
+      if(PRINT){
+        System.out.println("Expected " +expected.get(i) + ", actual " +
+            result.get(i));
+      }
+    }
+  }
+
+  public void testGet_SingleVersion(){
+    if(PRINT){
+      System.out.println("SingleVersion");
+    }
+
+    //Create tracker
+    TreeSet<byte[]> columns = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+    //Looking for every other
+    columns.add(col2);
+    columns.add(col4);
+    List<MatchCode> expected = new ArrayList<ScanQueryMatcher.MatchCode>();
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW);
+    int maxVersions = 1;
+
+    //Create "Scanner"
+    List<byte[]> scanner = new ArrayList<byte[]>();
+    scanner.add(col1);
+    scanner.add(col2);
+    scanner.add(col3);
+    scanner.add(col4);
+    scanner.add(col5);
+
+    runTest(maxVersions, columns, scanner, expected);
+  }
+
+  public void testGet_MultiVersion(){
+    if(PRINT){
+      System.out.println("\nMultiVersion");
+    }
+
+    //Create tracker
+    TreeSet<byte[]> columns = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+    //Looking for every other
+    columns.add(col2);
+    columns.add(col4);
+
+    List<ScanQueryMatcher.MatchCode> expected = new ArrayList<ScanQueryMatcher.MatchCode>();
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW);
+
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW);
+    int maxVersions = 2;
+
+    //Create "Scanner"
+    List<byte[]> scanner = new ArrayList<byte[]>();
+    scanner.add(col1);
+    scanner.add(col1);
+    scanner.add(col1);
+    scanner.add(col2);
+    scanner.add(col2);
+    scanner.add(col2);
+    scanner.add(col3);
+    scanner.add(col3);
+    scanner.add(col3);
+    scanner.add(col4);
+    scanner.add(col4);
+    scanner.add(col4);
+    scanner.add(col5);
+    scanner.add(col5);
+    scanner.add(col5);
+
+    //Run the test
+    runTest(maxVersions, columns, scanner, expected);
+  }
+
+
+  /**
+   * hbase-2259
+   */
+  public void testStackOverflow(){
+    int maxVersions = 1;
+    TreeSet<byte[]> columns = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+    for (int i = 0; i < 100000; i++) {
+      columns.add(Bytes.toBytes("col"+i));
+    }
+
+    ColumnTracker explicit = new ExplicitColumnTracker(columns, maxVersions);
+    for (int i = 0; i < 100000; i+=2) {
+      byte [] col = Bytes.toBytes("col"+i);
+      explicit.checkColumn(col, 0, col.length, 1);
+    }
+    explicit.update();
+
+    for (int i = 1; i < 100000; i+=2) {
+      byte [] col = Bytes.toBytes("col"+i);
+      explicit.checkColumn(col, 0, col.length, 1);
+    }
+  }
+
+  /**
+   * Regression test for HBASE-2545
+   */
+  public void testInfiniteLoop() {
+    TreeSet<byte[]> columns = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+    columns.addAll(Arrays.asList(new byte[][] {
+      col2, col3, col5 }));
+    List<byte[]> scanner = Arrays.<byte[]>asList(
+      new byte[][] { col1, col4 });
+    List<ScanQueryMatcher.MatchCode> expected = Arrays.<ScanQueryMatcher.MatchCode>asList(
+      new ScanQueryMatcher.MatchCode[] {
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_COL,
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_COL });
+    runTest(1, columns, scanner, expected);
+  }
+}
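The expected-value lists above encode the tracker's per-column decisions: INCLUDE a wanted column, SEEK_NEXT_COL past an unwanted one, and SEEK_NEXT_ROW once the scan is beyond the last wanted column. A simplified standalone analogue of that ordering logic, illustrative only (it is not the real ExplicitColumnTracker and ignores version counting):

    import java.util.SortedSet;
    import java.util.TreeSet;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ColumnTrackingSketch {
      enum Decision { INCLUDE, SEEK_NEXT_COL, SEEK_NEXT_ROW }

      /** Decide what to do with a scanned column given the sorted set of wanted columns. */
      static Decision decide(SortedSet<byte[]> wanted, byte[] scanned) {
        if (wanted.isEmpty()) return Decision.SEEK_NEXT_ROW;           // nothing left to look for
        if (Bytes.compareTo(scanned, wanted.last()) > 0) return Decision.SEEK_NEXT_ROW;
        return wanted.contains(scanned) ? Decision.INCLUDE : Decision.SEEK_NEXT_COL;
      }

      public static void main(String[] args) {
        SortedSet<byte[]> wanted = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
        wanted.add(Bytes.toBytes("col2"));
        wanted.add(Bytes.toBytes("col4"));
        for (String c : new String[] {"col1", "col2", "col3", "col4", "col5"}) {
          System.out.println(c + " -> " + decide(wanted, Bytes.toBytes(c)));
        }
        // Prints SEEK_NEXT_COL, INCLUDE, SEEK_NEXT_COL, INCLUDE, SEEK_NEXT_ROW,
        // mirroring the 'expected' list in testGet_SingleVersion.
      }
    }

The real tracker also stops matching a column once its max-versions quota is used up, which accounts for the extra seek codes in testGet_MultiVersion.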
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
new file mode 100644
index 0000000..9bbd428
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
@@ -0,0 +1,238 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.lang.ref.SoftReference;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PositionedReadable;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.regionserver.StoreFile.BloomType;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+
+
+/**
+ * Test cases that ensure that file system level errors are bubbled up
+ * appropriately to clients, rather than swallowed.
+ */
+public class TestFSErrorsExposed {
+  private static final Log LOG = LogFactory.getLog(TestFSErrorsExposed.class);
+
+  HBaseTestingUtility util = new HBaseTestingUtility();
+
+  /**
+   * Injects errors into the pread calls of an on-disk file, and makes
+   * sure those bubble up to the HFile scanner
+   */
+  @Test
+  public void testHFileScannerThrowsErrors() throws IOException {
+    Path hfilePath = new Path(new Path(
+        HBaseTestingUtility.getTestDir("internalScannerExposesErrors"),
+        "regionname"), "familyname");
+    FaultyFileSystem fs = new FaultyFileSystem(util.getTestFileSystem());
+    StoreFile.Writer writer = StoreFile.createWriter(fs, hfilePath, 2*1024);
+    TestStoreFile.writeStoreFile(
+        writer, Bytes.toBytes("cf"), Bytes.toBytes("qual"));
+
+    StoreFile sf = new StoreFile(fs, writer.getPath(), false,
+        util.getConfiguration(), StoreFile.BloomType.NONE, false);
+    StoreFile.Reader reader = sf.createReader();
+    HFileScanner scanner = reader.getScanner(false, true);
+
+    FaultyInputStream inStream = fs.inStreams.get(0).get();
+    assertNotNull(inStream);
+
+    scanner.seekTo();
+    // Do at least one successful read
+    assertTrue(scanner.next());
+
+    inStream.startFaults();
+
+    try {
+      int scanned=0;
+      while (scanner.next()) {
+        scanned++;
+      }
+      fail("Scanner didn't throw after faults injected");
+    } catch (IOException ioe) {
+      LOG.info("Got expected exception", ioe);
+      assertTrue(ioe.getMessage().contains("Fault"));
+    }
+    reader.close();
+  }
+
+  /**
+   * Injects errors into the pread calls of an on-disk file, and makes
+   * sure those bubble up to the StoreFileScanner
+   */
+  @Test
+  public void testStoreFileScannerThrowsErrors() throws IOException {
+    Path hfilePath = new Path(new Path(
+        HBaseTestingUtility.getTestDir("internalScannerExposesErrors"),
+        "regionname"), "familyname");
+    FaultyFileSystem fs = new FaultyFileSystem(util.getTestFileSystem());
+    StoreFile.Writer writer = StoreFile.createWriter(fs, hfilePath, 2 * 1024);
+    TestStoreFile.writeStoreFile(
+        writer, Bytes.toBytes("cf"), Bytes.toBytes("qual"));
+
+    StoreFile sf = new StoreFile(fs, writer.getPath(), false,
+        util.getConfiguration(), BloomType.NONE, false);
+    List<StoreFileScanner> scanners = StoreFileScanner.getScannersForStoreFiles(
+        Collections.singletonList(sf), false, true);
+    KeyValueScanner scanner = scanners.get(0);
+
+    FaultyInputStream inStream = fs.inStreams.get(0).get();
+    assertNotNull(inStream);
+
+    scanner.seek(KeyValue.LOWESTKEY);
+    // Do at least one successful read
+    assertNotNull(scanner.next());
+
+    inStream.startFaults();
+
+    try {
+      int scanned=0;
+      while (scanner.next() != null) {
+        scanned++;
+      }
+      fail("Scanner didn't throw after faults injected");
+    } catch (IOException ioe) {
+      LOG.info("Got expected exception", ioe);
+      assertTrue(ioe.getMessage().contains("Could not iterate"));
+    }
+    scanner.close();
+  }
+
+  /**
+   * Cluster test which starts a region server with a region, then
+   * removes the data from HDFS underneath it, and ensures that
+   * errors are bubbled to the client.
+   */
+  @Test
+  public void testFullSystemBubblesFSErrors() throws Exception {
+    try {
+      // Set the optional log flush interval so the flusher effectively never runs,
+      // or it will trigger server shutdown while sync'ing because all the datanodes are bad
+      util.getConfiguration().setInt(
+          "hbase.regionserver.optionallogflushinterval", Integer.MAX_VALUE);
+      util.startMiniCluster(1);
+      byte[] tableName = Bytes.toBytes("table");
+      byte[] fam = Bytes.toBytes("fam");
+
+      HBaseAdmin admin = new HBaseAdmin(util.getConfiguration());
+      HTableDescriptor desc = new HTableDescriptor(tableName);
+      desc.addFamily(new HColumnDescriptor(
+          fam, 1, HColumnDescriptor.DEFAULT_COMPRESSION,
+          false, false, HConstants.FOREVER, "NONE"));
+      admin.createTable(desc);
+      // Make it fail faster.
+      util.getConfiguration().setInt("hbase.client.retries.number", 1);
+      // Make a new Configuration so it makes a new connection that has the
+      // above configuration on it; else we use the old one w/ 10 as default.
+      HTable table = new HTable(new Configuration(util.getConfiguration()), tableName);
+
+      // Load some data
+      util.loadTable(table, fam);
+      table.flushCommits();
+      util.flush();
+      util.countRows(table);
+
+      // Kill the DFS cluster
+      util.getDFSCluster().shutdownDataNodes();
+
+      try {
+        util.countRows(table);
+        fail("Did not fail to count after removing data");
+      } catch (Exception e) {
+        LOG.info("Got expected error", e);
+        assertTrue(e.getMessage().contains("Could not seek"));
+      }
+
+    } finally {
+      util.shutdownMiniCluster();
+    }
+  }
+
+  static class FaultyFileSystem extends FilterFileSystem {
+    List<SoftReference<FaultyInputStream>> inStreams =
+      new ArrayList<SoftReference<FaultyInputStream>>();
+
+    public FaultyFileSystem(FileSystem testFileSystem) {
+      super(testFileSystem);
+    }
+
+    @Override
+    public FSDataInputStream open(Path p, int bufferSize) throws IOException  {
+      FSDataInputStream orig = fs.open(p, bufferSize);
+      FaultyInputStream faulty = new FaultyInputStream(orig);
+      inStreams.add(new SoftReference<FaultyInputStream>(faulty));
+      return faulty;
+    }
+  }
+
+  static class FaultyInputStream extends FSDataInputStream {
+    boolean faultsStarted = false;
+
+    public FaultyInputStream(InputStream in) throws IOException {
+      super(in);
+    }
+
+    public void startFaults() {
+      faultsStarted = true;
+    }
+
+    public int read(long position, byte[] buffer, int offset, int length)
+      throws IOException {
+      injectFault();
+      return ((PositionedReadable)in).read(position, buffer, offset, length);
+    }
+
+    private void injectFault() throws IOException {
+      if (faultsStarted) {
+        throw new IOException("Fault injected");
+      }
+    }
+  }
+
+
+}
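FaultyFileSystem and FaultyInputStream above follow a general fault-injection pattern: wrap the real stream, pass reads through until a flag is flipped, then throw on every subsequent read. A minimal sketch of the same pattern against plain java.io, for illustration only (it does not touch the Hadoop FileSystem API used by the test):

    import java.io.ByteArrayInputStream;
    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class FaultyStreamSketch extends FilterInputStream {
      private volatile boolean faultsStarted = false;

      FaultyStreamSketch(InputStream in) { super(in); }

      void startFaults() { faultsStarted = true; }

      @Override
      public int read(byte[] b, int off, int len) throws IOException {
        if (faultsStarted) {
          throw new IOException("Fault injected");   // simulate a failing disk/datanode
        }
        return super.read(b, off, len);              // pass through until faults start
      }

      public static void main(String[] args) throws IOException {
        FaultyStreamSketch s = new FaultyStreamSketch(
            new ByteArrayInputStream(new byte[] {1, 2, 3, 4}));
        byte[] buf = new byte[2];
        System.out.println("first read ok: " + s.read(buf, 0, 2));  // succeeds
        s.startFaults();
        try {
          s.read(buf, 0, 2);
        } catch (IOException expected) {
          System.out.println("got expected: " + expected.getMessage());
        }
      }
    }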
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java
new file mode 100644
index 0000000..3b7c7e8
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java
@@ -0,0 +1,350 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+/**
+ * {@link TestGet} is a medley of tests of get all done up as a single test.
+ * This class tests {@link HRegion#getClosestRowBefore}.
+ */
+public class TestGetClosestAtOrBefore extends HBaseTestCase {
+  private static final Log LOG = LogFactory.getLog(TestGetClosestAtOrBefore.class);
+  private MiniDFSCluster miniHdfs;
+
+  private static final byte[] T00 = Bytes.toBytes("000");
+  private static final byte[] T10 = Bytes.toBytes("010");
+  private static final byte[] T11 = Bytes.toBytes("011");
+  private static final byte[] T12 = Bytes.toBytes("012");
+  private static final byte[] T20 = Bytes.toBytes("020");
+  private static final byte[] T30 = Bytes.toBytes("030");
+  private static final byte[] T31 = Bytes.toBytes("031");
+  private static final byte[] T35 = Bytes.toBytes("035");
+  private static final byte[] T40 = Bytes.toBytes("040");
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+    this.miniHdfs = new MiniDFSCluster(this.conf, 1, true, null);
+    // Set the hbase.rootdir to be the home directory in mini dfs.
+    this.conf.set(HConstants.HBASE_DIR,
+      this.miniHdfs.getFileSystem().getHomeDirectory().toString());
+  }
+
+  public void testUsingMetaAndBinary() throws IOException {
+    FileSystem filesystem = FileSystem.get(conf);
+    Path rootdir = filesystem.makeQualified(new Path(conf.get(HConstants.HBASE_DIR)));
+    filesystem.mkdirs(rootdir);
+    // Raise the flush size, else we bog down using the default catalog flush of 16k.
+    HRegionInfo.FIRST_META_REGIONINFO.getTableDesc().
+      setMemStoreFlushSize(64 * 1024 * 1024);
+    HRegion mr = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO,
+      rootdir, this.conf);
+    // Write rows for three tables 'A', 'B', and 'C'.
+    for (char c = 'A'; c < 'D'; c++) {
+      HTableDescriptor htd = new HTableDescriptor("" + c);
+      final int last = 128;
+      final int interval = 2;
+      for (int i = 0; i <= last; i += interval) {
+        HRegionInfo hri = new HRegionInfo(htd,
+          i == 0? HConstants.EMPTY_BYTE_ARRAY: Bytes.toBytes((byte)i),
+          i == last? HConstants.EMPTY_BYTE_ARRAY: Bytes.toBytes((byte)i + interval));
+        Put put = new Put(hri.getRegionName());
+        put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+                Writables.getBytes(hri));
+        mr.put(put, false);
+      }
+    }
+    InternalScanner s = mr.getScanner(new Scan());
+    try {
+      List<KeyValue> keys = new ArrayList<KeyValue>();
+      while(s.next(keys)) {
+        LOG.info(keys);
+        keys.clear();
+      }
+    } finally {
+      s.close();
+    }
+    findRow(mr, 'C', 44, 44);
+    findRow(mr, 'C', 45, 44);
+    findRow(mr, 'C', 46, 46);
+    findRow(mr, 'C', 43, 42);
+    mr.flushcache();
+    findRow(mr, 'C', 44, 44);
+    findRow(mr, 'C', 45, 44);
+    findRow(mr, 'C', 46, 46);
+    findRow(mr, 'C', 43, 42);
+    // Now delete 'C' and make sure I don't get entries from 'B'.
+    byte [] firstRowInC = HRegionInfo.createRegionName(Bytes.toBytes("" + 'C'),
+      HConstants.EMPTY_BYTE_ARRAY, HConstants.ZEROES, false);
+    Scan scan = new Scan(firstRowInC);
+    s = mr.getScanner(scan);
+    try {
+      List<KeyValue> keys = new ArrayList<KeyValue>();
+      while (s.next(keys)) {
+        mr.delete(new Delete(keys.get(0).getRow()), null, false);
+        keys.clear();
+      }
+    } finally {
+      s.close();
+    }
+    // Assert we get null back (pass -1).
+    findRow(mr, 'C', 44, -1);
+    findRow(mr, 'C', 45, -1);
+    findRow(mr, 'C', 46, -1);
+    findRow(mr, 'C', 43, -1);
+    mr.flushcache();
+    findRow(mr, 'C', 44, -1);
+    findRow(mr, 'C', 45, -1);
+    findRow(mr, 'C', 46, -1);
+    findRow(mr, 'C', 43, -1);
+  }
+
+  /*
+   * @param mr
+   * @param table
+   * @param rowToFind
+   * @param answer Pass -1 if we're not to find anything.
+   * @return Row found.
+   * @throws IOException
+   */
+  private byte [] findRow(final HRegion mr, final char table,
+    final int rowToFind, final int answer)
+  throws IOException {
+    byte [] tableb = Bytes.toBytes("" + table);
+    // Find the row.
+    byte [] tofindBytes = Bytes.toBytes((short)rowToFind);
+    byte [] metaKey = HRegionInfo.createRegionName(tableb, tofindBytes,
+      HConstants.NINES, false);
+    LOG.info("find=" + new String(metaKey));
+    Result r = mr.getClosestRowBefore(metaKey);
+    if (answer == -1) {
+      assertNull(r);
+      return null;
+    }
+    assertTrue(Bytes.compareTo(Bytes.toBytes((short)answer),
+      extractRowFromMetaRow(r.getRow())) == 0);
+    return r.getRow();
+  }
+
+  private byte [] extractRowFromMetaRow(final byte [] b) {
+    int firstDelimiter = KeyValue.getDelimiter(b, 0, b.length,
+      HRegionInfo.DELIMITER);
+    int lastDelimiter = KeyValue.getDelimiterInReverse(b, 0, b.length,
+      HRegionInfo.DELIMITER);
+    int length = lastDelimiter - firstDelimiter - 1;
+    byte [] row = new byte[length];
+    System.arraycopy(b, firstDelimiter + 1, row, 0, length);
+    return row;
+  }
+
+  /**
+   * Test file of multiple deletes and with deletes as final key.
+   * @see <a href="https://issues.apache.org/jira/browse/HBASE-751">HBASE-751</a>
+   */
+  public void testGetClosestRowBefore3() throws IOException{
+    HRegion region = null;
+    byte [] c0 = COLUMNS[0];
+    byte [] c1 = COLUMNS[1];
+    try {
+      HTableDescriptor htd = createTableDescriptor(getName());
+      region = createNewHRegion(htd, null, null);
+
+      Put p = new Put(T00);
+      p.add(c0, c0, T00);
+      region.put(p);
+
+      p = new Put(T10);
+      p.add(c0, c0, T10);
+      region.put(p);
+
+      p = new Put(T20);
+      p.add(c0, c0, T20);
+      region.put(p);
+
+      Result r = region.getClosestRowBefore(T20, c0);
+      assertTrue(Bytes.equals(T20, r.getRow()));
+
+      Delete d = new Delete(T20);
+      d.deleteColumn(c0, c0);
+      region.delete(d, null, false);
+
+      r = region.getClosestRowBefore(T20, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      p = new Put(T30);
+      p.add(c0, c0, T30);
+      region.put(p);
+
+      r = region.getClosestRowBefore(T30, c0);
+      assertTrue(Bytes.equals(T30, r.getRow()));
+
+      d = new Delete(T30);
+      d.deleteColumn(c0, c0);
+      region.delete(d, null, false);
+
+      r = region.getClosestRowBefore(T30, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+      r = region.getClosestRowBefore(T31, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      region.flushcache();
+
+      // try finding "010" after flush
+      r = region.getClosestRowBefore(T30, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+      r = region.getClosestRowBefore(T31, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      // Put into a different column family.  Should make it so I still get t10
+      p = new Put(T20);
+      p.add(c1, c1, T20);
+      region.put(p);
+
+      r = region.getClosestRowBefore(T30, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+      r = region.getClosestRowBefore(T31, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      region.flushcache();
+
+      r = region.getClosestRowBefore(T30, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+      r = region.getClosestRowBefore(T31, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      // Now try a combo of memstore and store files.  Delete the t20 COLUMNS[1]
+      // in memory; make sure we get back t10 again.
+      d = new Delete(T20);
+      d.deleteColumn(c1, c1);
+      region.delete(d, null, false);
+      r = region.getClosestRowBefore(T30, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      // Ask for a value off the end of the file.  Should return t10.
+      r = region.getClosestRowBefore(T31, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+      region.flushcache();
+      r = region.getClosestRowBefore(T31, c0);
+      assertTrue(Bytes.equals(T10, r.getRow()));
+
+      // Ok.  Let the candidate come out of hfile but have delete of
+      // the candidate be in memory.
+      p = new Put(T11);
+      p.add(c0, c0, T11);
+      region.put(p);
+      d = new Delete(T10);
+      d.deleteColumn(c1, c1);
+      r = region.getClosestRowBefore(T12, c0);
+      assertTrue(Bytes.equals(T11, r.getRow()));
+    } finally {
+      if (region != null) {
+        try {
+          region.close();
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+        region.getLog().closeAndDelete();
+      }
+    }
+  }
+
+  /** For HBASE-694 */
+  public void testGetClosestRowBefore2() throws IOException{
+    HRegion region = null;
+    byte [] c0 = COLUMNS[0];
+    try {
+      HTableDescriptor htd = createTableDescriptor(getName());
+      region = createNewHRegion(htd, null, null);
+
+      Put p = new Put(T10);
+      p.add(c0, c0, T10);
+      region.put(p);
+
+      p = new Put(T30);
+      p.add(c0, c0, T30);
+      region.put(p);
+
+      p = new Put(T40);
+      p.add(c0, c0, T40);
+      region.put(p);
+
+      // try finding "035"
+      Result r = region.getClosestRowBefore(T35, c0);
+      assertTrue(Bytes.equals(T30, r.getRow()));
+
+      region.flushcache();
+
+      // try finding "035"
+      r = region.getClosestRowBefore(T35, c0);
+      assertTrue(Bytes.equals(T30, r.getRow()));
+
+      p = new Put(T20);
+      p.add(c0, c0, T20);
+      region.put(p);
+
+      // try finding "035"
+      r = region.getClosestRowBefore(T35, c0);
+      assertTrue(Bytes.equals(T30, r.getRow()));
+
+      region.flushcache();
+
+      // try finding "035"
+      r = region.getClosestRowBefore(T35, c0);
+      assertTrue(Bytes.equals(T30, r.getRow()));
+    } finally {
+      if (region != null) {
+        try {
+          region.close();
+        } catch (Exception e) {
+          e.printStackTrace();
+        }
+        region.getLog().closeAndDelete();
+      }
+    }
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    if (this.miniHdfs != null) {
+      this.miniHdfs.shutdown();
+    }
+    super.tearDown();
+  }
+}
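The assertions above all exercise one contract: getClosestRowBefore returns the greatest stored row key that is less than or equal to the probe key, or null if none exists. A standalone sketch of that contract using a sorted map, illustrative only (the real implementation searches the memstore and store files rather than an in-memory map):

    import java.util.TreeMap;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ClosestRowBeforeSketch {
      public static void main(String[] args) {
        TreeMap<byte[], String> rows = new TreeMap<byte[], String>(Bytes.BYTES_COMPARATOR);
        for (String r : new String[] {"010", "030", "040"}) {
          rows.put(Bytes.toBytes(r), "value-" + r);
        }
        // Greatest key <= "035" is "030", mirroring the T35 -> T30 checks above.
        byte[] hit = rows.floorKey(Bytes.toBytes("035"));
        System.out.println(Bytes.toString(hit));                   // prints 030
        // A probe below the first row has no answer; floorKey returns null,
        // just as getClosestRowBefore returns null once all candidate rows are gone.
        System.out.println(rows.floorKey(Bytes.toBytes("000")));   // prints null
      }
    }

floorKey is only the in-memory analogue; the interesting part of the real tests is that the answer must stay correct across flushes and in the presence of delete markers, which a plain map never has to deal with.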
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
new file mode 100644
index 0000000..48a7011d
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
@@ -0,0 +1,2985 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HConstants.OperationStatusCode;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.MultithreadedTestUtil;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.TestThread;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.ColumnCountGetFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.regionserver.HRegion.RegionScanner;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdge;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper;
+import org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge;
+import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.hbase.util.PairOfSameType;
+import org.apache.hadoop.hbase.util.Threads;
+
+import com.google.common.collect.Lists;
+
+
+/**
+ * Basic stand-alone testing of HRegion.
+ *
+ * A lot of the meta information for an HRegion now lives inside other
+ * HRegions or in the HBaseMaster, so only basic testing is possible.
+ */
+public class TestHRegion extends HBaseTestCase {
+  static final Log LOG = LogFactory.getLog(TestHRegion.class);
+
+  HRegion region = null;
+  private final String DIR = HBaseTestingUtility.getTestDir() +
+    "/TestHRegion/";
+
+  private final int MAX_VERSIONS = 2;
+
+  // Test names
+  protected final byte[] tableName = Bytes.toBytes("testtable");
+  protected final byte[] qual1 = Bytes.toBytes("qual1");
+  protected final byte[] qual2 = Bytes.toBytes("qual2");
+  protected final byte[] qual3 = Bytes.toBytes("qual3");
+  protected final byte[] value1 = Bytes.toBytes("value1");
+  protected final byte[] value2 = Bytes.toBytes("value2");
+  protected final byte [] row = Bytes.toBytes("rowA");
+
+  /**
+   * @see org.apache.hadoop.hbase.HBaseTestCase#setUp()
+   */
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    super.tearDown();
+    EnvironmentEdgeManagerTestHelper.reset();
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // New tests that don't spin up a mini cluster but rather just test the
+  // individual code pieces in HRegion. Files are put locally in
+  // /tmp/testtable
+  //////////////////////////////////////////////////////////////////////////////
+
+  public void testGetWhileRegionClose() throws IOException {
+    Configuration hc = initSplit();
+    int numRows = 100;
+    byte [][] families = {fam1, fam2, fam3};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, hc, families);
+
+    // Put data in region
+    final int startRow = 100;
+    putData(startRow, numRows, qual1, families);
+    putData(startRow, numRows, qual2, families);
+    putData(startRow, numRows, qual3, families);
+    // this.region.flushcache();
+    final AtomicBoolean done = new AtomicBoolean(false);
+    final AtomicInteger gets = new AtomicInteger(0);
+    GetTillDoneOrException [] threads = new GetTillDoneOrException[10];
+    try {
+      // Set ten threads running concurrently getting from the region.
+      for (int i = 0; i < threads.length / 2; i++) {
+        threads[i] = new GetTillDoneOrException(i, Bytes.toBytes("" + startRow),
+          done, gets);
+        threads[i].setDaemon(true);
+        threads[i].start();
+      }
+      // Artificially make the condition by setting closing flag explicitly.
+      // I can't make the issue happen with a call to region.close().
+      this.region.closing.set(true);
+      for (int i = threads.length / 2; i < threads.length; i++) {
+        threads[i] = new GetTillDoneOrException(i, Bytes.toBytes("" + startRow),
+          done, gets);
+        threads[i].setDaemon(true);
+        threads[i].start();
+      }
+    } finally {
+      if (this.region != null) {
+        this.region.close();
+        this.region.getLog().closeAndDelete();
+      }
+    }
+    done.set(true);
+    for (GetTillDoneOrException t: threads) {
+      try {
+        t.join();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+      if (t.e != null) {
+        LOG.info("Exception=" + t.e);
+        assertFalse("Found a NPE in " + t.getName(),
+          t.e instanceof NullPointerException);
+      }
+    }
+  }
+
+  /*
+   * Thread that does gets on a single row until the 'done' flag is flipped.
+   * If an exception causes it to fail, the exception is recorded.
+   */
+  class GetTillDoneOrException extends Thread {
+    private final Get g;
+    private final AtomicBoolean done;
+    private final AtomicInteger count;
+    private Exception e;
+
+    GetTillDoneOrException(final int i, final byte[] r, final AtomicBoolean d,
+        final AtomicInteger c) {
+      super("getter." + i);
+      this.g = new Get(r);
+      this.done = d;
+      this.count = c;
+    }
+
+    @Override
+    public void run() {
+      while (!this.done.get()) {
+        try {
+          assertTrue(region.get(g, null).size() > 0);
+          this.count.incrementAndGet();
+        } catch (Exception e) {
+          this.e = e;
+          break;
+        }
+      }
+    }
+  }
+
+  /*
+   * An involved filter test.  Has multiple column families and deletes in the mix.
+   */
+  public void testWeirdCacheBehaviour() throws Exception {
+    byte[] TABLE = Bytes.toBytes("testWeirdCacheBehaviour");
+    byte[][] FAMILIES = new byte[][] { Bytes.toBytes("trans-blob"),
+        Bytes.toBytes("trans-type"), Bytes.toBytes("trans-date"),
+        Bytes.toBytes("trans-tags"), Bytes.toBytes("trans-group") };
+    initHRegion(TABLE, getName(), FAMILIES);
+    String value = "this is the value";
+    String value2 = "this is some other value";
+    String keyPrefix1 = "prefix1"; // UUID.randomUUID().toString();
+    String keyPrefix2 = "prefix2"; // UUID.randomUUID().toString();
+    String keyPrefix3 = "prefix3"; // UUID.randomUUID().toString();
+    putRows(this.region, 3, value, keyPrefix1);
+    putRows(this.region, 3, value, keyPrefix2);
+    putRows(this.region, 3, value, keyPrefix3);
+    // this.region.flushCommits();
+    putRows(this.region, 3, value2, keyPrefix1);
+    putRows(this.region, 3, value2, keyPrefix2);
+    putRows(this.region, 3, value2, keyPrefix3);
+    System.out.println("Checking values for key: " + keyPrefix1);
+    assertEquals("Got back incorrect number of rows from scan", 3,
+      getNumberOfRows(keyPrefix1, value2, this.region));
+    System.out.println("Checking values for key: " + keyPrefix2);
+    assertEquals("Got back incorrect number of rows from scan", 3,
+      getNumberOfRows(keyPrefix2, value2, this.region));
+    System.out.println("Checking values for key: " + keyPrefix3);
+    assertEquals("Got back incorrect number of rows from scan", 3,
+      getNumberOfRows(keyPrefix3, value2, this.region));
+    deleteColumns(this.region, value2, keyPrefix1);
+    deleteColumns(this.region, value2, keyPrefix2);
+    deleteColumns(this.region, value2, keyPrefix3);
+    System.out.println("Starting important checks.....");
+    assertEquals("Got back incorrect number of rows from scan: " + keyPrefix1,
+      0, getNumberOfRows(keyPrefix1, value2, this.region));
+    assertEquals("Got back incorrect number of rows from scan: " + keyPrefix2,
+      0, getNumberOfRows(keyPrefix2, value2, this.region));
+    assertEquals("Got back incorrect number of rows from scan: " + keyPrefix3,
+      0, getNumberOfRows(keyPrefix3, value2, this.region));
+  }
+
+  private void deleteColumns(HRegion r, String value, String keyPrefix)
+  throws IOException {
+    InternalScanner scanner = buildScanner(keyPrefix, value, r);
+    int count = 0;
+    boolean more = false;
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    do {
+      more = scanner.next(results);
+      if (results != null && !results.isEmpty())
+        count++;
+      else
+        break;
+      Delete delete = new Delete(results.get(0).getRow());
+      delete.deleteColumn(Bytes.toBytes("trans-tags"), Bytes.toBytes("qual2"));
+      r.delete(delete, null, false);
+      results.clear();
+    } while (more);
+    assertEquals("Did not perform correct number of deletes", 3, count);
+  }
+
+  private int getNumberOfRows(String keyPrefix, String value, HRegion r) throws Exception {
+    InternalScanner resultScanner = buildScanner(keyPrefix, value, r);
+    int numberOfResults = 0;
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    boolean more = false;
+    do {
+      more = resultScanner.next(results);
+      if (results != null && !results.isEmpty()) numberOfResults++;
+      else break;
+      for (KeyValue kv: results) {
+        System.out.println("kv=" + kv.toString() + ", " + Bytes.toString(kv.getValue()));
+      }
+      results.clear();
+    } while(more);
+    return numberOfResults;
+  }
+
+  private InternalScanner buildScanner(String keyPrefix, String value, HRegion r)
+  throws IOException {
+    // Defaults to FilterList.Operator.MUST_PASS_ALL.
+    FilterList allFilters = new FilterList();
+    allFilters.addFilter(new PrefixFilter(Bytes.toBytes(keyPrefix)));
+    // Only return rows where this column value exists in the row.
+    SingleColumnValueFilter filter =
+      new SingleColumnValueFilter(Bytes.toBytes("trans-tags"),
+        Bytes.toBytes("qual2"), CompareOp.EQUAL, Bytes.toBytes(value));
+    filter.setFilterIfMissing(true);
+    allFilters.addFilter(filter);
+    Scan scan = new Scan();
+    scan.addFamily(Bytes.toBytes("trans-blob"));
+    scan.addFamily(Bytes.toBytes("trans-type"));
+    scan.addFamily(Bytes.toBytes("trans-date"));
+    scan.addFamily(Bytes.toBytes("trans-tags"));
+    scan.addFamily(Bytes.toBytes("trans-group"));
+    scan.setFilter(allFilters);
+    return r.getScanner(scan);
+  }
+
+  private void putRows(HRegion r, int numRows, String value, String key)
+  throws IOException {
+    for (int i = 0; i < numRows; i++) {
+      String row = key + "_" + i/* UUID.randomUUID().toString() */;
+      System.out.println(String.format("Saving row: %s, with value %s", row,
+        value));
+      Put put = new Put(Bytes.toBytes(row));
+      put.add(Bytes.toBytes("trans-blob"), null,
+        Bytes.toBytes("value for blob"));
+      put.add(Bytes.toBytes("trans-type"), null, Bytes.toBytes("statement"));
+      put.add(Bytes.toBytes("trans-date"), null,
+        Bytes.toBytes("20090921010101999"));
+      put.add(Bytes.toBytes("trans-tags"), Bytes.toBytes("qual2"),
+        Bytes.toBytes(value));
+      put.add(Bytes.toBytes("trans-group"), null,
+        Bytes.toBytes("adhocTransactionGroupId"));
+      r.put(put);
+    }
+  }
+
+  public void testFamilyWithAndWithoutColon() throws Exception {
+    byte [] b = Bytes.toBytes(getName());
+    byte [] cf = Bytes.toBytes("cf");
+    initHRegion(b, getName(), cf);
+    Put p = new Put(b);
+    byte [] cfwithcolon = Bytes.toBytes("cf:");
+    p.add(cfwithcolon, cfwithcolon, cfwithcolon);
+    boolean exception = false;
+    try {
+      this.region.put(p);
+    } catch (NoSuchColumnFamilyException e) {
+      exception = true;
+    }
+    assertTrue(exception);
+  }
+
+  @SuppressWarnings("unchecked")
+  public void testBatchPut() throws Exception {
+    byte[] b = Bytes.toBytes(getName());
+    byte[] cf = Bytes.toBytes("cf");
+    byte[] qual = Bytes.toBytes("qual");
+    byte[] val = Bytes.toBytes("val");
+    initHRegion(b, getName(), cf);
+
+    HLog.getSyncOps(); // clear counter from prior tests
+    assertEquals(0, HLog.getSyncOps());
+
+    LOG.info("First a batch put with all valid puts");
+    final Put[] puts = new Put[10];
+    for (int i = 0; i < 10; i++) {
+      puts[i] = new Put(Bytes.toBytes("row_" + i));
+      puts[i].add(cf, qual, val);
+    }
+
+    OperationStatusCode[] codes = this.region.put(puts);
+    assertEquals(10, codes.length);
+    for (int i = 0; i < 10; i++) {
+      assertEquals(OperationStatusCode.SUCCESS, codes[i]);
+    }
+    assertEquals(1, HLog.getSyncOps());
+
+    LOG.info("Next a batch put with one invalid family");
+    puts[5].add(Bytes.toBytes("BAD_CF"), qual, val);
+    codes = this.region.put(puts);
+    assertEquals(10, codes.length);
+    for (int i = 0; i < 10; i++) {
+      assertEquals((i == 5) ? OperationStatusCode.BAD_FAMILY :
+        OperationStatusCode.SUCCESS, codes[i]);
+    }
+    assertEquals(1, HLog.getSyncOps());
+
+    LOG.info("Next a batch put that has to break into two batches to avoid a lock");
+    Integer lockedRow = region.obtainRowLock(Bytes.toBytes("row_2"));
+
+    MultithreadedTestUtil.TestContext ctx =
+      new MultithreadedTestUtil.TestContext(HBaseConfiguration.create());
+    final AtomicReference<OperationStatusCode[]> retFromThread =
+      new AtomicReference<OperationStatusCode[]>();
+    TestThread putter = new TestThread(ctx) {
+      @Override
+      public void doWork() throws IOException {
+        retFromThread.set(region.put(puts));
+      }
+    };
+    LOG.info("...starting put thread while holding lock");
+    ctx.addThread(putter);
+    ctx.startThreads();
+
+    LOG.info("...waiting for put thread to sync first time");
+    long startWait = System.currentTimeMillis();
+    while (HLog.getSyncOps() == 0) {
+      Thread.sleep(100);
+      if (System.currentTimeMillis() - startWait > 10000) {
+        fail("Timed out waiting for thread to sync first minibatch");
+      }
+    }
+    LOG.info("...releasing row lock, which should let put thread continue");
+    region.releaseRowLock(lockedRow);
+    LOG.info("...joining on thread");
+    ctx.stop();
+    LOG.info("...checking that next batch was synced");
+    assertEquals(1, HLog.getSyncOps());
+    codes = retFromThread.get();
+    for (int i = 0; i < 10; i++) {
+      assertEquals((i == 5) ? OperationStatusCode.BAD_FAMILY :
+        OperationStatusCode.SUCCESS, codes[i]);
+    }
+
+    LOG.info("Nexta, a batch put which uses an already-held lock");
+    lockedRow = region.obtainRowLock(Bytes.toBytes("row_2"));
+    LOG.info("...obtained row lock");
+    List<Pair<Put, Integer>> putsAndLocks = Lists.newArrayList();
+    for (int i = 0; i < 10; i++) {
+      Pair<Put, Integer> pair = new Pair<Put, Integer>(puts[i], null);
+      if (i == 2) pair.setSecond(lockedRow);
+      putsAndLocks.add(pair);
+    }
+
+    codes = region.put(putsAndLocks.toArray(new Pair[0]));
+    LOG.info("...performed put");
+    for (int i = 0; i < 10; i++) {
+      assertEquals((i == 5) ? OperationStatusCode.BAD_FAMILY :
+        OperationStatusCode.SUCCESS, codes[i]);
+    }
+    // Make sure we didn't do an extra batch
+    assertEquals(1, HLog.getSyncOps());
+
+    // Make sure we still hold lock
+    assertTrue(region.isRowLocked(lockedRow));
+    LOG.info("...releasing lock");
+    region.releaseRowLock(lockedRow);
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // checkAndMutate tests
+  //////////////////////////////////////////////////////////////////////////////
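+  // As exercised below, checkAndMutate compares the current value of
+  // row/family/qualifier against an expected value and applies the supplied Put
+  // or Delete only on a match; an empty or null expected value matches a missing cell.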
+  public void testCheckAndMutate_WithEmptyRowValue() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] qf1  = Bytes.toBytes("qualifier");
+    byte [] emptyVal  = new byte[] {};
+    byte [] val1  = Bytes.toBytes("value1");
+    byte [] val2  = Bytes.toBytes("value2");
+    Integer lockId = null;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+    //Build the Put (it is applied below via checkAndMutate)
+    Put put = new Put(row1);
+    put.add(fam1, qf1, val1);
+
+    //checkAndPut with an empty expected value succeeds because the cell does not exist yet
+    boolean res = region.checkAndMutate(row1, fam1, qf1, emptyVal, put, lockId,
+        true);
+    assertTrue(res);
+
+    // not empty anymore
+    res = region.checkAndMutate(row1, fam1, qf1, emptyVal, put, lockId, true);
+    assertFalse(res);
+
+    Delete delete = new Delete(row1);
+    delete.deleteColumn(fam1, qf1);
+    res = region.checkAndMutate(row1, fam1, qf1, emptyVal, delete, lockId,
+        true);
+    assertFalse(res);
+
+    put = new Put(row1);
+    put.add(fam1, qf1, val2);
+    //checkAndPut with correct value
+    res = region.checkAndMutate(row1, fam1, qf1, val1, put, lockId, true);
+    assertTrue(res);
+
+    //checkAndDelete with correct value
+    delete = new Delete(row1);
+    delete.deleteColumn(fam1, qf1);
+    delete.deleteColumn(fam1, qf1);
+    res = region.checkAndMutate(row1, fam1, qf1, val2, delete, lockId, true);
+    assertTrue(res);
+
+    delete = new Delete(row1);
+    res = region.checkAndMutate(row1, fam1, qf1, emptyVal, delete, lockId,
+        true);
+    assertTrue(res);
+
+    //checkAndPut looking for a null value
+    put = new Put(row1);
+    put.add(fam1, qf1, val1);
+
+    res = region.checkAndMutate(row1, fam1, qf1, null, put, lockId, true);
+    assertTrue(res);
+    
+  }
+
+  public void testCheckAndMutate_WithWrongValue() throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] qf1  = Bytes.toBytes("qualifier");
+    byte [] val1  = Bytes.toBytes("value1");
+    byte [] val2  = Bytes.toBytes("value2");
+    Integer lockId = null;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    //Putting data in key
+    Put put = new Put(row1);
+    put.add(fam1, qf1, val1);
+    region.put(put);
+
+    //checkAndPut with wrong value
+    boolean res = region.checkAndMutate(row1, fam1, qf1, val2, put, lockId, true);
+    assertEquals(false, res);
+
+    //checkAndDelete with wrong value
+    Delete delete = new Delete(row1);
+    delete.deleteFamily(fam1);
+    res = region.checkAndMutate(row1, fam1, qf1, val2, delete, lockId, true);
+    assertEquals(false, res);
+  }
+
+  public void testCheckAndMutate_WithCorrectValue() throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] qf1  = Bytes.toBytes("qualifier");
+    byte [] val1  = Bytes.toBytes("value1");
+    Integer lockId = null;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    //Putting data in key
+    Put put = new Put(row1);
+    put.add(fam1, qf1, val1);
+    region.put(put);
+
+    //checkAndPut with correct value
+    boolean res = region.checkAndMutate(row1, fam1, qf1, val1, put, lockId, true);
+    assertEquals(true, res);
+
+    //checkAndDelete with correct value
+    Delete delete = new Delete(row1);
+    delete.deleteColumn(fam1, qf1);
+    res = region.checkAndMutate(row1, fam1, qf1, val1, put, lockId, true);
+    assertEquals(true, res);
+  }
+
+  public void testCheckAndPut_ThatPutWasWritten() throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+    byte [] qf1  = Bytes.toBytes("qualifier");
+    byte [] val1  = Bytes.toBytes("value1");
+    byte [] val2  = Bytes.toBytes("value2");
+    Integer lockId = null;
+
+    byte [][] families = {fam1, fam2};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Putting data in the key to check
+    Put put = new Put(row1);
+    put.add(fam1, qf1, val1);
+    region.put(put);
+
+    //Creating put to add
+    long ts = System.currentTimeMillis();
+    KeyValue kv = new KeyValue(row1, fam2, qf1, ts, KeyValue.Type.Put, val2);
+    put = new Put(row1);
+    put.add(kv);
+
+    //checkAndPut with correct value
+    Store store = region.getStore(fam1);
+    store.memstore.kvset.size(); // informational only; the result is not asserted
+
+    boolean res = region.checkAndMutate(row1, fam1, qf1, val1, put, lockId, true);
+    assertEquals(true, res);
+    store.memstore.kvset.size(); // informational only; the result is not asserted
+
+    Get get = new Get(row1);
+    get.addColumn(fam2, qf1);
+    KeyValue [] actual = region.get(get, null).raw();
+
+    KeyValue [] expected = {kv};
+
+    assertEquals(expected.length, actual.length);
+    for(int i=0; i<actual.length; i++) {
+      assertEquals(expected[i], actual[i]);
+    }
+
+  }
+
+  public void testCheckAndDelete_ThatDeleteWasWritten() throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+    byte [] qf1  = Bytes.toBytes("qualifier1");
+    byte [] qf2  = Bytes.toBytes("qualifier2");
+    byte [] qf3  = Bytes.toBytes("qualifier3");
+    byte [] val1  = Bytes.toBytes("value1");
+    byte [] val2  = Bytes.toBytes("value2");
+    byte [] val3  = Bytes.toBytes("value3");
+    byte[] emptyVal = new byte[] { };
+    Integer lockId = null;
+
+    byte [][] families = {fam1, fam2};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Put content
+    Put put = new Put(row1);
+    put.add(fam1, qf1, val1);
+    region.put(put);
+    Threads.sleep(2);
+
+    put = new Put(row1);
+    put.add(fam1, qf1, val2);
+    put.add(fam2, qf1, val3);
+    put.add(fam2, qf2, val2);
+    put.add(fam2, qf3, val1);
+    put.add(fam1, qf3, val1);
+    region.put(put);
+
+    //Multi-column delete
+    Delete delete = new Delete(row1);
+    delete.deleteColumn(fam1, qf1);
+    delete.deleteColumn(fam2, qf1);
+    delete.deleteColumn(fam1, qf3);
+    boolean res = region.checkAndMutate(row1, fam1, qf1, val2, delete, lockId,
+        true);
+    assertEquals(true, res);
+
+    Get get = new Get(row1);
+    get.addColumn(fam1, qf1);
+    get.addColumn(fam1, qf3);
+    get.addColumn(fam2, qf2);
+    Result r = region.get(get, null);
+    assertEquals(2, r.size());
+    assertTrue(Bytes.equals(val1, r.getValue(fam1, qf1)));
+    assertTrue(Bytes.equals(val2, r.getValue(fam2, qf2)));
+
+    //Family delete
+    delete = new Delete(row1);
+    delete.deleteFamily(fam2);
+    res = region.checkAndMutate(row1, fam2, qf1, emptyVal, delete, lockId,
+        true);
+    assertEquals(true, res);
+
+    get = new Get(row1);
+    r = region.get(get, null);
+    assertEquals(1, r.size());
+    assertTrue(Bytes.equals(val1, r.getValue(fam1, qf1)));
+
+    //Row delete
+    delete = new Delete(row1);
+    res = region.checkAndMutate(row1, fam1, qf1, val1, delete, lockId,
+        true);
+    assertEquals(true, res);
+    get = new Get(row1);
+    r = region.get(get, null);
+    assertEquals(0, r.size());
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Delete tests
+  //////////////////////////////////////////////////////////////////////////////
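+  // Cover repeated deleteColumn calls, family validation on delete, mixed
+  // column/family/row deletes, and timestamp handling for deletes.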
+  public void testDelete_multiDeleteColumn() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] qual = Bytes.toBytes("qualifier");
+    byte [] value = Bytes.toBytes("value");
+
+    Put put = new Put(row1);
+    put.add(fam1, qual, 1, value);
+    put.add(fam1, qual, 2, value);
+
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    region.put(put);
+
+    // We do support deleting more than 1 'latest' version
+    Delete delete = new Delete(row1);
+    delete.deleteColumn(fam1, qual);
+    delete.deleteColumn(fam1, qual);
+    region.delete(delete, null, false);
+
+    Get get = new Get(row1);
+    get.addFamily(fam1);
+    Result r = region.get(get, null);
+    assertEquals(0, r.size());
+  }
+
+  public void testDelete_CheckFamily() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+    byte [] fam3 = Bytes.toBytes("fam3");
+    byte [] fam4 = Bytes.toBytes("fam4");
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1, fam2, fam3);
+
+    List<KeyValue> kvs  = new ArrayList<KeyValue>();
+    kvs.add(new KeyValue(row1, fam4, null, null));
+
+
+    //testing existing family
+    byte [] family = fam2;
+    try {
+      Map<byte[], List<KeyValue>> deleteMap = new HashMap<byte[], List<KeyValue>>();
+      deleteMap.put(family, kvs);
+      region.delete(deleteMap, true);
+    } catch (Exception e) {
+      assertTrue("Family " +new String(family)+ " does not exist", false);
+    }
+
+    //testing non existing family
+    boolean ok = false;
+    family = fam4;
+    try {
+      Map<byte[], List<KeyValue>> deleteMap = new HashMap<byte[], List<KeyValue>>();
+      deleteMap.put(family, kvs);
+      region.delete(deleteMap, true);
+    } catch (Exception e) {
+      ok = true;
+    }
+    assertEquals("Family " +new String(family)+ " does exist", true, ok);
+  }
+
+  public void testDelete_mixed() throws IOException, InterruptedException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] fam = Bytes.toBytes("info");
+    byte [][] families = {fam};
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+    EnvironmentEdgeManagerTestHelper.injectEdge(new IncrementingEnvironmentEdge());
+
+    byte [] row = Bytes.toBytes("table_name");
+    // column names
+    byte [] serverinfo = Bytes.toBytes("serverinfo");
+    byte [] splitA = Bytes.toBytes("splitA");
+    byte [] splitB = Bytes.toBytes("splitB");
+
+    // add some data:
+    Put put = new Put(row);
+    put.add(fam, splitA, Bytes.toBytes("reference_A"));
+    region.put(put);
+
+    put = new Put(row);
+    put.add(fam, splitB, Bytes.toBytes("reference_B"));
+    region.put(put);
+
+    put = new Put(row);
+    put.add(fam, serverinfo, Bytes.toBytes("ip_address"));
+    region.put(put);
+
+    // ok now delete a split:
+    Delete delete = new Delete(row);
+    delete.deleteColumns(fam, splitA);
+    region.delete(delete, null, true);
+
+    // assert some things:
+    Get get = new Get(row).addColumn(fam, serverinfo);
+    Result result = region.get(get, null);
+    assertEquals(1, result.size());
+
+    get = new Get(row).addColumn(fam, splitA);
+    result = region.get(get, null);
+    assertEquals(0, result.size());
+
+    get = new Get(row).addColumn(fam, splitB);
+    result = region.get(get, null);
+    assertEquals(1, result.size());
+
+    // Assert that after a delete, I can put.
+    put = new Put(row);
+    put.add(fam, splitA, Bytes.toBytes("reference_A"));
+    region.put(put);
+    get = new Get(row);
+    result = region.get(get, null);
+    assertEquals(3, result.size());
+
+    // Now delete all... then test I can add stuff back
+    delete = new Delete(row);
+    region.delete(delete, null, false);
+    assertEquals(0, region.get(get, null).size());
+
+    region.put(new Put(row).add(fam, splitA, Bytes.toBytes("reference_A")));
+    result = region.get(get, null);
+    assertEquals(1, result.size());
+  }
+
+  public void testDeleteRowWithFutureTs() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] fam = Bytes.toBytes("info");
+    byte [][] families = {fam};
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    byte [] row = Bytes.toBytes("table_name");
+    // column names
+    byte [] serverinfo = Bytes.toBytes("serverinfo");
+
+    // add data in the far future
+    Put put = new Put(row);
+    put.add(fam, serverinfo, HConstants.LATEST_TIMESTAMP-5,Bytes.toBytes("value"));
+    region.put(put);
+
+    // now delete something in the present
+    Delete delete = new Delete(row);
+    region.delete(delete, null, true);
+
+    // make sure we still see our data
+    Get get = new Get(row).addColumn(fam, serverinfo);
+    Result result = region.get(get, null);
+    assertEquals(1, result.size());
+
+    // delete the future row
+    delete = new Delete(row,HConstants.LATEST_TIMESTAMP-3,null);
+    region.delete(delete, null, true);
+
+    // make sure it is gone
+    get = new Get(row).addColumn(fam, serverinfo);
+    result = region.get(get, null);
+    assertEquals(0, result.size());
+  }
+
+  /**
+   * Tests that the special LATEST_TIMESTAMP option for puts gets
+   * replaced by the actual timestamp
+   */
+  public void testPutWithLatestTS() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] fam = Bytes.toBytes("info");
+    byte [][] families = {fam};
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    byte [] row = Bytes.toBytes("row1");
+    // column names
+    byte [] qual = Bytes.toBytes("qual");
+
+    // add data with LATEST_TIMESTAMP, put without WAL
+    Put put = new Put(row);
+    put.add(fam, qual, HConstants.LATEST_TIMESTAMP, Bytes.toBytes("value"));
+    region.put(put, false);
+
+    // Make sure it shows up with an actual timestamp
+    Get get = new Get(row).addColumn(fam, qual);
+    Result result = region.get(get, null);
+    assertEquals(1, result.size());
+    KeyValue kv = result.raw()[0];
+    LOG.info("Got: " + kv);
+    assertTrue("LATEST_TIMESTAMP was not replaced with real timestamp",
+        kv.getTimestamp() != HConstants.LATEST_TIMESTAMP);
+
+    // Check same with WAL enabled (historically these took different
+    // code paths, so check both)
+    row = Bytes.toBytes("row2");
+    put = new Put(row);
+    put.add(fam, qual, HConstants.LATEST_TIMESTAMP, Bytes.toBytes("value"));
+    region.put(put, true);
+
+    // Make sure it shows up with an actual timestamp
+    get = new Get(row).addColumn(fam, qual);
+    result = region.get(get, null);
+    assertEquals(1, result.size());
+    kv = result.raw()[0];
+    LOG.info("Got: " + kv);
+    assertTrue("LATEST_TIMESTAMP was not replaced with real timestamp",
+        kv.getTimestamp() != HConstants.LATEST_TIMESTAMP);
+
+  }
+
+  public void testScanner_DeleteOneFamilyNotAnother() throws IOException {
+    byte [] tableName = Bytes.toBytes("test_table");
+    byte [] fam1 = Bytes.toBytes("columnA");
+    byte [] fam2 = Bytes.toBytes("columnB");
+    initHRegion(tableName, getName(), fam1, fam2);
+
+    byte [] rowA = Bytes.toBytes("rowA");
+    byte [] rowB = Bytes.toBytes("rowB");
+
+    byte [] value = Bytes.toBytes("value");
+
+    Delete delete = new Delete(rowA);
+    delete.deleteFamily(fam1);
+
+    region.delete(delete, null, true);
+
+    // now create data.
+    Put put = new Put(rowA);
+    put.add(fam2, null, value);
+    region.put(put);
+
+    put = new Put(rowB);
+    put.add(fam1, null, value);
+    put.add(fam2, null, value);
+    region.put(put);
+
+    Scan scan = new Scan();
+    scan.addFamily(fam1).addFamily(fam2);
+    InternalScanner s = region.getScanner(scan);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    s.next(results);
+    assertTrue(Bytes.equals(rowA, results.get(0).getRow()));
+
+    results.clear();
+    s.next(results);
+    assertTrue(Bytes.equals(rowB, results.get(0).getRow()));
+
+  }
+
+  public void testDeleteColumns_PostInsert() throws IOException,
+      InterruptedException {
+    Delete delete = new Delete(row);
+    delete.deleteColumns(fam1, qual1);
+    doTestDelete_AndPostInsert(delete);
+  }
+
+  public void testDeleteFamily_PostInsert() throws IOException, InterruptedException {
+    Delete delete = new Delete(row);
+    delete.deleteFamily(fam1);
+    doTestDelete_AndPostInsert(delete);
+  }
+
+  public void doTestDelete_AndPostInsert(Delete delete)
+      throws IOException, InterruptedException {
+    initHRegion(tableName, getName(), fam1);
+    EnvironmentEdgeManagerTestHelper.injectEdge(new IncrementingEnvironmentEdge());
+    Put put = new Put(row);
+    put.add(fam1, qual1, value1);
+    region.put(put);
+
+    // now delete the value:
+    region.delete(delete, null, true);
+
+
+    // ok put data:
+    put = new Put(row);
+    put.add(fam1, qual1, value2);
+    region.put(put);
+
+    // ok get:
+    Get get = new Get(row);
+    get.addColumn(fam1, qual1);
+
+    Result r = region.get(get, null);
+    assertEquals(1, r.size());
+    assertByteEquals(value2, r.getValue(fam1, qual1));
+
+    // next:
+    Scan scan = new Scan(row);
+    scan.addColumn(fam1, qual1);
+    InternalScanner s = region.getScanner(scan);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(false, s.next(results));
+    assertEquals(1, results.size());
+    KeyValue kv = results.get(0);
+
+    assertByteEquals(value2, kv.getValue());
+    assertByteEquals(fam1, kv.getFamily());
+    assertByteEquals(qual1, kv.getQualifier());
+    assertByteEquals(row, kv.getRow());
+  }
+
+
+
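+  // Deletes issued without an explicit timestamp should be stamped with the
+  // current time; verified by peeking at the memstore below.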
+  public void testDelete_CheckTimestampUpdated()
+  throws IOException {
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] col1 = Bytes.toBytes("col1");
+    byte [] col2 = Bytes.toBytes("col2");
+    byte [] col3 = Bytes.toBytes("col3");
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    //Building checkerList
+    List<KeyValue> kvs  = new ArrayList<KeyValue>();
+    kvs.add(new KeyValue(row1, fam1, col1, null));
+    kvs.add(new KeyValue(row1, fam1, col2, null));
+    kvs.add(new KeyValue(row1, fam1, col3, null));
+
+    Map<byte[], List<KeyValue>> deleteMap = new HashMap<byte[], List<KeyValue>>();
+    deleteMap.put(fam1, kvs);
+    region.delete(deleteMap, true);
+
+    // extract the key values out the memstore:
+    // This is kinda hacky, but better than nothing...
+    long now = System.currentTimeMillis();
+    KeyValue firstKv = region.getStore(fam1).memstore.kvset.first();
+    assertTrue(firstKv.getTimestamp() <= now);
+    now = firstKv.getTimestamp();
+    for (KeyValue kv: region.getStore(fam1).memstore.kvset) {
+      assertTrue(kv.getTimestamp() <= now);
+      now = kv.getTimestamp();
+    }
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Get tests
+  //////////////////////////////////////////////////////////////////////////////
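+  // Cover family validation on Get, explicit-column retrieval, Get filters,
+  // and empty results.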
+  public void testGet_FamilyChecker() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("False");
+    byte [] col1 = Bytes.toBytes("col1");
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    Get get = new Get(row1);
+    get.addColumn(fam2, col1);
+
+    //Test
+    try {
+      region.get(get, null);
+    } catch (NoSuchColumnFamilyException e){
+      // expected
+      return;
+    }
+    fail("Expected NoSuchColumnFamilyException for a family not in the region");
+  }
+
+  public void testGet_Basic() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] col1 = Bytes.toBytes("col1");
+    byte [] col2 = Bytes.toBytes("col2");
+    byte [] col3 = Bytes.toBytes("col3");
+    byte [] col4 = Bytes.toBytes("col4");
+    byte [] col5 = Bytes.toBytes("col5");
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    //Add to memstore
+    Put put = new Put(row1);
+    put.add(fam1, col1, null);
+    put.add(fam1, col2, null);
+    put.add(fam1, col3, null);
+    put.add(fam1, col4, null);
+    put.add(fam1, col5, null);
+    region.put(put);
+
+    Get get = new Get(row1);
+    get.addColumn(fam1, col2);
+    get.addColumn(fam1, col4);
+    //Expected result
+    KeyValue kv1 = new KeyValue(row1, fam1, col2);
+    KeyValue kv2 = new KeyValue(row1, fam1, col4);
+    KeyValue [] expected = {kv1, kv2};
+
+    //Test
+    Result res = region.get(get, null);
+    assertEquals(expected.length, res.size());
+    for(int i=0; i<res.size(); i++){
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getRow(), res.raw()[i].getRow()));
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getFamily(), res.raw()[i].getFamily()));
+      assertEquals(0,
+          Bytes.compareTo(
+              expected[i].getQualifier(), res.raw()[i].getQualifier()));
+    }
+
+    // Test using a filter on a Get
+    Get g = new Get(row1);
+    final int count = 2;
+    g.setFilter(new ColumnCountGetFilter(count));
+    res = region.get(g, null);
+    assertEquals(count, res.size());
+  }
+
+  public void testGet_Empty() throws IOException {
+    byte [] tableName = Bytes.toBytes("emptytable");
+    byte [] row = Bytes.toBytes("row");
+    byte [] fam = Bytes.toBytes("fam");
+
+    String method = this.getName();
+    initHRegion(tableName, method, fam);
+
+    Get get = new Get(row);
+    get.addFamily(fam);
+    Result r = region.get(get, null);
+
+    assertTrue(r.isEmpty());
+  }
+
+  //Tests that there is nothing special about reading from the ROOT table.
+  //To run this test you need to comment out the check for '-' and '.' in
+  //HTableDescriptor, and remove the leading 's' from this method's name
+  //(stestGet_Root -> testGet_Root) so JUnit picks it up.
+  public void stestGet_Root() throws IOException {
+    //Setting up region
+    String method = this.getName();
+    initHRegion(HConstants.ROOT_TABLE_NAME, method, HConstants.CATALOG_FAMILY);
+
+    //Add to memstore
+    Put put = new Put(HConstants.EMPTY_START_ROW);
+    put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER, null);
+    region.put(put);
+
+    Get get = new Get(HConstants.EMPTY_START_ROW);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+
+    //Expected result
+    KeyValue kv1 = new KeyValue(HConstants.EMPTY_START_ROW,
+        HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    KeyValue [] expected = {kv1};
+
+    //Test from memstore
+    Result res = region.get(get, null);
+
+    assertEquals(expected.length, res.size());
+    for(int i=0; i<res.size(); i++){
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getRow(), res.raw()[i].getRow()));
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getFamily(), res.raw()[i].getFamily()));
+      assertEquals(0,
+          Bytes.compareTo(
+              expected[i].getQualifier(), res.raw()[i].getQualifier()));
+    }
+
+    //flush
+    region.flushcache();
+
+    //test2
+    res = region.get(get, null);
+
+    assertEquals(expected.length, res.size());
+    for(int i=0; i<res.size(); i++){
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getRow(), res.raw()[i].getRow()));
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getFamily(), res.raw()[i].getFamily()));
+      assertEquals(0,
+          Bytes.compareTo(
+              expected[i].getQualifier(), res.raw()[i].getQualifier()));
+    }
+
+    //Scan
+    Scan scan = new Scan();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    InternalScanner s = region.getScanner(scan);
+    List<KeyValue> result = new ArrayList<KeyValue>();
+    s.next(result);
+
+    assertEquals(expected.length, result.size());
+    for(int i=0; i<res.size(); i++){
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getRow(), result.get(i).getRow()));
+      assertEquals(0,
+          Bytes.compareTo(expected[i].getFamily(), result.get(i).getFamily()));
+      assertEquals(0,
+          Bytes.compareTo(
+              expected[i].getQualifier(), result.get(i).getQualifier()));
+    }
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Lock test
+  //////////////////////////////////////////////////////////////////////////////
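+  // Spins up several threads that each acquire and then release a batch of row
+  // locks, exercising obtainRowLock/releaseRowLock under concurrency.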
+  public void testLocks() throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [][] families = {fam1, fam2, fam3};
+
+    Configuration hc = initSplit();
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, hc, families);
+
+    final int threadCount = 10;
+    final int lockCount = 10;
+
+    List<Thread>threads = new ArrayList<Thread>(threadCount);
+    for (int i = 0; i < threadCount; i++) {
+      threads.add(new Thread(Integer.toString(i)) {
+        @Override
+        public void run() {
+          Integer [] lockids = new Integer[lockCount];
+          // Get locks.
+          for (int i = 0; i < lockCount; i++) {
+            try {
+              byte [] rowid = Bytes.toBytes(Integer.toString(i));
+              lockids[i] = region.obtainRowLock(rowid);
+              assertEquals(rowid, region.getRowFromLock(lockids[i]));
+              LOG.debug(getName() + " locked " + Bytes.toString(rowid));
+            } catch (IOException e) {
+              e.printStackTrace();
+            }
+          }
+          LOG.debug(getName() + " set " +
+              Integer.toString(lockCount) + " locks");
+
+          // Abort outstanding locks.
+          for (int i = lockCount - 1; i >= 0; i--) {
+            region.releaseRowLock(lockids[i]);
+            LOG.debug(getName() + " unlocked " + i);
+          }
+          LOG.debug(getName() + " released " +
+              Integer.toString(lockCount) + " locks");
+        }
+      });
+    }
+
+    // Startup all our threads.
+    for (Thread t : threads) {
+      t.start();
+    }
+
+    // Now wait around till all are done.
+    for (Thread t: threads) {
+      while (t.isAlive()) {
+        try {
+          Thread.sleep(1);
+        } catch (InterruptedException e) {
+          // Go around again.
+        }
+      }
+    }
+    LOG.info("locks completed.");
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Merge test
+  //////////////////////////////////////////////////////////////////////////////
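+  // Splits a region at its computed split row, then merges the daughter regions
+  // back together with HRegion.mergeAdjacent.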
+  public void testMerge() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [][] families = {fam1, fam2, fam3};
+    Configuration hc = initSplit();
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, hc, families);
+    try {
+      LOG.info("" + addContent(region, fam3));
+      region.flushcache();
+      byte [] splitRow = region.compactStores();
+      assertNotNull(splitRow);
+      LOG.info("SplitRow: " + Bytes.toString(splitRow));
+      HRegion [] subregions = splitRegion(region, splitRow);
+      try {
+        // Need to open the regions.
+        for (int i = 0; i < subregions.length; i++) {
+          openClosedRegion(subregions[i]);
+          subregions[i].compactStores();
+        }
+        Path oldRegionPath = region.getRegionDir();
+        Path oldRegion1 = subregions[0].getRegionDir();
+        Path oldRegion2 = subregions[1].getRegionDir();
+        long startTime = System.currentTimeMillis();
+        region = HRegion.mergeAdjacent(subregions[0], subregions[1]);
+        LOG.info("Merge regions elapsed time: " +
+            ((System.currentTimeMillis() - startTime) / 1000.0));
+        fs.delete(oldRegion1, true);
+        fs.delete(oldRegion2, true);
+        fs.delete(oldRegionPath, true);
+        LOG.info("splitAndMerge completed.");
+      } finally {
+        for (int i = 0; i < subregions.length; i++) {
+          try {
+            subregions[i].close();
+          } catch (IOException e) {
+            // Ignore.
+          }
+        }
+      }
+    } finally {
+      if (region != null) {
+        region.close();
+        region.getLog().closeAndDelete();
+      }
+    }
+  }
+
+  /**
+   * @param parent Region to split.
+   * @param midkey Key to split around.
+   * @return The Regions we created.
+   * @throws IOException
+   */
+  HRegion [] splitRegion(final HRegion parent, final byte [] midkey)
+  throws IOException {
+    PairOfSameType<HRegion> result = null;
+    SplitTransaction st = new SplitTransaction(parent, midkey);
+    // If prepare does not return true for some reason -- logged inside
+    // the prepare call -- we are not ready to split just now.  Just return.
+    if (!st.prepare()) return null;
+    try {
+      result = st.execute(null, null);
+    } catch (IOException ioe) {
+      try {
+        LOG.info("Running rollback of failed split of " +
+          parent.getRegionNameAsString() + "; " + ioe.getMessage());
+        st.rollback(null);
+        LOG.info("Successful rollback of failed split of " +
+          parent.getRegionNameAsString());
+        return null;
+      } catch (RuntimeException e) {
+        // If the rollback itself failed, a real server would abort to avoid leaving a hole in the table.
+        LOG.info("Failed rollback of failed split of " +
+          parent.getRegionNameAsString() + " -- aborting server", e);
+      }
+    }
+    return new HRegion [] {result.getFirst(), result.getSecond()};
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Scanner tests
+  //////////////////////////////////////////////////////////////////////////////
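+  // Cover family validation for scanners, scanners on closed regions, version
+  // enforcement from memstore and store files, and stop-row handling.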
+  public void testGetScanner_WithOkFamilies() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+
+    byte [][] families = {fam1, fam2};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    Scan scan = new Scan();
+    scan.addFamily(fam1);
+    scan.addFamily(fam2);
+    try {
+      region.getScanner(scan);
+    } catch (Exception e) {
+      assertTrue("Families could not be found in Region", false);
+    }
+  }
+
+  public void testGetScanner_WithNotOkFamilies() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+
+    byte [][] families = {fam1};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    Scan scan = new Scan();
+    scan.addFamily(fam2);
+    boolean ok = false;
+    try {
+      region.getScanner(scan);
+    } catch (Exception e) {
+      ok = true;
+    }
+    assertTrue("Families could not be found in Region", ok);
+  }
+
+  public void testGetScanner_WithNoFamilies() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+    byte [] fam3 = Bytes.toBytes("fam3");
+    byte [] fam4 = Bytes.toBytes("fam4");
+
+    byte [][] families = {fam1, fam2, fam3, fam4};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+
+    //Putting data in Region
+    Put put = new Put(row1);
+    put.add(fam1, null, null);
+    put.add(fam2, null, null);
+    put.add(fam3, null, null);
+    put.add(fam4, null, null);
+    region.put(put);
+
+    Scan scan = null;
+    HRegion.RegionScanner is = null;
+
+    //Testing how many store scanners getScanner produces: of the 2 requested
+    //families, one scanner becomes 'current' and the rest stay on the heap (2 - current = 1)
+    scan = new Scan();
+    scan.addFamily(fam2);
+    scan.addFamily(fam4);
+    is = (RegionScanner) region.getScanner(scan);
+    ReadWriteConsistencyControl.resetThreadReadPoint(region.getRWCC());
+    assertEquals(1, ((RegionScanner)is).storeHeap.getHeap().size());
+
+    scan = new Scan();
+    is = (RegionScanner) region.getScanner(scan);
+    ReadWriteConsistencyControl.resetThreadReadPoint(region.getRWCC());
+    assertEquals(families.length -1,
+        ((RegionScanner)is).storeHeap.getHeap().size());
+  }
+
+  /**
+   * This method tests https://issues.apache.org/jira/browse/HBASE-2516.
+   */
+  public void testGetScanner_WithRegionClosed() {
+    byte[] tableName = Bytes.toBytes("testtable");
+    byte[] fam1 = Bytes.toBytes("fam1");
+    byte[] fam2 = Bytes.toBytes("fam2");
+
+    byte[][] families = {fam1, fam2};
+
+    //Setting up region
+    String method = this.getName();
+    try {
+      initHRegion(tableName, method, families);
+    } catch (IOException e) {
+      e.printStackTrace();
+      fail("Got IOException during initHRegion, " + e.getMessage());
+    }
+    region.closed.set(true);
+    try {
+      region.getScanner(null);
+      fail("Expected to get an exception during getScanner on a region that is closed");
+    } catch (org.apache.hadoop.hbase.NotServingRegionException e) {
+      //this is the correct exception that is expected
+    } catch (IOException e) {
+      fail("Got wrong type of exception - should be a NotServingRegionException, but was an IOException: "
+              + e.getMessage());
+    }
+  }
+
+  public void testRegionScanner_Next() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] row2 = Bytes.toBytes("row2");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] fam2 = Bytes.toBytes("fam2");
+    byte [] fam3 = Bytes.toBytes("fam3");
+    byte [] fam4 = Bytes.toBytes("fam4");
+
+    byte [][] families = {fam1, fam2, fam3, fam4};
+    long ts = System.currentTimeMillis();
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Putting data in Region
+    Put put = null;
+    put = new Put(row1);
+    put.add(fam1, null, ts, null);
+    put.add(fam2, null, ts, null);
+    put.add(fam3, null, ts, null);
+    put.add(fam4, null, ts, null);
+    region.put(put);
+
+    put = new Put(row2);
+    put.add(fam1, null, ts, null);
+    put.add(fam2, null, ts, null);
+    put.add(fam3, null, ts, null);
+    put.add(fam4, null, ts, null);
+    region.put(put);
+
+    Scan scan = new Scan();
+    scan.addFamily(fam2);
+    scan.addFamily(fam4);
+    InternalScanner is = region.getScanner(scan);
+
+    List<KeyValue> res = null;
+
+    //Result 1
+    List<KeyValue> expected1 = new ArrayList<KeyValue>();
+    expected1.add(new KeyValue(row1, fam2, null, ts, KeyValue.Type.Put, null));
+    expected1.add(new KeyValue(row1, fam4, null, ts, KeyValue.Type.Put, null));
+
+    res = new ArrayList<KeyValue>();
+    is.next(res);
+    for(int i=0; i<res.size(); i++) {
+      assertEquals(expected1.get(i), res.get(i));
+    }
+
+    //Result 2
+    List<KeyValue> expected2 = new ArrayList<KeyValue>();
+    expected2.add(new KeyValue(row2, fam2, null, ts, KeyValue.Type.Put, null));
+    expected2.add(new KeyValue(row2, fam4, null, ts, KeyValue.Type.Put, null));
+
+    res = new ArrayList<KeyValue>();
+    is.next(res);
+    for(int i=0; i<res.size(); i++) {
+      assertEquals(expected2.get(i), res.get(i));
+    }
+
+  }
+
+  public void testScanner_ExplicitColumns_FromMemStore_EnforceVersions()
+  throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] qf1 = Bytes.toBytes("qualifier1");
+    byte [] qf2 = Bytes.toBytes("qualifier2");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [][] families = {fam1};
+
+    long ts1 = System.currentTimeMillis();
+    long ts2 = ts1 + 1;
+    long ts3 = ts1 + 2;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Putting data in Region
+    Put put = null;
+    KeyValue kv13 = new KeyValue(row1, fam1, qf1, ts3, KeyValue.Type.Put, null);
+    KeyValue kv12 = new KeyValue(row1, fam1, qf1, ts2, KeyValue.Type.Put, null);
+    KeyValue kv11 = new KeyValue(row1, fam1, qf1, ts1, KeyValue.Type.Put, null);
+
+    KeyValue kv23 = new KeyValue(row1, fam1, qf2, ts3, KeyValue.Type.Put, null);
+    KeyValue kv22 = new KeyValue(row1, fam1, qf2, ts2, KeyValue.Type.Put, null);
+    KeyValue kv21 = new KeyValue(row1, fam1, qf2, ts1, KeyValue.Type.Put, null);
+
+    put = new Put(row1);
+    put.add(kv13);
+    put.add(kv12);
+    put.add(kv11);
+    put.add(kv23);
+    put.add(kv22);
+    put.add(kv21);
+    region.put(put);
+
+    //Expected
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(kv13);
+    expected.add(kv12);
+
+    Scan scan = new Scan(row1);
+    scan.addColumn(fam1, qf1);
+    scan.setMaxVersions(MAX_VERSIONS);
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    InternalScanner scanner = region.getScanner(scan);
+
+    boolean hasNext = scanner.next(actual);
+    assertEquals(false, hasNext);
+
+    //Verify result
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  public void testScanner_ExplicitColumns_FromFilesOnly_EnforceVersions()
+  throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] qf1 = Bytes.toBytes("qualifier1");
+    byte [] qf2 = Bytes.toBytes("qualifier2");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [][] families = {fam1};
+
+    long ts1 = 1; //System.currentTimeMillis();
+    long ts2 = ts1 + 1;
+    long ts3 = ts1 + 2;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Putting data in Region
+    Put put = null;
+    KeyValue kv13 = new KeyValue(row1, fam1, qf1, ts3, KeyValue.Type.Put, null);
+    KeyValue kv12 = new KeyValue(row1, fam1, qf1, ts2, KeyValue.Type.Put, null);
+    KeyValue kv11 = new KeyValue(row1, fam1, qf1, ts1, KeyValue.Type.Put, null);
+
+    KeyValue kv23 = new KeyValue(row1, fam1, qf2, ts3, KeyValue.Type.Put, null);
+    KeyValue kv22 = new KeyValue(row1, fam1, qf2, ts2, KeyValue.Type.Put, null);
+    KeyValue kv21 = new KeyValue(row1, fam1, qf2, ts1, KeyValue.Type.Put, null);
+
+    put = new Put(row1);
+    put.add(kv13);
+    put.add(kv12);
+    put.add(kv11);
+    put.add(kv23);
+    put.add(kv22);
+    put.add(kv21);
+    region.put(put);
+    region.flushcache();
+
+    //Expected
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(kv13);
+    expected.add(kv12);
+    expected.add(kv23);
+    expected.add(kv22);
+
+    Scan scan = new Scan(row1);
+    scan.addColumn(fam1, qf1);
+    scan.addColumn(fam1, qf2);
+    scan.setMaxVersions(MAX_VERSIONS);
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    InternalScanner scanner = region.getScanner(scan);
+
+    boolean hasNext = scanner.next(actual);
+    assertEquals(false, hasNext);
+
+    //Verify result
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  public void testScanner_ExplicitColumns_FromMemStoreAndFiles_EnforceVersions()
+  throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [][] families = {fam1};
+    byte [] qf1 = Bytes.toBytes("qualifier1");
+    byte [] qf2 = Bytes.toBytes("qualifier2");
+
+    long ts1 = 1;
+    long ts2 = ts1 + 1;
+    long ts3 = ts1 + 2;
+    long ts4 = ts1 + 3;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Putting data in Region
+    KeyValue kv14 = new KeyValue(row1, fam1, qf1, ts4, KeyValue.Type.Put, null);
+    KeyValue kv13 = new KeyValue(row1, fam1, qf1, ts3, KeyValue.Type.Put, null);
+    KeyValue kv12 = new KeyValue(row1, fam1, qf1, ts2, KeyValue.Type.Put, null);
+    KeyValue kv11 = new KeyValue(row1, fam1, qf1, ts1, KeyValue.Type.Put, null);
+
+    KeyValue kv24 = new KeyValue(row1, fam1, qf2, ts4, KeyValue.Type.Put, null);
+    KeyValue kv23 = new KeyValue(row1, fam1, qf2, ts3, KeyValue.Type.Put, null);
+    KeyValue kv22 = new KeyValue(row1, fam1, qf2, ts2, KeyValue.Type.Put, null);
+    KeyValue kv21 = new KeyValue(row1, fam1, qf2, ts1, KeyValue.Type.Put, null);
+
+    Put put = null;
+    put = new Put(row1);
+    put.add(kv14);
+    put.add(kv24);
+    region.put(put);
+    region.flushcache();
+
+    put = new Put(row1);
+    put.add(kv23);
+    put.add(kv13);
+    region.put(put);
+    region.flushcache();
+
+    put = new Put(row1);
+    put.add(kv22);
+    put.add(kv12);
+    region.put(put);
+    region.flushcache();
+
+    put = new Put(row1);
+    put.add(kv21);
+    put.add(kv11);
+    region.put(put);
+
+    //Expected
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(kv14);
+    expected.add(kv13);
+    expected.add(kv12);
+    expected.add(kv24);
+    expected.add(kv23);
+    expected.add(kv22);
+
+    Scan scan = new Scan(row1);
+    scan.addColumn(fam1, qf1);
+    scan.addColumn(fam1, qf2);
+    int versions = 3;
+    scan.setMaxVersions(versions);
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    InternalScanner scanner = region.getScanner(scan);
+
+    boolean hasNext = scanner.next(actual);
+    assertEquals(false, hasNext);
+
+    //Verify result
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  public void testScanner_Wildcard_FromMemStore_EnforceVersions()
+  throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] qf1 = Bytes.toBytes("qualifier1");
+    byte [] qf2 = Bytes.toBytes("qualifier2");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [][] families = {fam1};
+
+    long ts1 = System.currentTimeMillis();
+    long ts2 = ts1 + 1;
+    long ts3 = ts1 + 2;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, families);
+
+    //Putting data in Region
+    Put put = null;
+    KeyValue kv13 = new KeyValue(row1, fam1, qf1, ts3, KeyValue.Type.Put, null);
+    KeyValue kv12 = new KeyValue(row1, fam1, qf1, ts2, KeyValue.Type.Put, null);
+    KeyValue kv11 = new KeyValue(row1, fam1, qf1, ts1, KeyValue.Type.Put, null);
+
+    KeyValue kv23 = new KeyValue(row1, fam1, qf2, ts3, KeyValue.Type.Put, null);
+    KeyValue kv22 = new KeyValue(row1, fam1, qf2, ts2, KeyValue.Type.Put, null);
+    KeyValue kv21 = new KeyValue(row1, fam1, qf2, ts1, KeyValue.Type.Put, null);
+
+    put = new Put(row1);
+    put.add(kv13);
+    put.add(kv12);
+    put.add(kv11);
+    put.add(kv23);
+    put.add(kv22);
+    put.add(kv21);
+    region.put(put);
+
+    //Expected
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(kv13);
+    expected.add(kv12);
+    expected.add(kv23);
+    expected.add(kv22);
+
+    Scan scan = new Scan(row1);
+    scan.addFamily(fam1);
+    scan.setMaxVersions(MAX_VERSIONS);
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    InternalScanner scanner = region.getScanner(scan);
+
+    boolean hasNext = scanner.next(actual);
+    assertEquals(false, hasNext);
+
+    //Verify result
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  public void testScanner_Wildcard_FromFilesOnly_EnforceVersions()
+  throws IOException{
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] qf1 = Bytes.toBytes("qualifier1");
+    byte [] qf2 = Bytes.toBytes("qualifier2");
+    byte [] fam1 = Bytes.toBytes("fam1");
+
+    long ts1 = 1; //System.currentTimeMillis();
+    long ts2 = ts1 + 1;
+    long ts3 = ts1 + 2;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    //Putting data in Region
+    Put put = null;
+    KeyValue kv13 = new KeyValue(row1, fam1, qf1, ts3, KeyValue.Type.Put, null);
+    KeyValue kv12 = new KeyValue(row1, fam1, qf1, ts2, KeyValue.Type.Put, null);
+    KeyValue kv11 = new KeyValue(row1, fam1, qf1, ts1, KeyValue.Type.Put, null);
+
+    KeyValue kv23 = new KeyValue(row1, fam1, qf2, ts3, KeyValue.Type.Put, null);
+    KeyValue kv22 = new KeyValue(row1, fam1, qf2, ts2, KeyValue.Type.Put, null);
+    KeyValue kv21 = new KeyValue(row1, fam1, qf2, ts1, KeyValue.Type.Put, null);
+
+    put = new Put(row1);
+    put.add(kv13);
+    put.add(kv12);
+    put.add(kv11);
+    put.add(kv23);
+    put.add(kv22);
+    put.add(kv21);
+    region.put(put);
+    region.flushcache();
+
+    //Expected
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(kv13);
+    expected.add(kv12);
+    expected.add(kv23);
+    expected.add(kv22);
+
+    Scan scan = new Scan(row1);
+    scan.addFamily(fam1);
+    scan.setMaxVersions(MAX_VERSIONS);
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    InternalScanner scanner = region.getScanner(scan);
+
+    boolean hasNext = scanner.next(actual);
+    assertEquals(false, hasNext);
+
+    //Verify result
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  public void testScanner_StopRow1542() throws IOException {
+    byte [] tableName = Bytes.toBytes("test_table");
+    byte [] family = Bytes.toBytes("testFamily");
+    initHRegion(tableName, getName(), family);
+
+    byte [] row1 = Bytes.toBytes("row111");
+    byte [] row2 = Bytes.toBytes("row222");
+    byte [] row3 = Bytes.toBytes("row333");
+    byte [] row4 = Bytes.toBytes("row444");
+    byte [] row5 = Bytes.toBytes("row555");
+
+    byte [] col1 = Bytes.toBytes("Pub111");
+    byte [] col2 = Bytes.toBytes("Pub222");
+
+
+    Put put = new Put(row1);
+    put.add(family, col1, Bytes.toBytes(10L));
+    region.put(put);
+
+    put = new Put(row2);
+    put.add(family, col1, Bytes.toBytes(15L));
+    region.put(put);
+
+    put = new Put(row3);
+    put.add(family, col2, Bytes.toBytes(20L));
+    region.put(put);
+
+    put = new Put(row4);
+    put.add(family, col2, Bytes.toBytes(30L));
+    region.put(put);
+
+    put = new Put(row5);
+    put.add(family, col1, Bytes.toBytes(40L));
+    region.put(put);
+
+    Scan scan = new Scan(row3, row4);
+    scan.setMaxVersions();
+    scan.addColumn(family, col1);
+    InternalScanner s = region.getScanner(scan);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(false, s.next(results));
+    assertEquals(0, results.size());
+  }
+
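+  // The following tests exercise incrementColumnValue (ICV): update-in-place,
+  // snapshot interaction, concurrent flush, memstore heap size, and timestamp
+  // clobbering (HBASE-3235).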
+  public void testIncrementColumnValue_UpdatingInPlace() throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 1L;
+    long amount = 3L;
+
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    region.put(put);
+
+    long result = region.incrementColumnValue(row, fam1, qual1, amount, true);
+
+    assertEquals(value+amount, result);
+
+    Store store = region.getStore(fam1);
+    // ICV removes any extra values floating around in there.
+    assertEquals(1, store.memstore.kvset.size());
+    assertTrue(store.memstore.snapshot.isEmpty());
+
+    assertICV(row, fam1, qual1, value+amount);
+  }
+
+  public void testIncrementColumnValue_BumpSnapshot() throws IOException {
+    ManualEnvironmentEdge mee = new ManualEnvironmentEdge();
+    EnvironmentEdgeManagerTestHelper.injectEdge(mee);
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 42L;
+    long incr = 44L;
+
+    // first put something in kvset, then snapshot it.
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    region.put(put);
+
+    // get the store in question:
+    Store s = region.getStore(fam1);
+    s.snapshot(); //bam
+
+    // now increment:
+    long newVal = region.incrementColumnValue(row, fam1, qual1,
+        incr, false);
+
+    assertEquals(value+incr, newVal);
+
+    // get both versions:
+    Get get = new Get(row);
+    get.setMaxVersions();
+    get.addColumn(fam1,qual1);
+
+    Result r = region.get(get, null);
+    assertEquals(2, r.size());
+    KeyValue first = r.raw()[0];
+    KeyValue second = r.raw()[1];
+
+    assertTrue("ICV failed to upgrade timestamp",
+        first.getTimestamp() != second.getTimestamp());
+  }
+
+  public void testIncrementColumnValue_ConcurrentFlush() throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 1L;
+    long amount = 3L;
+
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    region.put(put);
+
+    // now increment during a flush
+    Thread t = new Thread() {
+      public void run() {
+        try {
+          region.flushcache();
+        } catch (IOException e) {
+          LOG.info("test ICV, got IOE during flushcache()");
+        }
+      }
+    };
+    t.start();
+    long r = region.incrementColumnValue(row, fam1, qual1, amount, true);
+    assertEquals(value+amount, r);
+
+    // this also asserts there is only 1 KeyValue in the set.
+    assertICV(row, fam1, qual1, value+amount);
+  }
+
+  public void testIncrementColumnValue_heapSize() throws IOException {
+    EnvironmentEdgeManagerTestHelper.injectEdge(new IncrementingEnvironmentEdge());
+
+    initHRegion(tableName, getName(), fam1);
+
+    long byAmount = 1L;
+    long size;
+
+    for( int i = 0; i < 1000 ; i++) {
+      region.incrementColumnValue(row, fam1, qual1, byAmount, true);
+
+      size = region.memstoreSize.get();
+      assertTrue("memstore size: " + size, size >= 0);
+    }
+  }
+
+  public void testIncrementColumnValue_UpdatingInPlace_Negative()
+    throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 3L;
+    long amount = -1L;
+
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    region.put(put);
+
+    long result = region.incrementColumnValue(row, fam1, qual1, amount, true);
+    assertEquals(value+amount, result);
+
+    assertICV(row, fam1, qual1, value+amount);
+  }
+
+  public void testIncrementColumnValue_AddingNew()
+    throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 1L;
+    long amount = 3L;
+
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    put.add(fam1, qual2, Bytes.toBytes(value));
+    region.put(put);
+
+    long result = region.incrementColumnValue(row, fam1, qual3, amount, true);
+    assertEquals(amount, result);
+
+    Get get = new Get(row);
+    get.addColumn(fam1, qual3);
+    Result rr = region.get(get, null);
+    assertEquals(1, rr.size());
+
+    // ensure none of the other cols were incremented.
+    assertICV(row, fam1, qual1, value);
+    assertICV(row, fam1, qual2, value);
+    assertICV(row, fam1, qual3, amount);
+  }
+
+  public void testIncrementColumnValue_UpdatingFromSF() throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 1L;
+    long amount = 3L;
+
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    put.add(fam1, qual2, Bytes.toBytes(value));
+    region.put(put);
+
+    // flush to disk.
+    region.flushcache();
+
+    Store store = region.getStore(fam1);
+    assertEquals(0, store.memstore.kvset.size());
+
+    long r = region.incrementColumnValue(row, fam1, qual1, amount, true);
+    assertEquals(value+amount, r);
+
+    assertICV(row, fam1, qual1, value+amount);
+  }
+
+  public void testIncrementColumnValue_AddingNewAfterSFCheck()
+    throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 1L;
+    long amount = 3L;
+
+    Put put = new Put(row);
+    put.add(fam1, qual1, Bytes.toBytes(value));
+    put.add(fam1, qual2, Bytes.toBytes(value));
+    region.put(put);
+    region.flushcache();
+
+    Store store = region.getStore(fam1);
+    assertEquals(0, store.memstore.kvset.size());
+
+    long r = region.incrementColumnValue(row, fam1, qual3, amount, true);
+    assertEquals(amount, r);
+
+    assertICV(row, fam1, qual3, amount);
+
+    region.flushcache();
+
+    // ensure that this gets to disk.
+    assertICV(row, fam1, qual3, amount);
+  }
+
+  /**
+   * Added for HBASE-3235.
+   *
+   * When the initial put and an ICV update arrived with the same timestamp,
+   * the initial Put KV was being skipped during the iteration over matching KVs
+   * in {@link MemStore#upsert(KeyValue)}, so the update-in-place did not happen
+   * and the ICV result effectively disappeared.
+   * @throws IOException
+   */
+  public void testIncrementColumnValue_UpdatingInPlace_TimestampClobber() throws IOException {
+    initHRegion(tableName, getName(), fam1);
+
+    long value = 1L;
+    long amount = 3L;
+    long now = EnvironmentEdgeManager.currentTimeMillis();
+    ManualEnvironmentEdge mock = new ManualEnvironmentEdge();
+    mock.setValue(now);
+    EnvironmentEdgeManagerTestHelper.injectEdge(mock);
+
+    // verify we catch an ICV on a put with the same timestamp
+    Put put = new Put(row);
+    put.add(fam1, qual1, now, Bytes.toBytes(value));
+    region.put(put);
+
+    long result = region.incrementColumnValue(row, fam1, qual1, amount, true);
+
+    assertEquals(value+amount, result);
+
+    Store store = region.getStore(fam1);
+    // ICV should update the existing Put with the same timestamp
+    assertEquals(1, store.memstore.kvset.size());
+    assertTrue(store.memstore.snapshot.isEmpty());
+
+    assertICV(row, fam1, qual1, value+amount);
+
+    // verify we catch an ICV even when the put ts > now
+    put = new Put(row);
+    put.add(fam1, qual2, now+1, Bytes.toBytes(value));
+    region.put(put);
+
+    result = region.incrementColumnValue(row, fam1, qual2, amount, true);
+
+    assertEquals(value+amount, result);
+
+    store = region.getStore(fam1);
+    // ICV should update the existing Put in place even though its timestamp is ahead of 'now'
+    assertEquals(2, store.memstore.kvset.size());
+    assertTrue(store.memstore.snapshot.isEmpty());
+
+    assertICV(row, fam1, qual2, value+amount);
+    EnvironmentEdgeManagerTestHelper.reset();
+  }
+
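+  // Reads the single counter cell back with a Get and asserts it decodes to the
+  // expected long value.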
+  private void assertICV(byte [] row,
+                         byte [] family,
+                         byte[] qualifier,
+                         long amount) throws IOException {
+    // run a get and see?
+    Get get = new Get(row);
+    get.addColumn(family, qualifier);
+    Result result = region.get(get, null);
+    assertEquals(1, result.size());
+
+    KeyValue kv = result.raw()[0];
+    long r = Bytes.toLong(kv.getValue());
+    assertEquals(amount, r);
+  }
+
+
+
+  public void testScanner_Wildcard_FromMemStoreAndFiles_EnforceVersions()
+  throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] row1 = Bytes.toBytes("row1");
+    byte [] fam1 = Bytes.toBytes("fam1");
+    byte [] qf1 = Bytes.toBytes("qualifier1");
+    byte [] qf2 = Bytes.toBytes("qualifier2");
+
+    long ts1 = 1;
+    long ts2 = ts1 + 1;
+    long ts3 = ts1 + 2;
+    long ts4 = ts1 + 3;
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, fam1);
+
+    //Putting data in Region
+    KeyValue kv14 = new KeyValue(row1, fam1, qf1, ts4, KeyValue.Type.Put, null);
+    KeyValue kv13 = new KeyValue(row1, fam1, qf1, ts3, KeyValue.Type.Put, null);
+    KeyValue kv12 = new KeyValue(row1, fam1, qf1, ts2, KeyValue.Type.Put, null);
+    KeyValue kv11 = new KeyValue(row1, fam1, qf1, ts1, KeyValue.Type.Put, null);
+
+    KeyValue kv24 = new KeyValue(row1, fam1, qf2, ts4, KeyValue.Type.Put, null);
+    KeyValue kv23 = new KeyValue(row1, fam1, qf2, ts3, KeyValue.Type.Put, null);
+    KeyValue kv22 = new KeyValue(row1, fam1, qf2, ts2, KeyValue.Type.Put, null);
+    KeyValue kv21 = new KeyValue(row1, fam1, qf2, ts1, KeyValue.Type.Put, null);
+
+    Put put = null;
+    put = new Put(row1);
+    put.add(kv14);
+    put.add(kv24);
+    region.put(put);
+    region.flushcache();
+
+    put = new Put(row1);
+    put.add(kv23);
+    put.add(kv13);
+    region.put(put);
+    region.flushcache();
+
+    put = new Put(row1);
+    put.add(kv22);
+    put.add(kv12);
+    region.put(put);
+    region.flushcache();
+
+    put = new Put(row1);
+    put.add(kv21);
+    put.add(kv11);
+    region.put(put);
+
+    //Expected
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(kv14);
+    expected.add(kv13);
+    expected.add(kv12);
+    expected.add(kv24);
+    expected.add(kv23);
+    expected.add(kv22);
+
+    Scan scan = new Scan(row1);
+    int versions = 3;
+    scan.setMaxVersions(versions);
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    InternalScanner scanner = region.getScanner(scan);
+
+    boolean hasNext = scanner.next(actual);
+    assertEquals(false, hasNext);
+
+    //Verify result
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Split test
+  //////////////////////////////////////////////////////////////////////////////
+  /**
+   * Splits twice and verifies getting from each of the split regions.
+   * @throws Exception
+   */
+  public void testBasicSplit() throws Exception {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [][] families = {fam1, fam2, fam3};
+
+    Configuration hc = initSplit();
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, hc, families);
+
+    try {
+      LOG.info("" + addContent(region, fam3));
+      region.flushcache();
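+      // compactStores() returns the mid-key to use as the split row.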
+      byte [] splitRow = region.compactStores();
+      assertNotNull(splitRow);
+      LOG.info("SplitRow: " + Bytes.toString(splitRow));
+      HRegion [] regions = splitRegion(region, splitRow);
+      try {
+        // Need to open the regions.
+        // TODO: Add an 'open' to HRegion... don't do open by constructing
+        // instance.
+        for (int i = 0; i < regions.length; i++) {
+          regions[i] = openClosedRegion(regions[i]);
+        }
+        // Assert can get rows out of new regions. Should be able to get first
+        // row from first region and the midkey from second region.
+        assertGet(regions[0], fam3, Bytes.toBytes(START_KEY));
+        assertGet(regions[1], fam3, splitRow);
+        // Test I can get scanner and that it starts at right place.
+        assertScan(regions[0], fam3,
+            Bytes.toBytes(START_KEY));
+        assertScan(regions[1], fam3, splitRow);
+        // Now prove can't split regions that have references.
+        for (int i = 0; i < regions.length; i++) {
+          // Add enough data to this region that we create a store file larger
+          // than one of our unsplittable references.
+          for (int j = 0; j < 2; j++) {
+            addContent(regions[i], fam3);
+          }
+          addContent(regions[i], fam2);
+          addContent(regions[i], fam1);
+          regions[i].flushcache();
+        }
+
+        byte [][] midkeys = new byte [regions.length][];
+        // To make the regions splittable, force compaction.
+        for (int i = 0; i < regions.length; i++) {
+          midkeys[i] = regions[i].compactStores();
+        }
+
+        TreeMap<String, HRegion> sortedMap = new TreeMap<String, HRegion>();
+        // Split these two daughter regions so we end up with 4 regions. They
+        // will split because of the data added above.
+        for (int i = 0; i < regions.length; i++) {
+          HRegion[] rs = null;
+          if (midkeys[i] != null) {
+            rs = splitRegion(regions[i], midkeys[i]);
+            for (int j = 0; j < rs.length; j++) {
+              sortedMap.put(Bytes.toString(rs[j].getRegionName()),
+                openClosedRegion(rs[j]));
+            }
+          }
+        }
+        LOG.info("Made 4 regions");
+        // The splits should have been even. Test I can get some arbitrary row
+        // out of each.
+        int interval = (LAST_CHAR - FIRST_CHAR) / 3;
+        byte[] b = Bytes.toBytes(START_KEY);
+        for (HRegion r : sortedMap.values()) {
+          assertGet(r, fam3, b);
+          b[0] += interval;
+        }
+      } finally {
+        for (int i = 0; i < regions.length; i++) {
+          try {
+            regions[i].close();
+          } catch (IOException e) {
+            // Ignore.
+          }
+        }
+      }
+    } finally {
+      if (region != null) {
+        region.close();
+        region.getLog().closeAndDelete();
+      }
+    }
+  }
+
+  public void testSplitRegion() throws IOException {
+    byte [] tableName = Bytes.toBytes("testtable");
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    Configuration hc = initSplit();
+    int numRows = 10;
+    byte [][] families = {fam1, fam3};
+
+    //Setting up region
+    String method = this.getName();
+    initHRegion(tableName, method, hc, families);
+
+    //Put data in region
+    int startRow = 100;
+    putData(startRow, numRows, qualifier, families);
+    int splitRow = startRow + numRows;
+    putData(splitRow, numRows, qualifier, families);
+    region.flushcache();
+
+    HRegion [] regions = null;
+    try {
+      regions = splitRegion(region, Bytes.toBytes("" + splitRow));
+      //Opening the regions returned.
+      for (int i = 0; i < regions.length; i++) {
+        regions[i] = openClosedRegion(regions[i]);
+      }
+      //Verifying that the region has been split
+      assertEquals(2, regions.length);
+
+      //Verifying that all data is still there and that data is in the right
+      //place
+      verifyData(regions[0], startRow, numRows, qualifier, families);
+      verifyData(regions[1], splitRow, numRows, qualifier, families);
+
+    } finally {
+      if (region != null) {
+        region.close();
+        region.getLog().closeAndDelete();
+      }
+    }
+  }
+
+
+  /**
+   * Flushes the cache in a thread while scanning. The test verifies that the
+   * scan is coherent - i.e. the returned results always come from the same or
+   * a later update than the previous results.
+   * @throws IOException scan / compact
+   * @throws InterruptedException thread join
+   */
+  public void testFlushCacheWhileScanning() throws IOException, InterruptedException {
+    byte[] tableName = Bytes.toBytes("testFlushCacheWhileScanning");
+    byte[] family = Bytes.toBytes("family");
+    int numRows = 1000;
+    int flushAndScanInterval = 10;
+    int compactInterval = 10 * flushAndScanInterval;
+
+    String method = "testFlushCacheWhileScanning";
+    initHRegion(tableName,method, family);
+    FlushThread flushThread = new FlushThread();
+    flushThread.start();
+
+    Scan scan = new Scan();
+    scan.addFamily(family);
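+    // Only rows whose family:qual1 value equals 5L pass the filter, i.e. one
+    // row in every ten written below.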
+    scan.setFilter(new SingleColumnValueFilter(family, qual1,
+      CompareFilter.CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes(5L))));
+
+    int expectedCount = 0;
+    List<KeyValue> res = new ArrayList<KeyValue>();
+
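+    // Alternate between flushing before and after draining the scanner so
+    // both orderings of flush vs. scan are exercised.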
+    boolean toggle=true;
+    for (long i = 0; i < numRows; i++) {
+      Put put = new Put(Bytes.toBytes(i));
+      put.add(family, qual1, Bytes.toBytes(i % 10));
+      region.put(put);
+
+      if (i != 0 && i % compactInterval == 0) {
+        //System.out.println("iteration = " + i);
+        region.compactStores(true);
+      }
+
+      if (i % 10 == 5L) {
+        expectedCount++;
+      }
+
+      if (i != 0 && i % flushAndScanInterval == 0) {
+        res.clear();
+        InternalScanner scanner = region.getScanner(scan);
+        if (toggle) {
+          flushThread.flush();
+        }
+        while (scanner.next(res)) ;
+        if (!toggle) {
+          flushThread.flush();
+        }
+        assertEquals("i=" + i, expectedCount, res.size());
+        toggle = !toggle;
+      }
+    }
+
+    flushThread.done();
+    flushThread.join();
+    flushThread.checkNoError();
+  }
+
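+  /**
+   * Helper thread that flushes the region cache whenever {@link #flush()} is
+   * called; {@link #done()} interrupts the wait and ends the loop.
+   */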
+  protected class FlushThread extends Thread {
+    private volatile boolean done;
+    private Throwable error = null;
+
+    public void done() {
+      done = true;
+      synchronized (this) {
+        interrupt();
+      }
+    }
+
+    public void checkNoError() {
+      if (error != null) {
+        assertNull(error);
+      }
+    }
+
+    @Override
+    public void run() {
+      done = false;
+      while (!done) {
+        synchronized (this) {
+          try {
+            wait();
+          } catch (InterruptedException ignored) {
+            if (done) {
+              break;
+            }
+          }
+        }
+        try {
+          region.flushcache();
+        } catch (IOException e) {
+          if (!done) {
+            LOG.error("Error while flushing cache", e);
+            error = e;
+          }
+          break;
+        }
+      }
+
+    }
+
+    public void flush() {
+      synchronized (this) {
+        notify();
+      }
+
+    }
+  }
+
+  /**
+   * Writes very wide records and scans for the latest row every time.
+   * Flushes and compacts the region every now and then to keep things
+   * realistic.
+   *
+   * @throws IOException          by flush / scan / compaction
+   * @throws InterruptedException when joining threads
+   */
+  public void testWritesWhileScanning()
+    throws IOException, InterruptedException {
+    byte[] tableName = Bytes.toBytes("testWritesWhileScanning");
+    int testCount = 100;
+    int numRows = 1;
+    int numFamilies = 10;
+    int numQualifiers = 100;
+    int flushInterval = 7;
+    int compactInterval = 5 * flushInterval;
+    byte[][] families = new byte[numFamilies][];
+    for (int i = 0; i < numFamilies; i++) {
+      families[i] = Bytes.toBytes("family" + i);
+    }
+    byte[][] qualifiers = new byte[numQualifiers][];
+    for (int i = 0; i < numQualifiers; i++) {
+      qualifiers[i] = Bytes.toBytes("qual" + i);
+    }
+
+    String method = "testWritesWhileScanning";
+    initHRegion(tableName, method, families);
+    PutThread putThread = new PutThread(numRows, families, qualifiers);
+    putThread.start();
+    putThread.waitForFirstPut();
+
+    FlushThread flushThread = new FlushThread();
+    flushThread.start();
+
+    Scan scan = new Scan(Bytes.toBytes("row0"), Bytes.toBytes("row1"));
+//    scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL,
+//      new BinaryComparator(Bytes.toBytes("row0"))));
+
+    int expectedCount = numFamilies * numQualifiers;
+    List<KeyValue> res = new ArrayList<KeyValue>();
+
+    long prevTimestamp = 0L;
+    for (int i = 0; i < testCount; i++) {
+
+      if (i != 0 && i % compactInterval == 0) {
+        region.compactStores(true);
+      }
+
+      if (i != 0 && i % flushInterval == 0) {
+        //System.out.println("flush scan iteration = " + i);
+        flushThread.flush();
+      }
+
+      boolean previousEmpty = res.isEmpty();
+      res.clear();
+      InternalScanner scanner = region.getScanner(scan);
+      while (scanner.next(res)) ;
+      if (!res.isEmpty() || !previousEmpty || i > compactInterval) {
+        assertEquals("i=" + i, expectedCount, res.size());
+        long timestamp = res.get(0).getTimestamp();
+        assertTrue("Timestamps were broken: " + timestamp + " prev: " + prevTimestamp,
+            timestamp >= prevTimestamp);
+        prevTimestamp = timestamp;
+      }
+    }
+
+    putThread.done();
+
+    region.flushcache();
+
+    putThread.join();
+    putThread.checkNoError();
+
+    flushThread.done();
+    flushThread.join();
+    flushThread.checkNoError();
+  }
+
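+  /**
+   * Helper thread that repeatedly puts wide rows (every family/qualifier
+   * combination in one Put) and periodically deletes older versions; any
+   * error is recorded and surfaced via {@link #checkNoError()}.
+   */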
+  protected class PutThread extends Thread {
+    private volatile boolean done;
+    private volatile int numPutsFinished = 0;
+
+    private Throwable error = null;
+    private int numRows;
+    private byte[][] families;
+    private byte[][] qualifiers;
+
+    private PutThread(int numRows, byte[][] families,
+      byte[][] qualifiers) {
+      this.numRows = numRows;
+      this.families = families;
+      this.qualifiers = qualifiers;
+    }
+
+    /**
+     * Block until this thread has put at least one row.
+     */
+    public void waitForFirstPut() throws InterruptedException {
+      // wait until put thread actually puts some data
+      while (numPutsFinished == 0) {
+        checkNoError();
+        Thread.sleep(50);
+      }
+    }
+
+    public void done() {
+      done = true;
+      synchronized (this) {
+        interrupt();
+      }
+    }
+
+    public void checkNoError() {
+      if (error != null) {
+        assertNull(error);
+      }
+    }
+
+    @Override
+    public void run() {
+      done = false;
+      while (!done) {
+        try {
+          for (int r = 0; r < numRows; r++) {
+            byte[] row = Bytes.toBytes("row" + r);
+            Put put = new Put(row);
+            for (byte[] family : families) {
+              for (byte[] qualifier : qualifiers) {
+                put.add(family, qualifier, (long) numPutsFinished,
+                    Bytes.toBytes(numPutsFinished));
+              }
+            }
+//            System.out.println("Putting of kvsetsize=" + put.size());
+            region.put(put);
+            numPutsFinished++;
+            if (numPutsFinished > 0 && numPutsFinished % 47 == 0) {
+              System.out.println("put iteration = " + numPutsFinished);
+              Delete delete = new Delete(row, (long)numPutsFinished-30, null);
+              region.delete(delete, null, true);
+            }
+            numPutsFinished++;
+          }
+        } catch (IOException e) {
+          LOG.error("error while putting records", e);
+          error = e;
+          break;
+        }
+      }
+
+    }
+
+  }
+
+
+  /**
+   * Writes very wide records and gets the latest row every time.
+   * Flushes and compacts the region every now and then to keep things
+   * realistic.
+   *
+   * @throws IOException          by flush / scan / compaction
+   * @throws InterruptedException when joining threads
+   */
+  public void testWritesWhileGetting()
+    throws IOException, InterruptedException {
+    byte[] tableName = Bytes.toBytes("testWritesWhileGetting");
+    int testCount = 100;
+    int numRows = 1;
+    int numFamilies = 10;
+    int numQualifiers = 100;
+    int flushInterval = 10;
+    int compactInterval = 10 * flushInterval;
+    byte[][] families = new byte[numFamilies][];
+    for (int i = 0; i < numFamilies; i++) {
+      families[i] = Bytes.toBytes("family" + i);
+    }
+    byte[][] qualifiers = new byte[numQualifiers][];
+    for (int i = 0; i < numQualifiers; i++) {
+      qualifiers[i] = Bytes.toBytes("qual" + i);
+    }
+
+    String method = "testWritesWhileGetting";
+    initHRegion(tableName, method, families);
+    PutThread putThread = new PutThread(numRows, families, qualifiers);
+    putThread.start();
+    putThread.waitForFirstPut();
+
+    FlushThread flushThread = new FlushThread();
+    flushThread.start();
+
+    Get get = new Get(Bytes.toBytes("row0"));
+    Result result = null;
+
+    int expectedCount = numFamilies * numQualifiers;
+
+    long prevTimestamp = 0L;
+    for (int i = 0; i < testCount; i++) {
+
+      if (i != 0 && i % compactInterval == 0) {
+        region.compactStores(true);
+      }
+
+      if (i != 0 && i % flushInterval == 0) {
+        //System.out.println("iteration = " + i);
+        flushThread.flush();
+      }
+
+      boolean previousEmpty = result == null || result.isEmpty();
+      result = region.get(get, null);
+      if (!result.isEmpty() || !previousEmpty || i > compactInterval) {
+        assertEquals("i=" + i, expectedCount, result.size());
+        // TODO this was removed, now what dangit?!
+        // search looking for the qualifier in question?
+        long timestamp = 0;
+        for (KeyValue kv : result.sorted()) {
+          if (Bytes.equals(kv.getFamily(), families[0])
+            && Bytes.equals(kv.getQualifier(), qualifiers[0])) {
+            timestamp = kv.getTimestamp();
+          }
+        }
+        assertTrue(timestamp >= prevTimestamp);
+        prevTimestamp = timestamp;
+
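+        // All cells of the row should carry the value of a single put, i.e.
+        // the get must not observe a partially applied row update.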
+        byte [] gotValue = null;
+        for (KeyValue kv : result.raw()) {
+          byte [] thisValue = kv.getValue();
+          if (gotValue != null) {
+            assertEquals(gotValue, thisValue);
+          }
+          gotValue = thisValue;
+        }
+      }
+    }
+
+    putThread.done();
+
+    region.flushcache();
+
+    putThread.join();
+    putThread.checkNoError();
+
+    flushThread.done();
+    flushThread.join();
+    flushThread.checkNoError();
+  }
+
+
+  public void testIndexesScanWithOneDeletedRow() throws IOException {
+    byte[] tableName = Bytes.toBytes("testIndexesScanWithOneDeletedRow");
+    byte[] family = Bytes.toBytes("family");
+
+    //Setting up region
+    String method = "testIndexesScanWithOneDeletedRow";
+    initHRegion(tableName, method, HBaseConfiguration.create(), family);
+
+    Put put = new Put(Bytes.toBytes(1L));
+    put.add(family, qual1, 1L, Bytes.toBytes(1L));
+    region.put(put);
+
+    region.flushcache();
+
+    Delete delete = new Delete(Bytes.toBytes(1L), 1L, null);
+    //delete.deleteColumn(family, qual1);
+    region.delete(delete, null, true);
+
+    put = new Put(Bytes.toBytes(2L));
+    put.add(family, qual1, 2L, Bytes.toBytes(2L));
+    region.put(put);
+
+    Scan idxScan = new Scan();
+    idxScan.addFamily(family);
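+    // Filter list: only rows whose family:qual1 value is between 0L and 3L
+    // (inclusive) should pass.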
+    idxScan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL,
+      Arrays.<Filter>asList(new SingleColumnValueFilter(family, qual1,
+        CompareFilter.CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes(0L))),
+        new SingleColumnValueFilter(family, qual1,
+          CompareFilter.CompareOp.LESS_OR_EQUAL,
+          new BinaryComparator(Bytes.toBytes(3L)))
+      )));
+    InternalScanner scanner = region.getScanner(idxScan);
+    List<KeyValue> res = new ArrayList<KeyValue>();
+
+    //long start = System.nanoTime();
+    while (scanner.next(res)) ;
+    //long end = System.nanoTime();
+    //System.out.println("memStoreEmpty=" + memStoreEmpty + ", time=" + (end - start)/1000000D);
+    assertEquals(1L, res.size());
+
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Bloom filter test
+  //////////////////////////////////////////////////////////////////////////////
+
+  public void testAllColumnsWithBloomFilter() throws IOException {
+    byte [] TABLE = Bytes.toBytes("testAllColumnsWithBloomFilter");
+    byte [] FAMILY = Bytes.toBytes("family");
+
+    //Create table
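+    // Column family configured with a row+col ("rowcol") bloom filter.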
+    HColumnDescriptor hcd = new HColumnDescriptor(FAMILY, Integer.MAX_VALUE,
+        HColumnDescriptor.DEFAULT_COMPRESSION,
+        HColumnDescriptor.DEFAULT_IN_MEMORY,
+        HColumnDescriptor.DEFAULT_BLOCKCACHE,
+        Integer.MAX_VALUE, HColumnDescriptor.DEFAULT_TTL,
+        "rowcol",
+        HColumnDescriptor.DEFAULT_REPLICATION_SCOPE);
+    HTableDescriptor htd = new HTableDescriptor(TABLE);
+    htd.addFamily(hcd);
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    Path path = new Path(DIR + "testAllColumnsWithBloomFilter");
+    region = HRegion.createHRegion(info, path, conf);
+
+    // For row:0, col:0: insert versions 1 through 4.
+    byte row[] = Bytes.toBytes("row:" + 0);
+    byte column[] = Bytes.toBytes("column:" + 0);
+    Put put = new Put(row);
+    for (long idx = 1; idx <= 4; idx++) {
+      put.add(FAMILY, column, idx, Bytes.toBytes("value-version-" + idx));
+    }
+    region.put(put);
+
+    //Flush
+    region.flushcache();
+
+    //Get rows
+    Get get = new Get(row);
+    get.setMaxVersions();
+    KeyValue[] kvs = region.get(get, null).raw();
+
+    //Check if rows are correct
+    assertEquals(4, kvs.length);
+    checkOneCell(kvs[0], FAMILY, 0, 0, 4);
+    checkOneCell(kvs[1], FAMILY, 0, 0, 3);
+    checkOneCell(kvs[2], FAMILY, 0, 0, 2);
+    checkOneCell(kvs[3], FAMILY, 0, 0, 1);
+  }
+
+  /**
+   * Testcase to cover bug-fix for HBASE-2823.
+   * Ensures correct delete when issuing delete row
+   * on columns with bloom filter set to row+col (BloomType.ROWCOL).
+   */
+  public void testDeleteRowWithBloomFilter() throws IOException {
+    byte [] tableName = Bytes.toBytes("testDeleteRowWithBloomFilter");
+    byte [] familyName = Bytes.toBytes("familyName");
+
+    // Create Table
+    HColumnDescriptor hcd = new HColumnDescriptor(familyName, Integer.MAX_VALUE,
+        HColumnDescriptor.DEFAULT_COMPRESSION, false, true,
+        HColumnDescriptor.DEFAULT_TTL, "rowcol");
+
+    HTableDescriptor htd = new HTableDescriptor(tableName);
+    htd.addFamily(hcd);
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    Path path = new Path(DIR + "TestDeleteRowWithBloomFilter");
+    region = HRegion.createHRegion(info, path, conf);
+
+    // Insert some data
+    byte row[] = Bytes.toBytes("row1");
+    byte col[] = Bytes.toBytes("col1");
+
+    Put put = new Put(row);
+    put.add(familyName, col, 1, Bytes.toBytes("SomeRandomValue"));
+    region.put(put);
+    region.flushcache();
+
+    Delete del = new Delete(row);
+    region.delete(del, null, true);
+    region.flushcache();
+
+    // Get remaining rows (should have none)
+    Get get = new Get(row);
+    get.addColumn(familyName, col);
+
+    KeyValue[] keyValues = region.get(get, null).raw();
+    assertTrue(keyValues.length == 0);
+  }
+
+  private void putData(int startRow, int numRows, byte [] qf,
+      byte [] ...families)
+  throws IOException {
+    for(int i=startRow; i<startRow+numRows; i++) {
+      Put put = new Put(Bytes.toBytes("" + i));
+      for(byte [] family : families) {
+        put.add(family, qf, null);
+      }
+      region.put(put);
+    }
+  }
+
+  private void verifyData(HRegion newReg, int startRow, int numRows, byte [] qf,
+      byte [] ... families)
+  throws IOException {
+    for(int i=startRow; i<startRow + numRows; i++) {
+      byte [] row = Bytes.toBytes("" + i);
+      Get get = new Get(row);
+      for(byte [] family : families) {
+        get.addColumn(family, qf);
+      }
+      Result result = newReg.get(get, null);
+      KeyValue [] raw = result.sorted();
+      assertEquals(families.length, result.size());
+      for(int j=0; j<families.length; j++) {
+        assertEquals(0, Bytes.compareTo(row, raw[j].getRow()));
+        assertEquals(0, Bytes.compareTo(families[j], raw[j].getFamily()));
+        assertEquals(0, Bytes.compareTo(qf, raw[j].getQualifier()));
+      }
+    }
+  }
+
+  private void assertGet(final HRegion r, final byte [] family, final byte [] k)
+  throws IOException {
+    // Now I have k, get values out and assert they are as expected.
+    Get get = new Get(k).addFamily(family).setMaxVersions();
+    KeyValue [] results = r.get(get, null).raw();
+    for (int j = 0; j < results.length; j++) {
+      byte [] tmp = results[j].getValue();
+      // Row should be equal to value every time.
+      assertTrue(Bytes.equals(k, tmp));
+    }
+  }
+
+  /*
+   * Assert that the first value in the passed region is <code>firstValue</code>.
+   * @param r region to scan
+   * @param fs column family to scan
+   * @param firstValue expected value of the first KeyValue
+   * @throws IOException
+   */
+  private void assertScan(final HRegion r, final byte [] fs,
+      final byte [] firstValue)
+  throws IOException {
+    byte [][] families = {fs};
+    Scan scan = new Scan();
+    for (int i = 0; i < families.length; i++) scan.addFamily(families[i]);
+    InternalScanner s = r.getScanner(scan);
+    try {
+      List<KeyValue> curVals = new ArrayList<KeyValue>();
+      boolean first = true;
+      OUTER_LOOP: while(s.next(curVals)) {
+        for (KeyValue kv: curVals) {
+          byte [] val = kv.getValue();
+          byte [] curval = val;
+          if (first) {
+            first = false;
+            assertTrue(Bytes.compareTo(curval, firstValue) == 0);
+          } else {
+            // Not asserting anything.  Might as well break.
+            break OUTER_LOOP;
+          }
+        }
+      }
+    } finally {
+      s.close();
+    }
+  }
+
+  private Configuration initSplit() {
+    Configuration conf = HBaseConfiguration.create();
+    // Always compact if there is more than one store file.
+    conf.setInt("hbase.hstore.compactionThreshold", 2);
+
+    // Make lease timeout longer, lease checks less frequent
+    conf.setInt("hbase.master.lease.thread.wakefrequency", 5 * 1000);
+
+    conf.setInt(HConstants.HBASE_REGIONSERVER_LEASE_PERIOD_KEY, 10 * 1000);
+
+    // Increase the amount of time between client retries
+    conf.setLong("hbase.client.pause", 15 * 1000);
+
+    // This size should make it so we always split using the addContent
+    // below.  After adding all data, the first region is 1.3M
+    conf.setLong("hbase.hregion.max.filesize", 1024 * 128);
+    return conf;
+  }
+
+  private void initHRegion (byte [] tableName, String callingMethod,
+    byte[] ... families)
+  throws IOException {
+    initHRegion(tableName, callingMethod, HBaseConfiguration.create(), families);
+  }
+
+  private void initHRegion (byte [] tableName, String callingMethod,
+    Configuration conf, byte [] ... families)
+  throws IOException{
+    HTableDescriptor htd = new HTableDescriptor(tableName);
+    for(byte [] family : families) {
+      htd.addFamily(new HColumnDescriptor(family));
+    }
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    Path path = new Path(DIR + callingMethod);
+    if (fs.exists(path)) {
+      if (!fs.delete(path, true)) {
+        throw new IOException("Failed delete of " + path);
+      }
+    }
+    region = HRegion.createHRegion(info, path, conf);
+  }
+
+  /**
+   * Assert that the passed in KeyValue has expected contents for the
+   * specified row, column & timestamp.
+   */
+  private void checkOneCell(KeyValue kv, byte[] cf,
+                             int rowIdx, int colIdx, long ts) {
+    String ctx = "rowIdx=" + rowIdx + "; colIdx=" + colIdx + "; ts=" + ts;
+    assertEquals("Row mismatch which checking: " + ctx,
+                 "row:"+ rowIdx, Bytes.toString(kv.getRow()));
+    assertEquals("ColumnFamily mismatch while checking: " + ctx,
+                 Bytes.toString(cf), Bytes.toString(kv.getFamily()));
+    assertEquals("Column qualifier mismatch while checking: " + ctx,
+                 "column:" + colIdx, Bytes.toString(kv.getQualifier()));
+    assertEquals("Timestamp mismatch while checking: " + ctx,
+                 ts, kv.getTimestamp());
+    assertEquals("Value mismatch while checking: " + ctx,
+                 "value-version-" + ts, Bytes.toString(kv.getValue()));
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java
new file mode 100644
index 0000000..516139b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java
@@ -0,0 +1,82 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.MD5Hash;
+
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+public class TestHRegionInfo {
+  @Test
+  public void testCreateHRegionInfoName() throws Exception {
+    String tableName = "tablename";
+    final byte [] tn = Bytes.toBytes(tableName);
+    String startKey = "startkey";
+    final byte [] sk = Bytes.toBytes(startKey);
+    String id = "id";
+
+    // old format region name
+    byte [] name = HRegionInfo.createRegionName(tn, sk, id, false);
+    String nameStr = Bytes.toString(name);
+    assertEquals(tableName + "," + startKey + "," + id, nameStr);
+
+
+    // new format region name.
+    String md5HashInHex = MD5Hash.getMD5AsHex(name);
+    assertEquals(HRegionInfo.MD5_HEX_LENGTH, md5HashInHex.length());
+    name = HRegionInfo.createRegionName(tn, sk, id, true);
+    nameStr = Bytes.toString(name);
+    assertEquals(tableName + "," + startKey + ","
+                 + id + "." + md5HashInHex + ".",
+                 nameStr);
+  }
+  
+  @Test
+  public void testContainsRange() {
+    HTableDescriptor tableDesc = new HTableDescriptor("testtable");
+    HRegionInfo hri = new HRegionInfo(
+        tableDesc, Bytes.toBytes("a"), Bytes.toBytes("g"));
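+    // Region key range is [a, g): start key inclusive, end key exclusive.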
+    // Single row range at start of region
+    assertTrue(hri.containsRange(Bytes.toBytes("a"), Bytes.toBytes("a")));
+    // Fully contained range
+    assertTrue(hri.containsRange(Bytes.toBytes("b"), Bytes.toBytes("c")));
+    // Range overlapping start of region
+    assertTrue(hri.containsRange(Bytes.toBytes("a"), Bytes.toBytes("c")));
+    // Fully contained single-row range
+    assertTrue(hri.containsRange(Bytes.toBytes("c"), Bytes.toBytes("c")));
+    // Range that overlaps end key and hence doesn't fit
+    assertFalse(hri.containsRange(Bytes.toBytes("a"), Bytes.toBytes("g")));
+    // Single row range on end key
+    assertFalse(hri.containsRange(Bytes.toBytes("g"), Bytes.toBytes("g")));
+    // Single row range entirely outside
+    assertFalse(hri.containsRange(Bytes.toBytes("z"), Bytes.toBytes("z")));
+    
+    // Degenerate range
+    try {
+      hri.containsRange(Bytes.toBytes("z"), Bytes.toBytes("a"));
+      fail("Invalid range did not throw IAE");
+    } catch (IllegalArgumentException iae) {
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java
new file mode 100644
index 0000000..7be7f71
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java
@@ -0,0 +1,269 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
+public class TestKeyValueHeap extends HBaseTestCase {
+  private static final boolean PRINT = false;
+
+  List<KeyValueScanner> scanners = new ArrayList<KeyValueScanner>();
+
+  private byte[] row1;
+  private byte[] fam1;
+  private byte[] col1;
+  private byte[] data;
+
+  private byte[] row2;
+  private byte[] fam2;
+  private byte[] col2;
+
+  private byte[] col3;
+  private byte[] col4;
+  private byte[] col5;
+
+  public void setUp() throws Exception {
+    super.setUp();
+    data = Bytes.toBytes("data");
+    row1 = Bytes.toBytes("row1");
+    fam1 = Bytes.toBytes("fam1");
+    col1 = Bytes.toBytes("col1");
+    row2 = Bytes.toBytes("row2");
+    fam2 = Bytes.toBytes("fam2");
+    col2 = Bytes.toBytes("col2");
+    col3 = Bytes.toBytes("col3");
+    col4 = Bytes.toBytes("col4");
+    col5 = Bytes.toBytes("col5");
+  }
+
+  public void testSorted() throws IOException{
+    //Cases that need to be checked are:
+    //1. The "smallest" KeyValue is in the same scanner as the current one
+    //2. The current scanner becomes empty
+
+    List<KeyValue> l1 = new ArrayList<KeyValue>();
+    l1.add(new KeyValue(row1, fam1, col5, data));
+    l1.add(new KeyValue(row2, fam1, col1, data));
+    l1.add(new KeyValue(row2, fam1, col2, data));
+    scanners.add(new Scanner(l1));
+
+    List<KeyValue> l2 = new ArrayList<KeyValue>();
+    l2.add(new KeyValue(row1, fam1, col1, data));
+    l2.add(new KeyValue(row1, fam1, col2, data));
+    scanners.add(new Scanner(l2));
+
+    List<KeyValue> l3 = new ArrayList<KeyValue>();
+    l3.add(new KeyValue(row1, fam1, col3, data));
+    l3.add(new KeyValue(row1, fam1, col4, data));
+    l3.add(new KeyValue(row1, fam2, col1, data));
+    l3.add(new KeyValue(row1, fam2, col2, data));
+    l3.add(new KeyValue(row2, fam1, col3, data));
+    scanners.add(new Scanner(l3));
+
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(new KeyValue(row1, fam1, col1, data));
+    expected.add(new KeyValue(row1, fam1, col2, data));
+    expected.add(new KeyValue(row1, fam1, col3, data));
+    expected.add(new KeyValue(row1, fam1, col4, data));
+    expected.add(new KeyValue(row1, fam1, col5, data));
+    expected.add(new KeyValue(row1, fam2, col1, data));
+    expected.add(new KeyValue(row1, fam2, col2, data));
+    expected.add(new KeyValue(row2, fam1, col1, data));
+    expected.add(new KeyValue(row2, fam1, col2, data));
+    expected.add(new KeyValue(row2, fam1, col3, data));
+
+    //Creating KeyValueHeap
+    KeyValueHeap kvh =
+      new KeyValueHeap(scanners, KeyValue.COMPARATOR);
+
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    while(kvh.peek() != null){
+      actual.add(kvh.next());
+    }
+
+    assertEquals(expected.size(), actual.size());
+    for(int i=0; i<expected.size(); i++){
+      assertEquals(expected.get(i), actual.get(i));
+      if(PRINT){
+        System.out.println("expected " +expected.get(i)+
+            "\nactual   " +actual.get(i) +"\n");
+      }
+    }
+
+    //Check if result is sorted according to Comparator
+    for(int i=0; i<actual.size()-1; i++){
+      int ret = KeyValue.COMPARATOR.compare(actual.get(i), actual.get(i+1));
+      assertTrue(ret < 0);
+    }
+
+  }
+
+  public void testSeek() throws IOException {
+    //Cases:
+    //1. Seek to a KeyValue that is not in any of the scanners
+    //2. Check that the smallest KeyValue returned from a seek is correct
+
+    List<KeyValue> l1 = new ArrayList<KeyValue>();
+    l1.add(new KeyValue(row1, fam1, col5, data));
+    l1.add(new KeyValue(row2, fam1, col1, data));
+    l1.add(new KeyValue(row2, fam1, col2, data));
+    scanners.add(new Scanner(l1));
+
+    List<KeyValue> l2 = new ArrayList<KeyValue>();
+    l2.add(new KeyValue(row1, fam1, col1, data));
+    l2.add(new KeyValue(row1, fam1, col2, data));
+    scanners.add(new Scanner(l2));
+
+    List<KeyValue> l3 = new ArrayList<KeyValue>();
+    l3.add(new KeyValue(row1, fam1, col3, data));
+    l3.add(new KeyValue(row1, fam1, col4, data));
+    l3.add(new KeyValue(row1, fam2, col1, data));
+    l3.add(new KeyValue(row1, fam2, col2, data));
+    l3.add(new KeyValue(row2, fam1, col3, data));
+    scanners.add(new Scanner(l3));
+
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(new KeyValue(row2, fam1, col1, data));
+
+    //Creating KeyValueHeap
+    KeyValueHeap kvh =
+      new KeyValueHeap(scanners, KeyValue.COMPARATOR);
+
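+    // Seek the heap forward to the first KeyValue >= row2/fam1; it should then
+    // surface row2/fam1/col1 (see 'expected' above) as its smallest element.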
+    KeyValue seekKv = new KeyValue(row2, fam1, null, null);
+    kvh.seek(seekKv);
+
+    List<KeyValue> actual = new ArrayList<KeyValue>();
+    actual.add(kvh.peek());
+
+    assertEquals(expected.size(), actual.size());
+    for(int i=0; i<expected.size(); i++){
+      assertEquals(expected.get(i), actual.get(i));
+      if(PRINT){
+        System.out.println("expected " +expected.get(i)+
+            "\nactual   " +actual.get(i) +"\n");
+      }
+    }
+
+  }
+
+  public void testScannerLeak() throws IOException {
+    // Test for unclosed scanners (HBASE-1927)
+
+    List<KeyValue> l1 = new ArrayList<KeyValue>();
+    l1.add(new KeyValue(row1, fam1, col5, data));
+    l1.add(new KeyValue(row2, fam1, col1, data));
+    l1.add(new KeyValue(row2, fam1, col2, data));
+    scanners.add(new Scanner(l1));
+
+    List<KeyValue> l2 = new ArrayList<KeyValue>();
+    l2.add(new KeyValue(row1, fam1, col1, data));
+    l2.add(new KeyValue(row1, fam1, col2, data));
+    scanners.add(new Scanner(l2));
+
+    List<KeyValue> l3 = new ArrayList<KeyValue>();
+    l3.add(new KeyValue(row1, fam1, col3, data));
+    l3.add(new KeyValue(row1, fam1, col4, data));
+    l3.add(new KeyValue(row1, fam2, col1, data));
+    l3.add(new KeyValue(row1, fam2, col2, data));
+    l3.add(new KeyValue(row2, fam1, col3, data));
+    scanners.add(new Scanner(l3));
+
+    List<KeyValue> l4 = new ArrayList<KeyValue>();
+    scanners.add(new Scanner(l4));
+
+    //Creating KeyValueHeap
+    KeyValueHeap kvh = new KeyValueHeap(scanners, KeyValue.COMPARATOR);
+
+    while(kvh.next() != null);
+
+    for(KeyValueScanner scanner : scanners) {
+      assertTrue(((Scanner)scanner).isClosed());
+    }
+  }
+
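+  /**
+   * Minimal in-memory KeyValueScanner backed by a sorted list, used to feed
+   * the KeyValueHeap under test.
+   */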
+  private static class Scanner implements KeyValueScanner {
+    private Iterator<KeyValue> iter;
+    private KeyValue current;
+    private boolean closed = false;
+
+    public Scanner(List<KeyValue> list) {
+      Collections.sort(list, KeyValue.COMPARATOR);
+      iter = list.iterator();
+      if(iter.hasNext()){
+        current = iter.next();
+      }
+    }
+
+    public KeyValue peek() {
+      return current;
+    }
+
+    public KeyValue next() {
+      KeyValue oldCurrent = current;
+      if(iter.hasNext()){
+        current = iter.next();
+      } else {
+        current = null;
+      }
+      return oldCurrent;
+    }
+
+    public void close(){
+      closed = true;
+    }
+
+    public boolean isClosed() {
+      return closed;
+    }
+
+    public boolean seek(KeyValue seekKv) {
+      while(iter.hasNext()){
+        KeyValue next = iter.next();
+        int ret = KeyValue.COMPARATOR.compare(next, seekKv);
+        if(ret >= 0){
+          current = next;
+          return true;
+        }
+      }
+      return false;
+    }
+
+    @Override
+    public boolean reseek(KeyValue key) throws IOException {
+      return seek(key);
+    }
+
+    @Override
+    public long getSequenceID() {
+      return 0;
+    }
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java
new file mode 100644
index 0000000..0ebeee4
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+
+import junit.framework.TestCase;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValueTestUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestKeyValueScanFixture extends TestCase {
+
+
+  public void testKeyValueScanFixture() throws IOException {
+    KeyValue kvs[] = new KeyValue[]{
+        KeyValueTestUtil.create("RowA", "family", "qf1",
+            1, KeyValue.Type.Put, "value-1"),
+        KeyValueTestUtil.create("RowA", "family", "qf2",
+            1, KeyValue.Type.Put, "value-2"),
+        KeyValueTestUtil.create("RowB", "family", "qf1",
+            10, KeyValue.Type.Put, "value-10")
+    };
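+    // KeyValueScanFixture exposes the fixed KeyValue array above through the
+    // KeyValueScanner interface (peek/seek/next).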
+    KeyValueScanner scan = new KeyValueScanFixture(
+        KeyValue.COMPARATOR, kvs);
+
+    // test simple things.
+    assertNull(scan.peek());
+    KeyValue kv = KeyValue.createFirstOnRow(Bytes.toBytes("RowA"));
+    // should seek to this:
+    assertTrue(scan.seek(kv));
+    KeyValue res = scan.peek();
+    assertEquals(kvs[0], res);
+
+    kv = KeyValue.createFirstOnRow(Bytes.toBytes("RowB"));
+    assertTrue(scan.seek(kv));
+    res = scan.peek();
+    assertEquals(kvs[2], res);
+
+    // ensure we pull things out properly:
+    kv = KeyValue.createFirstOnRow(Bytes.toBytes("RowA"));
+    assertTrue(scan.seek(kv));
+    assertEquals(kvs[0], scan.peek());
+    assertEquals(kvs[0], scan.next());
+    assertEquals(kvs[1], scan.peek());
+    assertEquals(kvs[1], scan.next());
+    assertEquals(kvs[2], scan.peek());
+    assertEquals(kvs[2], scan.next());
+    assertEquals(null, scan.peek());
+    assertEquals(null, scan.next());
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueSkipListSet.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueSkipListSet.java
new file mode 100644
index 0000000..0264e02
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueSkipListSet.java
@@ -0,0 +1,147 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.Iterator;
+import java.util.SortedSet;
+
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
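+/**
+ * Tests KeyValueSkipListSet; as exercised here, adding a KeyValue whose key
+ * matches an existing entry overwrites the stored value rather than adding a
+ * duplicate.
+ */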
+public class TestKeyValueSkipListSet extends TestCase {
+  private final KeyValueSkipListSet kvsls =
+    new KeyValueSkipListSet(KeyValue.COMPARATOR);
+
+  protected void setUp() throws Exception {
+    super.setUp();
+    this.kvsls.clear();
+  }
+
+  public void testAdd() throws Exception {
+    byte [] bytes = Bytes.toBytes(getName());
+    KeyValue kv = new KeyValue(bytes, bytes, bytes, bytes);
+    this.kvsls.add(kv);
+    assertTrue(this.kvsls.contains(kv));
+    assertEquals(1, this.kvsls.size());
+    KeyValue first = this.kvsls.first();
+    assertTrue(kv.equals(first));
+    assertTrue(Bytes.equals(kv.getValue(), first.getValue()));
+    // Now try overwriting
+    byte [] overwriteValue = Bytes.toBytes("overwrite");
+    KeyValue overwrite = new KeyValue(bytes, bytes, bytes, overwriteValue);
+    this.kvsls.add(overwrite);
+    assertEquals(1, this.kvsls.size());
+    first = this.kvsls.first();
+    assertTrue(Bytes.equals(overwrite.getValue(), first.getValue()));
+    assertFalse(Bytes.equals(overwrite.getValue(), kv.getValue()));
+  }
+
+  public void testIterator() throws Exception {
+    byte [] bytes = Bytes.toBytes(getName());
+    byte [] value1 = Bytes.toBytes("1");
+    byte [] value2 = Bytes.toBytes("2");
+    final int total = 3;
+    for (int i = 0; i < total; i++) {
+      this.kvsls.add(new KeyValue(bytes, bytes, Bytes.toBytes("" + i), value1));
+    }
+    // Assert that we added 'total' values and that they are in order
+    int count = 0;
+    for (KeyValue kv: this.kvsls) {
+      assertEquals("" + count, Bytes.toString(kv.getQualifier()));
+      assertTrue(Bytes.equals(kv.getValue(), value1));
+      count++;
+    }
+    assertEquals(total, count);
+    // Now overwrite with a new value.
+    for (int i = 0; i < total; i++) {
+      this.kvsls.add(new KeyValue(bytes, bytes, Bytes.toBytes("" + i), value2));
+    }
+    // Assert that we added 'total' values and that they are in order and that
+    // we are getting back value2
+    count = 0;
+    for (KeyValue kv: this.kvsls) {
+      assertEquals("" + count, Bytes.toString(kv.getQualifier()));
+      assertTrue(Bytes.equals(kv.getValue(), value2));
+      count++;
+    }
+    assertEquals(total, count);
+  }
+
+  public void testDescendingIterator() throws Exception {
+    byte [] bytes = Bytes.toBytes(getName());
+    byte [] value1 = Bytes.toBytes("1");
+    byte [] value2 = Bytes.toBytes("2");
+    final int total = 3;
+    for (int i = 0; i < total; i++) {
+      this.kvsls.add(new KeyValue(bytes, bytes, Bytes.toBytes("" + i), value1));
+    }
+    // Assert that we added 'total' values and that they are in order
+    int count = 0;
+    for (Iterator<KeyValue> i = this.kvsls.descendingIterator(); i.hasNext();) {
+      KeyValue kv = i.next();
+      assertEquals("" + (total - (count + 1)), Bytes.toString(kv.getQualifier()));
+      assertTrue(Bytes.equals(kv.getValue(), value1));
+      count++;
+    }
+    assertEquals(total, count);
+    // Now overwrite with a new value.
+    for (int i = 0; i < total; i++) {
+      this.kvsls.add(new KeyValue(bytes, bytes, Bytes.toBytes("" + i), value2));
+    }
+    // Assert that we added 'total' values and that they are in order and that
+    // we are getting back value2
+    count = 0;
+    for (Iterator<KeyValue> i = this.kvsls.descendingIterator(); i.hasNext();) {
+      KeyValue kv = i.next();
+      assertEquals("" + (total - (count + 1)), Bytes.toString(kv.getQualifier()));
+      assertTrue(Bytes.equals(kv.getValue(), value2));
+      count++;
+    }
+    assertEquals(total, count);
+  }
+
+  public void testHeadTail() throws Exception {
+    byte [] bytes = Bytes.toBytes(getName());
+    byte [] value1 = Bytes.toBytes("1");
+    byte [] value2 = Bytes.toBytes("2");
+    final int total = 3;
+    KeyValue splitter = null;
+    for (int i = 0; i < total; i++) {
+      KeyValue kv = new KeyValue(bytes, bytes, Bytes.toBytes("" + i), value1);
+      if (i == 1) splitter = kv;
+      this.kvsls.add(kv);
+    }
+    SortedSet<KeyValue> tail = this.kvsls.tailSet(splitter);
+    assertEquals(2, tail.size());
+    SortedSet<KeyValue> head = this.kvsls.headSet(splitter);
+    assertEquals(1, head.size());
+    // Now ensure that we get back the right answer even when we do tail or head.
+    // Now overwrite with a new value.
+    for (int i = 0; i < total; i++) {
+      this.kvsls.add(new KeyValue(bytes, bytes, Bytes.toBytes("" + i), value2));
+    }
+    tail = this.kvsls.tailSet(splitter);
+    assertTrue(Bytes.equals(tail.first().getValue(), value2));
+    head = this.kvsls.headSet(splitter);
+    assertTrue(Bytes.equals(head.first().getValue(), value2));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressManager.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressManager.java
new file mode 100644
index 0000000..319a74e
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressManager.java
@@ -0,0 +1,116 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.util.concurrent.Semaphore;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.MasterAddressTracker;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestMasterAddressManager {
+  private static final Log LOG = LogFactory.getLog(TestMasterAddressManager.class);
+
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniZKCluster();
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniZKCluster();
+  }
+  /**
+   * Unit test that uses ZooKeeper but does not go through the master-side
+   * methods, acting directly on ZK instead.
+   * @throws Exception
+   */
+  @Test
+  public void testMasterAddressManagerFromZK() throws Exception {
+
+    ZooKeeperWatcher zk = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+        "testMasterAddressManagerFromZK", null);
+    ZKUtil.createAndFailSilent(zk, zk.baseZNode);
+
+    // Should not have a master yet
+    MasterAddressTracker addressManager = new MasterAddressTracker(zk, null);
+    addressManager.start();
+    assertFalse(addressManager.hasMaster());
+    zk.registerListener(addressManager);
+
+    // Use a listener to capture when the node is actually created
+    NodeCreationListener listener = new NodeCreationListener(zk, zk.masterAddressZNode);
+    zk.registerListener(listener);
+
+    // Create the master node with a dummy address
+    String host = "localhost";
+    int port = 1234;
+    HServerAddress dummyAddress = new HServerAddress(host, port);
+    LOG.info("Creating master node");
+    ZKUtil.setAddressAndWatch(zk, zk.masterAddressZNode, dummyAddress);
+
+    // Wait for the node to be created
+    LOG.info("Waiting for master address manager to be notified");
+    listener.waitForCreation();
+    LOG.info("Master node created");
+    assertTrue(addressManager.hasMaster());
+    HServerAddress pulledAddress = addressManager.getMasterAddress();
+    assertTrue(pulledAddress.equals(dummyAddress));
+
+  }
+
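+  /**
+   * ZooKeeper listener that releases a semaphore once the watched znode is
+   * created, so the test can block in {@link #waitForCreation()}.
+   */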
+  public static class NodeCreationListener extends ZooKeeperListener {
+    private static final Log LOG = LogFactory.getLog(NodeCreationListener.class);
+
+    private Semaphore lock;
+    private String node;
+
+    public NodeCreationListener(ZooKeeperWatcher watcher, String node) {
+      super(watcher);
+      lock = new Semaphore(0);
+      this.node = node;
+    }
+
+    @Override
+    public void nodeCreated(String path) {
+      if(path.equals(node)) {
+        LOG.debug("nodeCreated(" + path + ")");
+        lock.release();
+      }
+    }
+
+    public void waitForCreation() throws InterruptedException {
+      lock.acquire();
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
new file mode 100644
index 0000000..3779114
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStore.java
@@ -0,0 +1,919 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.rmi.UnexpectedException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+import java.util.concurrent.atomic.AtomicReference;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValueTestUtil;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+
+/** memstore test case */
+public class TestMemStore extends TestCase {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+  private MemStore memstore;
+  private static final int ROW_COUNT = 10;
+  private static final int QUALIFIER_COUNT = ROW_COUNT;
+  private static final byte [] FAMILY = Bytes.toBytes("column");
+  private static final byte [] CONTENTS = Bytes.toBytes("contents");
+  private static final byte [] BASIC = Bytes.toBytes("basic");
+  private static final String CONTENTSTR = "contentstr";
+  private ReadWriteConsistencyControl rwcc;
+
+  @Override
+  public void setUp() throws Exception {
+    super.setUp();
+    this.rwcc = new ReadWriteConsistencyControl();
+    this.memstore = new MemStore();
+  }
+
+  public void testPutSameKey() {
+    byte [] bytes = Bytes.toBytes(getName());
+    KeyValue kv = new KeyValue(bytes, bytes, bytes, bytes);
+    this.memstore.add(kv);
+    byte [] other = Bytes.toBytes("somethingelse");
+    KeyValue samekey = new KeyValue(bytes, bytes, bytes, other);
+    this.memstore.add(samekey);
+    KeyValue found = this.memstore.kvset.first();
+    assertEquals(1, this.memstore.kvset.size());
+    assertTrue(Bytes.toString(found.getValue()), Bytes.equals(samekey.getValue(),
+      found.getValue()));
+  }
+
+  /**
+   * Test memstore snapshot happening while scanning.
+   * @throws IOException
+   */
+  public void testScanAcrossSnapshot() throws IOException {
+    int rowCount = addRows(this.memstore);
+    List<KeyValueScanner> memstorescanners = this.memstore.getScanners();
+    Scan scan = new Scan();
+    List<KeyValue> result = new ArrayList<KeyValue>();
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    StoreScanner s = new StoreScanner(scan, null, HConstants.LATEST_TIMESTAMP,
+      this.memstore.comparator, null, memstorescanners);
+    int count = 0;
+    try {
+      while (s.next(result)) {
+        LOG.info(result);
+        count++;
+        // Row count is same as column count.
+        assertEquals(rowCount, result.size());
+        result.clear();
+      }
+    } finally {
+      s.close();
+    }
+    assertEquals(rowCount, count);
+    for (KeyValueScanner scanner : memstorescanners) {
+      scanner.close();
+    }
+
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    memstorescanners = this.memstore.getScanners();
+    // Now assert we can count the same number even if a snapshot happens mid-scan.
+    s = new StoreScanner(scan, null, HConstants.LATEST_TIMESTAMP,
+      this.memstore.comparator, null, memstorescanners);
+    count = 0;
+    try {
+      while (s.next(result)) {
+        LOG.info(result);
+        // Assert the KeyValues are coming out in the right order.
+        assertTrue(Bytes.compareTo(Bytes.toBytes(count), result.get(0).getRow()) == 0);
+        count++;
+        // Row count is same as column count.
+        assertEquals(rowCount, result.size());
+        if (count == 2) {
+          this.memstore.snapshot();
+          LOG.info("Snapshotted");
+        }
+        result.clear();
+      }
+    } finally {
+      s.close();
+    }
+    assertEquals(rowCount, count);
+    for (KeyValueScanner scanner : memstorescanners) {
+      scanner.close();
+    }
+    memstorescanners = this.memstore.getScanners();
+    // Assert the scan sees a consistent view even if the snapshot is cleared
+    // and new values are added to the kvset mid-scan.
+    long ts = System.currentTimeMillis();
+    s = new StoreScanner(scan, null, HConstants.LATEST_TIMESTAMP,
+      this.memstore.comparator, null, memstorescanners);
+    count = 0;
+    int snapshotIndex = 5;
+    try {
+      while (s.next(result)) {
+        LOG.info(result);
+        // Assert the KeyValues are coming out in the right order.
+        assertTrue(Bytes.compareTo(Bytes.toBytes(count), result.get(0).getRow()) == 0);
+        // Row count is same as column count.
+        assertEquals("count=" + count + ", result=" + result, rowCount, result.size());
+        count++;
+        if (count == snapshotIndex) {
+          this.memstore.snapshot();
+          this.memstore.clearSnapshot(this.memstore.getSnapshot());
+          // Add more rows into the kvset.  The scanner won't see these rows.
+          addRows(this.memstore, ts);
+          LOG.info("Snapshotted, cleared it and then added values (which wont be seen)");
+        }
+        result.clear();
+      }
+    } finally {
+      s.close();
+    }
+    assertEquals(rowCount, count);
+  }
+
+  /**
+   * A simple test which verifies the 3 possible states when scanning across snapshot.
+   * @throws IOException
+   */
+  public void testScanAcrossSnapshot2() throws IOException {
+    // We are going to scan across the snapshot with two kvs;
+    // kv1 should always be returned before kv2.
+    final byte[] one = Bytes.toBytes(1);
+    final byte[] two = Bytes.toBytes(2);
+    final byte[] f = Bytes.toBytes("f");
+    final byte[] q = Bytes.toBytes("q");
+    final byte[] v = Bytes.toBytes(3);
+
+    final KeyValue kv1 = new KeyValue(one, f, q, v);
+    final KeyValue kv2 = new KeyValue(two, f, q, v);
+
+    // use case 1: both kvs in kvset
+    this.memstore.add(kv1.clone());
+    this.memstore.add(kv2.clone());
+    verifyScanAcrossSnapshot2(kv1, kv2);
+
+    // use case 2: both kvs in snapshot
+    this.memstore.snapshot();
+    verifyScanAcrossSnapshot2(kv1, kv2);
+
+    // use case 3: first in snapshot second in kvset
+    this.memstore = new MemStore();
+    this.memstore.add(kv1.clone());
+    this.memstore.snapshot();
+    this.memstore.add(kv2.clone());
+    verifyScanAcrossSnapshot2(kv1, kv2);
+  }
+
+  private void verifyScanAcrossSnapshot2(KeyValue kv1, KeyValue kv2)
+      throws IOException {
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    List<KeyValueScanner> memstorescanners = this.memstore.getScanners();
+    assertEquals(1, memstorescanners.size());
+    final KeyValueScanner scanner = memstorescanners.get(0);
+    scanner.seek(KeyValue.createFirstOnRow(HConstants.EMPTY_START_ROW));
+    assertEquals(kv1, scanner.next());
+    assertEquals(kv2, scanner.next());
+    assertNull(scanner.next());
+  }
+
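+  /**
+   * Seeks the scanner to the first key and asserts that it returns exactly
+   * the expected KeyValues, in order, with nothing left over.
+   */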
+  private void assertScannerResults(KeyValueScanner scanner, KeyValue[] expected)
+      throws IOException {
+    scanner.seek(KeyValue.createFirstOnRow(new byte[]{}));
+    List<KeyValue> returned = Lists.newArrayList();
+
+    while (true) {
+      KeyValue next = scanner.next();
+      if (next == null) break;
+      returned.add(next);
+    }
+
+    assertTrue(
+        "Got:\n" + Joiner.on("\n").join(returned) +
+        "\nExpected:\n" + Joiner.on("\n").join(expected),
+        Iterables.elementsEqual(Arrays.asList(expected), returned));
+    assertNull(scanner.peek());
+  }
+
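+  /**
+   * Verifies that a memstore insert only becomes visible to new scanners
+   * once its ReadWriteConsistencyControl write entry has been completed.
+   */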
+  public void testMemstoreConcurrentControl() throws IOException {
+    final byte[] row = Bytes.toBytes(1);
+    final byte[] f = Bytes.toBytes("family");
+    final byte[] q1 = Bytes.toBytes("q1");
+    final byte[] q2 = Bytes.toBytes("q2");
+    final byte[] v = Bytes.toBytes("value");
+
+    ReadWriteConsistencyControl.WriteEntry w =
+        rwcc.beginMemstoreInsert();
+
+    KeyValue kv1 = new KeyValue(row, f, q1, v);
+    kv1.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv1);
+
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    KeyValueScanner s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{});
+
+    rwcc.completeMemstoreInsert(w);
+
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv1});
+
+    w = rwcc.beginMemstoreInsert();
+    KeyValue kv2 = new KeyValue(row, f, q2, v);
+    kv2.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv2);
+
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv1});
+
+    rwcc.completeMemstoreInsert(w);
+
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv1, kv2});
+  }
+
+  /**
+   * Regression test for HBASE-2616, HBASE-2670.
+   * When we insert a higher-memstoreTS version of a cell but with
+   * the same timestamp, we still need to provide consistent reads
+   * for the same scanner.
+   */
+  public void testMemstoreEditsVisibilityWithSameKey() throws IOException {
+    final byte[] row = Bytes.toBytes(1);
+    final byte[] f = Bytes.toBytes("family");
+    final byte[] q1 = Bytes.toBytes("q1");
+    final byte[] q2 = Bytes.toBytes("q2");
+    final byte[] v1 = Bytes.toBytes("value1");
+    final byte[] v2 = Bytes.toBytes("value2");
+
+    // INSERT 1: Write both columns val1
+    ReadWriteConsistencyControl.WriteEntry w =
+        rwcc.beginMemstoreInsert();
+
+    KeyValue kv11 = new KeyValue(row, f, q1, v1);
+    kv11.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv11);
+
+    KeyValue kv12 = new KeyValue(row, f, q2, v1);
+    kv12.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv12);
+    rwcc.completeMemstoreInsert(w);
+
+    // BEFORE STARTING INSERT 2, SEE FIRST KVS
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    KeyValueScanner s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv11, kv12});
+
+    // START INSERT 2: Write both columns val2
+    w = rwcc.beginMemstoreInsert();
+    KeyValue kv21 = new KeyValue(row, f, q1, v2);
+    kv21.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv21);
+
+    KeyValue kv22 = new KeyValue(row, f, q2, v2);
+    kv22.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv22);
+
+    // BEFORE COMPLETING INSERT 2, SEE FIRST KVS
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv11, kv12});
+
+    // COMPLETE INSERT 2
+    rwcc.completeMemstoreInsert(w);
+
+    // NOW SHOULD SEE NEW KVS IN ADDITION TO OLD KVS.
+    // See HBASE-1485 for discussion about what we should do with
+    // the duplicate-TS inserts
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv21, kv11, kv22, kv12});
+  }
+
+  /**
+   * When we insert a higher-memstoreTS deletion of a cell but with
+   * the same timestamp, we still need to provide consistent reads
+   * for the same scanner.
+   */
+  public void testMemstoreDeletesVisibilityWithSameKey() throws IOException {
+    final byte[] row = Bytes.toBytes(1);
+    final byte[] f = Bytes.toBytes("family");
+    final byte[] q1 = Bytes.toBytes("q1");
+    final byte[] q2 = Bytes.toBytes("q2");
+    final byte[] v1 = Bytes.toBytes("value1");
+    // INSERT 1: Write both columns val1
+    ReadWriteConsistencyControl.WriteEntry w =
+        rwcc.beginMemstoreInsert();
+
+    KeyValue kv11 = new KeyValue(row, f, q1, v1);
+    kv11.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv11);
+
+    KeyValue kv12 = new KeyValue(row, f, q2, v1);
+    kv12.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kv12);
+    rwcc.completeMemstoreInsert(w);
+
+    // BEFORE STARTING INSERT 2, SEE FIRST KVS
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    KeyValueScanner s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv11, kv12});
+
+    // START DELETE: Insert delete for one of the columns
+    w = rwcc.beginMemstoreInsert();
+    KeyValue kvDel = new KeyValue(row, f, q2, kv11.getTimestamp(),
+        KeyValue.Type.DeleteColumn);
+    kvDel.setMemstoreTS(w.getWriteNumber());
+    memstore.add(kvDel);
+
+    // BEFORE COMPLETING DELETE, SEE FIRST KVS
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv11, kv12});
+
+    // COMPLETE DELETE
+    rwcc.completeMemstoreInsert(w);
+
+    // NOW WE SHOULD SEE DELETE
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+    s = this.memstore.getScanners().get(0);
+    assertScannerResults(s, new KeyValue[]{kv11, kvDel, kv12});
+  }
+
+
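+  /**
+   * Thread that repeatedly inserts a new KeyValue under read/write
+   * consistency control and asserts it can immediately read back its own write.
+   */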
+  private static class ReadOwnWritesTester extends Thread {
+    static final int NUM_TRIES = 1000;
+
+    final byte[] row;
+
+    final byte[] f = Bytes.toBytes("family");
+    final byte[] q1 = Bytes.toBytes("q1");
+
+    final ReadWriteConsistencyControl rwcc;
+    final MemStore memstore;
+
+    AtomicReference<Throwable> caughtException;
+
+
+    public ReadOwnWritesTester(int id,
+                               MemStore memstore,
+                               ReadWriteConsistencyControl rwcc,
+                               AtomicReference<Throwable> caughtException)
+    {
+      this.rwcc = rwcc;
+      this.memstore = memstore;
+      this.caughtException = caughtException;
+      row = Bytes.toBytes(id);
+    }
+
+    public void run() {
+      try {
+        internalRun();
+      } catch (Throwable t) {
+        caughtException.compareAndSet(null, t);
+      }
+    }
+
+    private void internalRun() throws IOException {
+      for (long i = 0; i < NUM_TRIES && caughtException.get() == null; i++) {
+        ReadWriteConsistencyControl.WriteEntry w =
+          rwcc.beginMemstoreInsert();
+
+        // Insert the sequence value (i)
+        byte[] v = Bytes.toBytes(i);
+
+        KeyValue kv = new KeyValue(row, f, q1, i, v);
+        kv.setMemstoreTS(w.getWriteNumber());
+        memstore.add(kv);
+        rwcc.completeMemstoreInsert(w);
+
+        // Assert that we can read back
+        ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+
+        KeyValueScanner s = this.memstore.getScanners().get(0);
+        s.seek(kv);
+
+        KeyValue ret = s.next();
+        assertNotNull("Didnt find own write at all", ret);
+        assertEquals("Didnt read own writes",
+                     kv.getTimestamp(), ret.getTimestamp());
+      }
+    }
+  }
+
+  public void testReadOwnWritesUnderConcurrency() throws Throwable {
+
+    int NUM_THREADS = 8;
+
+    ReadOwnWritesTester threads[] = new ReadOwnWritesTester[NUM_THREADS];
+    AtomicReference<Throwable> caught = new AtomicReference<Throwable>();
+
+    for (int i = 0; i < NUM_THREADS; i++) {
+      threads[i] = new ReadOwnWritesTester(i, memstore, rwcc, caught);
+      threads[i].start();
+    }
+
+    for (int i = 0; i < NUM_THREADS; i++) {
+      threads[i].join();
+    }
+
+    if (caught.get() != null) {
+      throw caught.get();
+    }
+  }
+
+  /**
+   * Test memstore snapshots
+   * @throws IOException
+   */
+  public void testSnapshotting() throws IOException {
+    final int snapshotCount = 5;
+    // Add some rows, run a snapshot. Do it a few times.
+    for (int i = 0; i < snapshotCount; i++) {
+      addRows(this.memstore);
+      runSnapshot(this.memstore);
+      KeyValueSkipListSet ss = this.memstore.getSnapshot();
+      assertEquals("History not being cleared", 0, ss.size());
+    }
+  }
+
+  public void testMultipleVersionsSimple() throws Exception {
+    MemStore m = new MemStore(KeyValue.COMPARATOR);
+    byte [] row = Bytes.toBytes("testRow");
+    byte [] family = Bytes.toBytes("testFamily");
+    byte [] qf = Bytes.toBytes("testQualifier");
+    long [] stamps = {1,2,3};
+    byte [][] values = {Bytes.toBytes("value0"), Bytes.toBytes("value1"),
+        Bytes.toBytes("value2")};
+    KeyValue key0 = new KeyValue(row, family, qf, stamps[0], values[0]);
+    KeyValue key1 = new KeyValue(row, family, qf, stamps[1], values[1]);
+    KeyValue key2 = new KeyValue(row, family, qf, stamps[2], values[2]);
+
+    m.add(key0);
+    m.add(key1);
+    m.add(key2);
+
+    assertTrue("Expected memstore to hold 3 values, actually has " +
+        m.kvset.size(), m.kvset.size() == 3);
+  }
+
+  public void testBinary() throws IOException {
+    MemStore mc = new MemStore(KeyValue.ROOT_COMPARATOR);
+    final int start = 43;
+    final int end = 46;
+    for (int k = start; k <= end; k++) {
+      byte [] kk = Bytes.toBytes(k);
+      byte [] row =
+        Bytes.toBytes(".META.,table," + Bytes.toString(kk) + ",1," + k);
+      KeyValue key = new KeyValue(row, CONTENTS, BASIC,
+        System.currentTimeMillis(),
+        (CONTENTSTR + k).getBytes(HConstants.UTF8_ENCODING));
+      mc.add(key);
+      System.out.println(key);
+//      key = new KeyValue(row, Bytes.toBytes(ANCHORNUM + k),
+//        System.currentTimeMillis(),
+//        (ANCHORSTR + k).getBytes(HConstants.UTF8_ENCODING));
+//      mc.add(key);
+//      System.out.println(key);
+    }
+    int index = start;
+    for (KeyValue kv: mc.kvset) {
+      System.out.println(kv);
+      byte [] b = kv.getRow();
+      // Hardcoded offsets into String
+      String str = Bytes.toString(b, 13, 4);
+      byte [] bb = Bytes.toBytes(index);
+      String bbStr = Bytes.toString(bb);
+      assertEquals(str, bbStr);
+      index++;
+    }
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Get tests
+  //////////////////////////////////////////////////////////////////////////////
+
+  /** Test getNextRow from memstore
+   * @throws InterruptedException
+   */
+  public void testGetNextRow() throws Exception {
+    addRows(this.memstore);
+    // Add more versions to make it a little more interesting.
+    Thread.sleep(1);
+    addRows(this.memstore);
+    KeyValue closestToEmpty = this.memstore.getNextRow(KeyValue.LOWESTKEY);
+    assertTrue(KeyValue.COMPARATOR.compareRows(closestToEmpty,
+      new KeyValue(Bytes.toBytes(0), System.currentTimeMillis())) == 0);
+    for (int i = 0; i < ROW_COUNT; i++) {
+      KeyValue nr = this.memstore.getNextRow(new KeyValue(Bytes.toBytes(i),
+        System.currentTimeMillis()));
+      if (i + 1 == ROW_COUNT) {
+        assertEquals(nr, null);
+      } else {
+        assertTrue(KeyValue.COMPARATOR.compareRows(nr,
+          new KeyValue(Bytes.toBytes(i + 1), System.currentTimeMillis())) == 0);
+      }
+    }
+    // Starting from each row, validate that the results contain the starting row
+    for (int startRowId = 0; startRowId < ROW_COUNT; startRowId++) {
+      InternalScanner scanner =
+          new StoreScanner(new Scan(Bytes.toBytes(startRowId)), FAMILY,
+              Integer.MAX_VALUE, this.memstore.comparator, null,
+              memstore.getScanners());
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      for (int i = 0; scanner.next(results); i++) {
+        int rowId = startRowId + i;
+        assertTrue("Row name",
+          KeyValue.COMPARATOR.compareRows(results.get(0),
+          Bytes.toBytes(rowId)) == 0);
+        assertEquals("Count of columns", QUALIFIER_COUNT, results.size());
+        List<KeyValue> row = new ArrayList<KeyValue>();
+        for (KeyValue kv : results) {
+          row.add(kv);
+        }
+        isExpectedRowWithoutTimestamps(rowId, row);
+        // Clear out set.  Otherwise row results accumulate.
+        results.clear();
+      }
+    }
+  }
+
+  public void testGet_memstoreAndSnapShot() throws IOException {
+    byte [] row = Bytes.toBytes("testrow");
+    byte [] fam = Bytes.toBytes("testfamily");
+    byte [] qf1 = Bytes.toBytes("testqualifier1");
+    byte [] qf2 = Bytes.toBytes("testqualifier2");
+    byte [] qf3 = Bytes.toBytes("testqualifier3");
+    byte [] qf4 = Bytes.toBytes("testqualifier4");
+    byte [] qf5 = Bytes.toBytes("testqualifier5");
+    byte [] val = Bytes.toBytes("testval");
+
+    //Setting up memstore
+    memstore.add(new KeyValue(row, fam ,qf1, val));
+    memstore.add(new KeyValue(row, fam ,qf2, val));
+    memstore.add(new KeyValue(row, fam ,qf3, val));
+    //Creating a snapshot
+    memstore.snapshot();
+    assertEquals(3, memstore.snapshot.size());
+    //Adding value to "new" memstore
+    assertEquals(0, memstore.kvset.size());
+    memstore.add(new KeyValue(row, fam ,qf4, val));
+    memstore.add(new KeyValue(row, fam ,qf5, val));
+    assertEquals(2, memstore.kvset.size());
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Delete tests
+  //////////////////////////////////////////////////////////////////////////////
+  public void testGetWithDelete() throws IOException {
+    byte [] row = Bytes.toBytes("testrow");
+    byte [] fam = Bytes.toBytes("testfamily");
+    byte [] qf1 = Bytes.toBytes("testqualifier");
+    byte [] val = Bytes.toBytes("testval");
+
+    long ts1 = System.nanoTime();
+    KeyValue put1 = new KeyValue(row, fam, qf1, ts1, val);
+    long ts2 = ts1 + 1;
+    KeyValue put2 = new KeyValue(row, fam, qf1, ts2, val);
+    long ts3 = ts2 +1;
+    KeyValue put3 = new KeyValue(row, fam, qf1, ts3, val);
+    memstore.add(put1);
+    memstore.add(put2);
+    memstore.add(put3);
+
+    assertEquals(3, memstore.kvset.size());
+
+    KeyValue del2 = new KeyValue(row, fam, qf1, ts2, KeyValue.Type.Delete, val);
+    memstore.delete(del2);
+
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(put3);
+    expected.add(del2);
+    expected.add(put2);
+    expected.add(put1);
+
+    assertEquals(4, memstore.kvset.size());
+    int i = 0;
+    for(KeyValue kv : memstore.kvset) {
+      assertEquals(expected.get(i++), kv);
+    }
+  }
+
+  public void testGetWithDeleteColumn() throws IOException {
+    byte [] row = Bytes.toBytes("testrow");
+    byte [] fam = Bytes.toBytes("testfamily");
+    byte [] qf1 = Bytes.toBytes("testqualifier");
+    byte [] val = Bytes.toBytes("testval");
+
+    long ts1 = System.nanoTime();
+    KeyValue put1 = new KeyValue(row, fam, qf1, ts1, val);
+    long ts2 = ts1 + 1;
+    KeyValue put2 = new KeyValue(row, fam, qf1, ts2, val);
+    long ts3 = ts2 +1;
+    KeyValue put3 = new KeyValue(row, fam, qf1, ts3, val);
+    memstore.add(put1);
+    memstore.add(put2);
+    memstore.add(put3);
+
+    assertEquals(3, memstore.kvset.size());
+
+    KeyValue del2 =
+      new KeyValue(row, fam, qf1, ts2, KeyValue.Type.DeleteColumn, val);
+    memstore.delete(del2);
+
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(put3);
+    expected.add(del2);
+    expected.add(put2);
+    expected.add(put1);
+
+
+    assertEquals(4, memstore.kvset.size());
+    int i = 0;
+    for (KeyValue kv: memstore.kvset) {
+      assertEquals(expected.get(i++), kv);
+    }
+  }
+
+
+  public void testGetWithDeleteFamily() throws IOException {
+    byte [] row = Bytes.toBytes("testrow");
+    byte [] fam = Bytes.toBytes("testfamily");
+    byte [] qf1 = Bytes.toBytes("testqualifier1");
+    byte [] qf2 = Bytes.toBytes("testqualifier2");
+    byte [] qf3 = Bytes.toBytes("testqualifier3");
+    byte [] val = Bytes.toBytes("testval");
+    long ts = System.nanoTime();
+
+    KeyValue put1 = new KeyValue(row, fam, qf1, ts, val);
+    KeyValue put2 = new KeyValue(row, fam, qf2, ts, val);
+    KeyValue put3 = new KeyValue(row, fam, qf3, ts, val);
+    KeyValue put4 = new KeyValue(row, fam, qf3, ts+1, val);
+
+    memstore.add(put1);
+    memstore.add(put2);
+    memstore.add(put3);
+    memstore.add(put4);
+
+    KeyValue del =
+      new KeyValue(row, fam, null, ts, KeyValue.Type.DeleteFamily, val);
+    memstore.delete(del);
+
+    List<KeyValue> expected = new ArrayList<KeyValue>();
+    expected.add(del);
+    expected.add(put1);
+    expected.add(put2);
+    expected.add(put4);
+    expected.add(put3);
+
+
+
+    assertEquals(5, memstore.kvset.size());
+    int i = 0;
+    for (KeyValue kv: memstore.kvset) {
+      assertEquals(expected.get(i++), kv);
+    }
+  }
+
+  public void testKeepDeleteInmemstore() {
+    byte [] row = Bytes.toBytes("testrow");
+    byte [] fam = Bytes.toBytes("testfamily");
+    byte [] qf = Bytes.toBytes("testqualifier");
+    byte [] val = Bytes.toBytes("testval");
+    long ts = System.nanoTime();
+    memstore.add(new KeyValue(row, fam, qf, ts, val));
+    KeyValue delete = new KeyValue(row, fam, qf, ts, KeyValue.Type.Delete, val);
+    memstore.delete(delete);
+    assertEquals(2, memstore.kvset.size());
+    assertEquals(delete, memstore.kvset.first());
+  }
+
+  public void testRetainsDeleteVersion() throws IOException {
+    // add a put to memstore
+    memstore.add(KeyValueTestUtil.create("row1", "fam", "a", 100, "dont-care"));
+
+    // now process a specific delete:
+    KeyValue delete = KeyValueTestUtil.create(
+        "row1", "fam", "a", 100, KeyValue.Type.Delete, "dont-care");
+    memstore.delete(delete);
+
+    assertEquals(2, memstore.kvset.size());
+    assertEquals(delete, memstore.kvset.first());
+  }
+  public void testRetainsDeleteColumn() throws IOException {
+    // add a put to memstore
+    memstore.add(KeyValueTestUtil.create("row1", "fam", "a", 100, "dont-care"));
+
+    // now process a specific delete:
+    KeyValue delete = KeyValueTestUtil.create("row1", "fam", "a", 100,
+        KeyValue.Type.DeleteColumn, "dont-care");
+    memstore.delete(delete);
+
+    assertEquals(2, memstore.kvset.size());
+    assertEquals(delete, memstore.kvset.first());
+  }
+  public void testRetainsDeleteFamily() throws IOException {
+    // add a put to memstore
+    memstore.add(KeyValueTestUtil.create("row1", "fam", "a", 100, "dont-care"));
+
+    // now process a specific delete:
+    KeyValue delete = KeyValueTestUtil.create("row1", "fam", "a", 100,
+        KeyValue.Type.DeleteFamily, "dont-care");
+    memstore.delete(delete);
+
+    assertEquals(2, memstore.kvset.size());
+    assertEquals(delete, memstore.kvset.first());
+  }
+
+
+  ////////////////////////////////////
+  //Test for timestamps
+  ////////////////////////////////////
+
+  /**
+   * Test to ensure correctness when using Memstore with multiple timestamps
+   */
+  public void testMultipleTimestamps() throws IOException {
+    long[] timestamps = new long[] {20,10,5,1};
+    Scan scan = new Scan();
+
+    for (long timestamp: timestamps)
+      addRows(memstore,timestamp);
+
+    scan.setTimeRange(0, 2);
+    assertTrue(memstore.shouldSeek(scan));
+
+    scan.setTimeRange(20, 82);
+    assertTrue(memstore.shouldSeek(scan));
+
+    scan.setTimeRange(10, 20);
+    assertTrue(memstore.shouldSeek(scan));
+
+    scan.setTimeRange(8, 12);
+    assertTrue(memstore.shouldSeek(scan));
+
+    /* This test is not required for correctness, but it should pass when
+     * the timestamp range optimization is on. */
+    //scan.setTimeRange(28, 42);
+    //assertTrue(!memstore.shouldSeek(scan));
+  }
+
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Helpers
+  //////////////////////////////////////////////////////////////////////////////
+  private static byte [] makeQualifier(final int i1, final int i2){
+    return Bytes.toBytes(Integer.toString(i1) + ";" +
+        Integer.toString(i2));
+  }
+
+  /**
+   * Adds {@link #ROW_COUNT} rows, each with {@link #QUALIFIER_COUNT} columns.
+   * @param hmc Instance to add rows to.
+   * @return How many rows we added.
+   * @throws IOException
+   */
+  private int addRows(final MemStore hmc) {
+    return addRows(hmc, HConstants.LATEST_TIMESTAMP);
+  }
+
+  /**
+   * Adds {@link #ROW_COUNT} rows, each with {@link #QUALIFIER_COUNT} columns.
+   * @param hmc Instance to add rows to.
+   * @return How many rows we added.
+   * @throws IOException
+   */
+  private int addRows(final MemStore hmc, final long ts) {
+    for (int i = 0; i < ROW_COUNT; i++) {
+      long timestamp = ts == HConstants.LATEST_TIMESTAMP?
+        System.currentTimeMillis(): ts;
+      for (int ii = 0; ii < QUALIFIER_COUNT; ii++) {
+        byte [] row = Bytes.toBytes(i);
+        byte [] qf = makeQualifier(i, ii);
+        hmc.add(new KeyValue(row, FAMILY, qf, timestamp, qf));
+      }
+    }
+    return ROW_COUNT;
+  }
+
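+  /**
+   * Snapshots the given memstore, asserts the new snapshot is larger than
+   * the previous one, then clears it.
+   */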
+  private void runSnapshot(final MemStore hmc) throws UnexpectedException {
+    // Save off old state.
+    int oldHistorySize = hmc.getSnapshot().size();
+    hmc.snapshot();
+    KeyValueSkipListSet ss = hmc.getSnapshot();
+    // Make some assertions about what just happened.
+    assertTrue("History size has not increased", oldHistorySize < ss.size());
+    hmc.clearSnapshot(ss);
+  }
+
+  private void isExpectedRowWithoutTimestamps(final int rowIndex,
+      List<KeyValue> kvs) {
+    int i = 0;
+    for (KeyValue kv: kvs) {
+      String expectedColname = Bytes.toString(makeQualifier(rowIndex, i++));
+      String colnameStr = Bytes.toString(kv.getQualifier());
+      assertEquals("Column name", colnameStr, expectedColname);
+      // Value is the column name as bytes.  Usually the result is at least
+      // 100 bytes in size; this is the default size for BytesWritable.
+      // For comparison, convert the bytes to a String and trim to remove
+      // trailing null bytes.
+      String colvalueStr = Bytes.toString(kv.getBuffer(), kv.getValueOffset(),
+        kv.getValueLength());
+      assertEquals("Content", colnameStr, colvalueStr);
+    }
+  }
+
+  private KeyValue getDeleteKV(byte [] row) {
+    return new KeyValue(row, Bytes.toBytes("test_col"), null,
+      HConstants.LATEST_TIMESTAMP, KeyValue.Type.Delete, null);
+  }
+
+  private KeyValue getKV(byte [] row, byte [] value) {
+    return new KeyValue(row, Bytes.toBytes("test_col"), null,
+      HConstants.LATEST_TIMESTAMP, value);
+  }
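+  /**
+   * Bulk-loads the given number of rows (QUALIFIER_COUNT columns each),
+   * printing elapsed time every 1000 rows; used by the benchmark in main().
+   */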
+  private static void addRows(int count, final MemStore mem) {
+    long nanos = System.nanoTime();
+
+    for (int i = 0 ; i < count ; i++) {
+      if (i % 1000 == 0) {
+
+        System.out.println(i + " Took for 1k usec: " + (System.nanoTime() - nanos)/1000);
+        nanos = System.nanoTime();
+      }
+      long timestamp = System.currentTimeMillis();
+
+      for (int ii = 0; ii < QUALIFIER_COUNT ; ii++) {
+        byte [] row = Bytes.toBytes(i);
+        byte [] qf = makeQualifier(i, ii);
+        mem.add(new KeyValue(row, FAMILY, qf, timestamp, qf));
+      }
+    }
+  }
+
+
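+  /**
+   * Runs one full scan over the memstore, printing seek and total scan
+   * times in microseconds.
+   */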
+  static void doScan(MemStore ms, int iteration) throws IOException {
+    long nanos = System.nanoTime();
+    KeyValueScanner s = ms.getScanners().get(0);
+    s.seek(KeyValue.createFirstOnRow(new byte[]{}));
+
+    System.out.println(iteration + " create/seek took: " + (System.nanoTime() - nanos)/1000);
+    int cnt=0;
+    while(s.next() != null) ++cnt;
+
+    System.out.println(iteration + " took usec: " + (System.nanoTime() - nanos)/1000 + " for: " + cnt);
+
+  }
+
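+  /**
+   * Simple micro-benchmark: loads 25000 rows and then runs 50 full scans,
+   * printing timings to stdout.
+   */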
+  public static void main(String [] args) throws IOException {
+    ReadWriteConsistencyControl rwcc = new ReadWriteConsistencyControl();
+    MemStore ms = new MemStore();
+
+    long n1 = System.nanoTime();
+    addRows(25000, ms);
+    System.out.println("Took for insert: " + (System.nanoTime()-n1)/1000);
+
+
+    System.out.println("foo");
+
+    ReadWriteConsistencyControl.resetThreadReadPoint(rwcc);
+
+    for (int i = 0 ; i < 50 ; i++)
+      doScan(ms, i);
+
+  }
+
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityCompactionQueue.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityCompactionQueue.java
new file mode 100644
index 0000000..dc2743f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityCompactionQueue.java
@@ -0,0 +1,218 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Test class for the priority compaction queue
+ */
+public class TestPriorityCompactionQueue {
+  static final Log LOG = LogFactory.getLog(TestPriorityCompactionQueue.class);
+
+  @Before
+  public void setUp() {
+  }
+
+  @After
+  public void tearDown() {
+
+  }
+
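+  /**
+   * Minimal HRegion stand-in identified only by name, so the queue can be
+   * exercised without constructing real regions.
+   */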
+  class DummyHRegion extends HRegion {
+    String name;
+
+    DummyHRegion(String name) {
+      super();
+      this.name = name;
+    }
+
+    public int hashCode() {
+      return name.hashCode();
+    }
+
+    public boolean equals(DummyHRegion r) {
+      return name.equals(r.name);
+    }
+
+    public String toString() {
+      return "[DummyHRegion " + name + "]";
+    }
+
+    public String getRegionNameAsString() {
+      return name;
+    }
+  }
+
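+  /** Removes the head of the queue and asserts it is the expected region. */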
+  protected void getAndCheckRegion(PriorityCompactionQueue pq,
+      HRegion checkRegion) {
+    HRegion r = pq.remove();
+    if (r != checkRegion) {
+      Assert.assertTrue("Didn't get expected " + checkRegion + " got " + r, r
+          .equals(checkRegion));
+    }
+  }
+
+  protected void addRegion(PriorityCompactionQueue pq, HRegion r, int p) {
+    pq.add(r, p);
+    try {
+      // Sleep 1 millisecond so 2 things are not put in the queue within the
+      // same millisecond. The queue breaks ties arbitrarily between two
+      // requests inserted at the same time. We want the ordering to
+      // be consistent for our unit test.
+      Thread.sleep(1);
+    } catch (InterruptedException ex) {
+      // continue
+    }
+  }
+
+  // ////////////////////////////////////////////////////////////////////////////
+  // tests
+  // ////////////////////////////////////////////////////////////////////////////
+
+  /** tests general functionality of the compaction queue */
+  @Test public void testPriorityQueue() throws InterruptedException {
+    PriorityCompactionQueue pq = new PriorityCompactionQueue();
+
+    HRegion r1 = new DummyHRegion("r1");
+    HRegion r2 = new DummyHRegion("r2");
+    HRegion r3 = new DummyHRegion("r3");
+    HRegion r4 = new DummyHRegion("r4");
+    HRegion r5 = new DummyHRegion("r5");
+
+    // test 1
+    // check fifo w/priority
+    addRegion(pq, r1, 0);
+    addRegion(pq, r2, 0);
+    addRegion(pq, r3, 0);
+    addRegion(pq, r4, 0);
+    addRegion(pq, r5, 0);
+
+    getAndCheckRegion(pq, r1);
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r4);
+    getAndCheckRegion(pq, r5);
+
+    // test 2
+    // check fifo w/mixed priority
+    addRegion(pq, r1, 0);
+    addRegion(pq, r2, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r3, 0);
+    addRegion(pq, r4, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r5, 0);
+
+    getAndCheckRegion(pq, r1);
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r5);
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r4);
+
+    // test 3
+    // check fifo w/mixed priority
+    addRegion(pq, r1, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r2, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r3, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r4, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r5, 0);
+
+    getAndCheckRegion(pq, r5);
+    getAndCheckRegion(pq, r1);
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r4);
+
+    // test 4
+    // check fifo w/mixed priority elevation time
+    addRegion(pq, r1, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r2, 0);
+    addRegion(pq, r3, CompactSplitThread.PRIORITY_USER);
+    Thread.sleep(1000);
+    addRegion(pq, r4, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r5, 0);
+
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r5);
+    getAndCheckRegion(pq, r1);
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r4);
+
+    // reset the priority compaction queue back to a normal queue
+    pq = new PriorityCompactionQueue();
+
+    // test 5
+    // test that a lower priority entry is removed from the queue when a
+    // higher priority entry for the same region is added
+    addRegion(pq, r1, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r2, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r3, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r4, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r5, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r3, 0);
+
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r1);
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r4);
+    getAndCheckRegion(pq, r5);
+
+    Assert.assertTrue("Queue should be empty.", pq.size() == 0);
+
+    // test 6
+    // don't add the same region more than once
+    addRegion(pq, r1, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r2, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r3, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r4, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r5, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r1, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r2, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r3, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r4, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r5, CompactSplitThread.PRIORITY_USER);
+
+    getAndCheckRegion(pq, r1);
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r4);
+    getAndCheckRegion(pq, r5);
+
+    Assert.assertTrue("Queue should be empty.", pq.size() == 0);
+    
+    // test 7
+    // we can handle negative priorities
+    addRegion(pq, r1, CompactSplitThread.PRIORITY_USER);
+    addRegion(pq, r2, -1);
+    addRegion(pq, r3, 0);    
+    addRegion(pq, r4, -2);
+    
+    getAndCheckRegion(pq, r4);
+    getAndCheckRegion(pq, r2);
+    getAndCheckRegion(pq, r3);
+    getAndCheckRegion(pq, r1);
+    
+    Assert.assertTrue("Queue should be empty.", pq.size() == 0);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
new file mode 100644
index 0000000..8399175
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java
@@ -0,0 +1,272 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValue.KeyComparator;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
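+/**
+ * Tests for {@link ScanQueryMatcher}: verifies the match codes returned for
+ * explicit-column and wildcard queries, including expired-KeyValue handling.
+ */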
+public class TestQueryMatcher extends HBaseTestCase {
+  private static final boolean PRINT = false;
+
+  private byte[] row1;
+  private byte[] row2;
+  private byte[] fam1;
+  private byte[] fam2;
+  private byte[] col1;
+  private byte[] col2;
+  private byte[] col3;
+  private byte[] col4;
+  private byte[] col5;
+
+  private byte[] data;
+
+  private Get get;
+
+  long ttl = Long.MAX_VALUE;
+  KeyComparator rowComparator;
+  private Scan scan;
+
+  public void setUp() throws Exception {
+    super.setUp();
+    row1 = Bytes.toBytes("row1");
+    row2 = Bytes.toBytes("row2");
+    fam1 = Bytes.toBytes("fam1");
+    fam2 = Bytes.toBytes("fam2");
+    col1 = Bytes.toBytes("col1");
+    col2 = Bytes.toBytes("col2");
+    col3 = Bytes.toBytes("col3");
+    col4 = Bytes.toBytes("col4");
+    col5 = Bytes.toBytes("col5");
+
+    data = Bytes.toBytes("data");
+
+    //Create Get
+    get = new Get(row1);
+    get.addFamily(fam1);
+    get.addColumn(fam2, col2);
+    get.addColumn(fam2, col4);
+    get.addColumn(fam2, col5);
+    this.scan = new Scan(get);
+
+    rowComparator = KeyValue.KEY_COMPARATOR;
+
+  }
+
+  public void testMatch_ExplicitColumns()
+  throws IOException {
+    // Moving up a level from the Tracker tests: use Gets and List<KeyValue>
+    // instead of just byte []
+
+    //Expected result
+    List<MatchCode> expected = new ArrayList<ScanQueryMatcher.MatchCode>();
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.DONE);
+
+    // 2,4,5
+    ScanQueryMatcher qm = new ScanQueryMatcher(scan, fam2,
+        get.getFamilyMap().get(fam2), ttl, rowComparator, 1);
+
+    List<KeyValue> memstore = new ArrayList<KeyValue>();
+    memstore.add(new KeyValue(row1, fam2, col1, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col2, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col3, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col4, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col5, 1, data));
+
+    memstore.add(new KeyValue(row2, fam1, col1, data));
+
+    List<ScanQueryMatcher.MatchCode> actual = new ArrayList<ScanQueryMatcher.MatchCode>();
+    qm.setRow(memstore.get(0).getRow());
+
+    for (KeyValue kv : memstore){
+      actual.add(qm.match(kv));
+    }
+
+    assertEquals(expected.size(), actual.size());
+    for(int i=0; i< expected.size(); i++){
+      assertEquals(expected.get(i), actual.get(i));
+      if(PRINT){
+        System.out.println("expected "+expected.get(i)+
+            ", actual " +actual.get(i));
+      }
+    }
+  }
+
+
+  public void testMatch_Wildcard()
+  throws IOException {
+    // Moving up a level from the Tracker tests: use Gets and List<KeyValue>
+    // instead of just byte []
+
+    //Expected result
+    List<MatchCode> expected = new ArrayList<ScanQueryMatcher.MatchCode>();
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.DONE);
+
+    ScanQueryMatcher qm = new ScanQueryMatcher(scan, fam2, null, ttl, rowComparator, 1);
+
+    List<KeyValue> memstore = new ArrayList<KeyValue>();
+    memstore.add(new KeyValue(row1, fam2, col1, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col2, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col3, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col4, 1, data));
+    memstore.add(new KeyValue(row1, fam2, col5, 1, data));
+    memstore.add(new KeyValue(row2, fam1, col1, 1, data));
+
+    List<ScanQueryMatcher.MatchCode> actual = new ArrayList<ScanQueryMatcher.MatchCode>();
+
+    qm.setRow(memstore.get(0).getRow());
+
+    for(KeyValue kv : memstore) {
+      actual.add(qm.match(kv));
+    }
+
+    assertEquals(expected.size(), actual.size());
+    for(int i=0; i< expected.size(); i++){
+      assertEquals(expected.get(i), actual.get(i));
+      if(PRINT){
+        System.out.println("expected "+expected.get(i)+
+            ", actual " +actual.get(i));
+      }
+    }
+  }
+
+
+  /**
+   * Verify that {@link ScanQueryMatcher} only skips expired KeyValue
+   * instances and does not exit early from the row (skipping
+   * later non-expired KeyValues).  This version mimics a Get with
+   * explicitly specified column qualifiers.
+   *
+   * @throws IOException
+   */
+  public void testMatch_ExpiredExplicit()
+  throws IOException {
+
+    long testTTL = 1000;
+    MatchCode [] expected = new MatchCode[] {
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_COL,
+        ScanQueryMatcher.MatchCode.INCLUDE,
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_COL,
+        ScanQueryMatcher.MatchCode.INCLUDE,
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_ROW,
+        ScanQueryMatcher.MatchCode.DONE
+    };
+
+    ScanQueryMatcher qm = new ScanQueryMatcher(scan, fam2,
+        get.getFamilyMap().get(fam2), testTTL, rowComparator, 1);
+
+    long now = System.currentTimeMillis();
+    KeyValue [] kvs = new KeyValue[] {
+        new KeyValue(row1, fam2, col1, now-100, data),
+        new KeyValue(row1, fam2, col2, now-50, data),
+        new KeyValue(row1, fam2, col3, now-5000, data),
+        new KeyValue(row1, fam2, col4, now-500, data),
+        new KeyValue(row1, fam2, col5, now-10000, data),
+        new KeyValue(row2, fam1, col1, now-10, data)
+    };
+
+    qm.setRow(kvs[0].getRow());
+
+    List<MatchCode> actual = new ArrayList<MatchCode>(kvs.length);
+    for (KeyValue kv : kvs) {
+      actual.add( qm.match(kv) );
+    }
+
+    assertEquals(expected.length, actual.size());
+    for (int i=0; i<expected.length; i++) {
+      if(PRINT){
+        System.out.println("expected "+expected[i]+
+            ", actual " +actual.get(i));
+      }
+      assertEquals(expected[i], actual.get(i));
+    }
+  }
+
+
+  /**
+   * Verify that {@link ScanQueryMatcher} only skips expired KeyValue
+   * instances and does not exit early from the row (skipping
+   * later non-expired KeyValues).  This version mimics a Get with
+   * wildcard-inferred column qualifiers.
+   *
+   * @throws IOException
+   */
+  public void testMatch_ExpiredWildcard()
+  throws IOException {
+
+    long testTTL = 1000;
+    MatchCode [] expected = new MatchCode[] {
+        ScanQueryMatcher.MatchCode.INCLUDE,
+        ScanQueryMatcher.MatchCode.INCLUDE,
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_COL,
+        ScanQueryMatcher.MatchCode.INCLUDE,
+        ScanQueryMatcher.MatchCode.SEEK_NEXT_COL,
+        ScanQueryMatcher.MatchCode.DONE
+    };
+
+    ScanQueryMatcher qm = new ScanQueryMatcher(scan, fam2,
+        null, testTTL, rowComparator, 1);
+
+    long now = System.currentTimeMillis();
+    KeyValue [] kvs = new KeyValue[] {
+        new KeyValue(row1, fam2, col1, now-100, data),
+        new KeyValue(row1, fam2, col2, now-50, data),
+        new KeyValue(row1, fam2, col3, now-5000, data),
+        new KeyValue(row1, fam2, col4, now-500, data),
+        new KeyValue(row1, fam2, col5, now-10000, data),
+        new KeyValue(row2, fam1, col1, now-10, data)
+    };
+    qm.setRow(kvs[0].getRow());
+
+    List<ScanQueryMatcher.MatchCode> actual = new ArrayList<ScanQueryMatcher.MatchCode>(kvs.length);
+    for (KeyValue kv : kvs) {
+      actual.add( qm.match(kv) );
+    }
+
+    assertEquals(expected.length, actual.size());
+    for (int i=0; i<expected.length; i++) {
+      if(PRINT){
+        System.out.println("expected "+expected[i]+
+            ", actual " +actual.get(i));
+      }
+      assertEquals(expected[i], actual.get(i));
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestReadWriteConsistencyControl.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestReadWriteConsistencyControl.java
new file mode 100644
index 0000000..92075b0
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestReadWriteConsistencyControl.java
@@ -0,0 +1,128 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import junit.framework.TestCase;
+
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class TestReadWriteConsistencyControl extends TestCase {
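+  /**
+   * Writer thread: repeatedly begins and completes memstore inserts with a
+   * short random pause in between, flagging failure via the shared status.
+   */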
+  static class Writer implements Runnable {
+    final AtomicBoolean finished;
+    final ReadWriteConsistencyControl rwcc;
+    final AtomicBoolean status;
+
+    Writer(AtomicBoolean finished, ReadWriteConsistencyControl rwcc, AtomicBoolean status) {
+      this.finished = finished;
+      this.rwcc = rwcc;
+      this.status = status;
+    }
+    private Random rnd = new Random();
+    public boolean failed = false;
+
+    public void run() {
+      while (!finished.get()) {
+        ReadWriteConsistencyControl.WriteEntry e = rwcc.beginMemstoreInsert();
+//        System.out.println("Begin write: " + e.getWriteNumber());
+        // Sleep a random 0-499 usec (sleepTime is in usec)
+        int sleepTime = rnd.nextInt(500);
+        // Converted below: sleepTime * 1000 ns = sleepTime usec,
+        // i.e. at most ~500 usec
+        try {
+          if (sleepTime > 0)
+            Thread.sleep(0, sleepTime * 1000);
+        } catch (InterruptedException e1) {
+        }
+        try {
+          rwcc.completeMemstoreInsert(e);
+        } catch (RuntimeException ex) {
+          // got failure
+          System.out.println(ex.toString());
+          ex.printStackTrace();
+          status.set(false);
+          return;
+          // Report failure if possible.
+        }
+      }
+    }
+  }
+
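+  /**
+   * Runs 20 writer threads against a single ReadWriteConsistencyControl for
+   * ten seconds while a reader asserts the read point never moves backwards.
+   */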
+  public void testParallelism() throws Exception {
+    final ReadWriteConsistencyControl rwcc = new ReadWriteConsistencyControl();
+
+    final AtomicBoolean finished = new AtomicBoolean(false);
+
+    // fail flag for the reader thread
+    final AtomicBoolean readerFailed = new AtomicBoolean(false);
+    final AtomicLong failedAt = new AtomicLong();
+    Runnable reader = new Runnable() {
+      public void run() {
+        long prev = rwcc.memstoreReadPoint();
+        while (!finished.get()) {
+          long newPrev = rwcc.memstoreReadPoint();
+          if (newPrev < prev) {
+            // serious problem.
+            System.out.println("Reader got out of order, prev: " +
+            prev + " next was: " + newPrev);
+            readerFailed.set(true);
+            // might as well give up
+            failedAt.set(newPrev);
+            return;
+          }
+        }
+      }
+    };
+
+    // writer thread parallelism.
+    int n = 20;
+    Thread [] writers = new Thread[n];
+    AtomicBoolean [] statuses = new AtomicBoolean[n];
+    Thread readThread = new Thread(reader);
+
+    for (int i = 0 ; i < n ; ++i ) {
+      statuses[i] = new AtomicBoolean(true);
+      writers[i] = new Thread(new Writer(finished, rwcc, statuses[i]));
+      writers[i].start();
+    }
+    readThread.start();
+
+    try {
+      Thread.sleep(10 * 1000);
+    } catch (InterruptedException ex) {
+    }
+
+    finished.set(true);
+
+    readThread.join();
+    for (int i = 0; i < n; ++i) {
+      writers[i].join();
+    }
+
+    // check failure.
+    assertFalse(readerFailed.get());
+    for (int i = 0; i < n; ++i) {
+      assertTrue(statuses[i].get());
+    }
+
+
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java
new file mode 100644
index 0000000..afb3fcc
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java
@@ -0,0 +1,113 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+
+
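+/**
+ * Tests for {@link ScanDeleteTracker}: verifies which cells are considered
+ * deleted under Delete, DeleteColumn and DeleteFamily markers.
+ */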
+public class TestScanDeleteTracker extends HBaseTestCase {
+
+  private ScanDeleteTracker sdt;
+  private long timestamp = 10L;
+  private byte deleteType = 0;
+
+  public void setUp() throws Exception {
+    super.setUp();
+    sdt = new ScanDeleteTracker();
+  }
+
+  public void testDeletedBy_Delete() {
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    deleteType = KeyValue.Type.Delete.getCode();
+
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+    boolean ret = sdt.isDeleted(qualifier, 0, qualifier.length, timestamp);
+    assertEquals(true, ret);
+  }
+
+  public void testDeletedBy_DeleteColumn() {
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    deleteType = KeyValue.Type.DeleteColumn.getCode();
+
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+    timestamp -= 5;
+    boolean ret = sdt.isDeleted(qualifier, 0, qualifier.length, timestamp);
+    assertEquals(true, ret);
+  }
+
+  public void testDeletedBy_DeleteFamily() {
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    deleteType = KeyValue.Type.DeleteFamily.getCode();
+
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+
+    timestamp -= 5;
+    boolean ret = sdt.isDeleted(qualifier, 0, qualifier.length, timestamp);
+    assertEquals(true, ret);
+  }
+
+  public void testDelete_DeleteColumn() {
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    deleteType = KeyValue.Type.Delete.getCode();
+
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+
+    timestamp -= 5;
+    deleteType = KeyValue.Type.DeleteColumn.getCode();
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+
+    timestamp -= 5;
+    boolean ret = sdt.isDeleted(qualifier, 0, qualifier.length, timestamp);
+    assertEquals(true, ret);
+  }
+
+
+  public void testDeleteColumn_Delete() {
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    deleteType = KeyValue.Type.DeleteColumn.getCode();
+
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+
+    qualifier = Bytes.toBytes("qualifier1");
+    deleteType = KeyValue.Type.Delete.getCode();
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+
+    boolean ret = sdt.isDeleted(qualifier, 0, qualifier.length, timestamp);
+    assertEquals(true, ret);
+  }
+
+  // Test the new behavior where we keep the Delete when it is for a specific
+  // timestamp.  We could have just added the last line to the first test,
+  // but we keep them separate.
+  public void testDelete_KeepDelete(){
+    byte [] qualifier = Bytes.toBytes("qualifier");
+    deleteType = KeyValue.Type.Delete.getCode();
+
+    sdt.add(qualifier, 0, qualifier.length, timestamp, deleteType);
+    sdt.isDeleted(qualifier, 0, qualifier.length, timestamp);
+    assertEquals(false ,sdt.isEmpty());
+  }
+
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java
new file mode 100644
index 0000000..6a6dfdf
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java
@@ -0,0 +1,121 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class TestScanWildcardColumnTracker extends HBaseTestCase {
+
+  final static int VERSIONS = 2;
+
+  public void testCheckColumn_Ok() {
+    ScanWildcardColumnTracker tracker =
+      new ScanWildcardColumnTracker(VERSIONS);
+
+    //Create list of qualifiers
+    List<byte[]> qualifiers = new ArrayList<byte[]>();
+    qualifiers.add(Bytes.toBytes("qualifer1"));
+    qualifiers.add(Bytes.toBytes("qualifer2"));
+    qualifiers.add(Bytes.toBytes("qualifer3"));
+    qualifiers.add(Bytes.toBytes("qualifer4"));
+
+    //Setting up expected result
+    List<MatchCode> expected = new ArrayList<MatchCode>();
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+
+    List<ScanQueryMatcher.MatchCode> actual = new ArrayList<MatchCode>();
+
+    for(byte [] qualifier : qualifiers) {
+      ScanQueryMatcher.MatchCode mc = tracker.checkColumn(qualifier, 0,
+          qualifier.length, 1);
+      actual.add(mc);
+    }
+
+    //Compare actual with expected
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
+  public void testCheckColumn_EnforceVersions() {
+    ScanWildcardColumnTracker tracker =
+      new ScanWildcardColumnTracker(VERSIONS);
+
+    //Create list of qualifiers
+    List<byte[]> qualifiers = new ArrayList<byte[]>();
+    qualifiers.add(Bytes.toBytes("qualifer1"));
+    qualifiers.add(Bytes.toBytes("qualifer1"));
+    qualifiers.add(Bytes.toBytes("qualifer1"));
+    qualifiers.add(Bytes.toBytes("qualifer2"));
+
+    //Setting up expected result
+    List<ScanQueryMatcher.MatchCode> expected = new ArrayList<MatchCode>();
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+    expected.add(ScanQueryMatcher.MatchCode.SEEK_NEXT_COL);
+    expected.add(ScanQueryMatcher.MatchCode.INCLUDE);
+
+    List<MatchCode> actual = new ArrayList<ScanQueryMatcher.MatchCode>();
+
+    long timestamp = 0;
+    for(byte [] qualifier : qualifiers) {
+      MatchCode mc = tracker.checkColumn(qualifier, 0, qualifier.length,
+          ++timestamp);
+      actual.add(mc);
+    }
+
+    //Compare actual with expected
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), actual.get(i));
+    }
+  }
+
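+  //Renamed off the "test" prefix so the JUnit 3 runner skips it.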
+  public void DisabledTestCheckColumn_WrongOrder() {
+    ScanWildcardColumnTracker tracker =
+      new ScanWildcardColumnTracker(VERSIONS);
+
+    //Create list of qualifiers
+    List<byte[]> qualifiers = new ArrayList<byte[]>();
+    qualifiers.add(Bytes.toBytes("qualifer2"));
+    qualifiers.add(Bytes.toBytes("qualifer1"));
+
+    boolean ok = false;
+
+    try {
+      for(byte [] qualifier : qualifiers) {
+        tracker.checkColumn(qualifier, 0, qualifier.length, 1);
+      }
+    } catch (Exception e) {
+      ok = true;
+    }
+
+    assertEquals(true, ok);
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java
new file mode 100644
index 0000000..7ff6a2e
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java
@@ -0,0 +1,541 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Writables;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+/**
+ * Test of a long-lived scanner validating as we go.
+ */
+public class TestScanner extends HBaseTestCase {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+
+  private static final byte [] FIRST_ROW = HConstants.EMPTY_START_ROW;
+  private static final byte [][] COLS = { HConstants.CATALOG_FAMILY };
+  private static final byte [][] EXPLICIT_COLS = {
+    HConstants.REGIONINFO_QUALIFIER, HConstants.SERVER_QUALIFIER,
+      // TODO ryan
+      //HConstants.STARTCODE_QUALIFIER
+  };
+
+  static final HTableDescriptor TESTTABLEDESC =
+    new HTableDescriptor("testscanner");
+  static {
+    TESTTABLEDESC.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY,
+      10,  // Ten is an arbitrary number.  Keep versions to help debugging.
+      Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+      HConstants.FOREVER, StoreFile.BloomType.NONE.toString(),
+      HConstants.REPLICATION_SCOPE_LOCAL));
+  }
+  /** HRegionInfo for root region */
+  public static final HRegionInfo REGION_INFO =
+    new HRegionInfo(TESTTABLEDESC, HConstants.EMPTY_BYTE_ARRAY,
+    HConstants.EMPTY_BYTE_ARRAY);
+
+  private static final byte [] ROW_KEY = REGION_INFO.getRegionName();
+
+  private static final long START_CODE = Long.MAX_VALUE;
+
+  private MiniDFSCluster cluster = null;
+  private HRegion r;
+  private HRegionIncommon region;
+
+  @Override
+  public void setUp() throws Exception {
+    cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+    // Set the hbase.rootdir to be the home directory in mini dfs.
+    this.conf.set(HConstants.HBASE_DIR,
+      this.cluster.getFileSystem().getHomeDirectory().toString());
+    super.setUp();
+
+  }
+
+  /**
+   * Test basic stop row filter works.
+   * @throws Exception
+   */
+  public void testStopRow() throws Exception {
+    byte [] startrow = Bytes.toBytes("bbb");
+    byte [] stoprow = Bytes.toBytes("ccc");
+    try {
+      this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+      addContent(this.r, HConstants.CATALOG_FAMILY);
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      // Do simple test of getting one row only first.
+      Scan scan = new Scan(Bytes.toBytes("abc"), Bytes.toBytes("abd"));
+      scan.addFamily(HConstants.CATALOG_FAMILY);
+
+      InternalScanner s = r.getScanner(scan);
+      int count = 0;
+      while (s.next(results)) {
+        count++;
+      }
+      s.close();
+      assertEquals(0, count);
+      // Now do something a bit more involved.
+      scan = new Scan(startrow, stoprow);
+      scan.addFamily(HConstants.CATALOG_FAMILY);
+
+      s = r.getScanner(scan);
+      count = 0;
+      KeyValue kv = null;
+      results = new ArrayList<KeyValue>();
+      for (boolean first = true; s.next(results);) {
+        kv = results.get(0);
+        if (first) {
+          assertTrue(Bytes.BYTES_COMPARATOR.compare(startrow, kv.getRow()) == 0);
+          first = false;
+        }
+        count++;
+      }
+      assertTrue(Bytes.BYTES_COMPARATOR.compare(stoprow, kv.getRow()) > 0);
+      // We got something back.
+      assertTrue(count > 10);
+      s.close();
+    } finally {
+      this.r.close();
+      this.r.getLog().closeAndDelete();
+      shutdownDfs(this.cluster);
+    }
+  }
+
+  void rowPrefixFilter(Scan scan) throws IOException {
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    InternalScanner s = r.getScanner(scan);
+    boolean hasMore = true;
+    while (hasMore) {
+      hasMore = s.next(results);
+      for (KeyValue kv : results) {
+        assertEquals((byte)'a', kv.getRow()[0]);
+        assertEquals((byte)'b', kv.getRow()[1]);
+      }
+      results.clear();
+    }
+    s.close();
+  }
+
+  void rowInclusiveStopFilter(Scan scan, byte[] stopRow) throws IOException {
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    scan.addFamily(HConstants.CATALOG_FAMILY);
+    InternalScanner s = r.getScanner(scan);
+    boolean hasMore = true;
+    while (hasMore) {
+      hasMore = s.next(results);
+      for (KeyValue kv : results) {
+        assertTrue(Bytes.compareTo(kv.getRow(), stopRow) <= 0);
+      }
+      results.clear();
+    }
+    s.close();
+  }
+
+  public void testFilters() throws IOException {
+    try {
+      this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+      addContent(this.r, HConstants.CATALOG_FAMILY);
+      byte [] prefix = Bytes.toBytes("ab");
+      Filter newFilter = new PrefixFilter(prefix);
+      Scan scan = new Scan();
+      scan.setFilter(newFilter);
+      rowPrefixFilter(scan);
+
+      byte[] stopRow = Bytes.toBytes("bbc");
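+      // WhileMatchFilter ends the scan once the wrapped InclusiveStopFilter
+      // stops matching, rather than merely filtering out later rows.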
+      newFilter = new WhileMatchFilter(new InclusiveStopFilter(stopRow));
+      scan = new Scan();
+      scan.setFilter(newFilter);
+      rowInclusiveStopFilter(scan, stopRow);
+
+    } finally {
+      this.r.close();
+      this.r.getLog().closeAndDelete();
+      shutdownDfs(this.cluster);
+    }
+  }
+
+  /**
+   * Test that closing a scanner while a client is using it doesn't throw
+   * NPEs but instead a UnknownScannerException. HBASE-2503
+   * @throws Exception
+   */
+  public void testRaceBetweenClientAndTimeout() throws Exception {
+    try {
+      this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+      addContent(this.r, HConstants.CATALOG_FAMILY);
+      Scan scan = new Scan();
+      InternalScanner s = r.getScanner(scan);
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      try {
+        s.next(results);
+        s.close();
+        s.next(results);
+        fail("We don't want anything more, we should be failing");
+      } catch (UnknownScannerException ex) {
+        // ok!
+        return;
+      }
+    } finally {
+      this.r.close();
+      this.r.getLog().closeAndDelete();
+      shutdownDfs(this.cluster);
+    }
+  }
+
+  /** The main scanner test: write region info, then verify it can be read
+   * back via both scan and get across flushes and region reopens.
+   * @throws IOException
+   */
+  public void testScanner() throws IOException {
+    try {
+      r = createNewHRegion(TESTTABLEDESC, null, null);
+      region = new HRegionIncommon(r);
+
+      // Write information to the meta table
+
+      Put put = new Put(ROW_KEY, System.currentTimeMillis(), null);
+
+      ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
+      DataOutputStream s = new DataOutputStream(byteStream);
+      REGION_INFO.write(s);
+      put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER,
+          byteStream.toByteArray());
+      region.put(put);
+
+      // What we just committed is in the memstore. Verify that we can get
+      // it back both with scanning and get
+
+      scan(false, null);
+      getRegionInfo();
+
+      // Close and re-open
+
+      r.close();
+      r = openClosedRegion(r);
+      region = new HRegionIncommon(r);
+
+      // Verify we can get the data back now that it is on disk.
+
+      scan(false, null);
+      getRegionInfo();
+
+      // Store some new information
+
+      HServerAddress address = new HServerAddress("foo.bar.com:1234");
+
+      put = new Put(ROW_KEY, System.currentTimeMillis(), null);
+      put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
+          Bytes.toBytes(address.toString()));
+
+//      put.add(HConstants.COL_STARTCODE, Bytes.toBytes(START_CODE));
+
+      region.put(put);
+
+      // Validate that we can still get the HRegionInfo, even though it is in
+      // an older row on disk and there is a newer row in the memstore
+
+      scan(true, address.toString());
+      getRegionInfo();
+
+      // flush cache
+
+      region.flushcache();
+
+      // Validate again
+
+      scan(true, address.toString());
+      getRegionInfo();
+
+      // Close and reopen
+
+      r.close();
+      r = openClosedRegion(r);
+      region = new HRegionIncommon(r);
+
+      // Validate again
+
+      scan(true, address.toString());
+      getRegionInfo();
+
+      // Now update the information again
+
+      address = new HServerAddress("bar.foo.com:4321");
+
+      put = new Put(ROW_KEY, System.currentTimeMillis(), null);
+
+      put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
+          Bytes.toBytes(address.toString()));
+      region.put(put);
+
+      // Validate again
+
+      scan(true, address.toString());
+      getRegionInfo();
+
+      // flush cache
+
+      region.flushcache();
+
+      // Validate again
+
+      scan(true, address.toString());
+      getRegionInfo();
+
+      // Close and reopen
+
+      r.close();
+      r = openClosedRegion(r);
+      region = new HRegionIncommon(r);
+
+      // Validate again
+
+      scan(true, address.toString());
+      getRegionInfo();
+
+      // clean up
+
+      r.close();
+      r.getLog().closeAndDelete();
+
+    } finally {
+      shutdownDfs(cluster);
+    }
+  }
+
+  /** Compare the HRegionInfo we read from HBase to what we stored */
+  private void validateRegionInfo(byte [] regionBytes) throws IOException {
+    HRegionInfo info =
+      (HRegionInfo) Writables.getWritable(regionBytes, new HRegionInfo());
+
+    assertEquals(REGION_INFO.getRegionId(), info.getRegionId());
+    assertEquals(0, info.getStartKey().length);
+    assertEquals(0, info.getEndKey().length);
+    assertEquals(0, Bytes.compareTo(info.getRegionName(), REGION_INFO.getRegionName()));
+    assertEquals(0, info.getTableDesc().compareTo(REGION_INFO.getTableDesc()));
+  }
+
+  /** Use a scanner to get the region info and then validate the results */
+  private void scan(boolean validateStartcode, String serverName)
+  throws IOException {
+    InternalScanner scanner = null;
+    Scan scan = null;
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    byte [][][] scanColumns = {
+        COLS,
+        EXPLICIT_COLS
+    };
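+    // Note: the loop below runs the same explicit-column scan for each entry;
+    // scanColumns is only used for its length and is not consulted in the body.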
+
+    for(int i = 0; i < scanColumns.length; i++) {
+      try {
+        scan = new Scan(FIRST_ROW);
+        for (int ii = 0; ii < EXPLICIT_COLS.length; ii++) {
+          scan.addColumn(COLS[0],  EXPLICIT_COLS[ii]);
+        }
+        scanner = r.getScanner(scan);
+        while (scanner.next(results)) {
+          assertTrue(hasColumn(results, HConstants.CATALOG_FAMILY,
+              HConstants.REGIONINFO_QUALIFIER));
+          byte [] val = getColumn(results, HConstants.CATALOG_FAMILY,
+              HConstants.REGIONINFO_QUALIFIER).getValue();
+          validateRegionInfo(val);
+          if(validateStartcode) {
+//            assertTrue(hasColumn(results, HConstants.CATALOG_FAMILY,
+//                HConstants.STARTCODE_QUALIFIER));
+//            val = getColumn(results, HConstants.CATALOG_FAMILY,
+//                HConstants.STARTCODE_QUALIFIER).getValue();
+            assertNotNull(val);
+            assertFalse(val.length == 0);
+            long startCode = Bytes.toLong(val);
+            assertEquals(START_CODE, startCode);
+          }
+
+          if(serverName != null) {
+            assertTrue(hasColumn(results, HConstants.CATALOG_FAMILY,
+                HConstants.SERVER_QUALIFIER));
+            val = getColumn(results, HConstants.CATALOG_FAMILY,
+                HConstants.SERVER_QUALIFIER).getValue();
+            assertNotNull(val);
+            assertFalse(val.length == 0);
+            String server = Bytes.toString(val);
+            assertEquals(0, server.compareTo(serverName));
+          }
+        }
+      } finally {
+        InternalScanner s = scanner;
+        scanner = null;
+        if(s != null) {
+          s.close();
+        }
+      }
+    }
+  }
+
+  private boolean hasColumn(final List<KeyValue> kvs, final byte [] family,
+      final byte [] qualifier) {
+    for (KeyValue kv: kvs) {
+      if (kv.matchingFamily(family) && kv.matchingQualifier(qualifier)) {
+        return true;
+      }
+    }
+    return false;
+  }
+
+  private KeyValue getColumn(final List<KeyValue> kvs, final byte [] family,
+      final byte [] qualifier) {
+    for (KeyValue kv: kvs) {
+      if (kv.matchingFamily(family) && kv.matchingQualifier(qualifier)) {
+        return kv;
+      }
+    }
+    return null;
+  }
+
+
+  /** Use get to retrieve the HRegionInfo and validate it */
+  private void getRegionInfo() throws IOException {
+    Get get = new Get(ROW_KEY);
+    get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
+    Result result = region.get(get, null);
+    byte [] bytes = result.value();
+    validateRegionInfo(bytes);
+  }
+
+  /**
+   * Tests doing a sync flush in the middle of a scan.  This essentially exercises
+   * the StoreScanner update-readers code.  Not highly concurrent, since it all runs on one thread.
+   * HBase-910.
+   * @throws Exception
+   */
+  public void testScanAndSyncFlush() throws Exception {
+    this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+    HRegionIncommon hri = new HRegionIncommon(r);
+    try {
+        LOG.info("Added: " + addContent(hri, Bytes.toString(HConstants.CATALOG_FAMILY),
+            Bytes.toString(HConstants.REGIONINFO_QUALIFIER)));
+      int count = count(hri, -1, false);
+      assertEquals(count, count(hri, 100, false)); // do a sync flush.
+    } catch (Exception e) {
+      LOG.error("Failed", e);
+      throw e;
+    } finally {
+      this.r.close();
+      this.r.getLog().closeAndDelete();
+      shutdownDfs(cluster);
+    }
+  }
+
+  /**
+   * Tests doing a concurrent flush (using a second thread) while scanning.  This exercises both
+   * the StoreScanner update-readers path and the memstore -> snapshot -> store file transition.
+   *
+   * @throws Exception
+   */
+  public void testScanAndRealConcurrentFlush() throws Exception {
+    this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+    HRegionIncommon hri = new HRegionIncommon(r);
+    try {
+        LOG.info("Added: " + addContent(hri, Bytes.toString(HConstants.CATALOG_FAMILY),
+            Bytes.toString(HConstants.REGIONINFO_QUALIFIER)));
+      int count = count(hri, -1, false);
+      assertEquals(count, count(hri, 100, true)); // do a true concurrent background thread flush
+    } catch (Exception e) {
+      LOG.error("Failed", e);
+      throw e;
+    } finally {
+      this.r.close();
+      this.r.getLog().closeAndDelete();
+      shutdownDfs(cluster);
+    }
+  }
+
+
+  /*
+   * @param hri Region
+   * @param flushIndex At what row we start the flush.
+   * @param concurrent if the flush should be concurrent or sync.
+   * @return Count of rows found.
+   * @throws IOException
+   */
+  private int count(final HRegionIncommon hri, final int flushIndex,
+                    boolean concurrent)
+  throws IOException {
+    LOG.info("Taking out counting scan");
+    ScannerIncommon s = hri.getScanner(HConstants.CATALOG_FAMILY, EXPLICIT_COLS,
+        HConstants.EMPTY_START_ROW, HConstants.LATEST_TIMESTAMP);
+    List<KeyValue> values = new ArrayList<KeyValue>();
+    int count = 0;
+    boolean justFlushed = false;
+    while (s.next(values)) {
+      if (justFlushed) {
+        LOG.info("after next() just after next flush");
+        justFlushed=false;
+      }
+      count++;
+      if (flushIndex == count) {
+        LOG.info("Starting flush at flush index " + flushIndex);
+        Thread t = new Thread() {
+          public void run() {
+            try {
+              hri.flushcache();
+              LOG.info("Finishing flush");
+            } catch (IOException e) {
+              LOG.info("Failed flush cache");
+            }
+          }
+        };
+        if (concurrent) {
+          t.start(); // concurrently flush.
+        } else {
+          t.run(); // sync flush
+        }
+        LOG.info("Continuing on after kicking off background flush");
+        justFlushed = true;
+      }
+    }
+    s.close();
+    LOG.info("Found " + count + " items");
+    return count;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
new file mode 100644
index 0000000..67a7089
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java
@@ -0,0 +1,258 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.PairOfSameType;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+/**
+ * Test the {@link SplitTransaction} class against an HRegion (as opposed to
+ * running cluster).
+ */
+public class TestSplitTransaction {
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private final Path testdir =
+    HBaseTestingUtility.getTestDir(this.getClass().getName());
+  private HRegion parent;
+  private HLog wal;
+  private FileSystem fs;
+  private static final byte [] STARTROW = new byte [] {'a', 'a', 'a'};
+  // '{' is the next ASCII char after 'z'.
+  private static final byte [] ENDROW = new byte [] {'{', '{', '{'};
+  private static final byte [] GOOD_SPLIT_ROW = new byte [] {'d', 'd', 'd'};
+  private static final byte [] CF = HConstants.CATALOG_FAMILY;
+
+  @Before public void setup() throws IOException {
+    this.fs = FileSystem.get(TEST_UTIL.getConfiguration());
+    this.fs.delete(this.testdir, true);
+    this.wal = new HLog(fs, new Path(this.testdir, "logs"),
+      new Path(this.testdir, "archive"),
+      TEST_UTIL.getConfiguration());
+    this.parent = createRegion(this.testdir, this.wal);
+    TEST_UTIL.getConfiguration().setBoolean("hbase.testing.nocluster", true);
+  }
+
+  @After public void teardown() throws IOException {
+    if (this.parent != null && !this.parent.isClosed()) this.parent.close();
+    if (this.fs.exists(this.parent.getRegionDir()) &&
+        !this.fs.delete(this.parent.getRegionDir(), true)) {
+      throw new IOException("Failed delete of " + this.parent.getRegionDir());
+    }
+    if (this.wal != null) this.wal.closeAndDelete();
+    this.fs.delete(this.testdir, true);
+  }
+
+  /**
+   * Test straight prepare works.  Tries to split on {@link #GOOD_SPLIT_ROW}
+   * @throws IOException
+   */
+  @Test public void testPrepare() throws IOException {
+    prepareGOOD_SPLIT_ROW();
+  }
+
+  private SplitTransaction prepareGOOD_SPLIT_ROW() {
+    SplitTransaction st = new SplitTransaction(this.parent, GOOD_SPLIT_ROW);
+    assertTrue(st.prepare());
+    return st;
+  }
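+
+  // Lifecycle exercised in this class: prepare() validates the split row,
+  // execute() creates the two daughter regions, and rollback() undoes a
+  // half-finished split when execute() fails.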
+
+  /**
+   * Pass an unreasonable split row.
+   */
+  @Test public void testPrepareWithBadSplitRow() throws IOException {
+    // Pass start row as split key.
+    SplitTransaction st = new SplitTransaction(this.parent, STARTROW);
+    assertFalse(st.prepare());
+    st = new SplitTransaction(this.parent, HConstants.EMPTY_BYTE_ARRAY);
+    assertFalse(st.prepare());
+    st = new SplitTransaction(this.parent, new byte [] {'A', 'A', 'A'});
+    assertFalse(st.prepare());
+    st = new SplitTransaction(this.parent, ENDROW);
+    assertFalse(st.prepare());
+  }
+
+  @Test public void testPrepareWithClosedRegion() throws IOException {
+    this.parent.close();
+    SplitTransaction st = new SplitTransaction(this.parent, GOOD_SPLIT_ROW);
+    assertFalse(st.prepare());
+  }
+
+  @Test public void testWholesomeSplit() throws IOException {
+    final int rowcount = TEST_UTIL.loadRegion(this.parent, CF);
+    assertTrue(rowcount > 0);
+    int parentRowCount = countRows(this.parent);
+    assertEquals(rowcount, parentRowCount);
+
+    // Start transaction.
+    SplitTransaction st = prepareGOOD_SPLIT_ROW();
+
+    // Run the execute.  Look at what it returns.
+    Server mockServer = Mockito.mock(Server.class);
+    when(mockServer.getConfiguration()).thenReturn(TEST_UTIL.getConfiguration());
+    PairOfSameType<HRegion> daughters = st.execute(mockServer, null);
+    // Do some assertions about execution.
+    assertTrue(this.fs.exists(st.getSplitDir()));
+    // Assert the parent region is closed.
+    assertTrue(this.parent.isClosed());
+
+    // Assert splitdir is empty -- because its content will have been moved out
+    // to be under the daughter region dirs.
+    assertEquals(0, this.fs.listStatus(st.getSplitDir()).length);
+    // Check daughters have correct key span.
+    assertTrue(Bytes.equals(this.parent.getStartKey(),
+      daughters.getFirst().getStartKey()));
+    assertTrue(Bytes.equals(GOOD_SPLIT_ROW,
+      daughters.getFirst().getEndKey()));
+    assertTrue(Bytes.equals(daughters.getSecond().getStartKey(),
+      GOOD_SPLIT_ROW));
+    assertTrue(Bytes.equals(this.parent.getEndKey(),
+      daughters.getSecond().getEndKey()));
+    // Count rows.
+    int daughtersRowCount = 0;
+    for (HRegion r: daughters) {
+      // Open so we can count its content.
+      HRegion openRegion = HRegion.openHRegion(r.getRegionInfo(),
+        r.getLog(), r.getConf());
+      try {
+        int count = countRows(openRegion);
+        assertTrue(count > 0 && count != rowcount);
+        daughtersRowCount += count;
+      } finally {
+        openRegion.close();
+      }
+    }
+    assertEquals(rowcount, daughtersRowCount);
+    // Assert the write lock is no longer held on parent
+    assertTrue(!this.parent.lock.writeLock().isHeldByCurrentThread());
+  }
+
+  @Test public void testRollback() throws IOException {
+    final int rowcount = TEST_UTIL.loadRegion(this.parent, CF);
+    assertTrue(rowcount > 0);
+    int parentRowCount = countRows(this.parent);
+    assertEquals(rowcount, parentRowCount);
+
+    // Start transaction.
+    SplitTransaction st = prepareGOOD_SPLIT_ROW();
+    SplitTransaction spiedUponSt = spy(st);
+    when(spiedUponSt.createDaughterRegion(spiedUponSt.getSecondDaughter(), null)).
+      thenThrow(new MockedFailedDaughterCreation());
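+    // The spy delegates to the real transaction but makes creation of the
+    // second daughter region fail, forcing execute() down the rollback path.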
+    // Run the execute.  Look at what it returns.
+    boolean expectedException = false;
+    Server mockServer = Mockito.mock(Server.class);
+    when(mockServer.getConfiguration()).thenReturn(TEST_UTIL.getConfiguration());
+    try {
+      spiedUponSt.execute(mockServer, null);
+    } catch (MockedFailedDaughterCreation e) {
+      expectedException = true;
+    }
+    assertTrue(expectedException);
+    // Run rollback
+    spiedUponSt.rollback(null);
+
+    // Assert I can scan parent.
+    int parentRowCount2 = countRows(this.parent);
+    assertEquals(parentRowCount, parentRowCount2);
+
+    // Assert rollback cleaned up stuff in fs
+    assertTrue(!this.fs.exists(HRegion.getRegionDir(this.testdir, st.getFirstDaughter())));
+    assertTrue(!this.fs.exists(HRegion.getRegionDir(this.testdir, st.getSecondDaughter())));
+    assertTrue(!this.parent.lock.writeLock().isHeldByCurrentThread());
+
+    // Now retry the split but do not throw an exception this time.
+    assertTrue(st.prepare());
+    PairOfSameType<HRegion> daughters = st.execute(mockServer, null);
+    // Count rows.
+    int daughtersRowCount = 0;
+    for (HRegion r: daughters) {
+      // Open so we can count its content.
+      HRegion openRegion = HRegion.openHRegion(r.getRegionInfo(),
+        r.getLog(), r.getConf());
+      try {
+        int count = countRows(openRegion);
+        assertTrue(count > 0 && count != rowcount);
+        daughtersRowCount += count;
+      } finally {
+        openRegion.close();
+      }
+    }
+    assertEquals(rowcount, daughtersRowCount);
+    // Assert the write lock is no longer held on parent
+    assertTrue(!this.parent.lock.writeLock().isHeldByCurrentThread());
+  }
+
+  /**
+   * Exception used in this class only.
+   */
+  @SuppressWarnings("serial")
+  private class MockedFailedDaughterCreation extends IOException {}
+
+  private int countRows(final HRegion r) throws IOException {
+    int rowcount = 0;
+    InternalScanner scanner = r.getScanner(new Scan());
+    try {
+      List<KeyValue> kvs = new ArrayList<KeyValue>();
+      boolean hasNext = true;
+      while (hasNext) {
+        hasNext = scanner.next(kvs);
+        if (!kvs.isEmpty()) rowcount++;
+      }
+    } finally {
+      scanner.close();
+    }
+    return rowcount;
+  }
+
+  static HRegion createRegion(final Path testdir, final HLog wal)
+  throws IOException {
+    // Make a region with start and end keys.  Use 'aaa' to '{{{' (the char
+    // after 'z') so the load region utility can add rows between 'aaa' and 'zzz'.
+    HTableDescriptor htd = new HTableDescriptor("table");
+    HColumnDescriptor hcd = new HColumnDescriptor(CF);
+    htd.addFamily(hcd);
+    HRegionInfo hri = new HRegionInfo(htd, STARTROW, ENDROW);
+    return HRegion.openHRegion(hri, wal, TEST_UTIL.getConfiguration());
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
new file mode 100644
index 0000000..de6f097
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
@@ -0,0 +1,633 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.lang.ref.SoftReference;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.concurrent.ConcurrentSkipListSet;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.FilterFileSystem;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper;
+import org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge;
+import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;
+
+import com.google.common.base.Joiner;
+
+/**
+ * Test class for the Store
+ */
+public class TestStore extends TestCase {
+  public static final Log LOG = LogFactory.getLog(TestStore.class);
+
+  Store store;
+  byte [] table = Bytes.toBytes("table");
+  byte [] family = Bytes.toBytes("family");
+
+  byte [] row = Bytes.toBytes("row");
+  byte [] row2 = Bytes.toBytes("row2");
+  byte [] qf1 = Bytes.toBytes("qf1");
+  byte [] qf2 = Bytes.toBytes("qf2");
+  byte [] qf3 = Bytes.toBytes("qf3");
+  byte [] qf4 = Bytes.toBytes("qf4");
+  byte [] qf5 = Bytes.toBytes("qf5");
+  byte [] qf6 = Bytes.toBytes("qf6");
+
+  NavigableSet<byte[]> qualifiers =
+    new ConcurrentSkipListSet<byte[]>(Bytes.BYTES_COMPARATOR);
+
+  List<KeyValue> expected = new ArrayList<KeyValue>();
+  List<KeyValue> result = new ArrayList<KeyValue>();
+
+  long id = System.currentTimeMillis();
+  Get get = new Get(row);
+
+  private static final String DIR = HBaseTestingUtility.getTestDir() + "/TestStore/";
+
+  /**
+   * Setup
+   * @throws IOException
+   */
+  @Override
+  public void setUp() throws IOException {
+    qualifiers.add(qf1);
+    qualifiers.add(qf3);
+    qualifiers.add(qf5);
+
+    Iterator<byte[]> iter = qualifiers.iterator();
+    while(iter.hasNext()){
+      byte [] next = iter.next();
+      expected.add(new KeyValue(row, family, next, 1, (byte[])null));
+      get.addColumn(family, next);
+    }
+  }
+
+  private void init(String methodName) throws IOException {
+    init(methodName, HBaseConfiguration.create());
+  }
+
+  private void init(String methodName, Configuration conf)
+  throws IOException {
+    //Setting up a Store
+    Path basedir = new Path(DIR+methodName);
+    Path logdir = new Path(DIR+methodName+"/logs");
+    Path oldLogDir = new Path(basedir, HConstants.HREGION_OLDLOGDIR_NAME);
+    HColumnDescriptor hcd = new HColumnDescriptor(family);
+    FileSystem fs = FileSystem.get(conf);
+
+    fs.delete(logdir, true);
+
+    HTableDescriptor htd = new HTableDescriptor(table);
+    htd.addFamily(hcd);
+    HRegionInfo info = new HRegionInfo(htd, null, null, false);
+    HLog hlog = new HLog(fs, logdir, oldLogDir, conf);
+    HRegion region = new HRegion(basedir, hlog, fs, conf, info, null);
+
+    store = new Store(basedir, region, hcd, fs, conf);
+  }
+
+
+  //////////////////////////////////////////////////////////////////////////////
+  // Get tests
+  //////////////////////////////////////////////////////////////////////////////
+
+  /**
+   * Test for hbase-1686.
+   * @throws IOException
+   */
+  public void testEmptyStoreFile() throws IOException {
+    init(this.getName());
+    // Write a store file.
+    this.store.add(new KeyValue(row, family, qf1, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf2, 1, (byte[])null));
+    flush(1);
+    // Now put in place an empty store file.  It's a little tricky.  We have
+    // to do it manually with a hacked-in sequence id.
+    StoreFile f = this.store.getStorefiles().get(0);
+    Path storedir = f.getPath().getParent();
+    long seqid = f.getMaxSequenceId();
+    Configuration c = HBaseConfiguration.create();
+    FileSystem fs = FileSystem.get(c);
+    StoreFile.Writer w = StoreFile.createWriter(fs, storedir,
+        StoreFile.DEFAULT_BLOCKSIZE_SMALL);
+    w.appendMetadata(seqid + 1, false);
+    w.close();
+    this.store.close();
+    // Reopen it... should pick up two files
+    this.store = new Store(storedir.getParent().getParent(),
+      this.store.getHRegion(),
+      this.store.getFamily(), fs, c);
+    System.out.println(this.store.getHRegionInfo().getEncodedName());
+    assertEquals(2, this.store.getStorefilesCount());
+
+    result = HBaseTestingUtility.getFromStoreFile(store,
+        get.getRow(),
+        qualifiers);
+    assertEquals(1, result.size());
+  }
+
+  /**
+   * Getting data from memstore only
+   * @throws IOException
+   */
+  public void testGet_FromMemStoreOnly() throws IOException {
+    init(this.getName());
+
+    //Put data in memstore
+    this.store.add(new KeyValue(row, family, qf1, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf2, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf3, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf4, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf5, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf6, 1, (byte[])null));
+
+    //Get
+    result = HBaseTestingUtility.getFromStoreFile(store,
+        get.getRow(), qualifiers);
+
+    //Compare
+    assertCheck();
+  }
+
+  /**
+   * Getting data from files only
+   * @throws IOException
+   */
+  public void testGet_FromFilesOnly() throws IOException {
+    init(this.getName());
+
+    //Put data in memstore
+    this.store.add(new KeyValue(row, family, qf1, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf2, 1, (byte[])null));
+    //flush
+    flush(1);
+
+    //Add more data
+    this.store.add(new KeyValue(row, family, qf3, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf4, 1, (byte[])null));
+    //flush
+    flush(2);
+
+    //Add more data
+    this.store.add(new KeyValue(row, family, qf5, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf6, 1, (byte[])null));
+    //flush
+    flush(3);
+
+    //Get
+    result = HBaseTestingUtility.getFromStoreFile(store,
+        get.getRow(),
+        qualifiers);
+    //this.store.get(get, qualifiers, result);
+
+    //Need to sort the result since multiple files
+    Collections.sort(result, KeyValue.COMPARATOR);
+
+    //Compare
+    assertCheck();
+  }
+
+  /**
+   * Getting data from memstore and files
+   * @throws IOException
+   */
+  public void testGet_FromMemStoreAndFiles() throws IOException {
+    init(this.getName());
+
+    //Put data in memstore
+    this.store.add(new KeyValue(row, family, qf1, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf2, 1, (byte[])null));
+    //flush
+    flush(1);
+
+    //Add more data
+    this.store.add(new KeyValue(row, family, qf3, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf4, 1, (byte[])null));
+    //flush
+    flush(2);
+
+    //Add more data
+    this.store.add(new KeyValue(row, family, qf5, 1, (byte[])null));
+    this.store.add(new KeyValue(row, family, qf6, 1, (byte[])null));
+
+    //Get
+    result = HBaseTestingUtility.getFromStoreFile(store,
+        get.getRow(), qualifiers);
+
+    //Need to sort the result since multiple files
+    Collections.sort(result, KeyValue.COMPARATOR);
+
+    //Compare
+    assertCheck();
+  }
+
+  private void flush(int storeFilesSize) throws IOException {
+    this.store.snapshot();
+    flushStore(store, id++);
+    assertEquals(storeFilesSize, this.store.getStorefiles().size());
+    assertEquals(0, this.store.memstore.kvset.size());
+  }
+
+  private void assertCheck() {
+    assertEquals(expected.size(), result.size());
+    for(int i=0; i<expected.size(); i++) {
+      assertEquals(expected.get(i), result.get(i));
+    }
+  }
+
+  //////////////////////////////////////////////////////////////////////////////
+  // IncrementColumnValue tests
+  //////////////////////////////////////////////////////////////////////////////
+  /*
+   * test the internal details of how ICV works, especially during a flush scenario.
+   */
+  public void testIncrementColumnValue_ICVDuringFlush()
+      throws IOException, InterruptedException {
+    init(this.getName());
+
+    long oldValue = 1L;
+    long newValue = 3L;
+    this.store.add(new KeyValue(row, family, qf1,
+        System.currentTimeMillis(),
+        Bytes.toBytes(oldValue)));
+
+    // snapshot the store.
+    this.store.snapshot();
+
+    // add other things:
+    this.store.add(new KeyValue(row, family, qf2,
+        System.currentTimeMillis(),
+        Bytes.toBytes(oldValue)));
+
+    // update during the snapshot.
+    long ret = this.store.updateColumnValue(row, family, qf1, newValue);
+
+    // memstore should have grown by some amount.
+    assertTrue(ret > 0);
+
+    // then flush.
+    flushStore(store, id++);
+    assertEquals(1, this.store.getStorefiles().size());
+    // from the one we inserted up there, and a new one
+    assertEquals(2, this.store.memstore.kvset.size());
+
+    // how many key/values for this row are there?
+    Get get = new Get(row);
+    get.addColumn(family, qf1);
+    get.setMaxVersions(); // all versions.
+    List<KeyValue> results = new ArrayList<KeyValue>();
+
+    results = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertEquals(2, results.size());
+
+    long ts1 = results.get(0).getTimestamp();
+    long ts2 = results.get(1).getTimestamp();
+
+    assertTrue(ts1 > ts2);
+
+    assertEquals(newValue, Bytes.toLong(results.get(0).getValue()));
+    assertEquals(oldValue, Bytes.toLong(results.get(1).getValue()));
+  }
+
+  public void testICV_negMemstoreSize() throws IOException {
+    init(this.getName());
+
+    long time = 100;
+    ManualEnvironmentEdge ee = new ManualEnvironmentEdge();
+    ee.setValue(time);
+    EnvironmentEdgeManagerTestHelper.injectEdge(ee);
+    long newValue = 3L;
+    long size = 0;
+
+
+    size += this.store.add(new KeyValue(Bytes.toBytes("200909091000"), family, qf1,
+        System.currentTimeMillis(),
+        Bytes.toBytes(newValue)));
+    size += this.store.add(new KeyValue(Bytes.toBytes("200909091200"), family, qf1,
+        System.currentTimeMillis(),
+        Bytes.toBytes(newValue)));
+    size += this.store.add(new KeyValue(Bytes.toBytes("200909091300"), family, qf1,
+        System.currentTimeMillis(),
+        Bytes.toBytes(newValue)));
+    size += this.store.add(new KeyValue(Bytes.toBytes("200909091400"), family, qf1,
+        System.currentTimeMillis(),
+        Bytes.toBytes(newValue)));
+    size += this.store.add(new KeyValue(Bytes.toBytes("200909091500"), family, qf1,
+        System.currentTimeMillis(),
+        Bytes.toBytes(newValue)));
+
+
+    for ( int i = 0 ; i < 10000 ; ++i) {
+      newValue++;
+
+      long ret = this.store.updateColumnValue(row, family, qf1, newValue);
+      long ret2 = this.store.updateColumnValue(row2, family, qf1, newValue);
+
+      if (ret != 0) System.out.println("ret: " + ret);
+      if (ret2 != 0) System.out.println("ret2: " + ret2);
+
+      assertTrue("ret: " + ret, ret >= 0);
+      size += ret;
+      assertTrue("ret2: " + ret2, ret2 >= 0);
+      size += ret2;
+
+
+      if (i % 1000 == 0)
+        ee.setValue(++time);
+    }
+
+    long computedSize=0;
+    for (KeyValue kv : this.store.memstore.kvset) {
+      long kvsize = this.store.memstore.heapSizeChange(kv, true);
+      //System.out.println(kv + " size= " + kvsize + " kvsize= " + kv.heapSize());
+      computedSize += kvsize;
+    }
+    assertEquals(computedSize, size);
+  }
+
+  public void testIncrementColumnValue_SnapshotFlushCombo() throws Exception {
+    ManualEnvironmentEdge mee = new ManualEnvironmentEdge();
+    EnvironmentEdgeManagerTestHelper.injectEdge(mee);
+    init(this.getName());
+
+    long oldValue = 1L;
+    long newValue = 3L;
+    this.store.add(new KeyValue(row, family, qf1,
+        EnvironmentEdgeManager.currentTimeMillis(),
+        Bytes.toBytes(oldValue)));
+
+    // snapshot the store.
+    this.store.snapshot();
+
+    // update during the snapshot, using the exact same TS as the Put
+    long ret = this.store.updateColumnValue(row, family, qf1, newValue);
+
+    // memstore should have grown by some amount.
+    assertTrue(ret > 0);
+
+    // then flush.
+    flushStore(store, id++);
+    assertEquals(1, this.store.getStorefiles().size());
+    assertEquals(1, this.store.memstore.kvset.size());
+
+    // now increment again:
+    newValue += 1;
+    this.store.updateColumnValue(row, family, qf1, newValue);
+
+    // at this point we have a TS=1 in snapshot, and a TS=2 in kvset, so increment again:
+    newValue += 1;
+    this.store.updateColumnValue(row, family, qf1, newValue);
+
+    // the second TS should be TS=2 or higher, even though 'time=1' right now.
+
+
+    // how many key/values for this row are there?
+    Get get = new Get(row);
+    get.addColumn(family, qf1);
+    get.setMaxVersions(); // all versions.
+    List<KeyValue> results = new ArrayList<KeyValue>();
+
+    results = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertEquals(2, results.size());
+
+    long ts1 = results.get(0).getTimestamp();
+    long ts2 = results.get(1).getTimestamp();
+
+    assertTrue(ts1 > ts2);
+    assertEquals(newValue, Bytes.toLong(results.get(0).getValue()));
+    assertEquals(oldValue, Bytes.toLong(results.get(1).getValue()));
+
+    mee.setValue(2); // time goes up slightly
+    newValue += 1;
+    this.store.updateColumnValue(row, family, qf1, newValue);
+
+    results = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertEquals(2, results.size());
+
+    ts1 = results.get(0).getTimestamp();
+    ts2 = results.get(1).getTimestamp();
+
+    assertTrue(ts1 > ts2);
+    assertEquals(newValue, Bytes.toLong(results.get(0).getValue()));
+    assertEquals(oldValue, Bytes.toLong(results.get(1).getValue()));
+  }
+
+  public void testHandleErrorsInFlush() throws Exception {
+    LOG.info("Setting up a faulty file system that cannot write");
+
+    final Configuration conf = HBaseConfiguration.create();
+    User user = User.createUserForTesting(conf,
+        "testhandleerrorsinflush", new String[]{"foo"});
+    // Inject our faulty LocalFileSystem
+    conf.setClass("fs.file.impl", FaultyFileSystem.class,
+        FileSystem.class);
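+    // This swaps the implementation behind the file:// scheme; running the
+    // test body as a separate user below yields a fresh FileSystem cache
+    // entry, so the faulty class is actually picked up.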
+    user.runAs(new PrivilegedExceptionAction<Object>() {
+      public Object run() throws Exception {
+        // Make sure it worked (above is sensitive to caching details in hadoop core)
+        FileSystem fs = FileSystem.get(conf);
+        assertEquals(FaultyFileSystem.class, fs.getClass());
+
+        // Initialize region
+        init(getName(), conf);
+
+        LOG.info("Adding some data");
+        store.add(new KeyValue(row, family, qf1, 1, (byte[])null));
+        store.add(new KeyValue(row, family, qf2, 1, (byte[])null));
+        store.add(new KeyValue(row, family, qf3, 1, (byte[])null));
+
+        LOG.info("Before flush, we should have no files");
+        FileStatus[] files = fs.listStatus(store.getHomedir());
+        Path[] paths = FileUtil.stat2Paths(files);
+        System.err.println("Got paths: " + Joiner.on(",").join(paths));
+        assertEquals(0, paths.length);
+
+        //flush
+        try {
+          LOG.info("Flushing");
+          flush(1);
+          fail("Didn't bubble up IOE!");
+        } catch (IOException ioe) {
+          assertTrue(ioe.getMessage().contains("Fault injected"));
+        }
+
+        LOG.info("After failed flush, we should still have no files!");
+        files = fs.listStatus(store.getHomedir());
+        paths = FileUtil.stat2Paths(files);
+        System.err.println("Got paths: " + Joiner.on(",").join(paths));
+        assertEquals(0, paths.length);
+        return null;
+      }
+    });
+  }
+
+
+  static class FaultyFileSystem extends FilterFileSystem {
+    List<SoftReference<FaultyOutputStream>> outStreams =
+      new ArrayList<SoftReference<FaultyOutputStream>>();
+    private long faultPos = 200;
+
+    public FaultyFileSystem() {
+      super(new LocalFileSystem());
+      System.err.println("Creating faulty!");
+    }
+
+    @Override
+    public FSDataOutputStream create(Path p) throws IOException {
+      return new FaultyOutputStream(super.create(p), faultPos);
+    }
+
+  }
+
+  static class FaultyOutputStream extends FSDataOutputStream {
+    volatile long faultPos = Long.MAX_VALUE;
+
+    public FaultyOutputStream(FSDataOutputStream out,
+        long faultPos) throws IOException {
+      super(out, null);
+      this.faultPos = faultPos;
+    }
+
+    @Override
+    public void write(byte[] buf, int offset, int length) throws IOException {
+      System.err.println("faulty stream write at pos " + getPos());
+      injectFault();
+      super.write(buf, offset, length);
+    }
+
+    private void injectFault() throws IOException {
+      if (getPos() >= faultPos) {
+        throw new IOException("Fault injected");
+      }
+    }
+  }
+
+
+
+  private static void flushStore(Store store, long id) throws IOException {
+    StoreFlusher storeFlusher = store.getStoreFlusher(id);
+    storeFlusher.prepare();
+    storeFlusher.flushCache();
+    storeFlusher.commit();
+  }
+
+
+
+  /**
+   * Generate a list of KeyValues for testing based on given parameters
+   * @param timestamps timestamps to use for each generated row
+   * @param numRows number of rows to generate
+   * @param qualifier column qualifier to use
+   * @param family column family to use
+   * @return the generated list of KeyValues
+   */
+  List<KeyValue> getKeyValueSet(long[] timestamps, int numRows,
+      byte[] qualifier, byte[] family) {
+    List<KeyValue> kvList = new ArrayList<KeyValue>();
+    for (int i=1;i<=numRows;i++) {
+      byte[] b = Bytes.toBytes(i);
+      for (long timestamp: timestamps) {
+        kvList.add(new KeyValue(b, family, qualifier, timestamp, b));
+      }
+    }
+    return kvList;
+  }
+
+  /**
+   * Test to ensure correctness when using Stores with multiple timestamps
+   * @throws IOException
+   */
+  public void testMultipleTimestamps() throws IOException {
+    int numRows = 1;
+    long[] timestamps1 = new long[] {1,5,10,20};
+    long[] timestamps2 = new long[] {30,80};
+
+    init(this.getName());
+
+    List<KeyValue> kvList1 = getKeyValueSet(timestamps1,numRows, qf1, family);
+    for (KeyValue kv : kvList1) {
+      this.store.add(kv);
+    }
+
+    this.store.snapshot();
+    flushStore(store, id++);
+
+    List<KeyValue> kvList2 = getKeyValueSet(timestamps2,numRows, qf1, family);
+    for(KeyValue kv : kvList2) {
+      this.store.add(kv);
+    }
+
+    List<KeyValue> result;
+    Get get = new Get(Bytes.toBytes(1));
+    get.addColumn(family,qf1);
+
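+    // TimeRange bounds are [min, max): the lower bound is inclusive and the
+    // upper bound exclusive, which is why (80,145) below still matches the
+    // KeyValue at timestamp 80 while (90,200) matches nothing.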
+    get.setTimeRange(0,15);
+    result = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertTrue(result.size()>0);
+
+    get.setTimeRange(40,90);
+    result = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertTrue(result.size()>0);
+
+    get.setTimeRange(10,45);
+    result = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertTrue(result.size()>0);
+
+    get.setTimeRange(80,145);
+    result = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertTrue(result.size()>0);
+
+    get.setTimeRange(1,2);
+    result = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertTrue(result.size()>0);
+
+    get.setTimeRange(90,200);
+    result = HBaseTestingUtility.getFromStoreFile(store, get);
+    assertTrue(result.size()==0);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
new file mode 100644
index 0000000..0a6872b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
@@ -0,0 +1,629 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.List;
+import java.util.TreeSet;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.Reference.Range;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.io.hfile.HFileScanner;
+import org.apache.hadoop.hbase.util.ByteBloomFilter;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Hash;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.mockito.Mockito;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Collections2;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+
+/**
+ * Test HStoreFile
+ */
+public class TestStoreFile extends HBaseTestCase {
+  static final Log LOG = LogFactory.getLog(TestStoreFile.class);
+  private MiniDFSCluster cluster;
+
+  @Override
+  public void setUp() throws Exception {
+    try {
+      this.cluster = new MiniDFSCluster(this.conf, 2, true, (String[])null);
+      // Set the hbase.rootdir to be the home directory in mini dfs.
+      this.conf.set(HConstants.HBASE_DIR,
+        this.cluster.getFileSystem().getHomeDirectory().toString());
+    } catch (IOException e) {
+      shutdownDfs(cluster);
+    }
+    super.setUp();
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+    super.tearDown();
+    shutdownDfs(cluster);
+    // ReflectionUtils.printThreadInfo(new PrintWriter(System.out),
+    //  "Temporary end-of-test thread dump debugging HADOOP-2040: " + getName());
+  }
+
+  /**
+   * Write a file and then assert that we can read from top and bottom halves
+   * using two HalfMapFiles.
+   * @throws Exception
+   */
+  public void testBasicHalfMapFile() throws Exception {
+    // Make up a directory hierarchy that has a regiondir and familyname.
+    StoreFile.Writer writer = StoreFile.createWriter(this.fs,
+      new Path(new Path(this.testDir, "regionname"), "familyname"), 2 * 1024);
+    writeStoreFile(writer);
+    checkHalfHFile(new StoreFile(this.fs, writer.getPath(), true, conf,
+        StoreFile.BloomType.NONE, false));
+  }
+
+  private void writeStoreFile(final StoreFile.Writer writer) throws IOException {
+    writeStoreFile(writer, Bytes.toBytes(getName()), Bytes.toBytes(getName()));
+  }
+  /*
+   * Writes KeyValue data for the given family and qualifier to the passed
+   * writer and then closes it.
+   * @param writer
+   * @param fam
+   * @param qualifier
+   * @throws IOException
+   */
+  public static void writeStoreFile(final StoreFile.Writer writer, byte[] fam, byte[] qualifier)
+  throws IOException {
+    long now = System.currentTimeMillis();
+    try {
+      for (char d = FIRST_CHAR; d <= LAST_CHAR; d++) {
+        for (char e = FIRST_CHAR; e <= LAST_CHAR; e++) {
+          byte[] b = new byte[] { (byte) d, (byte) e };
+          writer.append(new KeyValue(b, fam, qualifier, now, b));
+        }
+      }
+    } finally {
+      writer.close();
+    }
+  }
+
+  /**
+   * Test that our mechanism of writing store files in one region to reference
+   * store files in other regions works.
+   * @throws IOException
+   */
+  public void testReference()
+  throws IOException {
+    Path storedir = new Path(new Path(this.testDir, "regionname"), "familyname");
+    Path dir = new Path(storedir, "1234567890");
+    // Make a store file and write data to it.
+    StoreFile.Writer writer = StoreFile.createWriter(this.fs, dir, 8 * 1024);
+    writeStoreFile(writer);
+    StoreFile hsf = new StoreFile(this.fs, writer.getPath(), true, conf,
+        StoreFile.BloomType.NONE, false);
+    StoreFile.Reader reader = hsf.createReader();
+    // Split on a row, not in middle of row.  Midkey returned by reader
+    // may be in middle of row.  Create new one with empty column and
+    // timestamp.
+    KeyValue kv = KeyValue.createKeyValueFromKey(reader.midkey());
+    byte [] midRow = kv.getRow();
+    kv = KeyValue.createKeyValueFromKey(reader.getLastKey());
+    byte [] finalRow = kv.getRow();
+    // Make a reference
+    Path refPath = StoreFile.split(fs, dir, hsf, midRow, Range.top);
+    StoreFile refHsf = new StoreFile(this.fs, refPath, true, conf,
+        StoreFile.BloomType.NONE, false);
+    // Now confirm that I can read from the reference and that it only gets
+    // keys from top half of the file.
+    HFileScanner s = refHsf.createReader().getScanner(false, false);
+    for(boolean first = true; (!s.isSeeked() && s.seekTo()) || s.next();) {
+      ByteBuffer bb = s.getKey();
+      kv = KeyValue.createKeyValueFromKey(bb);
+      if (first) {
+        assertTrue(Bytes.equals(kv.getRow(), midRow));
+        first = false;
+      }
+    }
+    assertTrue(Bytes.equals(kv.getRow(), finalRow));
+  }
+
+  private void checkHalfHFile(final StoreFile f)
+  throws IOException {
+    byte [] midkey = f.createReader().midkey();
+    KeyValue midKV = KeyValue.createKeyValueFromKey(midkey);
+    byte [] midRow = midKV.getRow();
+    // Create top split.
+    Path topDir = Store.getStoreHomedir(this.testDir, "1",
+      Bytes.toBytes(f.getPath().getParent().getName()));
+    if (this.fs.exists(topDir)) {
+      this.fs.delete(topDir, true);
+    }
+    Path topPath = StoreFile.split(this.fs, topDir, f, midRow, Range.top);
+    // Create bottom split.
+    Path bottomDir = Store.getStoreHomedir(this.testDir, "2",
+      Bytes.toBytes(f.getPath().getParent().getName()));
+    if (this.fs.exists(bottomDir)) {
+      this.fs.delete(bottomDir, true);
+    }
+    Path bottomPath = StoreFile.split(this.fs, bottomDir,
+      f, midRow, Range.bottom);
+    // Make readers on top and bottom.
+    StoreFile.Reader top = new StoreFile(this.fs, topPath, true, conf,
+        StoreFile.BloomType.NONE, false).createReader();
+    StoreFile.Reader bottom = new StoreFile(this.fs, bottomPath, true, conf,
+        StoreFile.BloomType.NONE, false).createReader();
+    ByteBuffer previous = null;
+    LOG.info("Midkey: " + midKV.toString());
+    ByteBuffer bbMidkeyBytes = ByteBuffer.wrap(midkey);
+    try {
+      // Now make two HalfMapFiles and assert they can read the full backing
+      // file, one from the top and the other from the bottom.
+      // Test bottom half first.
+      // Now test reading from the top.
+      boolean first = true;
+      ByteBuffer key = null;
+      HFileScanner topScanner = top.getScanner(false, false);
+      while ((!topScanner.isSeeked() && topScanner.seekTo()) ||
+          (topScanner.isSeeked() && topScanner.next())) {
+        key = topScanner.getKey();
+
+        assertTrue(topScanner.getReader().getComparator().compare(key.array(),
+          key.arrayOffset(), key.limit(), midkey, 0, midkey.length) >= 0);
+        if (first) {
+          first = false;
+          LOG.info("First in top: " + Bytes.toString(Bytes.toBytes(key)));
+        }
+      }
+      LOG.info("Last in top: " + Bytes.toString(Bytes.toBytes(key)));
+
+      first = true;
+      HFileScanner bottomScanner = bottom.getScanner(false, false);
+      while ((!bottomScanner.isSeeked() && bottomScanner.seekTo()) ||
+          bottomScanner.next()) {
+        previous = bottomScanner.getKey();
+        key = bottomScanner.getKey();
+        if (first) {
+          first = false;
+          LOG.info("First in bottom: " +
+            Bytes.toString(Bytes.toBytes(previous)));
+        }
+        assertTrue(key.compareTo(bbMidkeyBytes) < 0);
+      }
+      if (previous != null) {
+        LOG.info("Last in bottom: " + Bytes.toString(Bytes.toBytes(previous)));
+      }
+      // Remove references.
+      this.fs.delete(topPath, false);
+      this.fs.delete(bottomPath, false);
+
+      // Next test using a midkey that does not exist in the file.
+      // First, try a key that is less than the first key. Ensure splits
+      // behave properly.
+      byte [] badmidkey = Bytes.toBytes("  .");
+      topPath = StoreFile.split(this.fs, topDir, f, badmidkey, Range.top);
+      bottomPath = StoreFile.split(this.fs, bottomDir, f, badmidkey,
+        Range.bottom);
+      top = new StoreFile(this.fs, topPath, true, conf,
+          StoreFile.BloomType.NONE, false).createReader();
+      bottom = new StoreFile(this.fs, bottomPath, true, conf,
+          StoreFile.BloomType.NONE, false).createReader();
+      bottomScanner = bottom.getScanner(false, false);
+      int count = 0;
+      while ((!bottomScanner.isSeeked() && bottomScanner.seekTo()) ||
+          bottomScanner.next()) {
+        count++;
+      }
+      // When badmidkey is less than every key in the file, the bottom half
+      // should return no values.
+      assertTrue(count == 0);
+      // Now read from the top.
+      first = true;
+      topScanner = top.getScanner(false, false);
+      while ((!topScanner.isSeeked() && topScanner.seekTo()) ||
+          topScanner.next()) {
+        key = topScanner.getKey();
+        assertTrue(topScanner.getReader().getComparator().compare(key.array(),
+          key.arrayOffset(), key.limit(), badmidkey, 0, badmidkey.length) >= 0);
+        if (first) {
+          first = false;
+          KeyValue keyKV = KeyValue.createKeyValueFromKey(key);
+          LOG.info("First top when key < bottom: " + keyKV);
+          String tmp = Bytes.toString(keyKV.getRow());
+          for (int i = 0; i < tmp.length(); i++) {
+            assertTrue(tmp.charAt(i) == 'a');
+          }
+        }
+      }
+      KeyValue keyKV = KeyValue.createKeyValueFromKey(key);
+      LOG.info("Last top when key < bottom: " + keyKV);
+      String tmp = Bytes.toString(keyKV.getRow());
+      for (int i = 0; i < tmp.length(); i++) {
+        assertTrue(tmp.charAt(i) == 'z');
+      }
+      // Remove references.
+      this.fs.delete(topPath, false);
+      this.fs.delete(bottomPath, false);
+
+      // Test when badmidkey is greater than the last key in the file ('|||' > 'zz').
+      badmidkey = Bytes.toBytes("|||");
+      topPath = StoreFile.split(this.fs, topDir, f, badmidkey, Range.top);
+      bottomPath = StoreFile.split(this.fs, bottomDir, f, badmidkey,
+        Range.bottom);
+      top = new StoreFile(this.fs, topPath, true, conf,
+          StoreFile.BloomType.NONE, false).createReader();
+      bottom = new StoreFile(this.fs, bottomPath, true, conf,
+          StoreFile.BloomType.NONE, false).createReader();
+      first = true;
+      bottomScanner = bottom.getScanner(false, false);
+      while ((!bottomScanner.isSeeked() && bottomScanner.seekTo()) ||
+          bottomScanner.next()) {
+        key = bottomScanner.getKey();
+        if (first) {
+          first = false;
+          keyKV = KeyValue.createKeyValueFromKey(key);
+          LOG.info("First bottom when key > top: " + keyKV);
+          tmp = Bytes.toString(keyKV.getRow());
+          for (int i = 0; i < tmp.length(); i++) {
+            assertTrue(tmp.charAt(i) == 'a');
+          }
+        }
+      }
+      keyKV = KeyValue.createKeyValueFromKey(key);
+      LOG.info("Last bottom when key > top: " + keyKV);
+      for (int i = 0; i < tmp.length(); i++) {
+        assertTrue(Bytes.toString(keyKV.getRow()).charAt(i) == 'z');
+      }
+      count = 0;
+      topScanner = top.getScanner(false, false);
+      while ((!topScanner.isSeeked() && topScanner.seekTo()) ||
+          (topScanner.isSeeked() && topScanner.next())) {
+        count++;
+      }
+      // When badmidkey is greater than every key in the file, the top half
+      // should return no values.
+      assertTrue(count == 0);
+    } finally {
+      if (top != null) {
+        top.close();
+      }
+      if (bottom != null) {
+        bottom.close();
+      }
+      fs.delete(f.getPath(), true);
+    }
+  }
+
+  private static String ROOT_DIR =
+    HBaseTestingUtility.getTestDir("TestStoreFile").toString();
+  private static String localFormatter = "%010d";
+  
+  private void bloomWriteRead(StoreFile.Writer writer, FileSystem fs) 
+  throws Exception {
+    float err = conf.getFloat(StoreFile.IO_STOREFILE_BLOOM_ERROR_RATE, 0);
+    Path f = writer.getPath();
+    long now = System.currentTimeMillis();
+    for (int i = 0; i < 2000; i += 2) {
+      String row = String.format(localFormatter, i);
+      KeyValue kv = new KeyValue(row.getBytes(), "family".getBytes(),
+        "col".getBytes(), now, "value".getBytes());
+      writer.append(kv);
+    }
+    writer.close();
+
+    StoreFile.Reader reader = new StoreFile.Reader(fs, f, null, false);
+    reader.loadFileInfo();
+    reader.loadBloomfilter();
+    StoreFileScanner scanner = reader.getStoreFileScanner(false, false);
+
+    // check false positives rate
+    int falsePos = 0;
+    int falseNeg = 0;
+    for (int i = 0; i < 2000; i++) {
+      String row = String.format(localFormatter, i);
+      TreeSet<byte[]> columns = new TreeSet<byte[]>();
+      columns.add("family:col".getBytes());
+
+      Scan scan = new Scan(row.getBytes(),row.getBytes());
+      scan.addColumn("family".getBytes(), "family:col".getBytes());
+      boolean exists = scanner.shouldSeek(scan, columns);
+      if (i % 2 == 0) {
+        if (!exists) falseNeg++;
+      } else {
+        if (exists) falsePos++;
+      }
+    }
+    reader.close();
+    fs.delete(f, true);
+    System.out.println("False negatives: " + falseNeg);
+    assertEquals(0, falseNeg);
+    System.out.println("False positives: " + falsePos);
+    if (!(falsePos <= 2* 2000 * err)) {
+      System.out.println("WTFBBQ! " + falsePos + ", " + (2* 2000 * err) );
+    }
+    assertTrue(falsePos <= 2* 2000 * err);    
+  }
+
+  public void testBloomFilter() throws Exception {
+    FileSystem fs = FileSystem.getLocal(conf);
+    conf.setFloat(StoreFile.IO_STOREFILE_BLOOM_ERROR_RATE, (float)0.01);
+    conf.setBoolean(StoreFile.IO_STOREFILE_BLOOM_ENABLED, true);
+
+    // write the file
+    Path f = new Path(ROOT_DIR, getName());
+    StoreFile.Writer writer = new StoreFile.Writer(fs, f,
+        StoreFile.DEFAULT_BLOCKSIZE_SMALL, HFile.DEFAULT_COMPRESSION_ALGORITHM,
+        conf, KeyValue.COMPARATOR, StoreFile.BloomType.ROW, 2000);
+
+    bloomWriteRead(writer, fs);
+  }
+
+  public void testBloomTypes() throws Exception {
+    float err = (float) 0.01;
+    FileSystem fs = FileSystem.getLocal(conf);
+    conf.setFloat(StoreFile.IO_STOREFILE_BLOOM_ERROR_RATE, err);
+    conf.setBoolean(StoreFile.IO_STOREFILE_BLOOM_ENABLED, true);
+
+    int rowCount = 50;
+    int colCount = 10;
+    int versions = 2;
+
+    // run once using columns and once using rows
+    StoreFile.BloomType[] bt =
+      {StoreFile.BloomType.ROWCOL, StoreFile.BloomType.ROW};
+    int[] expKeys    = {rowCount*colCount, rowCount};
+    // The line below deserves commentary: it holds the expected bloom false positives.
+    //  column-level = rowCount*2*colCount inserts
+    //  row-level    = only rowCount*2 inserts, but failures will be magnified by
+    //                 the 2nd for loop over every column (2*colCount)
+    float[] expErr   = {2*rowCount*colCount*err, 2*rowCount*2*colCount*err};
+
+    for (int x : new int[]{0,1}) {
+      // write the file
+      Path f = new Path(ROOT_DIR, getName());
+      StoreFile.Writer writer = new StoreFile.Writer(fs, f,
+          StoreFile.DEFAULT_BLOCKSIZE_SMALL,
+          HFile.DEFAULT_COMPRESSION_ALGORITHM,
+          conf, KeyValue.COMPARATOR, bt[x], expKeys[x]);
+
+      long now = System.currentTimeMillis();
+      for (int i = 0; i < rowCount*2; i += 2) { // rows
+        for (int j = 0; j < colCount*2; j += 2) {   // column qualifiers
+          String row = String.format(localFormatter, i);
+          String col = String.format(localFormatter, j);
+          for (int k= 0; k < versions; ++k) { // versions
+            KeyValue kv = new KeyValue(row.getBytes(),
+              "family".getBytes(), ("col" + col).getBytes(),
+                now-k, Bytes.toBytes((long)-1));
+            writer.append(kv);
+          }
+        }
+      }
+      writer.close();
+
+      StoreFile.Reader reader = new StoreFile.Reader(fs, f, null, false);
+      reader.loadFileInfo();
+      reader.loadBloomfilter();
+      StoreFileScanner scanner = reader.getStoreFileScanner(false, false);
+      assertEquals(expKeys[x], reader.bloomFilter.getKeyCount());
+
+      // check false positives rate
+      int falsePos = 0;
+      int falseNeg = 0;
+      for (int i = 0; i < rowCount*2; ++i) { // rows
+        for (int j = 0; j < colCount*2; ++j) {   // column qualifiers
+          String row = String.format(localFormatter, i);
+          String col = String.format(localFormatter, j);
+          TreeSet<byte[]> columns = new TreeSet<byte[]>();
+          columns.add(("col" + col).getBytes());
+
+          Scan scan = new Scan(row.getBytes(),row.getBytes());
+          scan.addColumn("family".getBytes(), ("col"+col).getBytes());
+          boolean exists = scanner.shouldSeek(scan, columns);
+          boolean shouldRowExist = i % 2 == 0;
+          boolean shouldColExist = j % 2 == 0;
+          shouldColExist = shouldColExist || bt[x] == StoreFile.BloomType.ROW;
+          if (shouldRowExist && shouldColExist) {
+            if (!exists) falseNeg++;
+          } else {
+            if (exists) falsePos++;
+          }
+        }
+      }
+      reader.close();
+      fs.delete(f, true);
+      System.out.println(bt[x].toString());
+      System.out.println("  False negatives: " + falseNeg);
+      System.out.println("  False positives: " + falsePos);
+      assertEquals(0, falseNeg);
+      assertTrue(falsePos < 2*expErr[x]);
+    }
+  }
+  
+  public void testBloomEdgeCases() throws Exception {
+    float err = (float)0.005;
+    FileSystem fs = FileSystem.getLocal(conf);
+    Path f = new Path(ROOT_DIR, getName());
+    conf.setFloat(StoreFile.IO_STOREFILE_BLOOM_ERROR_RATE, err);
+    conf.setBoolean(StoreFile.IO_STOREFILE_BLOOM_ENABLED, true);
+    conf.setInt(StoreFile.IO_STOREFILE_BLOOM_MAX_KEYS, 1000);
+    
+    // this should not create a bloom because the max keys is too small
+    StoreFile.Writer writer = new StoreFile.Writer(fs, f,
+        StoreFile.DEFAULT_BLOCKSIZE_SMALL, HFile.DEFAULT_COMPRESSION_ALGORITHM,
+        conf, KeyValue.COMPARATOR, StoreFile.BloomType.ROW, 2000);
+    assertFalse(writer.hasBloom());
+    writer.close();
+    fs.delete(f, true);
+    
+    conf.setInt(StoreFile.IO_STOREFILE_BLOOM_MAX_KEYS, Integer.MAX_VALUE);
+
+    // TODO: commented out because we run out of java heap space on trunk
+    /*
+    // the below config caused IllegalArgumentException in our production cluster
+    // however, the resulting byteSize is < MAX_INT, so this should work properly
+    writer = new StoreFile.Writer(fs, f,
+        StoreFile.DEFAULT_BLOCKSIZE_SMALL, HFile.DEFAULT_COMPRESSION_ALGORITHM,
+        conf, KeyValue.COMPARATOR, StoreFile.BloomType.ROW, 272446963);
+    assertTrue(writer.hasBloom());
+    bloomWriteRead(writer, fs);
+    */
+    
+    // this, however, is too large and should not create a bloom
+    // because Java can't create a contiguous array > MAX_INT
+    writer = new StoreFile.Writer(fs, f,
+        StoreFile.DEFAULT_BLOCKSIZE_SMALL, HFile.DEFAULT_COMPRESSION_ALGORITHM,
+        conf, KeyValue.COMPARATOR, StoreFile.BloomType.ROW, Integer.MAX_VALUE);
+    assertFalse(writer.hasBloom());
+    writer.close();
+    fs.delete(f, true);
+  }
+  
+  public void testFlushTimeComparator() {
+    assertOrdering(StoreFile.Comparators.FLUSH_TIME,
+        mockStoreFile(true, 1000, -1, "/foo/123"),
+        mockStoreFile(true, 1000, -1, "/foo/126"),
+        mockStoreFile(true, 2000, -1, "/foo/126"),
+        mockStoreFile(false, -1, 1, "/foo/1"),
+        mockStoreFile(false, -1, 3, "/foo/2"),
+        mockStoreFile(false, -1, 5, "/foo/2"),
+        mockStoreFile(false, -1, 5, "/foo/3"));
+  }
+  
+  /**
+   * Assert that the given comparator orders the given storefiles in the
+   * same way that they're passed.
+   */
+  private void assertOrdering(Comparator<StoreFile> comparator, StoreFile ... sfs) {
+    ArrayList<StoreFile> sorted = Lists.newArrayList(sfs);
+    Collections.shuffle(sorted);
+    Collections.sort(sorted, comparator);
+    LOG.debug("sfs: " + Joiner.on(",").join(sfs));
+    LOG.debug("sorted: " + Joiner.on(",").join(sorted));
+    assertTrue(Iterables.elementsEqual(Arrays.asList(sfs), sorted));
+  }
+
+  /**
+   * Create a mock StoreFile with the given attributes.
+   */
+  private StoreFile mockStoreFile(boolean bulkLoad, long bulkTimestamp,
+      long seqId, String path) {
+    StoreFile mock = Mockito.mock(StoreFile.class);
+    Mockito.doReturn(bulkLoad).when(mock).isBulkLoadResult();
+    Mockito.doReturn(bulkTimestamp).when(mock).getBulkLoadTimestamp();
+    if (bulkLoad) {
+      // Bulk load files will throw if you ask for their sequence ID
+      Mockito.doThrow(new IllegalAccessError("bulk load"))
+        .when(mock).getMaxSequenceId();
+    } else {
+      Mockito.doReturn(seqId).when(mock).getMaxSequenceId();
+    }
+    Mockito.doReturn(new Path(path)).when(mock).getPath();
+    String name = "mock storefile, bulkLoad=" + bulkLoad +
+      " bulkTimestamp=" + bulkTimestamp +
+      " seqId=" + seqId +
+      " path=" + path;
+    Mockito.doReturn(name).when(mock).toString();
+    return mock;
+  }
+
+  /**
+   * Generate a list of KeyValues for testing based on the given parameters.
+   * @param timestamps timestamps to apply to each row
+   * @param numRows number of rows to generate
+   * @param qualifier column qualifier
+   * @param family column family
+   * @return the generated list of KeyValues
+   */
+  List<KeyValue> getKeyValueSet(long[] timestamps, int numRows,
+      byte[] qualifier, byte[] family) {
+    List<KeyValue> kvList = new ArrayList<KeyValue>();
+    for (int i=1;i<=numRows;i++) {
+      byte[] b = Bytes.toBytes(i) ;
+      LOG.info(Bytes.toString(b));
+      for (long timestamp: timestamps)
+      {
+        kvList.add(new KeyValue(b, family, qualifier, timestamp, b));
+      }
+    }
+    return kvList;
+  }
+
+  /**
+   * Test to ensure correctness when using StoreFile with multiple timestamps
+   * @throws IOException
+   */
+  public void testMultipleTimestamps() throws IOException {
+    byte[] family = Bytes.toBytes("familyname");
+    byte[] qualifier = Bytes.toBytes("qualifier");
+    int numRows = 10;
+    long[] timestamps = new long[] {20,10,5,1};
+    Scan scan = new Scan();
+
+    Path storedir = new Path(new Path(this.testDir, "regionname"),
+    "familyname");
+    Path dir = new Path(storedir, "1234567890");
+    StoreFile.Writer writer = StoreFile.createWriter(this.fs, dir, 8 * 1024);
+
+    List<KeyValue> kvList = getKeyValueSet(timestamps, numRows,
+        qualifier, family);
+
+    for (KeyValue kv : kvList) {
+      writer.append(kv);
+    }
+    writer.appendMetadata(0, false);
+    writer.close();
+
+    StoreFile hsf = new StoreFile(this.fs, writer.getPath(), true, conf,
+        StoreFile.BloomType.NONE, false);
+    StoreFile.Reader reader = hsf.createReader();
+    StoreFileScanner scanner = reader.getStoreFileScanner(false, false);
+    TreeSet<byte[]> columns = new TreeSet<byte[]>();
+    columns.add(qualifier);
+
+    scan.setTimeRange(20, 100);
+    assertTrue(scanner.shouldSeek(scan, columns));
+
+    scan.setTimeRange(1, 2);
+    assertTrue(scanner.shouldSeek(scan, columns));
+
+    scan.setTimeRange(8, 10);
+    assertTrue(scanner.shouldSeek(scan, columns));
+
+    scan.setTimeRange(7, 50);
+    assertTrue(scanner.shouldSeek(scan, columns));
+
+    /* This test is not required for correctness, but it should pass when
+     * the timestamp range optimization is on. */
+    //scan.setTimeRange(27, 50);
+    //assertTrue(!scanner.shouldSeek(scan, columns));
+  }
+}
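The Bloom-filter tests above (bloomWriteRead, testBloomFilter, testBloomTypes) all follow the same accounting: insert only the even-numbered keys, probe every key, require zero false negatives, and allow false positives up to roughly twice the configured error rate times the number of probes. The sketch below illustrates that accounting with a toy, self-contained filter; BloomSketch, its sizing, and its hashing are illustrative assumptions, not HBase's ByteBloomFilter, and in the real tests the error rate comes from StoreFile.IO_STOREFILE_BLOOM_ERROR_RATE and the filter is built by StoreFile.Writer itself.

import java.util.BitSet;
import java.util.Random;

/**
 * Toy Bloom filter used only to illustrate the false-positive accounting in
 * bloomWriteRead(): zero false negatives, and false positives bounded by
 * roughly 2 * probes * err. Not HBase's ByteBloomFilter.
 */
public class BloomSketch {
  private final BitSet bits;
  private final int size;
  private final int hashes;

  BloomSketch(int expectedKeys, double err) {
    // Standard sizing: m = -n*ln(p)/(ln 2)^2 bits, k = (m/n)*ln 2 hash functions.
    this.size = Math.max(1,
        (int) Math.ceil(-expectedKeys * Math.log(err) / (Math.log(2) * Math.log(2))));
    this.hashes = Math.max(1, (int) Math.round((double) size / expectedKeys * Math.log(2)));
    this.bits = new BitSet(size);
  }

  // Derive k probe positions from the key; a seeded Random stands in for k hash functions.
  private int[] positions(String key) {
    Random rnd = new Random(key.hashCode());
    int[] p = new int[hashes];
    for (int i = 0; i < hashes; i++) p[i] = rnd.nextInt(size);
    return p;
  }

  void add(String key) {
    for (int p : positions(key)) bits.set(p);
  }

  boolean mightContain(String key) {
    for (int p : positions(key)) if (!bits.get(p)) return false;
    return true;
  }

  public static void main(String[] args) {
    double err = 0.01;
    BloomSketch bloom = new BloomSketch(1000, err);
    for (int i = 0; i < 2000; i += 2) {          // insert even-numbered rows only
      bloom.add(String.format("%010d", i));
    }
    int falseNeg = 0, falsePos = 0;
    for (int i = 0; i < 2000; i++) {             // probe every row
      boolean exists = bloom.mightContain(String.format("%010d", i));
      if (i % 2 == 0) {
        if (!exists) falseNeg++;                 // inserted keys must always be found
      } else {
        if (exists) falsePos++;                  // never-inserted keys may occasionally "hit"
      }
    }
    System.out.println("false negatives=" + falseNeg + ", false positives=" + falsePos);
    if (falseNeg != 0) throw new AssertionError("Bloom filters never give false negatives");
    if (falsePos > 2 * 2000 * err) throw new AssertionError("false positive rate too high");
  }
}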
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
new file mode 100644
index 0000000..1b5fb25
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java
@@ -0,0 +1,475 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import junit.framework.TestCase;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.KeyValueTestUtil;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.mockito.Mockito;
+import org.mockito.stubbing.OngoingStubbing;
+
+import com.google.common.collect.Lists;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.NavigableSet;
+import java.util.TreeSet;
+import static org.apache.hadoop.hbase.regionserver.KeyValueScanFixture.scanFixture;
+
+public class TestStoreScanner extends TestCase {
+  private static final String CF_STR = "cf";
+  final byte [] CF = Bytes.toBytes(CF_STR);
+
+  /*
+   * Test utility for building a NavigableSet of columns for scanners.
+   * @param strCols column names to include
+   * @return a NavigableSet holding the column names as byte arrays
+   */
+  NavigableSet<byte[]> getCols(String ...strCols) {
+    NavigableSet<byte[]> cols = new TreeSet<byte[]>(Bytes.BYTES_COMPARATOR);
+    for (String col : strCols) {
+      byte[] bytes = Bytes.toBytes(col);
+      cols.add(bytes);
+    }
+    return cols;
+  }
+
+  public void testScanTimeRange() throws IOException {
+    String r1 = "R1";
+    // five versions of the same cell, at timestamps 1 through 5
+    KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create(r1, CF_STR, "a", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create(r1, CF_STR, "a", 2, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create(r1, CF_STR, "a", 3, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create(r1, CF_STR, "a", 4, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create(r1, CF_STR, "a", 5, KeyValue.Type.Put, "dont-care"),
+    };
+    List<KeyValueScanner> scanners = Arrays.<KeyValueScanner>asList(
+        new KeyValueScanner[] {
+            new KeyValueScanFixture(KeyValue.COMPARATOR, kvs)
+    });
+    Scan scanSpec = new Scan(Bytes.toBytes(r1));
+    scanSpec.setTimeRange(0, 6);
+    scanSpec.setMaxVersions();
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE,
+        KeyValue.COMPARATOR, getCols("a"), scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(5, results.size());
+    assertEquals(kvs[kvs.length - 1], results.get(0));
+    // Scan limited TimeRange
+    scanSpec = new Scan(Bytes.toBytes(r1));
+    scanSpec.setTimeRange(1, 3);
+    scanSpec.setMaxVersions();
+    scan = new StoreScanner(scanSpec, CF, Long.MAX_VALUE,
+      KeyValue.COMPARATOR, getCols("a"), scanners);
+    results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(2, results.size());
+    // Another range.
+    scanSpec = new Scan(Bytes.toBytes(r1));
+    scanSpec.setTimeRange(5, 10);
+    scanSpec.setMaxVersions();
+    scan = new StoreScanner(scanSpec, CF, Long.MAX_VALUE,
+      KeyValue.COMPARATOR, getCols("a"), scanners);
+    results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    // See how TimeRange and Versions interact.
+    // Another range.
+    scanSpec = new Scan(Bytes.toBytes(r1));
+    scanSpec.setTimeRange(0, 10);
+    scanSpec.setMaxVersions(3);
+    scan = new StoreScanner(scanSpec, CF, Long.MAX_VALUE,
+      KeyValue.COMPARATOR, getCols("a"), scanners);
+    results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(3, results.size());
+  }
+
+  public void testScanSameTimestamp() throws IOException {
+    // returns only 1 of these 2 even though same timestamp
+    KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Put, "dont-care"),
+    };
+    List<KeyValueScanner> scanners = Arrays.asList(
+        new KeyValueScanner[] {
+            new KeyValueScanFixture(KeyValue.COMPARATOR, kvs)
+        });
+
+    Scan scanSpec = new Scan(Bytes.toBytes("R1"));
+    // this only uses maxVersions (default=1) and TimeRange (default=all)
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE,
+          KeyValue.COMPARATOR, getCols("a"),
+          scanners);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    assertEquals(kvs[0], results.get(0));
+  }
+
+  /*
+   * This test was written to show how the matcher's return codes could confuse
+   * the StoreScanner and prevent it from doing the right thing: seeking once,
+   * then nexting twice should return R1, then R2.
+   * TODO: the description above no longer matches the behavior; the scanner
+   * appears to do the right thing.
+   * @throws IOException
+   */
+  public void testWontNextToNext() throws IOException {
+    // build the scan file:
+    KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", 2, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "a", 1, KeyValue.Type.Put, "dont-care")
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+
+    Scan scanSpec = new Scan(Bytes.toBytes("R1"));
+    // this only uses maxVersions (default=1) and TimeRange (default=all)
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE,
+          KeyValue.COMPARATOR, getCols("a"),
+          scanners);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    scan.next(results);
+    assertEquals(1, results.size());
+    assertEquals(kvs[0], results.get(0));
+    // should be ok...
+    // now scan _next_ again.
+    results.clear();
+    scan.next(results);
+    assertEquals(1, results.size());
+    assertEquals(kvs[2], results.get(0));
+
+    results.clear();
+    scan.next(results);
+    assertEquals(0, results.size());
+
+  }
+
+
+  public void testDeleteVersionSameTimestamp() throws IOException {
+    KeyValue [] kvs = new KeyValue [] {
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Delete, "dont-care"),
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    Scan scanSpec = new Scan(Bytes.toBytes("R1"));
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          getCols("a"), scanners);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertFalse(scan.next(results));
+    assertEquals(0, results.size());
+  }
+
+  /*
+   * Test the case where there is a deleted row 'in front of' the next row;
+   * the scanner should move on to the next row.
+   */
+  public void testDeletedRowThenGoodRow() throws IOException {
+    KeyValue [] kvs = new KeyValue [] {
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Delete, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "a", 20, KeyValue.Type.Put, "dont-care")
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    Scan scanSpec = new Scan(Bytes.toBytes("R1"));
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          getCols("a"), scanners);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(0, results.size());
+
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    assertEquals(kvs[2], results.get(0));
+
+    assertEquals(false, scan.next(results));
+  }
+
+  public void testDeleteVersionMaskingMultiplePuts() throws IOException {
+    long now = System.currentTimeMillis();
+    KeyValue [] kvs1 = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", now, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", now, KeyValue.Type.Delete, "dont-care")
+    };
+    KeyValue [] kvs2 = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", now-500, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", now-100, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", now, KeyValue.Type.Put, "dont-care")
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs1, kvs2);
+
+    StoreScanner scan =
+      new StoreScanner(new Scan(Bytes.toBytes("R1")), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          getCols("a"), scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    // the puts at ts=now are masked by the delete at the same timestamp, and
+    // since the scan by default returns 1 version we return the newest
+    // surviving key, which is kvs2[1] at now-100.
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    assertEquals(kvs2[1], results.get(0));
+  }
+  public void testDeleteVersionsMixedAndMultipleVersionReturn() throws IOException {
+    long now = System.currentTimeMillis();
+    KeyValue [] kvs1 = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", now, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", now, KeyValue.Type.Delete, "dont-care")
+    };
+    KeyValue [] kvs2 = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", now-500, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", now+500, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", now, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "z", now, KeyValue.Type.Put, "dont-care")
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs1, kvs2);
+
+    Scan scanSpec = new Scan(Bytes.toBytes("R1")).setMaxVersions(2);
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          getCols("a"), scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(2, results.size());
+    assertEquals(kvs2[1], results.get(0));
+    assertEquals(kvs2[0], results.get(1));
+  }
+
+  public void testWildCardOneVersionScan() throws IOException {
+    KeyValue [] kvs = new KeyValue [] {
+        KeyValueTestUtil.create("R1", "cf", "a", 2, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "b", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.DeleteColumn, "dont-care"),
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    StoreScanner scan =
+      new StoreScanner(new Scan(Bytes.toBytes("R1")), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          null, scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(2, results.size());
+    assertEquals(kvs[0], results.get(0));
+    assertEquals(kvs[1], results.get(1));
+  }
+
+  public void testWildCardScannerUnderDeletes() throws IOException {
+    KeyValue [] kvs = new KeyValue [] {
+        KeyValueTestUtil.create("R1", "cf", "a", 2, KeyValue.Type.Put, "dont-care"), // inc
+        // orphaned delete column.
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.DeleteColumn, "dont-care"),
+        // column b
+        KeyValueTestUtil.create("R1", "cf", "b", 2, KeyValue.Type.Put, "dont-care"), // inc
+        KeyValueTestUtil.create("R1", "cf", "b", 1, KeyValue.Type.Put, "dont-care"), // inc
+        // column c
+        KeyValueTestUtil.create("R1", "cf", "c", 10, KeyValue.Type.Delete, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "c", 10, KeyValue.Type.Put, "dont-care"), // no
+        KeyValueTestUtil.create("R1", "cf", "c", 9, KeyValue.Type.Put, "dont-care"),  // inc
+        // column d
+        KeyValueTestUtil.create("R1", "cf", "d", 11, KeyValue.Type.Put, "dont-care"), // inc
+        KeyValueTestUtil.create("R1", "cf", "d", 10, KeyValue.Type.DeleteColumn, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "d", 9, KeyValue.Type.Put, "dont-care"),  // no
+        KeyValueTestUtil.create("R1", "cf", "d", 8, KeyValue.Type.Put, "dont-care"),  // no
+
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    StoreScanner scan =
+      new StoreScanner(new Scan().setMaxVersions(2), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          null, scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(5, results.size());
+    assertEquals(kvs[0], results.get(0));
+    assertEquals(kvs[2], results.get(1));
+    assertEquals(kvs[3], results.get(2));
+    assertEquals(kvs[6], results.get(3));
+    assertEquals(kvs[7], results.get(4));
+  }
+
+  public void testDeleteFamily() throws IOException {
+    KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", 100, KeyValue.Type.DeleteFamily, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "b", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "c", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "d", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "e", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "e", 11, KeyValue.Type.DeleteColumn, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "f", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "g", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "g", 11, KeyValue.Type.Delete, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "h", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "i", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "a", 11, KeyValue.Type.Put, "dont-care"),
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    StoreScanner scan =
+      new StoreScanner(new Scan().setMaxVersions(Integer.MAX_VALUE), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          null, scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(0, results.size());
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    assertEquals(kvs[kvs.length-1], results.get(0));
+
+    assertEquals(false, scan.next(results));
+  }
+
+  public void testDeleteColumn() throws IOException {
+    KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", 10, KeyValue.Type.DeleteColumn, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 9, KeyValue.Type.Delete, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 8, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "b", 5, KeyValue.Type.Put, "dont-care")
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    StoreScanner scan =
+      new StoreScanner(new Scan(), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          null, scanners);
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    assertEquals(kvs[3], results.get(0));
+  }
+
+  private static final  KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "b", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "c", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "d", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "e", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "f", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "g", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "h", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "i", 11, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "a", 11, KeyValue.Type.Put, "dont-care"),
+    };
+
+  public void testSkipColumn() throws IOException {
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    StoreScanner scan =
+      new StoreScanner(new Scan(), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          getCols("a", "d"), scanners);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scan.next(results));
+    assertEquals(2, results.size());
+    assertEquals(kvs[0], results.get(0));
+    assertEquals(kvs[3], results.get(1));
+    results.clear();
+
+    assertEquals(true, scan.next(results));
+    assertEquals(1, results.size());
+    assertEquals(kvs[kvs.length-1], results.get(0));
+
+    results.clear();
+    assertEquals(false, scan.next(results));
+  }
+
+  /*
+   * Test expiration of KeyValues in combination with a configured TTL for
+   * a column family (as should be triggered in a major compaction).
+   */
+  public void testWildCardTtlScan() throws IOException {
+    long now = System.currentTimeMillis();
+    KeyValue [] kvs = new KeyValue[] {
+        KeyValueTestUtil.create("R1", "cf", "a", now-1000, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "b", now-10, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "c", now-200, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "d", now-10000, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "a", now, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "b", now-10, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "c", now-200, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R2", "cf", "c", now-1000, KeyValue.Type.Put, "dont-care")
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    Scan scan = new Scan();
+    scan.setMaxVersions(1);
+    StoreScanner scanner =
+      new StoreScanner(scan, CF, 500, KeyValue.COMPARATOR,
+          null, scanners);
+
+    List<KeyValue> results = new ArrayList<KeyValue>();
+    assertEquals(true, scanner.next(results));
+    assertEquals(2, results.size());
+    assertEquals(kvs[1], results.get(0));
+    assertEquals(kvs[2], results.get(1));
+    results.clear();
+
+    assertEquals(true, scanner.next(results));
+    assertEquals(3, results.size());
+    assertEquals(kvs[4], results.get(0));
+    assertEquals(kvs[5], results.get(1));
+    assertEquals(kvs[6], results.get(2));
+    results.clear();
+
+    assertEquals(false, scanner.next(results));
+  }
+
+  public void testScannerReseekDoesntNPE() throws Exception {
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    StoreScanner scan =
+        new StoreScanner(new Scan(), CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+            getCols("a", "d"), scanners);
+
+
+    // Previously, calling updateReaders twice in a row would cause an NPE.  In this test it
+    // would also normally cause an NPE because scan.store is null.  So as long as we get
+    // through these two calls we are good and the bug is quashed.
+
+    scan.updateReaders();
+
+    scan.updateReaders();
+
+    scan.peek();
+  }
+
+
+  /**
+   * TODO this fails, since we don't handle deletions, etc, in peek
+   */
+  public void SKIP_testPeek() throws Exception {
+    KeyValue [] kvs = new KeyValue [] {
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Put, "dont-care"),
+        KeyValueTestUtil.create("R1", "cf", "a", 1, KeyValue.Type.Delete, "dont-care"),
+    };
+    List<KeyValueScanner> scanners = scanFixture(kvs);
+    Scan scanSpec = new Scan(Bytes.toBytes("R1"));
+    StoreScanner scan =
+      new StoreScanner(scanSpec, CF, Long.MAX_VALUE, KeyValue.COMPARATOR,
+          getCols("a"), scanners);
+    assertNull(scan.peek());
+  }
+}
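Most of the assertions above come down to the same visibility rules for a single column family: versions are ordered newest first, a Delete masks the Put at exactly its timestamp, a DeleteColumn masks all Puts at or below its timestamp, and only maxVersions surviving Puts per column are returned. The following standalone sketch reproduces those rules outside of HBase; the Cell record, the Type enum, and the convention that delete markers are processed before puts at the same timestamp are assumptions of the sketch, not the KeyValue API.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Standalone sketch of the delete/version visibility rules asserted above. */
public class ScannerSemanticsSketch {
  enum Type { PUT, DELETE, DELETE_COLUMN }
  record Cell(String row, String qualifier, long ts, Type type) {}

  static List<Cell> visible(List<Cell> cells, int maxVersions) {
    // Order as a store file would: row asc, qualifier asc, timestamp desc,
    // delete markers ahead of puts at the same timestamp (sketch convention).
    List<Cell> sorted = new ArrayList<>(cells);
    sorted.sort(Comparator.comparing(Cell::row)
        .thenComparing(Cell::qualifier)
        .thenComparing(Comparator.comparingLong(Cell::ts).reversed())
        .thenComparingInt(c -> c.type() == Type.PUT ? 1 : 0));

    List<Cell> out = new ArrayList<>();
    String curRow = null, curQual = null;
    Set<Long> deletedVersions = new HashSet<>();
    long deleteColumnFloor = Long.MIN_VALUE;   // DeleteColumn masks puts at or below this ts
    int returned = 0;
    for (Cell c : sorted) {
      if (!c.row().equals(curRow) || !c.qualifier().equals(curQual)) {
        curRow = c.row();
        curQual = c.qualifier();
        deletedVersions.clear();
        deleteColumnFloor = Long.MIN_VALUE;
        returned = 0;
      }
      switch (c.type()) {
        case DELETE -> deletedVersions.add(c.ts());
        case DELETE_COLUMN -> deleteColumnFloor = Math.max(deleteColumnFloor, c.ts());
        case PUT -> {
          boolean masked = deletedVersions.contains(c.ts()) || c.ts() <= deleteColumnFloor;
          if (!masked && returned < maxVersions) {
            out.add(c);
            returned++;
          }
        }
      }
    }
    return out;
  }

  public static void main(String[] args) {
    long now = System.currentTimeMillis();
    // Mirrors testDeleteVersionMaskingMultiplePuts: the delete masks the put at
    // ts=now, so with maxVersions=1 only the put at now-100 survives.
    List<Cell> cells = List.of(
        new Cell("R1", "a", now, Type.PUT),
        new Cell("R1", "a", now, Type.DELETE),
        new Cell("R1", "a", now - 100, Type.PUT),
        new Cell("R1", "a", now - 500, Type.PUT));
    System.out.println(visible(cells, 1));
  }
}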
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java
new file mode 100644
index 0000000..106cbc1
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java
@@ -0,0 +1,149 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+public class TestWideScanner extends HBaseTestCase {
+  private final Log LOG = LogFactory.getLog(this.getClass());
+
+  static final byte[] A = Bytes.toBytes("A");
+  static final byte[] B = Bytes.toBytes("B");
+  static final byte[] C = Bytes.toBytes("C");
+  static byte[][] COLUMNS = { A, B, C };
+  static final Random rng = new Random();
+  static final HTableDescriptor TESTTABLEDESC =
+    new HTableDescriptor("testwidescan");
+  static {
+    TESTTABLEDESC.addFamily(new HColumnDescriptor(A,
+      10,  // Ten is an arbitrary number.  Keep versions to help debugging.
+      Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+      HConstants.FOREVER, StoreFile.BloomType.NONE.toString(),
+      HColumnDescriptor.DEFAULT_REPLICATION_SCOPE));
+    TESTTABLEDESC.addFamily(new HColumnDescriptor(B,
+      10,  // Ten is an arbitrary number.  Keep versions to help debugging.
+      Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+      HConstants.FOREVER, StoreFile.BloomType.NONE.toString(),
+      HColumnDescriptor.DEFAULT_REPLICATION_SCOPE));
+    TESTTABLEDESC.addFamily(new HColumnDescriptor(C,
+      10,  // Ten is an arbitrary number.  Keep versions to help debugging.
+      Compression.Algorithm.NONE.getName(), false, true, 8 * 1024,
+      HConstants.FOREVER, StoreFile.BloomType.NONE.toString(),
+      HColumnDescriptor.DEFAULT_REPLICATION_SCOPE));
+  }
+
+  /** HRegionInfo for the test table's region */
+  public static final HRegionInfo REGION_INFO =
+    new HRegionInfo(TESTTABLEDESC, HConstants.EMPTY_BYTE_ARRAY,
+    HConstants.EMPTY_BYTE_ARRAY);
+
+  MiniDFSCluster cluster = null;
+  HRegion r;
+
+  @Override
+  public void setUp() throws Exception {
+    cluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+    // Set the hbase.rootdir to be the home directory in mini dfs.
+    this.conf.set(HConstants.HBASE_DIR,
+      this.cluster.getFileSystem().getHomeDirectory().toString());
+    super.setUp();
+  }
+
+  private int addWideContent(HRegion region) throws IOException {
+    int count = 0;
+    for (char c = 'a'; c <= 'c'; c++) {
+      byte[] row = Bytes.toBytes("ab" + c);
+      int i;
+      for (i = 0; i < 2500; i++) {
+        byte[] b = Bytes.toBytes(String.format("%10d", i));
+        Put put = new Put(row);
+        put.add(COLUMNS[rng.nextInt(COLUMNS.length)], b, b);
+        region.put(put);
+        count++;
+      }
+    }
+    return count;
+  }
+
+  public void testWideScanBatching() throws IOException {
+    try {
+      this.r = createNewHRegion(REGION_INFO.getTableDesc(), null, null);
+      int inserted = addWideContent(this.r);
+      List<KeyValue> results = new ArrayList<KeyValue>();
+      Scan scan = new Scan();
+      scan.addFamily(A);
+      scan.addFamily(B);
+      scan.addFamily(C);
+      scan.setBatch(1000);
+      InternalScanner s = r.getScanner(scan);
+      int total = 0;
+      int i = 0;
+      boolean more;
+      do {
+        more = s.next(results);
+        i++;
+        LOG.info("iteration #" + i + ", results.size=" + results.size());
+
+        // assert that the result set is no larger than 1000
+        assertTrue(results.size() <= 1000);
+
+        total += results.size();
+
+        if (results.size() > 0) {
+          // assert that all results are from the same row
+          byte[] row = results.get(0).getRow();
+          for (KeyValue kv: results) {
+            assertTrue(Bytes.equals(row, kv.getRow()));
+          }
+        }
+
+        results.clear();
+      } while (more);
+
+      // assert that the scanner returned all values
+      LOG.info("inserted " + inserted + ", scanned " + total);
+      assertTrue(total == inserted);
+
+      s.close();
+    } finally {
+      this.r.close();
+      this.r.getLog().closeAndDelete();
+      shutdownDfs(this.cluster);
+    }
+  }
+}
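testWideScanBatching relies on Scan.setBatch(1000): even though each row holds roughly 2,500 cells, no single call to InternalScanner.next() may return more than 1,000 of them. A client-side sketch of the same idea follows; the table name and column family are illustrative, and the snippet assumes the 0.90-era client API (HTable, ResultScanner, Result.raw()).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

/**
 * Client-side sketch of the batching behaviour the test above verifies: with
 * Scan.setBatch(1000), each Result holds at most 1000 cells, so a single
 * 2500-cell row comes back as several partial Results instead of one huge one.
 */
public class WideRowBatchingSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "testwidescan");   // illustrative table name
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("A"));
    scan.setBatch(1000);                               // cap cells per Result
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result partial : scanner) {
        // Each partial Result belongs to one row and carries at most 1000 KeyValues.
        System.out.println(Bytes.toString(partial.getRow()) + " -> "
            + partial.raw().length + " cells");
      }
    } finally {
      scanner.close();
      table.close();
    }
  }
}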
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/FaultySequenceFileLogReader.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/FaultySequenceFileLogReader.java
new file mode 100644
index 0000000..16db167
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/FaultySequenceFileLogReader.java
@@ -0,0 +1,79 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.IOException;
+import java.util.LinkedList;
+import java.util.Queue;
+
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Entry;
+
+public class FaultySequenceFileLogReader extends SequenceFileLogReader {
+
+  enum FailureType {
+    BEGINNING, MIDDLE, END, NONE
+  }
+
+  Queue<Entry> nextQueue = new LinkedList<Entry>();
+  int numberOfFileEntries = 0;
+
+  FailureType getFailureType() {
+    return FailureType.valueOf(conf.get("faultysequencefilelogreader.failuretype", "NONE"));
+  }
+
+  @Override
+  public HLog.Entry next(HLog.Entry reuse) throws IOException {
+    this.entryStart = this.reader.getPosition();
+    boolean b = true;
+
+    if (nextQueue.isEmpty()) { // Read the whole thing at once and fake reading
+      while (b == true) {
+        HLogKey key = HLog.newKey(conf);
+        WALEdit val = new WALEdit();
+        HLog.Entry e = new HLog.Entry(key, val);
+        b = this.reader.next(e.getKey(), e.getEdit());
+        nextQueue.offer(e);
+        numberOfFileEntries++;
+      }
+    }
+
+    if (nextQueue.size() == this.numberOfFileEntries
+        && getFailureType() == FailureType.BEGINNING) {
+      throw this.addFileInfoToException(new IOException("fake Exception"));
+    } else if (nextQueue.size() == this.numberOfFileEntries / 2
+        && getFailureType() == FailureType.MIDDLE) {
+      throw this.addFileInfoToException(new IOException("fake Exception"));
+    } else if (nextQueue.size() == 1 && getFailureType() == FailureType.END) {
+      throw this.addFileInfoToException(new IOException("fake Exception"));
+    }
+
+    if (nextQueue.peek() != null) {
+      edit++;
+    }
+
+    Entry e = nextQueue.poll();
+
+    if (e.getEdit().isEmpty()) {
+      return null;
+    }
+    return e;
+  }
+}
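FaultySequenceFileLogReader pre-reads the whole log and then injects a fake IOException at the beginning, middle, or end of the entry stream, driven by the faultysequencefilelogreader.failuretype property it reads above. A sketch of how a test might arm it is below; the hbase.regionserver.hlog.reader.impl key follows the conventional pluggable-reader setting and should be treated as an assumption if your version wires readers differently.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

/**
 * Sketch of arming the faulty reader: point the WAL reader implementation at
 * FaultySequenceFileLogReader and choose where the fake IOException fires.
 */
public class FaultyReaderSetupSketch {
  static Configuration armFaultyReader(String failureType) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed pluggable-reader key; the failure-type key is the one read by
    // getFailureType() in the class above (BEGINNING, MIDDLE, END or NONE).
    conf.set("hbase.regionserver.hlog.reader.impl",
        "org.apache.hadoop.hbase.regionserver.wal.FaultySequenceFileLogReader");
    conf.set("faultysequencefilelogreader.failuretype", failureType);
    return conf;
  }

  public static void main(String[] args) {
    Configuration conf = armFaultyReader("MIDDLE");
    System.out.println(conf.get("faultysequencefilelogreader.failuretype"));
  }
}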
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/InstrumentedSequenceFileLogWriter.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/InstrumentedSequenceFileLogWriter.java
new file mode 100644
index 0000000..bf9bfc4
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/InstrumentedSequenceFileLogWriter.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class InstrumentedSequenceFileLogWriter extends SequenceFileLogWriter {
+
+  public InstrumentedSequenceFileLogWriter() {
+    super(HLogKey.class);
+  }
+  
+  public static boolean activateFailure = false;
+  @Override
+    public void append(HLog.Entry entry) throws IOException {
+      super.append(entry);
+      if (activateFailure && Bytes.equals(entry.getKey().getEncodedRegionName(), "break".getBytes())) {
+        System.out.println(getClass().getName() + ": I will throw an exception now...");
+        throw(new IOException("This exception is instrumented and should only be thrown for testing"));
+      }
+    }
+}
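InstrumentedSequenceFileLogWriter only misbehaves when the static activateFailure flag is set and an edit for a region literally named "break" is appended, so tests can scope the failure tightly. The sketch below shows that toggle pattern; writeEditsThatIncludeBreakRegion() is a hypothetical placeholder for whatever WAL writing the test under construction performs, not HBase API.

package org.apache.hadoop.hbase.regionserver.wal;

import java.io.IOException;

/**
 * Sketch of scoping the instrumented failure: flip the static flag, run the
 * WAL writes that should trip it, and always restore the flag in finally.
 */
public class InstrumentedWriterToggleSketch {
  static void expectInstrumentedFailure() {
    InstrumentedSequenceFileLogWriter.activateFailure = true;
    try {
      writeEditsThatIncludeBreakRegion();   // hypothetical: appends an edit whose
                                            // encoded region name is "break"
      throw new AssertionError("expected the instrumented IOException");
    } catch (IOException expected) {
      System.out.println("got expected failure: " + expected.getMessage());
    } finally {
      InstrumentedSequenceFileLogWriter.activateFailure = false;
    }
  }

  // Placeholder so the sketch compiles on its own; a real test would append via HLog.
  private static void writeEditsThatIncludeBreakRegion() throws IOException {
    throw new IOException("This exception is instrumented and should only be thrown for testing");
  }
}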
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java
new file mode 100644
index 0000000..08ba8cb
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLog.java
@@ -0,0 +1,680 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.logging.impl.Log4JLogger;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Reader;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.protocol.FSConstants.SafeModeAction;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.LeaseManager;
+import org.apache.hadoop.io.SequenceFile;
+import org.apache.log4j.Level;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/** JUnit test case for HLog */
+public class TestHLog  {
+  private static final Log LOG = LogFactory.getLog(TestHLog.class);
+  {
+    ((Log4JLogger)DataNode.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)LeaseManager.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)FSNamesystem.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)DFSClient.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)HLog.LOG).getLogger().setLevel(Level.ALL);
+  }
+
+  private static Configuration conf;
+  private static FileSystem fs;
+  private static Path dir;
+  private static MiniDFSCluster cluster;
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static Path hbaseDir;
+  private static Path oldLogDir;
+
+  @Before
+  public void setUp() throws Exception {
+
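+    // Start each test from a clean slate: delete everything under the mini-DFS root.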
+    FileStatus[] entries = fs.listStatus(new Path("/"));
+    for (FileStatus dir : entries) {
+      fs.delete(dir.getPath(), true);
+    }
+
+  }
+
+  @After
+  public void tearDown() throws Exception {
+  }
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    // Make block sizes small.
+    TEST_UTIL.getConfiguration().setInt("dfs.blocksize", 1024 * 1024);
+    TEST_UTIL.getConfiguration().setInt(
+        "hbase.regionserver.flushlogentries", 1);
+    // needed for testAppendClose()
+    TEST_UTIL.getConfiguration().setBoolean("dfs.support.append", true);
+    // quicker heartbeat interval for faster DN death notification
+    TEST_UTIL.getConfiguration().setInt("heartbeat.recheck.interval", 5000);
+    TEST_UTIL.getConfiguration().setInt("dfs.heartbeat.interval", 1);
+    TEST_UTIL.getConfiguration().setInt("dfs.socket.timeout", 5000);
+    // faster failover with cluster.shutdown();fs.close() idiom
+    TEST_UTIL.getConfiguration()
+        .setInt("ipc.client.connect.max.retries", 1);
+    TEST_UTIL.getConfiguration().setInt(
+        "dfs.client.block.recovery.retries", 1);
+    TEST_UTIL.startMiniCluster(3);
+
+    conf = TEST_UTIL.getConfiguration();
+    cluster = TEST_UTIL.getDFSCluster();
+    fs = cluster.getFileSystem();
+
+    hbaseDir = new Path(TEST_UTIL.getConfiguration().get("hbase.rootdir"));
+    oldLogDir = new Path(hbaseDir, ".oldlogs");
+    dir = new Path(hbaseDir, getName());
+  }
+  private static String getName() {
+    // Fixed name shared by the test table and test directories.
+    return "TestHLog";
+  }
+
+  /**
+   * Just write multiple logs then split.  Before fix for HADOOP-2283, this
+   * would fail.
+   * @throws IOException
+   */
+  @Test
+  public void testSplit() throws IOException {
+
+    final byte [] tableName = Bytes.toBytes(getName());
+    final byte [] rowName = tableName;
+    Path logdir = new Path(hbaseDir, HConstants.HREGION_LOGDIR_NAME);
+    HLog log = new HLog(fs, logdir, oldLogDir, conf);
+    final int howmany = 3;
+    HRegionInfo[] infos = new HRegionInfo[3];
+    Path tabledir = new Path(hbaseDir, getName());
+    fs.mkdirs(tabledir);
+    for(int i = 0; i < howmany; i++) {
+      infos[i] = new HRegionInfo(new HTableDescriptor(tableName),
+                Bytes.toBytes("" + i), Bytes.toBytes("" + (i+1)), false);
+      fs.mkdirs(new Path(tabledir, infos[i].getEncodedName()));
+      LOG.info("allo " + new Path(tabledir, infos[i].getEncodedName()).toString());
+    }
+    // Add edits for three regions.
+    try {
+      for (int ii = 0; ii < howmany; ii++) {
+        for (int i = 0; i < howmany; i++) {
+
+          for (int j = 0; j < howmany; j++) {
+            WALEdit edit = new WALEdit();
+            byte [] family = Bytes.toBytes("column");
+            byte [] qualifier = Bytes.toBytes(Integer.toString(j));
+            byte [] column = Bytes.toBytes("column:" + Integer.toString(j));
+            edit.add(new KeyValue(rowName, family, qualifier,
+                System.currentTimeMillis(), column));
+            LOG.info("Region " + i + ": " + edit);
+            log.append(infos[i], tableName, edit,
+              System.currentTimeMillis());
+          }
+        }
+        log.rollWriter();
+      }
+      log.close();
+      HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+          hbaseDir, logdir, this.oldLogDir, this.fs);
+      List<Path> splits =
+        logSplitter.splitLog();
+      verifySplits(splits, howmany);
+      log = null;
+    } finally {
+      if (log != null) {
+        log.closeAndDelete();
+      }
+    }
+  }
+
+  /**
+   * Test new HDFS-265 sync.
+   * @throws Exception
+   */
+  @Test
+  public void Broken_testSync() throws Exception {
+    byte [] bytes = Bytes.toBytes(getName());
+    // First verify that using streams all works.
+    Path p = new Path(dir, getName() + ".fsdos");
+    FSDataOutputStream out = fs.create(p);
+    out.write(bytes);
+    out.sync();
+    FSDataInputStream in = fs.open(p);
+    assertTrue(in.available() > 0);
+    byte [] buffer = new byte [1024];
+    int read = in.read(buffer);
+    assertEquals(bytes.length, read);
+    out.close();
+    in.close();
+    Path subdir = new Path(dir, "hlogdir");
+    HLog wal = new HLog(fs, subdir, oldLogDir, conf);
+    final int total = 20;
+
+    HRegionInfo info = new HRegionInfo(new HTableDescriptor(bytes),
+                null,null, false);
+
+    for (int i = 0; i < total; i++) {
+      WALEdit kvs = new WALEdit();
+      kvs.add(new KeyValue(Bytes.toBytes(i), bytes, bytes));
+      wal.append(info, bytes, kvs, System.currentTimeMillis());
+    }
+    // Now call sync and try reading.  Opening a Reader before you sync just
+    // gives you EOFE.
+    wal.sync();
+    // Open a Reader.
+    Path walPath = wal.computeFilename();
+    HLog.Reader reader = HLog.getReader(fs, walPath, conf);
+    int count = 0;
+    HLog.Entry entry = new HLog.Entry();
+    while ((entry = reader.next(entry)) != null) count++;
+    assertEquals(total, count);
+    reader.close();
+    // Add test that checks to see that an open of a Reader works on a file
+    // that has had a sync done on it.
+    for (int i = 0; i < total; i++) {
+      WALEdit kvs = new WALEdit();
+      kvs.add(new KeyValue(Bytes.toBytes(i), bytes, bytes));
+      wal.append(info, bytes, kvs, System.currentTimeMillis());
+    }
+    reader = HLog.getReader(fs, walPath, conf);
+    count = 0;
+    while((entry = reader.next(entry)) != null) count++;
+    assertTrue(count >= total);
+    reader.close();
+    // If I sync, should see double the edits.
+    wal.sync();
+    reader = HLog.getReader(fs, walPath, conf);
+    count = 0;
+    while((entry = reader.next(entry)) != null) count++;
+    assertEquals(total * 2, count);
+    // Now check that reads still work when the log crosses a block boundary,
+    // especially that we return a good length on the file.
+    final byte [] value = new byte[1025 * 1024];  // Make a value just over 1MB.
+    for (int i = 0; i < total; i++) {
+      WALEdit kvs = new WALEdit();
+      kvs.add(new KeyValue(Bytes.toBytes(i), bytes, value));
+      wal.append(info, bytes, kvs, System.currentTimeMillis());
+    }
+    // Now I should have written out lots of blocks.  Sync then read.
+    wal.sync();
+    reader = HLog.getReader(fs, walPath, conf);
+    count = 0;
+    while((entry = reader.next(entry)) != null) count++;
+    assertEquals(total * 3, count);
+    reader.close();
+    // Close the WAL and ensure a Reader opened afterwards still gets the right length.
+    wal.close();
+    reader = HLog.getReader(fs, walPath, conf);
+    count = 0;
+    while((entry = reader.next(entry)) != null) count++;
+    assertEquals(total * 3, count);
+    reader.close();
+  }
+
+  /**
+   * Test the findMemstoresWithEditsEqualOrOlderThan method.
+   * @throws IOException
+   */
+  @Test
+  public void testFindMemstoresWithEditsEqualOrOlderThan() throws IOException {
+    Map<byte [], Long> regionsToSeqids = new HashMap<byte [], Long>();
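+    // Map region names "0".."9" to sequence ids 0..9.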
+    for (int i = 0; i < 10; i++) {
+      Long l = Long.valueOf(i);
+      regionsToSeqids.put(l.toString().getBytes(), l);
+    }
+    byte [][] regions =
+      HLog.findMemstoresWithEditsEqualOrOlderThan(1, regionsToSeqids);
+    assertEquals(2, regions.length);
+    assertTrue(Bytes.equals(regions[0], "0".getBytes()) ||
+        Bytes.equals(regions[0], "1".getBytes()));
+    regions = HLog.findMemstoresWithEditsEqualOrOlderThan(3, regionsToSeqids);
+    int count = 4;
+    assertEquals(count, regions.length);
+    // Regions returned are not ordered.
+    for (int i = 0; i < count; i++) {
+      assertTrue(Bytes.equals(regions[i], "0".getBytes()) ||
+        Bytes.equals(regions[i], "1".getBytes()) ||
+        Bytes.equals(regions[i], "2".getBytes()) ||
+        Bytes.equals(regions[i], "3".getBytes()));
+    }
+  }
+
+  private void verifySplits(List<Path> splits, final int howmany)
+  throws IOException {
+    assertEquals(howmany, splits.size());
+    for (int i = 0; i < splits.size(); i++) {
+      LOG.info("Verifying=" + splits.get(i));
+      HLog.Reader reader = HLog.getReader(fs, splits.get(i), conf);
+      try {
+        int count = 0;
+        String previousRegion = null;
+        long seqno = -1;
+        HLog.Entry entry = new HLog.Entry();
+        while((entry = reader.next(entry)) != null) {
+          HLogKey key = entry.getKey();
+          String region = Bytes.toString(key.getEncodedRegionName());
+          // Assert that all edits are for same region.
+          if (previousRegion != null) {
+            assertEquals(previousRegion, region);
+          }
+          LOG.info("oldseqno=" + seqno + ", newseqno=" + key.getLogSeqNum());
+          assertTrue(seqno < key.getLogSeqNum());
+          seqno = key.getLogSeqNum();
+          previousRegion = region;
+          count++;
+        }
+        assertEquals(howmany * howmany, count);
+      } finally {
+        reader.close();
+      }
+    }
+  }
+  
+  // For this test to pass, the following HDFS patches are required:
+  // 1. HDFS-200 (append support)
+  // 2. HDFS-988 (SafeMode should freeze file operations
+  //              [FSNamesystem.nextGenerationStampForBlock])
+  // 3. HDFS-142 (on restart, maintain pendingCreates)
+  @Test
+  public void testAppendClose() throws Exception {
+    byte [] tableName = Bytes.toBytes(getName());
+    HRegionInfo regioninfo = new HRegionInfo(new HTableDescriptor(tableName),
+        HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW, false);
+    Path subdir = new Path(dir, "hlogdir");
+    Path archdir = new Path(dir, "hlogdir_archive");
+    HLog wal = new HLog(fs, subdir, archdir, conf);
+    final int total = 20;
+
+    for (int i = 0; i < total; i++) {
+      WALEdit kvs = new WALEdit();
+      kvs.add(new KeyValue(Bytes.toBytes(i), tableName, tableName));
+      wal.append(regioninfo, tableName, kvs, System.currentTimeMillis());
+    }
+    // Now call sync to send the data to HDFS datanodes
+    wal.sync();
+    int namenodePort = cluster.getNameNodePort();
+    final Path walPath = wal.computeFilename();
+    
+
+    // Stop the cluster.  (ensure restart since we're sharing MiniDFSCluster)
+    try {
+      cluster.getNameNode().setSafeMode(SafeModeAction.SAFEMODE_ENTER);
+      cluster.shutdown();
+      try {
+        // wal.writer.close() will throw an exception,
+        // but still call this since it closes the LogSyncer thread first
+        wal.close();
+      } catch (IOException e) {
+        LOG.info(e);
+      }
+      fs.close(); // closing FS last so DFSOutputStream can't call close
+      LOG.info("STOPPED first instance of the cluster");
+    } finally {
+      // Restart the cluster
+      while (cluster.isClusterUp()){
+        LOG.error("Waiting for cluster to go down");
+        Thread.sleep(1000);
+      }
+      cluster = new MiniDFSCluster(namenodePort, conf, 5, false, true, true, null, null, null, null);
+      cluster.waitActive();
+      fs = cluster.getFileSystem();
+      LOG.info("START second instance.");
+    }
+
+    // set the lease period to be 1 second so that the
+    // namenode triggers lease recovery upon append request
+    Method setLeasePeriod = cluster.getClass()
+      .getDeclaredMethod("setLeasePeriod", new Class[]{Long.TYPE, Long.TYPE});
+    setLeasePeriod.setAccessible(true);
+    setLeasePeriod.invoke(cluster,
+                          new Object[]{new Long(1000), new Long(1000)});
+    try {
+      Thread.sleep(1000);
+    } catch (InterruptedException e) {
+      LOG.info(e);
+    }
+    
+    // Now try recovering the log, like the HMaster would do
+    final FileSystem recoveredFs = fs;
+    final Configuration rlConf = conf;
+    
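+    // Recover the WAL lease on a separate thread so it can be bounded by the 60-second timeout below.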
+    class RecoverLogThread extends Thread {
+      public Exception exception = null;
+      public void run() {
+          try {
+            FSUtils.recoverFileLease(recoveredFs, walPath, rlConf);
+          } catch (IOException e) {
+            exception = e;
+          }
+      }
+    }
+
+    RecoverLogThread t = new RecoverLogThread();
+    t.start();
+    // Timeout after 60 sec. Without correct patches, would be an infinite loop
+    t.join(60 * 1000);
+    if(t.isAlive()) {
+      t.interrupt();
+      throw new Exception("Timed out waiting for HLog.recoverLog()");
+    }
+
+    if (t.exception != null)
+      throw t.exception;
+
+    // Make sure you can read all the content
+    SequenceFile.Reader reader
+      = new SequenceFile.Reader(this.fs, walPath, this.conf);
+    int count = 0;
+    HLogKey key = HLog.newKey(conf);
+    WALEdit val = new WALEdit();
+    while (reader.next(key, val)) {
+      count++;
+      assertTrue("Should be one KeyValue per WALEdit",
+                 val.getKeyValues().size() == 1);
+    }
+    assertEquals(total, count);
+    reader.close();
+  }
+
+  /**
+   * Tests that we can write out an edit, close, and then read it back in again.
+   * @throws IOException
+   */
+  @Test
+  public void testEditAdd() throws IOException {
+    final int COL_COUNT = 10;
+    final byte [] tableName = Bytes.toBytes("tablename");
+    final byte [] row = Bytes.toBytes("row");
+    HLog.Reader reader = null;
+    HLog log = new HLog(fs, dir, oldLogDir, conf);
+    try {
+      // Write columns named 1, 2, 3, etc. and then values of single byte
+      // 1, 2, 3...
+      long timestamp = System.currentTimeMillis();
+      WALEdit cols = new WALEdit();
+      for (int i = 0; i < COL_COUNT; i++) {
+        cols.add(new KeyValue(row, Bytes.toBytes("column"),
+            Bytes.toBytes(Integer.toString(i)),
+          timestamp, new byte[] { (byte)(i + '0') }));
+      }
+      HRegionInfo info = new HRegionInfo(new HTableDescriptor(tableName),
+        row,Bytes.toBytes(Bytes.toString(row) + "1"), false);
+      log.append(info, tableName, cols, System.currentTimeMillis());
+      long logSeqId = log.startCacheFlush();
+      log.completeCacheFlush(info.getEncodedNameAsBytes(), tableName, logSeqId, info.isMetaRegion());
+      log.close();
+      Path filename = log.computeFilename();
+      log = null;
+      // Now open a reader on the log and assert append worked.
+      reader = HLog.getReader(fs, filename, conf);
+      // Above we added all columns on a single row, so we only read one
+      // entry below... that's why the loop bound is '1'.
+      for (int i = 0; i < 1; i++) {
+        HLog.Entry entry = reader.next(null);
+        if (entry == null) break;
+        HLogKey key = entry.getKey();
+        WALEdit val = entry.getEdit();
+        assertTrue(Bytes.equals(info.getEncodedNameAsBytes(), key.getEncodedRegionName()));
+        assertTrue(Bytes.equals(tableName, key.getTablename()));
+        KeyValue kv = val.getKeyValues().get(0);
+        assertTrue(Bytes.equals(row, kv.getRow()));
+        assertEquals((byte)(i + '0'), kv.getValue()[0]);
+        System.out.println(key + " " + val);
+      }
+      HLog.Entry entry = null;
+      while ((entry = reader.next(null)) != null) {
+        HLogKey key = entry.getKey();
+        WALEdit val = entry.getEdit();
+        // Assert only one more row... the meta flushed row.
+        assertTrue(Bytes.equals(info.getEncodedNameAsBytes(), key.getEncodedRegionName()));
+        assertTrue(Bytes.equals(tableName, key.getTablename()));
+        KeyValue kv = val.getKeyValues().get(0);
+        assertTrue(Bytes.equals(HLog.METAROW, kv.getRow()));
+        assertTrue(Bytes.equals(HLog.METAFAMILY, kv.getFamily()));
+        assertEquals(0, Bytes.compareTo(HLog.COMPLETE_CACHE_FLUSH,
+          val.getKeyValues().get(0).getValue()));
+        System.out.println(key + " " + val);
+      }
+    } finally {
+      if (log != null) {
+        log.closeAndDelete();
+      }
+      if (reader != null) {
+        reader.close();
+      }
+    }
+  }
+
+  /**
+   * @throws IOException
+   */
+  @Test
+  public void testAppend() throws IOException {
+    final int COL_COUNT = 10;
+    final byte [] tableName = Bytes.toBytes("tablename");
+    final byte [] row = Bytes.toBytes("row");
+    Reader reader = null;
+    HLog log = new HLog(fs, dir, oldLogDir, conf);
+    try {
+      // Write columns named 1, 2, 3, etc. and then values of single byte
+      // 1, 2, 3...
+      long timestamp = System.currentTimeMillis();
+      WALEdit cols = new WALEdit();
+      for (int i = 0; i < COL_COUNT; i++) {
+        cols.add(new KeyValue(row, Bytes.toBytes("column"),
+          Bytes.toBytes(Integer.toString(i)),
+          timestamp, new byte[] { (byte)(i + '0') }));
+      }
+      HRegionInfo hri = new HRegionInfo(new HTableDescriptor(tableName),
+          HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+      log.append(hri, tableName, cols, System.currentTimeMillis());
+      long logSeqId = log.startCacheFlush();
+      log.completeCacheFlush(hri.getEncodedNameAsBytes(), tableName, logSeqId, false);
+      log.close();
+      Path filename = log.computeFilename();
+      log = null;
+      // Now open a reader on the log and assert append worked.
+      reader = HLog.getReader(fs, filename, conf);
+      HLog.Entry entry = reader.next();
+      assertEquals(COL_COUNT, entry.getEdit().size());
+      int idx = 0;
+      for (KeyValue val : entry.getEdit().getKeyValues()) {
+        assertTrue(Bytes.equals(hri.getEncodedNameAsBytes(),
+          entry.getKey().getEncodedRegionName()));
+        assertTrue(Bytes.equals(tableName, entry.getKey().getTablename()));
+        assertTrue(Bytes.equals(row, val.getRow()));
+        assertEquals((byte)(idx + '0'), val.getValue()[0]);
+        System.out.println(entry.getKey() + " " + val);
+        idx++;
+      }
+
+      // Get next row... the meta flushed row.
+      entry = reader.next();
+      assertEquals(1, entry.getEdit().size());
+      for (KeyValue val : entry.getEdit().getKeyValues()) {
+        assertTrue(Bytes.equals(hri.getEncodedNameAsBytes(),
+          entry.getKey().getEncodedRegionName()));
+        assertTrue(Bytes.equals(tableName, entry.getKey().getTablename()));
+        assertTrue(Bytes.equals(HLog.METAROW, val.getRow()));
+        assertTrue(Bytes.equals(HLog.METAFAMILY, val.getFamily()));
+        assertEquals(0, Bytes.compareTo(HLog.COMPLETE_CACHE_FLUSH,
+          val.getValue()));
+        System.out.println(entry.getKey() + " " + val);
+      }
+    } finally {
+      if (log != null) {
+        log.closeAndDelete();
+      }
+      if (reader != null) {
+        reader.close();
+      }
+    }
+  }
+
+  /**
+   * Test that we can visit entries before they are appended
+   * @throws Exception
+   */
+  @Test
+  public void testVisitors() throws Exception {
+    final int COL_COUNT = 10;
+    final byte [] tableName = Bytes.toBytes("tablename");
+    final byte [] row = Bytes.toBytes("row");
+    HLog log = new HLog(fs, dir, oldLogDir, conf);
+    DumbWALObserver visitor = new DumbWALObserver();
+    log.registerWALActionsListener(visitor);
+    long timestamp = System.currentTimeMillis();
+    HRegionInfo hri = new HRegionInfo(new HTableDescriptor(tableName),
+        HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+    for (int i = 0; i < COL_COUNT; i++) {
+      WALEdit cols = new WALEdit();
+      cols.add(new KeyValue(row, Bytes.toBytes("column"),
+          Bytes.toBytes(Integer.toString(i)),
+          timestamp, new byte[]{(byte) (i + '0')}));
+      log.append(hri, tableName, cols, System.currentTimeMillis());
+    }
+    assertEquals(COL_COUNT, visitor.increments);
+    log.unregisterWALActionsListener(visitor);
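+    // Appends after unregistering must not notify the observer, so the count stays at COL_COUNT.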
+    WALEdit cols = new WALEdit();
+    cols.add(new KeyValue(row, Bytes.toBytes("column"),
+        Bytes.toBytes(Integer.toString(11)),
+        timestamp, new byte[]{(byte) (11 + '0')}));
+    log.append(hri, tableName, cols, System.currentTimeMillis());
+    assertEquals(COL_COUNT, visitor.increments);
+  }
+
+  @Test
+  public void testLogCleaning() throws Exception {
+    LOG.info("testLogCleaning");
+    final byte [] tableName = Bytes.toBytes("testLogCleaning");
+    final byte [] tableName2 = Bytes.toBytes("testLogCleaning2");
+
+    HLog log = new HLog(fs, dir, oldLogDir, conf);
+    HRegionInfo hri = new HRegionInfo(new HTableDescriptor(tableName),
+        HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+    HRegionInfo hri2 = new HRegionInfo(new HTableDescriptor(tableName2),
+        HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW);
+
+    // Add a single edit and make sure that rolling won't remove the file
+    // Before HBASE-3198 it used to delete it
+    addEdits(log, hri, tableName, 1);
+    log.rollWriter();
+    assertEquals(1, log.getNumLogFiles());
+
+    // See if there's anything wrong with more than 1 edit
+    addEdits(log, hri, tableName, 2);
+    log.rollWriter();
+    assertEquals(2, log.getNumLogFiles());
+
+    // Now mix edits from 2 regions, still no flushing
+    addEdits(log, hri, tableName, 1);
+    addEdits(log, hri2, tableName2, 1);
+    addEdits(log, hri, tableName, 1);
+    addEdits(log, hri2, tableName2, 1);
+    log.rollWriter();
+    assertEquals(3, log.getNumLogFiles());
+
+    // Flush the first region, we expect to see the first two files getting
+    // archived
+    long seqId = log.startCacheFlush();
+    log.completeCacheFlush(hri.getEncodedNameAsBytes(), tableName, seqId, false);
+    log.rollWriter();
+    assertEquals(2, log.getNumLogFiles());
+
+    // Flush the second region, which removes all the remaining output files
+    // since the oldest was completely flushed and the two others only contain
+    // flush information
+    seqId = log.startCacheFlush();
+    log.completeCacheFlush(hri2.getEncodedNameAsBytes(), tableName2, seqId, false);
+    log.rollWriter();
+    assertEquals(0, log.getNumLogFiles());
+  }
+
+  private void addEdits(HLog log, HRegionInfo hri, byte [] tableName,
+                        int times) throws IOException {
+    final byte [] row = Bytes.toBytes("row");
+    for (int i = 0; i < times; i++) {
+      long timestamp = System.currentTimeMillis();
+      WALEdit cols = new WALEdit();
+      cols.add(new KeyValue(row, row, row, timestamp, row));
+      log.append(hri, tableName, cols, timestamp);
+    }
+  }
+
+  static class DumbWALObserver implements WALObserver {
+    int increments = 0;
+
+    @Override
+    public void visitLogEntryBeforeWrite(HRegionInfo info, HLogKey logKey,
+                                         WALEdit logEdit) {
+      increments++;
+    }
+
+    @Override
+    public void logRolled(Path newFile) {
+      // not interested
+    }
+
+    @Override
+    public void logRollRequested() {
+      // not interested
+    }
+
+    @Override
+    public void logCloseRequested() {
+      // not interested
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogMethods.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogMethods.java
new file mode 100644
index 0000000..50d297b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogMethods.java
@@ -0,0 +1,166 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.junit.Assert.*;
+
+import java.io.IOException;
+import java.util.NavigableSet;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.KeyValueTestUtil;
+import org.apache.hadoop.hbase.MultithreadedTestUtil;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.TestContext;
+import org.apache.hadoop.hbase.MultithreadedTestUtil.TestThread;
+import org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.EntryBuffers;
+import org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.RegionEntryBuffer;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Test;
+import static org.mockito.Mockito.mock;
+
+/**
+ * Simple testing of a few HLog methods.
+ */
+public class TestHLogMethods {
+  private static final byte[] TEST_REGION = Bytes.toBytes("test_region");
+  private static final byte[] TEST_TABLE = Bytes.toBytes("test_table");
+  
+  private final HBaseTestingUtility util = new HBaseTestingUtility();
+
+  /**
+   * Assert that getSplitEditFilesSorted returns files in expected order and
+   * that it skips moved-aside files.
+   * @throws IOException
+   */
+  @Test public void testGetSplitEditFilesSorted() throws IOException {
+    FileSystem fs = FileSystem.get(util.getConfiguration());
+    Path regiondir = HBaseTestingUtility.getTestDir("regiondir");
+    fs.delete(regiondir, true);
+    fs.mkdirs(regiondir);
+    Path recoverededits = HLog.getRegionDirRecoveredEditsDir(regiondir);
+    String first = HLogSplitter.formatRecoveredEditsFileName(-1);
+    createFile(fs, recoverededits, first);
+    createFile(fs, recoverededits, HLogSplitter.formatRecoveredEditsFileName(0));
+    createFile(fs, recoverededits, HLogSplitter.formatRecoveredEditsFileName(1));
+    createFile(fs, recoverededits, HLogSplitter
+        .formatRecoveredEditsFileName(11));
+    createFile(fs, recoverededits, HLogSplitter.formatRecoveredEditsFileName(2));
+    createFile(fs, recoverededits, HLogSplitter
+        .formatRecoveredEditsFileName(50));
+    String last = HLogSplitter.formatRecoveredEditsFileName(Long.MAX_VALUE);
+    createFile(fs, recoverededits, last);
+    createFile(fs, recoverededits,
+      Long.toString(Long.MAX_VALUE) + "." + System.currentTimeMillis());
+    NavigableSet<Path> files = HLog.getSplitEditFilesSorted(fs, regiondir);
+    assertEquals(7, files.size());
+    assertEquals(files.pollFirst().getName(), first);
+    assertEquals(files.pollLast().getName(), last);
+    assertEquals(files.pollFirst().getName(),
+      HLogSplitter
+        .formatRecoveredEditsFileName(0));
+    assertEquals(files.pollFirst().getName(),
+      HLogSplitter
+        .formatRecoveredEditsFileName(1));
+    assertEquals(files.pollFirst().getName(),
+      HLogSplitter
+        .formatRecoveredEditsFileName(2));
+    assertEquals(files.pollFirst().getName(),
+      HLogSplitter
+        .formatRecoveredEditsFileName(11));
+  }
+
+  private void createFile(final FileSystem fs, final Path testdir,
+      final String name)
+  throws IOException {
+    FSDataOutputStream fdos = fs.create(new Path(testdir, name), true);
+    fdos.close();
+  }
+
+  @Test
+  public void testRegionEntryBuffer() throws Exception {
+    HLogSplitter.RegionEntryBuffer reb = new HLogSplitter.RegionEntryBuffer(
+        TEST_TABLE, TEST_REGION);
+    assertEquals(0, reb.heapSize());
+
+    reb.appendEntry(createTestLogEntry(1));
+    assertTrue(reb.heapSize() > 0);
+  }
+  
+  @Test
+  public void testEntrySink() throws Exception {
+    Configuration conf = new Configuration();
+    HLogSplitter splitter = HLogSplitter.createLogSplitter(
+        conf, mock(Path.class), mock(Path.class), mock(Path.class),
+        mock(FileSystem.class));
+
+    EntryBuffers sink = splitter.new EntryBuffers(1*1024*1024);
+    for (int i = 0; i < 1000; i++) {
+      HLog.Entry entry = createTestLogEntry(i);
+      sink.appendEntry(entry);
+    }
+    
+    assertTrue(sink.totalBuffered > 0);
+    long amountInChunk = sink.totalBuffered;
+    // Get a chunk
+    RegionEntryBuffer chunk = sink.getChunkToWrite();
+    assertEquals(chunk.heapSize(), amountInChunk);
+    
+    // Make sure it got marked that a thread is "working on this"
+    assertTrue(sink.isRegionCurrentlyWriting(TEST_REGION));
+
+    // Insert some more entries
+    for (int i = 0; i < 500; i++) {
+      HLog.Entry entry = createTestLogEntry(i);
+      sink.appendEntry(entry);
+    }    
+    // Asking for another chunk shouldn't work since the first one
+    // is still writing
+    assertNull(sink.getChunkToWrite());
+    
+    // If we say we're done writing the first chunk, then we should be able
+    // to get the second
+    sink.doneWriting(chunk);
+    
+    RegionEntryBuffer chunk2 = sink.getChunkToWrite();
+    assertNotNull(chunk2);
+    assertNotSame(chunk, chunk2);
+    long amountInChunk2 = sink.totalBuffered;
+    // The second chunk had fewer rows than the first
+    assertTrue(amountInChunk2 < amountInChunk);
+    
+    sink.doneWriting(chunk2);
+    assertEquals(0, sink.totalBuffered);
+  }
+  
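+  /** Builds a WAL entry holding one KeyValue for the fixed test table and region, keyed by the given index. */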
+  private HLog.Entry createTestLogEntry(int i) {
+    long seq = i;
+    long now = i * 1000;
+    
+    WALEdit edit = new WALEdit();
+    edit.add(KeyValueTestUtil.create("row", "fam", "qual", 1234, "val"));
+    HLogKey key = new HLogKey(TEST_REGION, TEST_TABLE, seq, now);
+    HLog.Entry entry = new HLog.Entry(key, edit);
+    return entry;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java
new file mode 100644
index 0000000..779af98
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java
@@ -0,0 +1,1098 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.junit.Assert.*;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Entry;
+import org.apache.hadoop.hbase.regionserver.wal.HLog.Reader;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.hadoop.ipc.RemoteException;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.ImmutableList;
+
+/**
+ * Testing {@link HLog} splitting code.
+ */
+public class TestHLogSplit {
+
+  private final static Log LOG = LogFactory.getLog(TestHLogSplit.class);
+
+  private Configuration conf;
+  private FileSystem fs;
+
+  private final static HBaseTestingUtility
+          TEST_UTIL = new HBaseTestingUtility();
+
+
+  private static final Path hbaseDir = new Path("/hbase");
+  private static final Path hlogDir = new Path(hbaseDir, "hlog");
+  private static final Path oldLogDir = new Path(hbaseDir, "hlog.old");
+  private static final Path corruptDir = new Path(hbaseDir, ".corrupt");
+
+  private static final int NUM_WRITERS = 10;
+  private static final int ENTRIES = 10; // entries per writer per region
+
+  private HLog.Writer[] writer = new HLog.Writer[NUM_WRITERS];
+  private long seq = 0;
+  private static final byte[] TABLE_NAME = "t1".getBytes();
+  private static final byte[] FAMILY = "f1".getBytes();
+  private static final byte[] QUALIFIER = "q1".getBytes();
+  private static final byte[] VALUE = "v1".getBytes();
+  private static final String HLOG_FILE_PREFIX = "hlog.dat.";
+  private static List<String> regions;
+  private static final String HBASE_SKIP_ERRORS = "hbase.hlog.split.skip.errors";
+  private static final Path tabledir =
+      new Path(hbaseDir, Bytes.toString(TABLE_NAME));
+
+  static enum Corruptions {
+    INSERT_GARBAGE_ON_FIRST_LINE,
+    INSERT_GARBAGE_IN_THE_MIDDLE,
+    APPEND_GARBAGE,
+    TRUNCATE,
+  }
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.getConfiguration().
+            setInt("hbase.regionserver.flushlogentries", 1);
+    TEST_UTIL.getConfiguration().
+            setBoolean("dfs.support.append", true);
+    TEST_UTIL.getConfiguration().
+            setStrings("hbase.rootdir", hbaseDir.toString());
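+    // Use the instrumented WAL writer so individual tests can inject append failures.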
+    TEST_UTIL.getConfiguration().
+            setClass("hbase.regionserver.hlog.writer.impl",
+                InstrumentedSequenceFileLogWriter.class, HLog.Writer.class);
+
+    TEST_UTIL.startMiniDFSCluster(2);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniDFSCluster();
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    flushToConsole("Cleaning up cluster for new test\n"
+        + "--------------------------");
+    conf = TEST_UTIL.getConfiguration();
+    fs = TEST_UTIL.getDFSCluster().getFileSystem();
+    FileStatus[] entries = fs.listStatus(new Path("/"));
+    flushToConsole("Num entries in /:" + entries.length);
+    for (FileStatus dir : entries){
+      assertTrue("Deleting " + dir.getPath(),
+          fs.delete(dir.getPath(), true));
+    }
+    seq = 0;
+    regions = new ArrayList<String>();
+    Collections.addAll(regions, "bbb", "ccc");
+    InstrumentedSequenceFileLogWriter.activateFailure = false;
+    // Set the soft lease for hdfs to be down from default of 5 minutes or so.
+    TEST_UTIL.setNameNodeNameSystemLeasePeriod(100, 50000);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+  }
+
+  /**
+   * @throws IOException 
+   * @see https://issues.apache.org/jira/browse/HBASE-3020
+   */
+  @Test public void testRecoveredEditsPathForMeta() throws IOException {
+    FileSystem fs = FileSystem.get(TEST_UTIL.getConfiguration());
+    byte [] encoded = HRegionInfo.FIRST_META_REGIONINFO.getEncodedNameAsBytes();
+    Path tdir = new Path(hbaseDir, Bytes.toString(HConstants.META_TABLE_NAME));
+    Path regiondir = new Path(tdir,
+        HRegionInfo.FIRST_META_REGIONINFO.getEncodedName());
+    fs.mkdirs(regiondir);
+    long now = System.currentTimeMillis();
+    HLog.Entry entry =
+      new HLog.Entry(new HLogKey(encoded, HConstants.META_TABLE_NAME, 1, now),
+      new WALEdit());
+    Path p = HLogSplitter.getRegionSplitEditsPath(fs, entry, hbaseDir);
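+    // The edits file should land under the meta region's own directory, hence the grandparent check below.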
+    String parentOfParent = p.getParent().getParent().getName();
+    assertEquals(parentOfParent, HRegionInfo.FIRST_META_REGIONINFO.getEncodedName());
+  }
+
+  @Test(expected = OrphanHLogAfterSplitException.class)
+  public void testSplitFailsIfNewHLogGetsCreatedAfterSplitStarted()
+  throws IOException {
+    AtomicBoolean stop = new AtomicBoolean(false);
+    
+    FileStatus[] stats = fs.listStatus(new Path("/hbase/t1"));
+    assertTrue("Previous test should clean up table dir",
+        stats == null || stats.length == 0);
+
+    generateHLogs(-1);
+    
+    try {
+      (new ZombieNewLogWriterRegionServer(stop)).start();
+      HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+          hbaseDir, hlogDir, oldLogDir, fs);
+      logSplitter.splitLog();
+    } finally {
+      stop.set(true);
+    }
+  }
+
+  @Test
+  public void testSplitPreservesEdits() throws IOException{
+    final String REGION = "region__1";
+    regions.removeAll(regions);
+    regions.add(REGION);
+
+    generateHLogs(1, 10, -1);
+    fs.initialize(fs.getUri(), conf);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+      hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    Path originalLog = (fs.listStatus(oldLogDir))[0].getPath();
+    Path splitLog = getLogForRegion(hbaseDir, TABLE_NAME, REGION);
+
+    assertEquals("edits differ after split", true, logsAreEqual(originalLog, splitLog));
+  }
+
+
+  @Test
+  public void testEmptyLogFiles() throws IOException {
+
+    injectEmptyFile(".empty", true);
+    generateHLogs(Integer.MAX_VALUE);
+    injectEmptyFile("empty", true);
+
+    // make fs act as a different client now
+    // initialize will create a new DFSClient with a new client ID
+    fs.initialize(fs.getUri(), conf);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      assertEquals(NUM_WRITERS * ENTRIES, countHLog(logfile, fs, conf));
+    }
+
+  }
+
+
+  @Test
+  public void testEmptyOpenLogFiles() throws IOException {
+    injectEmptyFile(".empty", false);
+    generateHLogs(Integer.MAX_VALUE);
+    injectEmptyFile("empty", false);
+
+    // make fs act as a different client now
+    // initialize will create a new DFSClient with a new client ID
+    fs.initialize(fs.getUri(), conf);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      assertEquals(NUM_WRITERS * ENTRIES, countHLog(logfile, fs, conf));
+    }
+  }
+
+  @Test
+  public void testOpenZeroLengthReportedFileButWithDataGetsSplit() throws IOException {
+    // generate logs but leave hlog.dat.5 open.
+    generateHLogs(5);
+
+    fs.initialize(fs.getUri(), conf);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      assertEquals(NUM_WRITERS * ENTRIES, countHLog(logfile, fs, conf));
+    }
+
+
+  }
+
+
+  @Test
+  public void testTrailingGarbageCorruptionFileSkipErrorsPasses() throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, true);
+    generateHLogs(Integer.MAX_VALUE);
+    corruptHLog(new Path(hlogDir, HLOG_FILE_PREFIX + "5"),
+            Corruptions.APPEND_GARBAGE, true, fs);
+    fs.initialize(fs.getUri(), conf);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      assertEquals(NUM_WRITERS * ENTRIES, countHLog(logfile, fs, conf));
+    }
+
+
+  }
+
+  @Test
+  public void testFirstLineCorruptionLogFileSkipErrorsPasses() throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, true);
+    generateHLogs(Integer.MAX_VALUE);
+    corruptHLog(new Path(hlogDir, HLOG_FILE_PREFIX + "5"),
+            Corruptions.INSERT_GARBAGE_ON_FIRST_LINE, true, fs);
+    fs.initialize(fs.getUri(), conf);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      assertEquals((NUM_WRITERS - 1) * ENTRIES, countHLog(logfile, fs, conf));
+    }
+
+
+  }
+
+
+  @Test
+  public void testMiddleGarbageCorruptionSkipErrorsReadsHalfOfFile() throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, true);
+    generateHLogs(Integer.MAX_VALUE);
+    corruptHLog(new Path(hlogDir, HLOG_FILE_PREFIX + "5"),
+            Corruptions.INSERT_GARBAGE_IN_THE_MIDDLE, false, fs);
+    fs.initialize(fs.getUri(), conf);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      // the entries in the original logs alternate between regions;
+      // accounting for the sequence file header, the corruption in the middle
+      // should affect at least half of the entries
+      int goodEntries = (NUM_WRITERS - 1) * ENTRIES;
+      int firstHalfEntries = (int) Math.ceil(ENTRIES / 2.0) - 1;
+      assertTrue("The file up to the corrupted area hasn't been parsed",
+              goodEntries + firstHalfEntries <= countHLog(logfile, fs, conf));
+    }
+  }
+
+  @Test
+  public void testCorruptedFileGetsArchivedIfSkipErrors() throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, true);
+    Class<?> backupClass = conf.getClass("hbase.regionserver.hlog.reader.impl",
+        Reader.class);
+    InstrumentedSequenceFileLogWriter.activateFailure = false;
+    HLog.resetLogReaderClass();
+
+    try {
+      Path c1 = new Path(hlogDir, HLOG_FILE_PREFIX + "0");
+      conf.setClass("hbase.regionserver.hlog.reader.impl",
+          FaultySequenceFileLogReader.class, HLog.Reader.class);
+      for (FaultySequenceFileLogReader.FailureType  failureType : FaultySequenceFileLogReader.FailureType.values()) {
+        conf.set("faultysequencefilelogreader.failuretype", failureType.name());
+        generateHLogs(1, ENTRIES, -1);
+        fs.initialize(fs.getUri(), conf);
+        HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+            hbaseDir, hlogDir, oldLogDir, fs);
+        logSplitter.splitLog();
+        FileStatus[] archivedLogs = fs.listStatus(corruptDir);
+        assertEquals("expected a different file", c1.getName(), archivedLogs[0]
+            .getPath().getName());
+        assertEquals(archivedLogs.length, 1);
+        fs.delete(new Path(oldLogDir, HLOG_FILE_PREFIX + "0"), false);
+      }
+    } finally {
+      conf.setClass("hbase.regionserver.hlog.reader.impl", backupClass,
+          Reader.class);
+      HLog.resetLogReaderClass();
+    }
+  }
+
+  @Test(expected = IOException.class)
+  public void testTrailingGarbageCorruptionLogFileSkipErrorsFalseThrows()
+      throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, false);
+    Class<?> backupClass = conf.getClass("hbase.regionserver.hlog.reader.impl",
+        Reader.class);
+    InstrumentedSequenceFileLogWriter.activateFailure = false;
+    HLog.resetLogReaderClass();
+
+    try {
+      conf.setClass("hbase.regionserver.hlog.reader.impl",
+          FaultySequenceFileLogReader.class, HLog.Reader.class);
+      conf.set("faultysequencefilelogreader.failuretype", FaultySequenceFileLogReader.FailureType.BEGINNING.name());
+      generateHLogs(Integer.MAX_VALUE);
+      fs.initialize(fs.getUri(), conf);
+      HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+          hbaseDir, hlogDir, oldLogDir, fs);
+      logSplitter.splitLog();
+    } finally {
+      conf.setClass("hbase.regionserver.hlog.reader.impl", backupClass,
+          Reader.class);
+      HLog.resetLogReaderClass();
+    }
+
+  }
+
+  @Test
+  public void testCorruptedLogFilesSkipErrorsFalseDoesNotTouchLogs()
+      throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, false);
+    Class<?> backupClass = conf.getClass("hbase.regionserver.hlog.reader.impl",
+        Reader.class);
+    InstrumentedSequenceFileLogWriter.activateFailure = false;
+    HLog.resetLogReaderClass();
+
+    try {
+      conf.setClass("hbase.regionserver.hlog.reader.impl",
+          FaultySequenceFileLogReader.class, HLog.Reader.class);
+      conf.set("faultysequencefilelogreader.failuretype", FaultySequenceFileLogReader.FailureType.BEGINNING.name());
+      generateHLogs(-1);
+      fs.initialize(fs.getUri(), conf);
+      HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+          hbaseDir, hlogDir, oldLogDir, fs);
+      try {
+        logSplitter.splitLog();
+      } catch (IOException e) {
+        assertEquals(
+            "if skip.errors is false all files should remain in place",
+            NUM_WRITERS, fs.listStatus(hlogDir).length);
+      }
+    } finally {
+      conf.setClass("hbase.regionserver.hlog.reader.impl", backupClass,
+          Reader.class);
+      HLog.resetLogReaderClass();
+    }
+
+  }
+
+  @Test
+  public void testEOFisIgnored() throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, false);
+
+    final String REGION = "region__1";
+    regions.removeAll(regions);
+    regions.add(REGION);
+
+    int entryCount = 10;
+    Path c1 = new Path(hlogDir, HLOG_FILE_PREFIX + "0");
+    generateHLogs(1, entryCount, -1);
+    corruptHLog(c1, Corruptions.TRUNCATE, true, fs);
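+    // Truncation drops the tail of the log; the split should tolerate the resulting EOF and recover entryCount - 1 edits.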
+
+    fs.initialize(fs.getUri(), conf);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    Path originalLog = (fs.listStatus(oldLogDir))[0].getPath();
+    Path splitLog = getLogForRegion(hbaseDir, TABLE_NAME, REGION);
+
+    int actualCount = 0;
+    HLog.Reader in = HLog.getReader(fs, splitLog, conf);
+    HLog.Entry entry;
+    while ((entry = in.next()) != null) ++actualCount;
+    assertEquals(entryCount-1, actualCount);
+
+    // should not have stored the EOF files as corrupt
+    FileStatus[] archivedLogs = fs.listStatus(corruptDir);
+    assertEquals(archivedLogs.length, 0);
+  }
+  
+  @Test
+  public void testLogsGetArchivedAfterSplit() throws IOException {
+    conf.setBoolean(HBASE_SKIP_ERRORS, false);
+
+    generateHLogs(-1);
+
+    fs.initialize(fs.getUri(), conf);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    FileStatus[] archivedLogs = fs.listStatus(oldLogDir);
+
+    assertEquals("wrong number of files in the archive log", NUM_WRITERS, archivedLogs.length);
+  }
+
+  @Test
+  public void testSplit() throws IOException {
+    generateHLogs(-1);
+    fs.initialize(fs.getUri(), conf);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    for (String region : regions) {
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, region);
+      assertEquals(NUM_WRITERS * ENTRIES, countHLog(logfile, fs, conf));
+
+    }
+  }
+
+  @Test
+  public void testLogDirectoryShouldBeDeletedAfterSuccessfulSplit()
+  throws IOException {
+    generateHLogs(-1);
+    fs.initialize(fs.getUri(), conf);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+    FileStatus [] statuses = null;
+    try {
+      statuses = fs.listStatus(hlogDir);
+      if (statuses != null) {
+        Assert.fail("Files left in log dir: " +
+            Joiner.on(",").join(FileUtil.stat2Paths(statuses)));
+      }
+    } catch (FileNotFoundException e) {
+      // hadoop 0.21 throws FNFE whereas hadoop 0.20 returns null
+    }
+  }
+/* DISABLED for now.  TODO: HBASE-2645 
+  @Test
+  public void testLogCannotBeWrittenOnceParsed() throws IOException {
+    AtomicLong counter = new AtomicLong(0);
+    AtomicBoolean stop = new AtomicBoolean(false);
+    generateHLogs(9);
+    fs.initialize(fs.getUri(), conf);
+
+    Thread zombie = new ZombieLastLogWriterRegionServer(writer[9], counter, stop);
+
+
+
+    try {
+      zombie.start();
+
+      HLog.splitLog(hbaseDir, hlogDir, oldLogDir, fs, conf);
+
+      Path logfile = getLogForRegion(hbaseDir, TABLE_NAME, "juliet");
+
+      // It's possible that the writer got an error while appending and didn't count it
+      // however the entry will in fact be written to file and split with the rest
+      long numberOfEditsInRegion = countHLog(logfile, fs, conf);
+      assertTrue("The log file could have at most 1 extra log entry, but " +
+              "can't have less. Zombie could write "+counter.get() +" and logfile had only"+ numberOfEditsInRegion+" "  + logfile, counter.get() == numberOfEditsInRegion ||
+                      counter.get() + 1 == numberOfEditsInRegion);
+    } finally {
+      stop.set(true);
+    }
+  }
+*/
+
+  @Test
+  public void testSplitWillNotTouchLogsIfNewHLogGetsCreatedAfterSplitStarted()
+  throws IOException {
+    AtomicBoolean stop = new AtomicBoolean(false);
+    generateHLogs(-1);
+    fs.initialize(fs.getUri(), conf);
+    Thread zombie = new ZombieNewLogWriterRegionServer(stop);
+    
+    try {
+      zombie.start();
+      try {
+        HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+            hbaseDir, hlogDir, oldLogDir, fs);
+        logSplitter.splitLog();
+      } catch (IOException ex) {/* expected */}
+      int logFilesNumber = fs.listStatus(hlogDir).length;
+
+      assertEquals("Log files should not be archived if there's an extra file after split",
+              NUM_WRITERS + 1, logFilesNumber);
+    } finally {
+      stop.set(true);
+    }
+
+  }
+
+
+
+  @Test(expected = IOException.class)
+  public void testSplitWillFailIfWritingToRegionFails() throws Exception {
+    //leave 5th log open so we could append the "trap"
+    generateHLogs(4);
+
+    fs.initialize(fs.getUri(), conf);
+
+    String region = "break";
+    Path regiondir = new Path(tabledir, region);
+    fs.mkdirs(regiondir);
+
+    InstrumentedSequenceFileLogWriter.activateFailure = false;
+    appendEntry(writer[4], TABLE_NAME, Bytes.toBytes(region),
+        ("r" + 999).getBytes(), FAMILY, QUALIFIER, VALUE, 0);
+    writer[4].close();
+
+    try {
+      InstrumentedSequenceFileLogWriter.activateFailure = true;
+      HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+          hbaseDir, hlogDir, oldLogDir, fs);
+      logSplitter.splitLog();
+
+    } catch (IOException e) {
+      assertEquals("This exception is instrumented and should only be thrown for testing", e.getMessage());
+      throw e;
+    } finally {
+      InstrumentedSequenceFileLogWriter.activateFailure = false;
+    }
+  }
+
+
+  // @Test TODO this test has been disabled since it was created!
+  // It currently fails because the second split doesn't output anything
+  // -- because there are no region dirs after we move aside the first
+  // split result
+  public void testSplittingLargeNumberOfRegionsConsistency() throws IOException {
+
+    regions.removeAll(regions);
+    for (int i=0; i<100; i++) {
+      regions.add("region__"+i);
+    }
+
+    generateHLogs(1, 100, -1);
+    fs.initialize(fs.getUri(), conf);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+    fs.rename(oldLogDir, hlogDir);
+    Path firstSplitPath = new Path(hbaseDir, Bytes.toString(TABLE_NAME) + ".first");
+    Path splitPath = new Path(hbaseDir, Bytes.toString(TABLE_NAME));
+    fs.rename(splitPath,
+            firstSplitPath);
+
+
+    fs.initialize(fs.getUri(), conf);
+    logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+
+    assertEquals(0, compareHLogSplitDirs(firstSplitPath, splitPath));
+  }
+
+  @Test
+  public void testSplitDeletedRegion() throws IOException {
+    regions.removeAll(regions);
+    String region = "region_that_splits";
+    regions.add(region);
+
+    generateHLogs(1);
+
+    fs.initialize(fs.getUri(), conf);
+
+    Path regiondir = new Path(tabledir, region);
+    fs.delete(regiondir, true);
+
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(conf,
+        hbaseDir, hlogDir, oldLogDir, fs);
+    logSplitter.splitLog();
+    
+    assertFalse(fs.exists(regiondir));
+  }
+  
+  @Test
+  public void testIOEOnOutputThread() throws Exception {
+    conf.setBoolean(HBASE_SKIP_ERRORS, false);
+
+    generateHLogs(-1);
+
+    fs.initialize(fs.getUri(), conf);
+    // Set up a splitter that will throw an IOE on the output side
+    HLogSplitter logSplitter = new HLogSplitter(
+        conf, hbaseDir, hlogDir, oldLogDir, fs) {
+      protected HLog.Writer createWriter(FileSystem fs, Path logfile, Configuration conf)
+      throws IOException {
+        HLog.Writer mockWriter = Mockito.mock(HLog.Writer.class);
+        Mockito.doThrow(new IOException("Injected")).when(mockWriter).append(Mockito.<HLog.Entry>any());
+        return mockWriter;
+        
+      }
+    };
+    try {
+      logSplitter.splitLog();
+      fail("Didn't throw!");
+    } catch (IOException ioe) {
+      assertTrue(ioe.toString().contains("Injected"));
+    }
+  }
+  
+  /**
+   * Test log split process with fake data and lots of edits to trigger threading
+   * issues.
+   */
+  @Test
+  public void testThreading() throws Exception {
+    doTestThreading(20000, 128*1024*1024, 0);
+  }
+  
+  /**
+   * Test blocking behavior of the log split process if writers are writing slower
+   * than the reader is reading.
+   */
+  @Test
+  public void testThreadingSlowWriterSmallBuffer() throws Exception {
+    doTestThreading(200, 1024, 50);
+  }
+  
+  /**
+   * Sets up a log splitter with a mock reader and writer. The mock reader generates
+   * a specified number of edits spread across 5 regions. The mock writer optionally
+   * sleeps for each edit it is fed.
+   *
+   * After the split is complete, verifies that the statistics show the correct number
+   * of edits output into each region.
+   * 
+   * @param numFakeEdits number of fake edits to push through pipeline
+   * @param bufferSize size of in-memory buffer
+   * @param writerSlowness writer threads will sleep this many ms per edit
+   */
+  private void doTestThreading(final int numFakeEdits,
+      final int bufferSize,
+      final int writerSlowness) throws Exception {
+
+    Configuration localConf = new Configuration(conf);
+    localConf.setInt("hbase.regionserver.hlog.splitlog.buffersize", bufferSize);
+
+    // Create a fake log file (we'll override the reader to produce a stream of edits)
+    FSDataOutputStream out = fs.create(new Path(hlogDir, HLOG_FILE_PREFIX + ".fake"));
+    out.close();
+
+    // Make region dirs for our destination regions so the output doesn't get skipped
+    final List<String> regions = ImmutableList.of("r0", "r1", "r2", "r3", "r4"); 
+    makeRegionDirs(fs, regions);
+
+    // Create a splitter that reads and writes the data without touching disk
+    HLogSplitter logSplitter = new HLogSplitter(
+        localConf, hbaseDir, hlogDir, oldLogDir, fs) {
+      
+      /* Produce a mock writer that doesn't write anywhere */
+      protected HLog.Writer createWriter(FileSystem fs, Path logfile, Configuration conf)
+      throws IOException {
+        HLog.Writer mockWriter = Mockito.mock(HLog.Writer.class);
+        Mockito.doAnswer(new Answer<Void>() {
+          int expectedIndex = 0;
+          
+          @Override
+          public Void answer(InvocationOnMock invocation) {
+            if (writerSlowness > 0) {
+              try {
+                Thread.sleep(writerSlowness);
+              } catch (InterruptedException ie) {
+                Thread.currentThread().interrupt();
+              }
+            }
+            HLog.Entry entry = (Entry) invocation.getArguments()[0];
+            WALEdit edit = entry.getEdit();
+            List<KeyValue> keyValues = edit.getKeyValues();
+            assertEquals(1, keyValues.size());
+            KeyValue kv = keyValues.get(0);
+            
+            // Check that the edits come in the right order.
+            assertEquals(expectedIndex, Bytes.toInt(kv.getRow()));
+            expectedIndex++;
+            return null;
+          }
+        }).when(mockWriter).append(Mockito.<HLog.Entry>any());
+        return mockWriter;        
+      }
+      
+      
+      /* Produce a mock reader that generates fake entries */
+      protected Reader getReader(FileSystem fs, Path curLogFile, Configuration conf)
+      throws IOException {
+        Reader mockReader = Mockito.mock(Reader.class);
+        Mockito.doAnswer(new Answer<HLog.Entry>() {
+          int index = 0;
+
+          @Override
+          public HLog.Entry answer(InvocationOnMock invocation) throws Throwable {
+            if (index >= numFakeEdits) return null;
+           
+            // Generate r0 through r4 in round robin fashion
+            int regionIdx = index % regions.size();
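+            // 0x30 + regionIdx is the ASCII digit, giving names "r0" through "r4".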
+            byte region[] = new byte[] {(byte)'r', (byte) (0x30 + regionIdx)};
+            
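+            // The row is the per-region sequence number; the mock writer above
+            // asserts that it arrives in strictly increasing order.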
+            HLog.Entry ret = createTestEntry(TABLE_NAME, region,
+                Bytes.toBytes((int)(index / regions.size())),
+                FAMILY, QUALIFIER, VALUE, index);
+            index++;
+            return ret;
+          }
+        }).when(mockReader).next();
+        return mockReader;
+      }
+    };
+    
+    logSplitter.splitLog();
+    
+    // Verify number of written edits per region
+
+    Map<byte[], Long> outputCounts = logSplitter.getOutputCounts();
+    for (Map.Entry<byte[], Long> entry : outputCounts.entrySet()) {
+      LOG.info("Got " + entry.getValue() + " output edits for region " + 
+          Bytes.toString(entry.getKey()));
+      
+      assertEquals((long)entry.getValue(), numFakeEdits / regions.size());
+    }
+    assertEquals(regions.size(), outputCounts.size());
+  }
+  
+  
+
+  /**
+   * This thread keeps writing to the file after the split process has started.
+   * It simulates a region server that was considered dead but woke up and wrote
+   * some more to the last log entry.
+   */
+  class ZombieLastLogWriterRegionServer extends Thread {
+    AtomicLong editsCount;
+    AtomicBoolean stop;
+    Path log;
+    HLog.Writer lastLogWriter;
+    public ZombieLastLogWriterRegionServer(HLog.Writer writer, AtomicLong counter, AtomicBoolean stop) {
+      this.stop = stop;
+      this.editsCount = counter;
+      this.lastLogWriter = writer;
+    }
+
+    @Override
+    public void run() {
+      if (stop.get()){
+        return;
+      }
+      flushToConsole("starting");
+      while (true) {
+        try {
+          String region = "juliet";
+          
+          fs.mkdirs(new Path(new Path(hbaseDir, region), region));
+          appendEntry(lastLogWriter, TABLE_NAME, region.getBytes(),
+                  ("r" + editsCount).getBytes(), FAMILY, QUALIFIER, VALUE, 0);
+          lastLogWriter.sync();
+          editsCount.incrementAndGet();
+          try {
+            Thread.sleep(1);
+          } catch (InterruptedException e) {
+            //
+          }
+
+
+        } catch (IOException ex) {
+          if (ex instanceof RemoteException) {
+            flushToConsole("Juliet: got RemoteException " +
+                    ex.getMessage() + " while writing " + (editsCount.get() + 1));
+            break;
+          } else {
+            fail("Failed to write " + editsCount.get());
+          }
+        }
+      }
+
+
+    }
+  }
+
+  /**
+   * This thread keeps adding new log files.
+   * It simulates a region server that was considered dead but woke up and wrote
+   * some more to a new hlog.
+   */
+  class ZombieNewLogWriterRegionServer extends Thread {
+    AtomicBoolean stop;
+    public ZombieNewLogWriterRegionServer(AtomicBoolean stop) {
+      super("ZombieNewLogWriterRegionServer");
+      this.stop = stop;
+    }
+
+    @Override
+    public void run() {
+      if (stop.get()) {
+        return;
+      }
+      Path tableDir = new Path(hbaseDir, new String(TABLE_NAME));
+      Path regionDir = new Path(tableDir, regions.get(0));      
+      Path recoveredEdits = new Path(regionDir, HLogSplitter.RECOVERED_EDITS);
+      String region = "juliet";
+      Path julietLog = new Path(hlogDir, HLOG_FILE_PREFIX + ".juliet");
+      try {
+
+        while (!fs.exists(recoveredEdits) && !stop.get()) {
+          flushToConsole("Juliet: split not started, sleeping a bit...");
+          Threads.sleep(10);
+        }
+
+        fs.mkdirs(new Path(tableDir, region));
+        HLog.Writer writer = HLog.createWriter(fs,
+                julietLog, conf);
+        appendEntry(writer, "juliet".getBytes(), ("juliet").getBytes(),
+                ("r").getBytes(), FAMILY, QUALIFIER, VALUE, 0);
+        writer.close();
+        flushToConsole("Juliet file creator: created file " + julietLog);
+      } catch (IOException e1) {
+        assertTrue("Failed to create file " + julietLog, false);
+      }
+    }
+  }
+
+  private void flushToConsole(String s) {
+    System.out.println(s);
+    System.out.flush();
+  }
+
+
+  private void generateHLogs(int leaveOpen) throws IOException {
+    generateHLogs(NUM_WRITERS, ENTRIES, leaveOpen);
+  }
+
+  private void makeRegionDirs(FileSystem fs, List<String> regions) throws IOException {
+    for (String region : regions) {
+      flushToConsole("Creating dir for region " + region);
+      fs.mkdirs(new Path(tabledir, region));
+    }
+  }
+  
+  private void generateHLogs(int writers, int entries, int leaveOpen) throws IOException {
+    makeRegionDirs(fs, regions);
+    for (int i = 0; i < writers; i++) {
+      writer[i] = HLog.createWriter(fs, new Path(hlogDir, HLOG_FILE_PREFIX + i), conf);
+      for (int j = 0; j < entries; j++) {
+        int prefix = 0;
+        for (String region : regions) {
+          String row_key = region + prefix++ + i + j;
+          appendEntry(writer[i], TABLE_NAME, region.getBytes(),
+                  row_key.getBytes(), FAMILY, QUALIFIER, VALUE, seq);
+        }
+      }
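+      // The writer whose index equals leaveOpen is kept open to simulate a log
+      // that is still being written to when the split starts.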
+      if (i != leaveOpen) {
+        writer[i].close();
+        flushToConsole("Closing writer " + i);
+      }
+    }
+  }
+
+  private Path getLogForRegion(Path rootdir, byte[] table, String region)
+  throws IOException {
+    Path tdir = HTableDescriptor.getTableDir(rootdir, table);
+    Path editsdir = HLog.getRegionDirRecoveredEditsDir(HRegion.getRegionDir(tdir,
+      Bytes.toString(region.getBytes())));
+    FileStatus [] files = this.fs.listStatus(editsdir);
+    assertEquals(1, files.length);
+    return files[0].getPath();
+  }
+
+  private void corruptHLog(Path path, Corruptions corruption, boolean close,
+                           FileSystem fs) throws IOException {
+
+    FSDataOutputStream out;
+    int fileSize = (int) fs.listStatus(path)[0].getLen();
+
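+    // Read the whole log into memory so each corruption case can rewrite it.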
+    FSDataInputStream in = fs.open(path);
+    byte[] corrupted_bytes = new byte[fileSize];
+    in.readFully(0, corrupted_bytes, 0, fileSize);
+    in.close();
+
+    switch (corruption) {
+      case APPEND_GARBAGE:
+        out = fs.append(path);
+        out.write("-----".getBytes());
+        closeOrFlush(close, out);
+        break;
+
+      case INSERT_GARBAGE_ON_FIRST_LINE:
+        fs.delete(path, false);
+        out = fs.create(path);
+        out.write(0);
+        out.write(corrupted_bytes);
+        closeOrFlush(close, out);
+        break;
+
+      case INSERT_GARBAGE_IN_THE_MIDDLE:
+        fs.delete(path, false);
+        out = fs.create(path);
+        int middle = (int) Math.floor(corrupted_bytes.length / 2);
+        out.write(corrupted_bytes, 0, middle);
+        out.write(0);
+        out.write(corrupted_bytes, middle, corrupted_bytes.length - middle);
+        closeOrFlush(close, out);
+        break;
+        
+      case TRUNCATE:
+        fs.delete(path, false);
+        out = fs.create(path);
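+        // Rewrite the file minus its last 32 bytes so the tail of the log is cut off.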
+        out.write(corrupted_bytes, 0, fileSize-32);
+        closeOrFlush(close, out);
+        
+        break;
+    }
+
+
+  }
+
+  private void closeOrFlush(boolean close, FSDataOutputStream out)
+  throws IOException {
+    if (close) {
+      out.close();
+    } else {
+      out.sync();
+      // out.hflush() is not available in Hadoop 0.20; sync() is the closest equivalent.
+    }
+  }
+
+  @SuppressWarnings("unused")
+  private void dumpHLog(Path log, FileSystem fs, Configuration conf) throws IOException {
+    HLog.Entry entry;
+    HLog.Reader in = HLog.getReader(fs, log, conf);
+    while ((entry = in.next()) != null) {
+      System.out.println(entry);
+    }
+  }
+
+  private int countHLog(Path log, FileSystem fs, Configuration conf) throws IOException {
+    int count = 0;
+    HLog.Reader in = HLog.getReader(fs, log, conf);
+    while (in.next() != null) {
+      count++;
+    }
+    return count;
+  }
+
+
+  public long appendEntry(HLog.Writer writer, byte[] table, byte[] region,
+                          byte[] row, byte[] family, byte[] qualifier,
+                          byte[] value, long seq)
+          throws IOException {
+
+    writer.append(createTestEntry(table, region, row, family, qualifier, value, seq));
+    writer.sync();
+    return seq;
+  }
+  
+  private HLog.Entry createTestEntry(
+      byte[] table, byte[] region,
+      byte[] row, byte[] family, byte[] qualifier,
+      byte[] value, long seq) {
+    long time = System.nanoTime();
+    WALEdit edit = new WALEdit();
+    seq++;
+    edit.add(new KeyValue(row, family, qualifier, time, KeyValue.Type.Put, value));
+    return new HLog.Entry(new HLogKey(region, table, seq, time), edit);
+  }
+
+
+  private void injectEmptyFile(String suffix, boolean closeFile)
+          throws IOException {
+    HLog.Writer writer = HLog.createWriter(
+            fs, new Path(hlogDir, HLOG_FILE_PREFIX + suffix), conf);
+    if (closeFile) writer.close();
+  }
+
+  @SuppressWarnings("unused")
+  private void listLogs(FileSystem fs, Path dir) throws IOException {
+    for (FileStatus file : fs.listStatus(dir)) {
+      System.out.println(file.getPath());
+    }
+
+  }
+
+  private int compareHLogSplitDirs(Path p1, Path p2) throws IOException {
+    FileStatus[] f1 = fs.listStatus(p1);
+    FileStatus[] f2 = fs.listStatus(p2);
+    assertNotNull("Path " + p1 + " doesn't exist", f1);
+    assertNotNull("Path " + p2 + " doesn't exist", f2);
+    
+    System.out.println("Files in " + p1 + ": " +
+        Joiner.on(",").join(FileUtil.stat2Paths(f1)));
+    System.out.println("Files in " + p2 + ": " +
+        Joiner.on(",").join(FileUtil.stat2Paths(f2)));
+    assertEquals(f1.length, f2.length);
+
+    for (int i = 0; i < f1.length; i++) {
+      // Regions now have a directory named RECOVERED_EDITS_DIR and in here
+      // are the split edit files. Below we presume there is only one.
+      Path rd1 = HLog.getRegionDirRecoveredEditsDir(f1[i].getPath());
+      FileStatus[] rd1fs = fs.listStatus(rd1);
+      assertEquals(1, rd1fs.length);
+      Path rd2 = HLog.getRegionDirRecoveredEditsDir(f2[i].getPath());
+      FileStatus[] rd2fs = fs.listStatus(rd2);
+      assertEquals(1, rd2fs.length);
+      if (!logsAreEqual(rd1fs[0].getPath(), rd2fs[0].getPath())) {
+        return -1;
+      }
+    }
+    return 0;
+  }
+
+  private boolean logsAreEqual(Path p1, Path p2) throws IOException {
+    HLog.Reader in1, in2;
+    in1 = HLog.getReader(fs, p1, conf);
+    in2 = HLog.getReader(fs, p2, conf);
+    HLog.Entry entry1;
+    HLog.Entry entry2;
+    while ((entry1 = in1.next()) != null) {
+      entry2 = in2.next();
+      if ((entry1.getKey().compareTo(entry2.getKey()) != 0) ||
+              (!entry1.getEdit().toString().equals(entry2.getEdit().toString()))) {
+        return false;
+      }
+    }
+    return true;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
new file mode 100644
index 0000000..287f1fb
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java
@@ -0,0 +1,326 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.commons.logging.impl.Log4JLogger;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.namenode.LeaseManager;
+import org.apache.log4j.Level;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Test log deletion as logs are rolled.
+ */
+public class TestLogRolling  {
+  private static final Log LOG = LogFactory.getLog(TestLogRolling.class);
+  private HRegionServer server;
+  private HLog log;
+  private String tableName;
+  private byte[] value;
+  private static FileSystem fs;
+  private static MiniDFSCluster dfsCluster;
+  private static HBaseAdmin admin;
+  private static MiniHBaseCluster cluster;
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  // verbose logging on classes that are touched in these tests
+  {
+    ((Log4JLogger)DataNode.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)LeaseManager.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)FSNamesystem.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)DFSClient.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)HRegionServer.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)HRegion.LOG).getLogger().setLevel(Level.ALL);
+    ((Log4JLogger)HLog.LOG).getLogger().setLevel(Level.ALL);
+  }
+
+  /**
+   * Constructor.
+   */
+  public TestLogRolling()  {
+    super();
+    this.server = null;
+    this.log = null;
+    this.tableName = null;
+    this.value = null;
+
+    // Build a value at least 1000 characters long to use in each Put.
+    String className = this.getClass().getName();
+    StringBuilder v = new StringBuilder(className);
+    while (v.length() < 1000) {
+      v.append(className);
+    }
+    value = Bytes.toBytes(v.toString());
+  }
+
+  // Need to override this setup so we can edit the config before it gets sent
+  // to the HDFS & HBase cluster startup.
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    /**** configuration for testLogRolling ****/
+    // Force a region split after every 768KB
+    TEST_UTIL.getConfiguration().setLong("hbase.hregion.max.filesize", 768L * 1024L);
+
+    // We roll the log after every 32 writes
+    TEST_UTIL.getConfiguration().setInt("hbase.regionserver.maxlogentries", 32);
+
+    // For less frequently updated regions flush after every 2 flushes
+    TEST_UTIL.getConfiguration().setInt("hbase.hregion.memstore.optionalflushcount", 2);
+
+    // We flush the cache after every 8192 bytes
+    TEST_UTIL.getConfiguration().setInt("hbase.hregion.memstore.flush.size", 8192);
+
+    // Increase the amount of time between client retries
+    TEST_UTIL.getConfiguration().setLong("hbase.client.pause", 15 * 1000);
+
+    // Reduce thread wake frequency so that other threads can get
+    // a chance to run.
+    TEST_UTIL.getConfiguration().setInt(HConstants.THREAD_WAKE_FREQUENCY, 2 * 1000);
+
+    /**** configuration for testLogRollOnDatanodeDeath ****/
+    // make sure log.hflush() calls syncFs() to open a pipeline
+    TEST_UTIL.getConfiguration().setBoolean("dfs.support.append", true);
+    // lower the namenode & datanode heartbeat so the namenode
+    // quickly detects datanode failures
+    TEST_UTIL.getConfiguration().setInt("heartbeat.recheck.interval", 5000);
+    TEST_UTIL.getConfiguration().setInt("dfs.heartbeat.interval", 1);
+    // the namenode might still try to choose the recently-dead datanode
+    // for a pipeline, so retry creating a new pipeline multiple times
+    TEST_UTIL.getConfiguration().setInt("dfs.client.block.write.retries", 30);
+    TEST_UTIL.startMiniCluster(2);
+
+    cluster = TEST_UTIL.getHBaseCluster();
+    dfsCluster = TEST_UTIL.getDFSCluster();
+    fs = TEST_UTIL.getTestFileSystem();
+    admin = TEST_UTIL.getHBaseAdmin();
+  }
+
+  @AfterClass
+  public  static void tearDownAfterClass() throws IOException  {
+    TEST_UTIL.cleanupTestDir();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private void startAndWriteData() throws IOException {
+    // When the META table can be opened, the region servers are running
+    new HTable(TEST_UTIL.getConfiguration(), HConstants.META_TABLE_NAME);
+    this.server = cluster.getRegionServerThreads().get(0).getRegionServer();
+    this.log = server.getWAL();
+
+    // Create the test table and open it
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+    admin.createTable(desc);
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), tableName);
+
+    server = TEST_UTIL.getRSForFirstRegionInTable(Bytes.toBytes(tableName));
+    this.log = server.getWAL();
+    for (int i = 1; i <= 256; i++) {    // 256 writes should cause 8 log rolls
+      Put put = new Put(Bytes.toBytes("row" + String.format("%1$04d", i)));
+      put.add(HConstants.CATALOG_FAMILY, null, value);
+      table.put(put);
+      if (i % 32 == 0) {
+        // After every 32 writes sleep to let the log roller run
+        try {
+          Thread.sleep(2000);
+        } catch (InterruptedException e) {
+          // continue
+        }
+      }
+    }
+  }
+
+  /**
+   * Tests that logs are deleted
+   * @throws IOException
+   * @throws FailedLogCloseException
+   */
+  @Test
+  public void testLogRolling() throws FailedLogCloseException, IOException {
+    this.tableName = getName();
+    startAndWriteData();
+    LOG.info("after writing there are " + log.getNumLogFiles() + " log files");
+
+    // flush all regions
+    List<HRegion> regions =
+      new ArrayList<HRegion>(server.getOnlineRegionsLocalContext());
+    for (HRegion r: regions) {
+      r.flushcache();
+    }
+
+    // Now roll the log
+    log.rollWriter();
+
+    int count = log.getNumLogFiles();
+    LOG.info("after flushing all regions and rolling logs there are " +
+        log.getNumLogFiles() + " log files");
+    assertTrue(("actual count: " + count), count <= 2);
+  }
+
+  private static String getName() {
+    return "TestLogRolling";
+  }
+
+  void writeData(HTable table, int rownum) throws IOException {
+    Put put = new Put(Bytes.toBytes("row" + String.format("%1$04d", rownum)));
+    put.add(HConstants.CATALOG_FAMILY, null, value);
+    table.put(put);
+
+    // sleep to let the log roller run (if it needs to)
+    try {
+      Thread.sleep(2000);
+    } catch (InterruptedException e) {
+      // continue
+    }
+  }
+
+  /**
+   * Give me the HDFS pipeline for this log file
+   */
+  DatanodeInfo[] getPipeline(HLog log) throws IllegalArgumentException,
+      IllegalAccessException, InvocationTargetException {
+    OutputStream stm = log.getOutputStream();
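+    // getPipeline() is not public on the DFS output stream, so locate it
+    // reflectively and force it accessible.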
+    Method getPipeline = null;
+    for (Method m : stm.getClass().getDeclaredMethods()) {
+      if (m.getName().endsWith("getPipeline")) {
+        getPipeline = m;
+        getPipeline.setAccessible(true);
+        break;
+      }
+    }
+
+    assertTrue("Need DFSOutputStream.getPipeline() for this test",
+        null != getPipeline);
+    Object repl = getPipeline.invoke(stm, new Object[] {} /* NO_ARGS */);
+    return (DatanodeInfo[]) repl;
+  }
+
+  /**
+   * Tests that logs are rolled upon detecting datanode death
+   * Requires an HDFS jar with HDFS-826 & syncFs() support (HDFS-200)
+   * @throws IOException
+   * @throws InterruptedException
+   * @throws InvocationTargetException 
+   * @throws IllegalAccessException
+   * @throws IllegalArgumentException 
+    */
+  @Test
+  public void testLogRollOnDatanodeDeath() throws IOException,
+      InterruptedException, IllegalArgumentException, IllegalAccessException,
+      InvocationTargetException {
+    assertTrue("This test requires HLog file replication.",
+      fs.getDefaultReplication() > 1);
+    LOG.info("Replication=" + fs.getDefaultReplication());
+    // When the META table can be opened, the region servers are running
+    new HTable(TEST_UTIL.getConfiguration(), HConstants.META_TABLE_NAME);
+
+    this.server = cluster.getRegionServer(0);
+    this.log = server.getWAL();
+    
+    // Create the test table and open it
+    String tableName = getName();
+    HTableDescriptor desc = new HTableDescriptor(tableName);
+    desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY));
+
+    if (admin.tableExists(tableName)) {
+      admin.disableTable(tableName);
+      admin.deleteTable(tableName);
+    }
+    admin.createTable(desc);
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), tableName);
+
+    server = TEST_UTIL.getRSForFirstRegionInTable(Bytes.toBytes(tableName));
+    this.log = server.getWAL();
+
+    assertTrue("Need HDFS-826 for this test", log.canGetCurReplicas());
+    // don't run this test without append support (HDFS-200 & HDFS-142)
+    assertTrue("Need append support for this test", FSUtils
+        .isAppendSupported(TEST_UTIL.getConfiguration()));
+
+    // bump the datanode count to ensure proper replication when we kill one below
+    dfsCluster
+        .startDataNodes(TEST_UTIL.getConfiguration(), 1, true, null, null);
+    dfsCluster.waitActive();
+    assertTrue(dfsCluster.getDataNodes().size() >= fs.getDefaultReplication() + 1);
+
+    writeData(table, 2);
+
+    table.setAutoFlush(true);
+
+    long curTime = System.currentTimeMillis();
+    long oldFilenum = log.getFilenum();
+    assertTrue("Log should have a timestamp older than now",
+        curTime > oldFilenum && oldFilenum != -1);
+
+    assertTrue("The log shouldn't have rolled yet", oldFilenum == log.getFilenum());
+    DatanodeInfo[] pipeline = getPipeline(log);
+    assertTrue(pipeline.length == fs.getDefaultReplication());
+
+    // kill a datanode in the pipeline to force a log roll on the next sync()
+    assertTrue(dfsCluster.stopDataNode(pipeline[0].getName()) != null);
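+    // Give HDFS time to notice the dead datanode; heartbeat intervals were tuned
+    // down in setUpBeforeClass so this happens within the sleep below.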
+    Thread.sleep(10000);
+    // this write should succeed, but trigger a log roll
+    writeData(table, 2);
+    long newFilenum = log.getFilenum();
+
+    assertTrue("Missing datanode should've triggered a log roll",
+        newFilenum > oldFilenum && newFilenum > curTime);
+
+    // write some more log data (this should use a new hdfs_out)
+    writeData(table, 3);
+    assertTrue("The log should not roll again.", log.getFilenum() == newFilenum);
+    assertTrue("New log file should have the default replication", log
+        .getLogReplication() == fs.getDefaultReplication());
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALObserver.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALObserver.java
new file mode 100644
index 0000000..5b95154
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALObserver.java
@@ -0,0 +1,146 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import static org.junit.Assert.*;
+
+/**
+ * Test that the actions are called while playing with an HLog
+ */
+public class TestWALObserver {
+  protected static final Log LOG = LogFactory.getLog(TestWALObserver.class);
+
+  private final static HBaseTestingUtility TEST_UTIL =
+      new HBaseTestingUtility();
+
+  private final static byte[] SOME_BYTES =  Bytes.toBytes("t");
+  private static FileSystem fs;
+  private static Path oldLogDir;
+  private static Path logDir;
+  private static Configuration conf;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    conf = TEST_UTIL.getConfiguration();
+    conf.setInt("hbase.regionserver.maxlogs", 5);
+    fs = FileSystem.get(conf);
+    oldLogDir = new Path(HBaseTestingUtility.getTestDir(),
+        HConstants.HREGION_OLDLOGDIR_NAME);
+    logDir = new Path(HBaseTestingUtility.getTestDir(),
+        HConstants.HREGION_LOGDIR_NAME);
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    fs.delete(logDir, true);
+    fs.delete(oldLogDir, true);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    setUp();
+  }
+
+  /**
+   * Add a bunch of dummy data and roll the logs every second insert. We
+   * should end up with 10 rolled files (plus the roll called in
+   * the constructor). Also test adding a listener while it's running.
+   */
+  @Test
+  public void testActionListener() throws Exception {
+    DummyWALObserver observer = new DummyWALObserver();
+    List<WALObserver> list = new ArrayList<WALObserver>();
+    list.add(observer);
+    DummyWALObserver laterobserver = new DummyWALObserver();
+    HLog hlog = new HLog(fs, logDir, oldLogDir, conf, list, null);
+    HRegionInfo hri = new HRegionInfo(new HTableDescriptor(SOME_BYTES),
+        SOME_BYTES, SOME_BYTES, false);
+
+    for (int i = 0; i < 20; i++) {
+      byte[] b = Bytes.toBytes(i+"");
+      KeyValue kv = new KeyValue(b,b,b);
+      WALEdit edit = new WALEdit();
+      edit.add(kv);
+      HLogKey key = new HLogKey(b,b, 0, 0);
+      hlog.append(hri, key, edit);
+      if (i == 10) {
+        hlog.registerWALActionsListener(laterobserver);
+      }
+      if (i % 2 == 0) {
+        hlog.rollWriter();
+      }
+    }
+
+    hlog.close();
+    hlog.closeAndDelete();
+
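+    // observer sees the roll from the HLog constructor plus the ten rolls from the
+    // loop; laterobserver was registered just before the i == 10 roll, so it only
+    // sees the five rolls at i = 10, 12, 14, 16 and 18.  close() followed by
+    // closeAndDelete() accounts for the two close notifications.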
+    assertEquals(11, observer.logRollCounter);
+    assertEquals(5, laterobserver.logRollCounter);
+    assertEquals(2, observer.closedCount);
+  }
+
+  /**
+   * Just counts when methods are called
+   */
+  static class DummyWALObserver implements WALObserver {
+    public int logRollCounter = 0;
+    public int closedCount = 0;
+
+    @Override
+    public void logRolled(Path newFile) {
+      logRollCounter++;
+    }
+
+    @Override
+    public void logRollRequested() {
+      // Not interested
+    }
+
+    @Override
+    public void visitLogEntryBeforeWrite(HRegionInfo info, HLogKey logKey,
+        WALEdit logEdit) {
+      // Not interested
+      
+    }
+
+    @Override
+    public void logCloseRequested() {
+      closedCount++;
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
new file mode 100644
index 0000000..d10ab13
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java
@@ -0,0 +1,513 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.wal;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.hfile.HFile;
+import org.apache.hadoop.hbase.regionserver.FlushRequester;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdge;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Test replay of edits out of a WAL split.
+ */
+public class TestWALReplay {
+  public static final Log LOG = LogFactory.getLog(TestWALReplay.class);
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private final EnvironmentEdge ee = EnvironmentEdgeManager.getDelegate();
+  private Path hbaseRootDir = null;
+  private Path oldLogDir;
+  private Path logDir;
+  private FileSystem fs;
+  private Configuration conf;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    Configuration conf = TEST_UTIL.getConfiguration();
+    conf.setBoolean("dfs.support.append", true);
+    // The below config supported by 0.20-append and CDH3b2
+    conf.setInt("dfs.client.block.recovery.retries", 2);
+    conf.setInt("hbase.regionserver.flushlogentries", 1);
+    TEST_UTIL.startMiniDFSCluster(3);
+    TEST_UTIL.setNameNodeNameSystemLeasePeriod(100, 10000);
+    Path hbaseRootDir =
+      TEST_UTIL.getDFSCluster().getFileSystem().makeQualified(new Path("/hbase"));
+    LOG.info("hbase.rootdir=" + hbaseRootDir);
+    conf.set(HConstants.HBASE_DIR, hbaseRootDir.toString());
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniDFSCluster();
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    this.conf = HBaseConfiguration.create(TEST_UTIL.getConfiguration());
+    this.fs = TEST_UTIL.getDFSCluster().getFileSystem();
+    this.hbaseRootDir = new Path(this.conf.get(HConstants.HBASE_DIR));
+    this.oldLogDir = new Path(this.hbaseRootDir, HConstants.HREGION_OLDLOGDIR_NAME);
+    this.logDir = new Path(this.hbaseRootDir, HConstants.HREGION_LOGDIR_NAME);
+    if (TEST_UTIL.getDFSCluster().getFileSystem().exists(this.hbaseRootDir)) {
+      TEST_UTIL.getDFSCluster().getFileSystem().delete(this.hbaseRootDir, true);
+    }
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    TEST_UTIL.getDFSCluster().getFileSystem().delete(this.hbaseRootDir, true);
+  }
+
+  /*
+   * @param p Directory to cleanup
+   */
+  private void deleteDir(final Path p) throws IOException {
+    if (this.fs.exists(p)) {
+      if (!this.fs.delete(p, true)) {
+        throw new IOException("Failed remove of " + p);
+      }
+    }
+  }
+
+  /**
+   * Tests for hbase-2727.
+   * @throws Exception
+   * @see https://issues.apache.org/jira/browse/HBASE-2727
+   */
+  @Test
+  public void test2727() throws Exception {
+    // Test being able to have > 1 set of edits in the recovered.edits directory.
+    // Ensure edits are replayed properly.
+    final String tableNameStr = "test2727";
+    HRegionInfo hri = createBasic3FamilyHRegionInfo(tableNameStr);
+    Path basedir = new Path(hbaseRootDir, tableNameStr);
+    deleteDir(basedir);
+    fs.mkdirs(new Path(basedir, hri.getEncodedName()));
+
+    final byte [] tableName = Bytes.toBytes(tableNameStr);
+    final byte [] rowName = tableName;
+
+    HLog wal1 = createWAL(this.conf);
+    // Add 1k to each family.
+    final int countPerFamily = 1000;
+    for (HColumnDescriptor hcd: hri.getTableDesc().getFamilies()) {
+      addWALEdits(tableName, hri, rowName, hcd.getName(), countPerFamily, ee, wal1);
+    }
+    wal1.close();
+    runWALSplit(this.conf);
+
+    HLog wal2 = createWAL(this.conf);
+    // Up the sequenceid so that these edits are after the ones added above.
+    wal2.setSequenceNumber(wal1.getSequenceNumber());
+    // Add 1k to each family.
+    for (HColumnDescriptor hcd: hri.getTableDesc().getFamilies()) {
+      addWALEdits(tableName, hri, rowName, hcd.getName(), countPerFamily, ee, wal2);
+    }
+    wal2.close();
+    runWALSplit(this.conf);
+
+    HLog wal3 = createWAL(this.conf);
+    wal3.setSequenceNumber(wal2.getSequenceNumber());
+    try {
+      final HRegion region = new HRegion(basedir, wal3, this.fs, this.conf, hri,
+          null);
+      long seqid = region.initialize();
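+      // initialize() replays both sets of recovered.edits from the two splits
+      // above, so the region's seqid should now be past anything wal3 has issued.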
+      assertTrue(seqid > wal3.getSequenceNumber());
+
+      // TODO: Scan all.
+      region.close();
+    } finally {
+      wal3.closeAndDelete();
+    }
+  }
+
+  /**
+   * Test case of HRegion that is only made out of bulk loaded files.  Assert
+   * that we don't 'crash'.
+   * @throws IOException
+   * @throws IllegalAccessException
+   * @throws NoSuchFieldException
+   * @throws IllegalArgumentException
+   * @throws SecurityException
+   */
+  @Test
+  public void testRegionMadeOfBulkLoadedFilesOnly()
+  throws IOException, SecurityException, IllegalArgumentException,
+      NoSuchFieldException, IllegalAccessException, InterruptedException {
+    final String tableNameStr = "testReplayEditsWrittenViaHRegion";
+    final HRegionInfo hri = createBasic3FamilyHRegionInfo(tableNameStr);
+    final Path basedir = new Path(this.hbaseRootDir, tableNameStr);
+    deleteDir(basedir);
+    HLog wal = createWAL(this.conf);
+    HRegion region = HRegion.openHRegion(hri, wal, this.conf);
+    Path f =  new Path(basedir, "hfile");
+    HFile.Writer writer = new HFile.Writer(this.fs, f);
+    byte [] family = hri.getTableDesc().getFamilies().iterator().next().getName();
+    byte [] row = Bytes.toBytes(tableNameStr);
+    writer.append(new KeyValue(row, family, family, row));
+    writer.close();
+    region.bulkLoadHFile(f.toString(), family);
+    // Add an edit so there is something in the WAL
+    region.put((new Put(row)).add(family, family, family));
+    wal.sync();
+
+    // Now 'crash' the region by stealing its wal
+    final Configuration newConf = HBaseConfiguration.create(this.conf);
+    User user = HBaseTestingUtility.getDifferentUser(newConf,
+        tableNameStr);
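+    // Running as a different user gives the split its own DFSClient instance
+    // rather than reusing the one that still has this region's wal open,
+    // simulating another process taking over the log.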
+    user.runAs(new PrivilegedExceptionAction() {
+      public Object run() throws Exception {
+        runWALSplit(newConf);
+        HLog wal2 = createWAL(newConf);
+        HRegion region2 = new HRegion(basedir, wal2, FileSystem.get(newConf),
+          newConf, hri, null);
+        long seqid2 = region2.initialize();
+        assertTrue(seqid2 > -1);
+
+        // I can't close the original wal.  It's been appropriated when we split.
+        region2.close();
+        wal2.closeAndDelete();
+        return null;
+      }
+    });
+  }
+
+  /**
+   * Test writing edits into an HRegion, closing it, splitting logs, opening
+   * Region again.  Verify seqids.
+   * @throws IOException
+   * @throws IllegalAccessException
+   * @throws NoSuchFieldException
+   * @throws IllegalArgumentException
+   * @throws SecurityException
+   */
+  @Test
+  public void testReplayEditsWrittenViaHRegion()
+  throws IOException, SecurityException, IllegalArgumentException,
+      NoSuchFieldException, IllegalAccessException, InterruptedException {
+    final String tableNameStr = "testReplayEditsWrittenViaHRegion";
+    final HRegionInfo hri = createBasic3FamilyHRegionInfo(tableNameStr);
+    final Path basedir = new Path(this.hbaseRootDir, tableNameStr);
+    deleteDir(basedir);
+    final byte[] rowName = Bytes.toBytes(tableNameStr);
+    final int countPerFamily = 10;
+
+    // Write countPerFamily edits into the three families.  Do a flush on one
+    // of the families during the load of edits so its seqid is not the same as
+    // the others', to test that we do the right thing when seqids differ.
+    HLog wal = createWAL(this.conf);
+    HRegion region = new HRegion(basedir, wal, this.fs, this.conf, hri, null);
+    long seqid = region.initialize();
+    // HRegionServer usually does this. It knows the largest seqid across all regions.
+    wal.setSequenceNumber(seqid);
+    boolean first = true;
+    for (HColumnDescriptor hcd: hri.getTableDesc().getFamilies()) {
+      addRegionEdits(rowName, hcd.getName(), countPerFamily, this.ee, region, "x");
+      if (first ) {
+        // If first, so we have at least one family w/ different seqid to rest.
+        region.flushcache();
+        first = false;
+      }
+    }
+    // Now assert edits made it in.
+    final Get g = new Get(rowName);
+    Result result = region.get(g, null);
+    assertEquals(countPerFamily * hri.getTableDesc().getFamilies().size(),
+      result.size());
+    // Now close the region, split the log, reopen the region and assert that
+    // replay of log has no effect, that our seqids are calculated correctly so
+    // all edits in logs are seen as 'stale'/old.
+    region.close();
+    wal.close();
+    runWALSplit(this.conf);
+    HLog wal2 = createWAL(this.conf);
+    HRegion region2 = new HRegion(basedir, wal2, this.fs, this.conf, hri, null) {
+      @Override
+      protected boolean restoreEdit(Store s, KeyValue kv) {
+        super.restoreEdit(s, kv);
+        throw new RuntimeException("Called when it should not have been!");
+      }
+    };
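+    // If initialize() replayed any edit, restoreEdit above would throw and fail
+    // the test; all edits in the split logs should be seen as stale.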
+    long seqid2 = region2.initialize();
+    // HRegionServer usually does this. It knows the largest seqid across all regions.
+    wal2.setSequenceNumber(seqid2);
+    assertTrue(seqid + result.size() < seqid2);
+
+    // Next test.  Add more edits, then 'crash' this region by stealing its wal
+    // out from under it and assert that replay of the log adds the edits back
+    // correctly when region is opened again.
+    for (HColumnDescriptor hcd: hri.getTableDesc().getFamilies()) {
+      addRegionEdits(rowName, hcd.getName(), countPerFamily, this.ee, region2, "y");
+    }
+    // Get count of edits.
+    final Result result2 = region2.get(g, null);
+    assertEquals(2 * result.size(), result2.size());
+    wal2.sync();
+    // Set down the maximum recovery count so the dfsclient doesn't linger retrying
+    // something long gone.
+    HBaseTestingUtility.setMaxRecoveryErrorCount(wal2.getOutputStream(), 1);
+    final Configuration newConf = HBaseConfiguration.create(this.conf);
+    User user = HBaseTestingUtility.getDifferentUser(newConf,
+      tableNameStr);
+    user.runAs(new PrivilegedExceptionAction() {
+      public Object run() throws Exception {
+        runWALSplit(newConf);
+        FileSystem newFS = FileSystem.get(newConf);
+        // Make a new wal for new region open.
+        HLog wal3 = createWAL(newConf);
+        final AtomicInteger countOfRestoredEdits = new AtomicInteger(0);
+        HRegion region3 = new HRegion(basedir, wal3, newFS, newConf, hri, null) {
+          @Override
+          protected boolean restoreEdit(Store s, KeyValue kv) {
+            boolean b = super.restoreEdit(s, kv);
+            countOfRestoredEdits.incrementAndGet();
+            return b;
+          }
+        };
+        long seqid3 = region3.initialize();
+        // HRegionServer usually does this. It knows the largest seqid across all regions.
+        wal3.setSequenceNumber(seqid3);
+        Result result3 = region3.get(g, null);
+        // Assert that count of cells is same as before crash.
+        assertEquals(result2.size(), result3.size());
+        assertEquals(hri.getTableDesc().getFamilies().size() * countPerFamily,
+          countOfRestoredEdits.get());
+
+        // I can't close the earlier wal.  It's been appropriated when we split.
+        region3.close();
+        wal3.closeAndDelete();
+        return null;
+      }
+    });
+  }
+
+  /**
+   * Create an HRegion with the result of a HLog split and test we only see the
+   * good edits
+   * @throws Exception
+   */
+  @Test
+  public void testReplayEditsWrittenIntoWAL() throws Exception {
+    final String tableNameStr = "testReplayEditsWrittenIntoWAL";
+    final HRegionInfo hri = createBasic3FamilyHRegionInfo(tableNameStr);
+    final Path basedir = new Path(hbaseRootDir, tableNameStr);
+    deleteDir(basedir);
+    fs.mkdirs(new Path(basedir, hri.getEncodedName()));
+    final HLog wal = createWAL(this.conf);
+    final byte[] tableName = Bytes.toBytes(tableNameStr);
+    final byte[] rowName = tableName;
+    final byte[] regionName = hri.getEncodedNameAsBytes();
+
+    // Add 1k to each family.
+    final int countPerFamily = 1000;
+    for (HColumnDescriptor hcd: hri.getTableDesc().getFamilies()) {
+      addWALEdits(tableName, hri, rowName, hcd.getName(), countPerFamily, ee, wal);
+    }
+
+    // Add a cache flush; it shouldn't have any effect
+    long logSeqId = wal.startCacheFlush();
+    wal.completeCacheFlush(regionName, tableName, logSeqId, hri.isMetaRegion());
+
+    // Add an edit to another family, should be skipped.
+    WALEdit edit = new WALEdit();
+    long now = ee.currentTimeMillis();
+    edit.add(new KeyValue(rowName, Bytes.toBytes("another family"), rowName,
+      now, rowName));
+    wal.append(hri, tableName, edit, now);
+
+    // Delete the c family to verify deletes make it over.
+    edit = new WALEdit();
+    now = ee.currentTimeMillis();
+    edit.add(new KeyValue(rowName, Bytes.toBytes("c"), null, now,
+      KeyValue.Type.DeleteFamily));
+    wal.append(hri, tableName, edit, now);
+
+    // Sync.
+    wal.sync();
+    // Set down the maximum recovery count so the dfsclient doesn't linger retrying
+    // something long gone.
+    HBaseTestingUtility.setMaxRecoveryErrorCount(wal.getOutputStream(), 1);
+    // Make a new conf and a new fs for the splitter to run on so we can take
+    // over old wal.
+    final Configuration newConf = HBaseConfiguration.create(this.conf);
+    User user = HBaseTestingUtility.getDifferentUser(newConf,
+      ".replay.wal.secondtime");
+    user.runAs(new PrivilegedExceptionAction(){
+      public Object run() throws Exception {
+        runWALSplit(newConf);
+        FileSystem newFS = FileSystem.get(newConf);
+        // 100k seems to make for about 4 flushes during HRegion#initialize.
+        newConf.setInt("hbase.hregion.memstore.flush.size", 1024 * 100);
+        // Make a new wal for new region.
+        HLog newWal = createWAL(newConf);
+        final AtomicInteger flushcount = new AtomicInteger(0);
+        try {
+          final HRegion region = new HRegion(basedir, newWal, newFS, newConf, hri,
+              null) {
+            protected boolean internalFlushcache(HLog wal, long myseqid)
+            throws IOException {
+              boolean b = super.internalFlushcache(wal, myseqid);
+              flushcount.incrementAndGet();
+              return b;
+            };
+          };
+          long seqid = region.initialize();
+          // We flushed during init.
+          assertTrue(flushcount.get() > 0);
+          assertTrue(seqid > wal.getSequenceNumber());
+
+          Get get = new Get(rowName);
+          Result result = region.get(get, -1);
+          // Make sure we only see the good edits
+          assertEquals(countPerFamily * (hri.getTableDesc().getFamilies().size() - 1),
+            result.size());
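+          // The 'c' family was removed by the DeleteFamily marker and the edit to
+          // the unknown 'another family' was skipped, hence families.size() - 1.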
+          region.close();
+        } finally {
+          newWal.closeAndDelete();
+        }
+        return null;
+      }
+    });
+  }
+
+  // Flusher used in this test.  Keep count of how often we are called and
+  // actually run the flush inside here.
+  class TestFlusher implements FlushRequester {
+    private int count = 0;
+
+    @Override
+    public void requestFlush(HRegion region) {
+      count++;
+      try {
+        region.flushcache();
+      } catch (IOException e) {
+        throw new RuntimeException("Exception flushing", e);
+      }
+    }
+  }
+
+  private void addWALEdits (final byte [] tableName, final HRegionInfo hri,
+      final byte [] rowName, final byte [] family,
+      final int count, EnvironmentEdge ee, final HLog wal)
+  throws IOException {
+    String familyStr = Bytes.toString(family);
+    for (int j = 0; j < count; j++) {
+      byte[] qualifierBytes = Bytes.toBytes(Integer.toString(j));
+      byte[] columnBytes = Bytes.toBytes(familyStr + ":" + Integer.toString(j));
+      WALEdit edit = new WALEdit();
+      edit.add(new KeyValue(rowName, family, qualifierBytes,
+        ee.currentTimeMillis(), columnBytes));
+      wal.append(hri, tableName, edit, ee.currentTimeMillis());
+    }
+  }
+
+  private void addRegionEdits (final byte [] rowName, final byte [] family,
+      final int count, EnvironmentEdge ee, final HRegion r,
+      final String qualifierPrefix)
+  throws IOException {
+    for (int j = 0; j < count; j++) {
+      byte[] qualifier = Bytes.toBytes(qualifierPrefix + Integer.toString(j));
+      Put p = new Put(rowName);
+      p.add(family, qualifier, ee.currentTimeMillis(), rowName);
+      r.put(p);
+    }
+  }
+
+  /*
+   * Creates an HRI around an HTD that has <code>tableName</code> and three
+   * column families named 'a','b', and 'c'.
+   * @param tableName Name of table to use when we create HTableDescriptor.
+   */
+  private HRegionInfo createBasic3FamilyHRegionInfo(final String tableName) {
+    HTableDescriptor htd = new HTableDescriptor(tableName);
+    HColumnDescriptor a = new HColumnDescriptor(Bytes.toBytes("a"));
+    htd.addFamily(a);
+    HColumnDescriptor b = new HColumnDescriptor(Bytes.toBytes("b"));
+    htd.addFamily(b);
+    HColumnDescriptor c = new HColumnDescriptor(Bytes.toBytes("c"));
+    htd.addFamily(c);
+    return new HRegionInfo(htd, null, null, false);
+  }
+
+
+  /*
+   * Run the split.  Verify only single split file made.
+   * @param c
+   * @return The single split file made
+   * @throws IOException
+   */
+  private Path runWALSplit(final Configuration c) throws IOException {
+    FileSystem fs = FileSystem.get(c);
+    HLogSplitter logSplitter = HLogSplitter.createLogSplitter(c,
+        this.hbaseRootDir, this.logDir, this.oldLogDir, fs);
+    List<Path> splits = logSplitter.splitLog();
+    // Split should generate only 1 file since there's only 1 region
+    assertEquals(1, splits.size());
+    // Make sure the file exists
+    assertTrue(fs.exists(splits.get(0)));
+    LOG.info("Split file=" + splits.get(0));
+    return splits.get(0);
+  }
+
+  /*
+   * @param c
+   * @return WAL with retries set down from 5 to 1 only.
+   * @throws IOException
+   */
+  private HLog createWAL(final Configuration c) throws IOException {
+    HLog wal = new HLog(FileSystem.get(c), logDir, oldLogDir, c);
+    // Set down the maximum recovery count so the dfsclient doesn't linger retrying
+    // something long gone.
+    HBaseTestingUtility.setMaxRecoveryErrorCount(wal.getOutputStream(), 1);
+    return wal;
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/replication/ReplicationSourceDummy.java b/0.90/src/test/java/org/apache/hadoop/hbase/replication/ReplicationSourceDummy.java
new file mode 100644
index 0000000..9d3e862
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/replication/ReplicationSourceDummy.java
@@ -0,0 +1,90 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceInterface;
+import org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+/**
+ * Source that does nothing at all, helpful to test ReplicationSourceManager
+ */
+public class ReplicationSourceDummy implements ReplicationSourceInterface {
+
+  ReplicationSourceManager manager;
+  String peerClusterId;
+  Path currentPath;
+
+  @Override
+  public void init(Configuration conf, FileSystem fs,
+                   ReplicationSourceManager manager, Stoppable stopper,
+                   AtomicBoolean replicating, String peerClusterId)
+      throws IOException {
+    this.manager = manager;
+    this.peerClusterId = peerClusterId;
+  }
+
+  @Override
+  public void enqueueLog(Path log) {
+    this.currentPath = log;
+  }
+
+  @Override
+  public Path getCurrentPath() {
+    return this.currentPath;
+  }
+
+  @Override
+  public void startup() {
+
+  }
+
+  @Override
+  public void terminate(String reason) {
+
+  }
+
+  @Override
+  public void terminate(String reason, Exception e) {
+
+  }
+
+  @Override
+  public String getPeerClusterZnode() {
+    return peerClusterId;
+  }
+
+  @Override
+  public String getPeerClusterId() {
+    return peerClusterId;
+  }
+
+  @Override
+  public void setSourceEnabled(boolean status) {
+
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/replication/TestReplication.java b/0.90/src/test/java/org/apache/hadoop/hbase/replication/TestReplication.java
new file mode 100644
index 0000000..bd813de
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/replication/TestReplication.java
@@ -0,0 +1,604 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.UnknownScannerException;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
+import org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.JVMClusterUtil;
+import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.apache.hadoop.mapreduce.Job;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestReplication {
+
+  private static final Log LOG = LogFactory.getLog(TestReplication.class);
+
+  private static Configuration conf1;
+  private static Configuration conf2;
+
+  private static ZooKeeperWatcher zkw1;
+  private static ZooKeeperWatcher zkw2;
+
+  private static ReplicationAdmin admin;
+  private static String slaveClusterKey;
+
+  private static HTable htable1;
+  private static HTable htable2;
+
+  private static HBaseTestingUtility utility1;
+  private static HBaseTestingUtility utility2;
+  private static final int NB_ROWS_IN_BATCH = 100;
+  private static final int NB_ROWS_IN_BIG_BATCH =
+      NB_ROWS_IN_BATCH * 10;
+  private static final long SLEEP_TIME = 500;
+  private static final int NB_RETRIES = 10;
+
+  private static final byte[] tableName = Bytes.toBytes("test");
+  private static final byte[] famName = Bytes.toBytes("f");
+  private static final byte[] row = Bytes.toBytes("row");
+  private static final byte[] noRepfamName = Bytes.toBytes("norep");
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    conf1 = HBaseConfiguration.create();
+    conf1.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/1");
+    // smaller block size and capacity to trigger more operations
+    // and test them
+    conf1.setInt("hbase.regionserver.hlog.blocksize", 1024*20);
+    conf1.setInt("replication.source.size.capacity", 1024);
+    conf1.setLong("replication.source.sleepforretries", 100);
+    conf1.setInt("hbase.regionserver.maxlogs", 10);
+    conf1.setLong("hbase.master.logcleaner.ttl", 10);
+    conf1.setLong("hbase.client.retries.number", 5);
+    conf1.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    conf1.setBoolean("dfs.support.append", true);
+    conf1.setLong(HConstants.THREAD_WAKE_FREQUENCY, 100);
+
+    utility1 = new HBaseTestingUtility(conf1);
+    utility1.startMiniZKCluster();
+    MiniZooKeeperCluster miniZK = utility1.getZkCluster();
+    zkw1 = new ZooKeeperWatcher(conf1, "cluster1", null);
+    admin = new ReplicationAdmin(conf1);
+    LOG.info("Setup first Zk");
+
+    conf2 = HBaseConfiguration.create();
+    conf2.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/2");
+    conf2.setInt("hbase.client.retries.number", 6);
+    conf2.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    conf2.setBoolean("dfs.support.append", true);
+
+    utility2 = new HBaseTestingUtility(conf2);
+    utility2.setZkCluster(miniZK);
+    zkw2 = new ZooKeeperWatcher(conf2, "cluster2", null);
+
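+    // A peer cluster key has the form "zookeeperQuorum:clientPort:znodeParent",
+    // here pointing at the second mini cluster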
+    slaveClusterKey = conf2.get(HConstants.ZOOKEEPER_QUORUM)+":" +
+            conf2.get("hbase.zookeeper.property.clientPort")+":/2";
+    admin.addPeer("2", slaveClusterKey);
+    setIsReplication(true);
+
+    LOG.info("Setup second Zk");
+
+    utility1.startMiniCluster(2);
+    utility2.startMiniCluster(2);
+
+    HTableDescriptor table = new HTableDescriptor(tableName);
+    HColumnDescriptor fam = new HColumnDescriptor(famName);
+    fam.setScope(HConstants.REPLICATION_SCOPE_GLOBAL);
+    table.addFamily(fam);
+    fam = new HColumnDescriptor(noRepfamName);
+    table.addFamily(fam);
+    HBaseAdmin admin1 = new HBaseAdmin(conf1);
+    HBaseAdmin admin2 = new HBaseAdmin(conf2);
+    admin1.createTable(table);
+    admin2.createTable(table);
+
+    htable1 = new HTable(conf1, tableName);
+    htable1.setWriteBufferSize(1024);
+    htable2 = new HTable(conf2, tableName);
+  }
+
+  private static void setIsReplication(boolean rep) throws Exception {
+    LOG.info("Set rep " + rep);
+    admin.setReplicating(rep);
+    Thread.sleep(SLEEP_TIME);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+
+    // Starting and stopping replication can make us miss new logs;
+    // rolling like this makes sure the most recent one gets added to the queue
+    for ( JVMClusterUtil.RegionServerThread r :
+        utility1.getHBaseCluster().getRegionServerThreads()) {
+      r.getRegionServer().getWAL().rollWriter();
+    }
+    utility1.truncateTable(tableName);
+    // Truncating the table sends one Delete per row to the slave cluster
+    // asynchronously, which is why we cannot just call truncateTable on
+    // utility2: late writes could still arrive on the slave afterwards.
+    // Instead, we truncate the first table and wait for all the Deletes to
+    // make it to the slave.
+    Scan scan = new Scan();
+    int lastCount = 0;
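+    // Poll the slave until the scan comes back empty, sleeping SLEEP_TIME between attempts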
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for truncate");
+      }
+      ResultScanner scanner = htable2.getScanner(scan);
+      Result[] res = scanner.next(NB_ROWS_IN_BIG_BATCH);
+      scanner.close();
+      if (res.length != 0) {
+        if (res.length < lastCount) {
+          i--; // Don't increment timeout if we make progress
+        }
+        lastCount = res.length;
+        LOG.info("Still got " + res.length + " rows");
+        Thread.sleep(SLEEP_TIME);
+      } else {
+        break;
+      }
+    }
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    utility2.shutdownMiniCluster();
+    utility1.shutdownMiniCluster();
+  }
+
+  /**
+   * Add a row, check that it is replicated, delete it, then check that the
+   * delete is replicated too
+   * @throws Exception
+   */
+  @Test
+  public void testSimplePutDelete() throws Exception {
+    LOG.info("testSimplePutDelete");
+    Put put = new Put(row);
+    put.add(famName, row, row);
+
+    htable1 = new HTable(conf1, tableName);
+    htable1.put(put);
+
+    Get get = new Get(row);
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for put replication");
+      }
+      Result res = htable2.get(get);
+      if (res.size() == 0) {
+        LOG.info("Row not available");
+        Thread.sleep(SLEEP_TIME);
+      } else {
+        assertArrayEquals(res.value(), row);
+        break;
+      }
+    }
+
+    Delete del = new Delete(row);
+    htable1.delete(del);
+
+    get = new Get(row);
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for del replication");
+      }
+      Result res = htable2.get(get);
+      if (res.size() >= 1) {
+        LOG.info("Row not deleted");
+        Thread.sleep(SLEEP_TIME);
+      } else {
+        break;
+      }
+    }
+  }
+
+  /**
+   * Try a small batch upload using the write buffer, check it's replicated
+   * @throws Exception
+   */
+  @Test
+  public void testSmallBatch() throws Exception {
+    LOG.info("testSmallBatch");
+    Put put;
+    // normal Batch tests
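+    // Disable autoflush so the puts are buffered client-side and sent as one batch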
+    htable1.setAutoFlush(false);
+    for (int i = 0; i < NB_ROWS_IN_BATCH; i++) {
+      put = new Put(Bytes.toBytes(i));
+      put.add(famName, row, row);
+      htable1.put(put);
+    }
+    htable1.flushCommits();
+
+    Scan scan = new Scan();
+
+    ResultScanner scanner1 = htable1.getScanner(scan);
+    Result[] res1 = scanner1.next(NB_ROWS_IN_BATCH);
+    scanner1.close();
+    assertEquals(NB_ROWS_IN_BATCH, res1.length);
+
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for normal batch replication");
+      }
+      ResultScanner scanner = htable2.getScanner(scan);
+      Result[] res = scanner.next(NB_ROWS_IN_BATCH);
+      scanner.close();
+      if (res.length != NB_ROWS_IN_BATCH) {
+        LOG.info("Only got " + res.length + " rows");
+        Thread.sleep(SLEEP_TIME);
+      } else {
+        break;
+      }
+    }
+
+    htable1.setAutoFlush(true);
+
+  }
+
+  /**
+   * Test stopping replication: insert while it is stopped and make sure
+   * nothing is replicated, then re-enable it and check that replication works again
+   * @throws Exception
+   */
+  @Test
+  public void testStartStop() throws Exception {
+
+    // Test stopping replication
+    setIsReplication(false);
+
+    Put put = new Put(Bytes.toBytes("stop start"));
+    put.add(famName, row, row);
+    htable1.put(put);
+
+    Get get = new Get(Bytes.toBytes("stop start"));
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        break;
+      }
+      Result res = htable2.get(get);
+      if(res.size() >= 1) {
+        fail("Replication wasn't stopped");
+
+      } else {
+        LOG.info("Row not replicated, let's wait a bit more...");
+        Thread.sleep(SLEEP_TIME);
+      }
+    }
+
+    // Test restart replication
+    setIsReplication(true);
+
+    htable1.put(put);
+
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for put replication");
+      }
+      Result res = htable2.get(get);
+      if(res.size() == 0) {
+        LOG.info("Row not available");
+        Thread.sleep(SLEEP_TIME);
+      } else {
+        assertArrayEquals(res.value(), row);
+        break;
+      }
+    }
+
+    put = new Put(Bytes.toBytes("do not rep"));
+    put.add(noRepfamName, row, row);
+    htable1.put(put);
+
+    get = new Get(Bytes.toBytes("do not rep"));
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i == NB_RETRIES-1) {
+        break;
+      }
+      Result res = htable2.get(get);
+      if (res.size() >= 1) {
+        fail("Not supposed to be replicated");
+      } else {
+        LOG.info("Row not replicated, let's wait a bit more...");
+        Thread.sleep(SLEEP_TIME);
+      }
+    }
+
+  }
+
+  /**
+   * Integration-test counterpart of TestReplicationAdmin: removes and re-adds
+   * a peer cluster
+   * @throws Exception
+   */
+  @Test
+  public void testAddAndRemoveClusters() throws Exception {
+    LOG.info("testAddAndRemoveClusters");
+    admin.removePeer("2");
+    Thread.sleep(SLEEP_TIME);
+    byte[] rowKey = Bytes.toBytes("Won't be replicated");
+    Put put = new Put(rowKey);
+    put.add(famName, row, row);
+    htable1.put(put);
+
+    Get get = new Get(rowKey);
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i == NB_RETRIES-1) {
+        break;
+      }
+      Result res = htable2.get(get);
+      if (res.size() >= 1) {
+        fail("Not supposed to be replicated");
+      } else {
+        LOG.info("Row not replicated, let's wait a bit more...");
+        Thread.sleep(SLEEP_TIME);
+      }
+    }
+
+    admin.addPeer("2", slaveClusterKey);
+    Thread.sleep(SLEEP_TIME);
+    rowKey = Bytes.toBytes("do rep");
+    put = new Put(rowKey);
+    put.add(famName, row, row);
+    LOG.info("Adding new row");
+    htable1.put(put);
+
+    get = new Get(rowKey);
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for put replication");
+      }
+      Result res = htable2.get(get);
+      if (res.size() == 0) {
+        LOG.info("Row not available");
+        Thread.sleep(SLEEP_TIME*i);
+      } else {
+        assertArrayEquals(res.value(), row);
+        break;
+      }
+    }
+  }
+
+  /**
+   * Do a more intense version of testSmallBatch, one that will trigger
+   * hlog rolling and other non-trivial code paths
+   * @throws Exception
+   */
+  @Test
+  public void loadTesting() throws Exception {
+    htable1.setWriteBufferSize(1024);
+    htable1.setAutoFlush(false);
+    for (int i = 0; i < NB_ROWS_IN_BIG_BATCH; i++) {
+      Put put = new Put(Bytes.toBytes(i));
+      put.add(famName, row, row);
+      htable1.put(put);
+    }
+    htable1.flushCommits();
+
+    Scan scan = new Scan();
+
+    ResultScanner scanner = htable1.getScanner(scan);
+    Result[] res = scanner.next(NB_ROWS_IN_BIG_BATCH);
+    scanner.close();
+
+    assertEquals(NB_ROWS_IN_BIG_BATCH, res.length);
+
+    scan = new Scan();
+
+    for (int i = 0; i < NB_RETRIES; i++) {
+
+      scanner = htable2.getScanner(scan);
+      res = scanner.next(NB_ROWS_IN_BIG_BATCH);
+      scanner.close();
+      if (res.length != NB_ROWS_IN_BIG_BATCH) {
+        if (i == NB_RETRIES-1) {
+          int lastRow = -1;
+          for (Result result : res) {
+            int currentRow = Bytes.toInt(result.getRow());
+            for (int row = lastRow+1; row < currentRow; row++) {
+              LOG.error("Row missing: " + row);
+            }
+            lastRow = currentRow;
+          }
+          LOG.error("Last row: " + lastRow);
+          fail("Waited too much time for normal batch replication, "
+              + res.length + " instead of " + NB_ROWS_IN_BIG_BATCH);
+        } else {
+          LOG.info("Only got " + res.length + " rows");
+          Thread.sleep(SLEEP_TIME);
+        }
+      } else {
+        break;
+      }
+    }
+  }
+
+  /**
+   * Do a small loading into a table, make sure the data is really the same,
+   * then run the VerifyReplication job to check the results. Do a second
+   * comparison where all the cells are different.
+   * @throws Exception
+   */
+  @Test
+  public void testVerifyRepJob() throws Exception {
+    // Populate the tables, at the same time it guarantees that the tables are
+    // identical since it does the check
+    testSmallBatch();
+
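+    // VerifyReplication arguments: the peer id followed by the table name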
+    String[] args = new String[] {"2", Bytes.toString(tableName)};
+    Job job = VerifyReplication.createSubmittableJob(conf1, args);
+    if (job == null) {
+      fail("Job wasn't created, see the log");
+    }
+    if (!job.waitForCompletion(true)) {
+      fail("Job failed, see the log");
+    }
+    assertEquals(NB_ROWS_IN_BATCH, job.getCounters().
+        findCounter(VerifyReplication.Verifier.Counters.GOODROWS).getValue());
+    assertEquals(0, job.getCounters().
+        findCounter(VerifyReplication.Verifier.Counters.BADROWS).getValue());
+
+    Scan scan = new Scan();
+    ResultScanner rs = htable2.getScanner(scan);
+    Put put = null;
+    for (Result result : rs) {
+      put = new Put(result.getRow());
+      KeyValue firstVal = result.raw()[0];
+      put.add(firstVal.getFamily(),
+          firstVal.getQualifier(), Bytes.toBytes("diff data"));
+      htable2.put(put);
+    }
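+    // Also remove the last modified row on the slave so the two tables
+    // differ in row count as well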
+    Delete delete = new Delete(put.getRow());
+    htable2.delete(delete);
+    job = VerifyReplication.createSubmittableJob(conf1, args);
+    if (job == null) {
+      fail("Job wasn't created, see the log");
+    }
+    if (!job.waitForCompletion(true)) {
+      fail("Job failed, see the log");
+    }
+    assertEquals(0, job.getCounters().
+            findCounter(VerifyReplication.Verifier.Counters.GOODROWS).getValue());
+        assertEquals(NB_ROWS_IN_BATCH, job.getCounters().
+            findCounter(VerifyReplication.Verifier.Counters.BADROWS).getValue());
+  }
+
+  /**
+   * Load up multiple tables over 2 region servers and kill a source during
+   * the upload. The failover happens internally.
+   * @throws Exception
+   */
+  @Test
+  public void queueFailover() throws Exception {
+    utility1.createMultiRegions(htable1, famName);
+
+    // killing the RS that hosts .META. can result in failed puts until we
+    // solve IO fencing
+    int rsToKill1 =
+        utility1.getHBaseCluster().getServerWithMeta() == 0 ? 1 : 0;
+    int rsToKill2 =
+        utility2.getHBaseCluster().getServerWithMeta() == 0 ? 1 : 0;
+
+    // Takes about 20 secs to run the full loading, kill around the middle
+    Thread killer1 = killARegionServer(utility1, 7500, rsToKill1);
+    Thread killer2 = killARegionServer(utility2, 10000, rsToKill2);
+
+    LOG.info("Start loading table");
+    int initialCount = utility1.loadTable(htable1, famName);
+    LOG.info("Done loading table");
+    killer1.join(5000);
+    killer2.join(5000);
+    LOG.info("Done waiting for threads");
+
+    Result[] res;
+    while (true) {
+      try {
+        Scan scan = new Scan();
+        ResultScanner scanner = htable1.getScanner(scan);
+        res = scanner.next(initialCount);
+        scanner.close();
+        break;
+      } catch (UnknownScannerException ex) {
+        LOG.info("Cluster wasn't ready yet, restarting scanner");
+      }
+    }
+    // Check that we actually have all the rows; we may have lost some
+    // because we don't have IO fencing.
+    if (res.length != initialCount) {
+      LOG.warn("We lost some rows on the master cluster!");
+      // We don't really expect the other cluster to have more rows
+      initialCount = res.length;
+    }
+
+    Scan scan2 = new Scan();
+
+    int lastCount = 0;
+
+    for (int i = 0; i < NB_RETRIES; i++) {
+      if (i==NB_RETRIES-1) {
+        fail("Waited too much time for queueFailover replication");
+      }
+      ResultScanner scanner2 = htable2.getScanner(scan2);
+      Result[] res2 = scanner2.next(initialCount * 2);
+      scanner2.close();
+      if (res2.length < initialCount) {
+        if (lastCount < res2.length) {
+          i--; // Don't increment timeout if we make progress
+        }
+        lastCount = res2.length;
+        LOG.info("Only got " + lastCount + " rows instead of " +
+            initialCount + " current i=" + i);
+        Thread.sleep(SLEEP_TIME*2);
+      } else {
+        break;
+      }
+    }
+  }
+
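+  /**
+   * Expire the given region server's ZooKeeper session from a separate
+   * thread after the given timeout.
+   */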
+  private static Thread killARegionServer(final HBaseTestingUtility utility,
+                                   final long timeout, final int rs) {
+    Thread killer = new Thread() {
+      public void run() {
+        try {
+          Thread.sleep(timeout);
+          utility.expireRegionServerSession(rs);
+        } catch (Exception e) {
+          LOG.error(e);
+        }
+      }
+    };
+    killer.start();
+    return killer;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java b/0.90/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java
new file mode 100644
index 0000000..f019c93
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java
@@ -0,0 +1,103 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+
+public class TestReplicationSource {
+
+  private static final Log LOG =
+      LogFactory.getLog(TestReplicationSource.class);
+  private final static HBaseTestingUtility TEST_UTIL =
+      new HBaseTestingUtility();
+  private static FileSystem fs;
+  private static Path oldLogDir;
+  private static Path logDir;
+  private static Configuration conf = HBaseConfiguration.create();
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniDFSCluster(1);
+    fs = TEST_UTIL.getDFSCluster().getFileSystem();
+    oldLogDir = new Path(fs.getHomeDirectory(),
+        HConstants.HREGION_OLDLOGDIR_NAME);
+    logDir = new Path(fs.getHomeDirectory(),
+        HConstants.HREGION_LOGDIR_NAME);
+  }
+
+  /**
+   * Sanity check that we can move logs around while we are reading
+   * from them. Should this test fail, ReplicationSource would have a hard
+   * time reading logs that are being archived.
+   * @throws Exception
+   */
+  @Test
+  public void testLogMoving() throws Exception{
+    Path logPath = new Path(logDir, "log");
+    HLog.Writer writer = HLog.createWriter(fs, logPath, conf);
+    for(int i = 0; i < 3; i++) {
+      byte[] b = Bytes.toBytes(Integer.toString(i));
+      KeyValue kv = new KeyValue(b,b,b);
+      WALEdit edit = new WALEdit();
+      edit.add(kv);
+      HLogKey key = new HLogKey(b, b, 0, 0);
+      writer.append(new HLog.Entry(key, edit));
+      writer.sync();
+    }
+    writer.close();
+
+    HLog.Reader reader = HLog.getReader(fs, logPath, conf);
+    HLog.Entry entry = reader.next();
+    assertNotNull(entry);
+
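+    // Move the log into the archive directory while the reader still has it open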
+    Path oldLogPath = new Path(oldLogDir, "log");
+    fs.rename(logPath, oldLogPath);
+
+    entry = reader.next();
+    assertNotNull(entry);
+
+    entry = reader.next();
+    entry = reader.next();
+
+    assertNull(entry);
+
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java b/0.90/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java
new file mode 100644
index 0000000..d54adac
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java
@@ -0,0 +1,254 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.Stoppable;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestReplicationSink {
+  private static final Log LOG = LogFactory.getLog(TestReplicationSink.class);
+  private static final int BATCH_SIZE = 10;
+  private static final long SLEEP_TIME = 500;
+
+  private final static HBaseTestingUtility TEST_UTIL =
+      new HBaseTestingUtility();
+
+  private static ReplicationSink SINK;
+
+  private static final byte[] TABLE_NAME1 =
+      Bytes.toBytes("table1");
+  private static final byte[] TABLE_NAME2 =
+      Bytes.toBytes("table2");
+
+  private static final byte[] FAM_NAME1 = Bytes.toBytes("info1");
+  private static final byte[] FAM_NAME2 = Bytes.toBytes("info2");
+
+  private static HTable table1;
+  private static Stoppable STOPPABLE = new Stoppable() {
+    final AtomicBoolean stop = new AtomicBoolean(false);
+
+    @Override
+    public boolean isStopped() {
+      return this.stop.get();
+    }
+
+    @Override
+    public void stop(String why) {
+      LOG.info("STOPPING BECAUSE: " + why);
+      this.stop.set(true);
+    }
+    
+  };
+
+  private static HTable table2;
+
+   /**
+   * @throws java.lang.Exception
+   */
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.getConfiguration().setBoolean("dfs.support.append", true);
+    TEST_UTIL.getConfiguration().setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    TEST_UTIL.startMiniCluster(3);
+    SINK =
+      new ReplicationSink(new Configuration(TEST_UTIL.getConfiguration()), STOPPABLE);
+    table1 = TEST_UTIL.createTable(TABLE_NAME1, FAM_NAME1);
+    table2 = TEST_UTIL.createTable(TABLE_NAME2, FAM_NAME2);
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    STOPPABLE.stop("Shutting down");
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  /**
+   * @throws java.lang.Exception
+   */
+  @Before
+  public void setUp() throws Exception {
+    table1 = TEST_UTIL.truncateTable(TABLE_NAME1);
+    table2 = TEST_UTIL.truncateTable(TABLE_NAME2);
+    Thread.sleep(SLEEP_TIME);
+  }
+
+  /**
+   * Insert a whole batch of entries
+   * @throws Exception
+   */
+  @Test
+  public void testBatchSink() throws Exception {
+    HLog.Entry[] entries = new HLog.Entry[BATCH_SIZE];
+    for(int i = 0; i < BATCH_SIZE; i++) {
+      entries[i] = createEntry(TABLE_NAME1, i, KeyValue.Type.Put);
+    }
+    SINK.replicateEntries(entries);
+    Scan scan = new Scan();
+    ResultScanner scanRes = table1.getScanner(scan);
+    assertEquals(BATCH_SIZE, scanRes.next(BATCH_SIZE).length);
+  }
+
+  /**
+   * Insert a mix of puts and deletes
+   * @throws Exception
+   */
+  @Test
+  public void testMixedPutDelete() throws Exception {
+    HLog.Entry[] entries = new HLog.Entry[BATCH_SIZE/2];
+    for(int i = 0; i < BATCH_SIZE/2; i++) {
+      entries[i] = createEntry(TABLE_NAME1, i, KeyValue.Type.Put);
+    }
+    SINK.replicateEntries(entries);
+
+    entries = new HLog.Entry[BATCH_SIZE];
+    for(int i = 0; i < BATCH_SIZE; i++) {
+      entries[i] = createEntry(TABLE_NAME1, i,
+          i % 2 != 0 ? KeyValue.Type.Put: KeyValue.Type.DeleteColumn);
+    }
+
+    SINK.replicateEntries(entries);
+    Scan scan = new Scan();
+    ResultScanner scanRes = table1.getScanner(scan);
+    assertEquals(BATCH_SIZE/2, scanRes.next(BATCH_SIZE).length);
+  }
+
+  /**
+   * Insert into 2 different tables
+   * @throws Exception
+   */
+  @Test
+  public void testMixedPutTables() throws Exception {
+    HLog.Entry[] entries = new HLog.Entry[BATCH_SIZE];
+    for(int i = 0; i < BATCH_SIZE; i++) {
+      entries[i] =
+          createEntry( i % 2 == 0 ? TABLE_NAME2 : TABLE_NAME1,
+              i, KeyValue.Type.Put);
+    }
+
+    SINK.replicateEntries(entries);
+    Scan scan = new Scan();
+    ResultScanner scanRes = table2.getScanner(scan);
+    for(Result res : scanRes) {
+      assertTrue(Bytes.toInt(res.getRow()) % 2 == 0);
+    }
+  }
+
+  /**
+   * Insert then do different types of deletes
+   * @throws Exception
+   */
+  @Test
+  public void testMixedDeletes() throws Exception {
+    HLog.Entry[] entries = new HLog.Entry[3];
+    for(int i = 0; i < 3; i++) {
+      entries[i] = createEntry(TABLE_NAME1, i, KeyValue.Type.Put);
+    }
+    SINK.replicateEntries(entries);
+    entries = new HLog.Entry[3];
+
+    entries[0] = createEntry(TABLE_NAME1, 0, KeyValue.Type.DeleteColumn);
+    entries[1] = createEntry(TABLE_NAME1, 1, KeyValue.Type.DeleteFamily);
+    entries[2] = createEntry(TABLE_NAME1, 2, KeyValue.Type.DeleteColumn);
+
+    SINK.replicateEntries(entries);
+
+    Scan scan = new Scan();
+    ResultScanner scanRes = table1.getScanner(scan);
+    assertEquals(0, scanRes.next(3).length);
+  }
+
+  /**
+   * Puts are buffered but deletes are not; this test covers the case where a
+   * delete is applied before the buffered Put that creates the same row.
+   * @throws Exception
+   */
+  @Test
+  public void testApplyDeleteBeforePut() throws Exception {
+    HLog.Entry[] entries = new HLog.Entry[5];
+    for(int i = 0; i < 2; i++) {
+      entries[i] = createEntry(TABLE_NAME1, i, KeyValue.Type.Put);
+    }
+    entries[2] = createEntry(TABLE_NAME1, 1, KeyValue.Type.DeleteFamily);
+    for(int i = 3; i < 5; i++) {
+      entries[i] = createEntry(TABLE_NAME1, i, KeyValue.Type.Put);
+    }
+    SINK.replicateEntries(entries);
+    Get get = new Get(Bytes.toBytes(1));
+    Result res = table1.get(get);
+    assertEquals(0, res.size());
+  }
+
+  private HLog.Entry createEntry(byte [] table, int row,  KeyValue.Type type) {
+    byte[] fam = Bytes.equals(table, TABLE_NAME1) ? FAM_NAME1 : FAM_NAME2;
+    byte[] rowBytes = Bytes.toBytes(row);
+    // Sleep for a millisecond so that two consecutive entries for the same
+    // key don't end up with the same timestamp
+    try {
+      Thread.sleep(1);
+    } catch (InterruptedException e) {
+      LOG.info("Was interrupted while sleep, meh", e);
+    }
+    final long now = System.currentTimeMillis();
+    KeyValue kv = null;
+    if(type.getCode() == KeyValue.Type.Put.getCode()) {
+      kv = new KeyValue(rowBytes, fam, fam, now,
+          KeyValue.Type.Put, Bytes.toBytes(row));
+    } else if (type.getCode() == KeyValue.Type.DeleteColumn.getCode()) {
+        kv = new KeyValue(rowBytes, fam, fam,
+            now, KeyValue.Type.DeleteColumn);
+    } else if (type.getCode() == KeyValue.Type.DeleteFamily.getCode()) {
+        kv = new KeyValue(rowBytes, fam, null,
+            now, KeyValue.Type.DeleteFamily);
+    }
+
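+    // The region name is not relevant to the sink, so reuse the table name
+    // for both key fields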
+    HLogKey key = new HLogKey(table, table, now, now);
+
+    WALEdit edit = new WALEdit();
+    edit.add(kv);
+
+    return new HLog.Entry(key, edit);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java b/0.90/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
new file mode 100644
index 0000000..20a1ff8
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java
@@ -0,0 +1,248 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.replication.regionserver;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
+import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
+import org.apache.hadoop.hbase.regionserver.wal.WALObserver;
+import org.apache.hadoop.hbase.replication.ReplicationSourceDummy;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.zookeeper.ZKUtil;
+import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
+import org.junit.After;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Ignore;
+import org.junit.Test;
+
+import java.net.URLEncoder;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+import static org.junit.Assert.assertEquals;
+
+public class TestReplicationSourceManager {
+
+  private static final Log LOG =
+      LogFactory.getLog(TestReplicationSourceManager.class);
+
+  private static Configuration conf;
+
+  private static HBaseTestingUtility utility;
+
+  private static Replication replication;
+
+  private static ReplicationSourceManager manager;
+
+  private static ZooKeeperWatcher zkw;
+
+  private static HTableDescriptor htd;
+
+  private static HRegionInfo hri;
+
+  private static final byte[] r1 = Bytes.toBytes("r1");
+
+  private static final byte[] r2 = Bytes.toBytes("r2");
+
+  private static final byte[] f1 = Bytes.toBytes("f1");
+
+  private static final byte[] f2 = Bytes.toBytes("f2");
+
+  private static final byte[] test = Bytes.toBytes("test");
+
+  private static FileSystem fs;
+
+  private static Path oldLogDir;
+
+  private static Path logDir;
+
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+
+    conf = HBaseConfiguration.create();
+    conf.set("replication.replicationsource.implementation",
+        ReplicationSourceDummy.class.getCanonicalName());
+    conf.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    utility = new HBaseTestingUtility(conf);
+    utility.startMiniZKCluster();
+
+    zkw = new ZooKeeperWatcher(conf, "test", null);
+    ZKUtil.createWithParents(zkw, "/hbase/replication");
+    ZKUtil.createWithParents(zkw, "/hbase/replication/peers/1");
+    ZKUtil.setData(zkw, "/hbase/replication/peers/1",Bytes.toBytes(
+          conf.get(HConstants.ZOOKEEPER_QUORUM)+":" +
+          conf.get("hbase.zookeeper.property.clientPort")+":/1"));
+    ZKUtil.createWithParents(zkw, "/hbase/replication/state");
+    ZKUtil.setData(zkw, "/hbase/replication/state", Bytes.toBytes("true"));
+
+    fs = FileSystem.get(conf);
+    oldLogDir = new Path(utility.getTestDir(),
+        HConstants.HREGION_OLDLOGDIR_NAME);
+    logDir = new Path(utility.getTestDir(),
+        HConstants.HREGION_LOGDIR_NAME);
+
+    // Create the filesystem and log directories before handing them to Replication
+    replication = new Replication(new DummyServer(), fs, logDir, oldLogDir);
+    manager = replication.getReplicationManager();
+
+    manager.addSource("1");
+
+    htd = new HTableDescriptor(test);
+    HColumnDescriptor col = new HColumnDescriptor("f1");
+    col.setScope(HConstants.REPLICATION_SCOPE_GLOBAL);
+    htd.addFamily(col);
+    col = new HColumnDescriptor("f2");
+    col.setScope(HConstants.REPLICATION_SCOPE_LOCAL);
+    htd.addFamily(col);
+
+    hri = new HRegionInfo(htd, r1, r2);
+
+
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    manager.join();
+    utility.shutdownMiniCluster();
+  }
+
+  @Before
+  public void setUp() throws Exception {
+    fs.delete(logDir, true);
+    fs.delete(oldLogDir, true);
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    setUp();
+  }
+
+  @Test
+  public void testLogRoll() throws Exception {
+    long seq = 0;
+    long baseline = 1000;
+    long time = baseline;
+    KeyValue kv = new KeyValue(r1, f1, r1);
+    WALEdit edit = new WALEdit();
+    edit.add(kv);
+
+    List<WALObserver> listeners = new ArrayList<WALObserver>();
+    listeners.add(replication);
+    HLog hlog = new HLog(fs, logDir, oldLogDir, conf, listeners,
+      URLEncoder.encode("regionserver:60020", "UTF8"));
+
+    manager.init();
+
+    // Testing normal log rolling every 20
+    for(long i = 1; i < 101; i++) {
+      if(i > 1 && i % 20 == 0) {
+        hlog.rollWriter();
+      }
+      LOG.info(i);
+      HLogKey key = new HLogKey(hri.getRegionName(),
+        test, seq++, System.currentTimeMillis());
+      hlog.append(hri, key, edit);
+    }
+
+    // Simulate a rapid insert that's followed
+    // by a report that's still not totally complete (missing last one)
+    LOG.info(baseline + " and " + time);
+    baseline += 101;
+    time = baseline;
+    LOG.info(baseline + " and " + time);
+
+    for (int i = 0; i < 3; i++) {
+      HLogKey key = new HLogKey(hri.getRegionName(),
+        test, seq++, System.currentTimeMillis());
+      hlog.append(hri, key, edit);
+    }
+
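+    // Five rolls plus the initial hlog: the manager should still be tracking all 6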
+    assertEquals(6, manager.getHLogs().size());
+
+    hlog.rollWriter();
+
+    manager.logPositionAndCleanOldLogs(manager.getSources().get(0).getCurrentPath(),
+        "1", 0, false);
+
+    HLogKey key = new HLogKey(hri.getRegionName(),
+          test, seq++, System.currentTimeMillis());
+    hlog.append(hri, key, edit);
+
+    assertEquals(1, manager.getHLogs().size());
+
+
+    // TODO Need a case with only 2 HLogs and we only want to delete the first one
+  }
+
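+  /**
+   * Minimal Server implementation that only needs to hand out the test
+   * configuration and ZooKeeper watcher used by Replication.
+   */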
+  static class DummyServer implements Server {
+
+    @Override
+    public Configuration getConfiguration() {
+      return conf;
+    }
+
+    @Override
+    public ZooKeeperWatcher getZooKeeper() {
+      return zkw;
+    }
+
+    @Override
+    public CatalogTracker getCatalogTracker() {
+      return null; // no catalog tracker needed by this test
+    }
+
+    @Override
+    public String getServerName() {
+      return null; // the server name is not used by this test
+    }
+
+    @Override
+    public void abort(String why, Throwable e) {
+      // no-op
+    }
+
+    @Override
+    public void stop(String why) {
+      // no-op
+    }
+
+    @Override
+    public boolean isStopped() {
+      return false;
+    }
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
new file mode 100644
index 0000000..6b723be
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/HBaseRESTTestingUtility.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.rest.filter.GzipFilter;
+import org.apache.hadoop.util.StringUtils;
+import org.mortbay.jetty.Server;
+import org.mortbay.jetty.servlet.Context;
+import org.mortbay.jetty.servlet.ServletHolder;
+
+import com.sun.jersey.spi.container.servlet.ServletContainer;
+
+public class HBaseRESTTestingUtility {
+
+  static final Log LOG = LogFactory.getLog(HBaseRESTTestingUtility.class);
+
+  private int testServletPort;
+  private Server server;
+
+  public int getServletPort() {
+    return testServletPort;
+  }
+
+  public void startServletContainer(Configuration conf) throws Exception {
+    if (server != null) {
+      LOG.error("ServletContainer already running");
+      return;
+    }
+
+    // Inject the conf for the test by being first to make singleton
+    RESTServlet.getInstance(conf);
+
+    // set up the Jersey servlet container for Jetty
+    ServletHolder sh = new ServletHolder(ServletContainer.class);
+    sh.setInitParameter(
+      "com.sun.jersey.config.property.resourceConfigClass",
+      ResourceConfig.class.getCanonicalName());
+    sh.setInitParameter("com.sun.jersey.config.property.packages",
+      "jetty");
+
+    LOG.info("configured " + ServletContainer.class.getName());
+    
+    // set up Jetty and run the embedded server
+    server = new Server(0);
+    server.setSendServerVersion(false);
+    server.setSendDateHeader(false);
+    // set up context
+    Context context = new Context(server, "/", Context.SESSIONS);
+    context.addServlet(sh, "/*");
+    context.addFilter(GzipFilter.class, "/*", 0);
+    // start the server
+    server.start();
+    // get the port
+    testServletPort = server.getConnectors()[0].getLocalPort();
+
+    LOG.info("started " + server.getClass().getName() + " on port " + 
+      testServletPort);
+  }
+
+  public void shutdownServletContainer() {
+    if (server != null) try {
+      server.stop();
+      server = null;
+      RESTServlet.stop();
+    } catch (Exception e) {
+      LOG.warn(StringUtils.stringifyException(e));
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java
new file mode 100644
index 0000000..23673c7
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java
@@ -0,0 +1,1255 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.rest;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.io.PrintStream;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+import java.util.TreeMap;
+import java.util.Arrays;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.lang.reflect.Constructor;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.PageFilter;
+import org.apache.hadoop.hbase.filter.WhileMatchFilter;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.RemoteAdmin;
+import org.apache.hadoop.hbase.rest.client.RemoteHTable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Hash;
+import org.apache.hadoop.hbase.util.MurmurHash;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.Writable;
+import org.apache.hadoop.mapreduce.InputSplit;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobContext;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.RecordReader;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
+import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer;
+import org.apache.hadoop.util.LineReader;
+
+/**
+ * Script used to evaluate Stargate performance and scalability.  Runs a Stargate
+ * client that steps through one of a set of hardcoded tests or 'experiments'
+ * (e.g. a random reads test, a random writes test, etc.). Pass on the
+ * command-line which test to run and how many clients are participating in
+ * this experiment. Run <code>java PerformanceEvaluation --help</code> to
+ * obtain usage.
+ * 
+ * <p>This class sets up and runs the evaluation programs described in
+ * Section 7, <i>Performance Evaluation</i>, of the <a
+ * href="http://labs.google.com/papers/bigtable.html">Bigtable</a>
+ * paper, pages 8-10.
+ * 
+ * <p>If number of clients > 1, we start up a MapReduce job. Each map task
+ * runs an individual client. Each client does about 1GB of data.
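+ *
+ * <p>For example, a single-client random write run would be invoked with
+ * something like <code>PerformanceEvaluation randomWrite 1</code>; run with
+ * <code>--help</code> for the exact argument order and options.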
+ */
+public class PerformanceEvaluation  {
+  protected static final Log LOG = LogFactory.getLog(PerformanceEvaluation.class.getName());
+  
+  private static final int ROW_LENGTH = 1000;
+  private static final int ONE_GB = 1024 * 1024 * 1000;
+  private static final int ROWS_PER_GB = ONE_GB / ROW_LENGTH;
+  
+  public static final byte [] TABLE_NAME = Bytes.toBytes("TestTable");
+  public static final byte [] FAMILY_NAME = Bytes.toBytes("info");
+  public static final byte [] QUALIFIER_NAME = Bytes.toBytes("data");
+
+  protected static final HTableDescriptor TABLE_DESCRIPTOR;
+  static {
+    TABLE_DESCRIPTOR = new HTableDescriptor(TABLE_NAME);
+    TABLE_DESCRIPTOR.addFamily(new HColumnDescriptor(FAMILY_NAME));
+  }
+
+  protected Map<String, CmdDescriptor> commands = new TreeMap<String, CmdDescriptor>();
+  protected static Cluster cluster = new Cluster();
+  protected static String accessToken = null;
+
+  volatile Configuration conf;
+  private boolean nomapred = false;
+  private int N = 1;
+  private int R = ROWS_PER_GB;
+  private int B = 100;
+
+  private static final Path PERF_EVAL_DIR = new Path("performance_evaluation");
+  /**
+   * Regex to parse lines in input file passed to mapreduce task.
+   */
+  public static final Pattern LINE_PATTERN =
+    Pattern.compile("startRow=(\\d+),\\s+" +
+        "perClientRunRows=(\\d+),\\s+" +
+        "totalRows=(\\d+),\\s+" + 
+        "clients=(\\d+),\\s+" + 
+        "rowsPerPut=(\\d+)");
+
+  /**
+   * Enum for map metrics.  Keep it out here rather than inside in the Map
+   * inner-class so we can find associated properties.
+   */
+  protected static enum Counter {
+    /** elapsed time */
+    ELAPSED_TIME,
+    /** number of rows */
+    ROWS}
+
+  /**
+   * Constructor
+   * @param c Configuration object
+   */
+  public PerformanceEvaluation(final Configuration c) {
+    this.conf = c;
+
+    addCommandDescriptor(RandomReadTest.class, "randomRead",
+        "Run random read test");
+    addCommandDescriptor(RandomSeekScanTest.class, "randomSeekScan",
+        "Run random seek and scan 100 test");
+    addCommandDescriptor(RandomScanWithRange10Test.class, "scanRange10",
+        "Run random seek scan with both start and stop row (max 10 rows)");
+    addCommandDescriptor(RandomScanWithRange100Test.class, "scanRange100",
+        "Run random seek scan with both start and stop row (max 100 rows)");
+    addCommandDescriptor(RandomScanWithRange1000Test.class, "scanRange1000",
+        "Run random seek scan with both start and stop row (max 1000 rows)");
+    addCommandDescriptor(RandomScanWithRange10000Test.class, "scanRange10000",
+        "Run random seek scan with both start and stop row (max 10000 rows)");
+    addCommandDescriptor(RandomWriteTest.class, "randomWrite",
+        "Run random write test");
+    addCommandDescriptor(SequentialReadTest.class, "sequentialRead",
+        "Run sequential read test");
+    addCommandDescriptor(SequentialWriteTest.class, "sequentialWrite",
+        "Run sequential write test");
+    addCommandDescriptor(ScanTest.class, "scan",
+        "Run scan test (read every row)");
+    addCommandDescriptor(FilteredScanTest.class, "filterScan",
+        "Run scan test using a filter to find a specific row based on it's value (make sure to use --rows=20)");
+  }
+
+  protected void addCommandDescriptor(Class<? extends Test> cmdClass, 
+      String name, String description) {
+    CmdDescriptor cmdDescriptor = 
+      new CmdDescriptor(cmdClass, name, description);
+    commands.put(name, cmdDescriptor);
+  }
+  
+  /**
+   * Implementations can have their status set.
+   */
+  static interface Status {
+    /**
+     * Sets status
+     * @param msg status message
+     * @throws IOException
+     */
+    void setStatus(final String msg) throws IOException;
+  }
+  
+  /**
+   *  This class works as the InputSplit of Performance Evaluation
+   *  MapReduce InputFormat, and the Record Value of RecordReader. 
+   *  Each map task will only read one record from a PeInputSplit;
+   *  the record value is the PeInputSplit itself.
+   */
+  public static class PeInputSplit extends InputSplit implements Writable {
+    private int startRow = 0;
+    private int rows = 0;
+    private int totalRows = 0;
+    private int clients = 0;
+    private int rowsPerPut = 1;
+
+    public PeInputSplit() {
+      this.startRow = 0;
+      this.rows = 0;
+      this.totalRows = 0;
+      this.clients = 0;
+      this.rowsPerPut = 1;
+    }
+
+    public PeInputSplit(int startRow, int rows, int totalRows, int clients,
+        int rowsPerPut) {
+      this.startRow = startRow;
+      this.rows = rows;
+      this.totalRows = totalRows;
+      this.clients = clients;
+      this.rowsPerPut = rowsPerPut;
+    }
+    
+    @Override
+    public void readFields(DataInput in) throws IOException {
+      this.startRow = in.readInt();
+      this.rows = in.readInt();
+      this.totalRows = in.readInt();
+      this.clients = in.readInt();
+      this.rowsPerPut = in.readInt();
+    }
+
+    @Override
+    public void write(DataOutput out) throws IOException {
+      out.writeInt(startRow);
+      out.writeInt(rows);
+      out.writeInt(totalRows);
+      out.writeInt(clients);
+      out.writeInt(rowsPerPut);
+    }
+    
+    @Override
+    public long getLength() throws IOException, InterruptedException {
+      return 0;
+    }
+  
+    @Override
+    public String[] getLocations() throws IOException, InterruptedException {
+      return new String[0];
+    }
+    
+    public int getStartRow() {
+      return startRow;
+    }
+    
+    public int getRows() {
+      return rows;
+    }
+    
+    public int getTotalRows() {
+      return totalRows;
+    }
+    
+    public int getClients() {
+      return clients;
+    }
+
+    public int getRowsPerPut() {
+      return rowsPerPut;
+    }
+  }
+
+  /**
+   *  InputFormat of Performance Evaluation MapReduce job.
+   *  It extends FileInputFormat so that it can reuse methods such as setInputPaths().
+   */
+  public static class PeInputFormat extends FileInputFormat<NullWritable, PeInputSplit> {
+
+    @Override
+    public List<InputSplit> getSplits(JobContext job) throws IOException {
+      // generate splits
+      List<InputSplit> splitList = new ArrayList<InputSplit>();
+      
+      for (FileStatus file: listStatus(job)) {
+        Path path = file.getPath();
+        FileSystem fs = path.getFileSystem(job.getConfiguration());
+        FSDataInputStream fileIn = fs.open(path);
+        LineReader in = new LineReader(fileIn, job.getConfiguration());
+        int lineLen = 0;
+        while(true) {
+          Text lineText = new Text();
+          lineLen = in.readLine(lineText);
+          if(lineLen <= 0) {
+            break;
+          }
+          Matcher m = LINE_PATTERN.matcher(lineText.toString());
+          if((m != null) && m.matches()) {
+            int startRow = Integer.parseInt(m.group(1));
+            int rows = Integer.parseInt(m.group(2));
+            int totalRows = Integer.parseInt(m.group(3));
+            int clients = Integer.parseInt(m.group(4));
+            int rowsPerPut = Integer.parseInt(m.group(5));
+
+            LOG.debug("split["+ splitList.size() + "] " + 
+                     " startRow=" + startRow +
+                     " rows=" + rows +
+                     " totalRows=" + totalRows +
+                     " clients=" + clients +
+                     " rowsPerPut=" + rowsPerPut);
+
+            PeInputSplit newSplit =
+              new PeInputSplit(startRow, rows, totalRows, clients, rowsPerPut);
+            splitList.add(newSplit);
+          }
+        }
+        in.close();
+      }
+      
+      LOG.info("Total # of splits: " + splitList.size());
+      return splitList;
+    }
+    
+    @Override
+    public RecordReader<NullWritable, PeInputSplit> createRecordReader(InputSplit split,
+                            TaskAttemptContext context) {
+      return new PeRecordReader();
+    }
+    
+    public static class PeRecordReader extends RecordReader<NullWritable, PeInputSplit> {
+      private boolean readOver = false;
+      private PeInputSplit split = null;
+      private NullWritable key = null;
+      private PeInputSplit value = null;
+      
+      @Override
+      public void initialize(InputSplit split, TaskAttemptContext context) 
+                  throws IOException, InterruptedException {
+        this.readOver = false;
+        this.split = (PeInputSplit)split;
+      }
+      
+      @Override
+      public boolean nextKeyValue() throws IOException, InterruptedException {
+        if(readOver) {
+          return false;
+        }
+        
+        key = NullWritable.get();
+        value = (PeInputSplit)split;
+        
+        readOver = true;
+        return true;
+      }
+      
+      @Override
+      public NullWritable getCurrentKey() throws IOException, InterruptedException {
+        return key;
+      }
+      
+      @Override
+      public PeInputSplit getCurrentValue() throws IOException, InterruptedException {
+        return value;
+      }
+      
+      @Override
+      public float getProgress() throws IOException, InterruptedException {
+        if(readOver) {
+          return 1.0f;
+        } else {
+          return 0.0f;
+        }
+      }
+      
+      @Override
+      public void close() throws IOException {
+        // do nothing
+      }
+    }
+  }
+  
+  /**
+   * MapReduce job that runs a performance evaluation client in each map task.
+   */
+  public static class EvaluationMapTask 
+      extends Mapper<NullWritable, PeInputSplit, LongWritable, LongWritable> {
+
+    /** configuration parameter name that contains the command */
+    public final static String CMD_KEY = "EvaluationMapTask.command";
+    /** configuration parameter name that contains the PE impl */
+    public static final String PE_KEY = "EvaluationMapTask.performanceEvalImpl";
+
+    private Class<? extends Test> cmd;
+    private PerformanceEvaluation pe;
+
+    @Override
+    protected void setup(Context context) throws IOException, InterruptedException {
+      this.cmd = forName(context.getConfiguration().get(CMD_KEY), Test.class);
+
+      // this is required so that extensions of PE are instantiated within the
+      // map reduce task...
+      Class<? extends PerformanceEvaluation> peClass =
+          forName(context.getConfiguration().get(PE_KEY), PerformanceEvaluation.class);
+      try {
+        this.pe = peClass.getConstructor(Configuration.class)
+            .newInstance(context.getConfiguration());
+      } catch (Exception e) {
+        throw new IllegalStateException("Could not instantiate PE instance", e);
+      }
+    }
+
+    private <Type> Class<? extends Type> forName(String className, Class<Type> type) {
+      Class<? extends Type> clazz = null;
+      try {
+        clazz = Class.forName(className).asSubclass(type);
+      } catch (ClassNotFoundException e) {
+        throw new IllegalStateException("Could not find class for name: " + className, e);
+      }
+      return clazz;
+    }
+
+    protected void map(NullWritable key, PeInputSplit value, final Context context) 
+           throws IOException, InterruptedException {
+      
+      Status status = new Status() {
+        public void setStatus(String msg) {
+           context.setStatus(msg); 
+        }
+      };
+      
+      // Evaluation task
+      long elapsedTime = this.pe.runOneClient(this.cmd, value.getStartRow(),
+        value.getRows(), value.getTotalRows(), value.getRowsPerPut(), status);
+      // Collect how much time the thing took. Report as map output and
+      // to the ELAPSED_TIME counter.
+      context.getCounter(Counter.ELAPSED_TIME).increment(elapsedTime);
+      context.getCounter(Counter.ROWS).increment(value.rows);
+      context.write(new LongWritable(value.startRow), new LongWritable(elapsedTime));
+      context.progress();
+    }
+  }
+  
+  /*
+   * If the table does not already exist, create it.
+   * @return True if we created the table.
+   * @throws IOException
+   */
+  private boolean checkTable() throws IOException {
+    HTableDescriptor tableDescriptor = getTableDescriptor();
+    RemoteAdmin admin =
+      new RemoteAdmin(new Client(cluster), conf, accessToken);
+    if (!admin.isTableAvailable(tableDescriptor.getName())) {
+      admin.createTable(tableDescriptor);
+      return true;
+    }
+    return false;
+  }
+
+  protected HTableDescriptor getTableDescriptor() {
+    return TABLE_DESCRIPTOR;
+  }
+
+  /*
+   * We're to run multiple clients concurrently.  Set up a mapreduce job that
+   * runs one map per client, then a single reduce to sum the elapsed times.
+   * @param cmd Command to run.
+   * @throws IOException
+   */
+  private void runNIsMoreThanOne(final Class<? extends Test> cmd)
+  throws IOException, InterruptedException, ClassNotFoundException {
+    checkTable();
+    if (nomapred) {
+      doMultipleClients(cmd);
+    } else {
+      doMapReduce(cmd);
+    }
+  }
+  
+  /*
+   * Run all clients in this VM, each in its own thread.
+   * @param cmd Command to run.
+   * @throws IOException
+   */
+  private void doMultipleClients(final Class<? extends Test> cmd) throws IOException {
+    final List<Thread> threads = new ArrayList<Thread>(N);
+    final int perClientRows = R/N;
+    for (int i = 0; i < N; i++) {
+      Thread t = new Thread (Integer.toString(i)) {
+        @Override
+        public void run() {
+          super.run();
+          PerformanceEvaluation pe = new PerformanceEvaluation(conf);
+          int index = Integer.parseInt(getName());
+          try {
+            long elapsedTime = pe.runOneClient(cmd, index * perClientRows,
+              perClientRows, R, B, new Status() {
+                  public void setStatus(final String msg) throws IOException {
+                    LOG.info("client-" + getName() + " " + msg);
+                  }
+                });
+            LOG.info("Finished " + getName() + " in " + elapsedTime +
+              "ms writing " + perClientRows + " rows");
+          } catch (IOException e) {
+            throw new RuntimeException(e);
+          }
+        }
+      };
+      threads.add(t);
+    }
+    for (Thread t: threads) {
+      t.start();
+    }
+    for (Thread t: threads) {
+      while(t.isAlive()) {
+        try {
+          t.join();
+        } catch (InterruptedException e) {
+          LOG.debug("Interrupted, continuing" + e.toString());
+        }
+      }
+    }
+  }
+  
+  /*
+   * Run a mapreduce job.  Run as many maps as asked-for clients.
+   * Before we start the job, write out an input file with an instruction
+   * per client regarding which row it is to start on.
+   * @param cmd Command to run.
+   * @throws IOException
+   */
+  private void doMapReduce(final Class<? extends Test> cmd) throws IOException,
+        InterruptedException, ClassNotFoundException {
+    Path inputDir = writeInputFile(this.conf);
+    this.conf.set(EvaluationMapTask.CMD_KEY, cmd.getName());
+    this.conf.set(EvaluationMapTask.PE_KEY, getClass().getName());
+    Job job = new Job(this.conf);
+    job.setJarByClass(PerformanceEvaluation.class);
+    job.setJobName("HBase Performance Evaluation");
+    
+    job.setInputFormatClass(PeInputFormat.class);
+    PeInputFormat.setInputPaths(job, inputDir);
+    
+    job.setOutputKeyClass(LongWritable.class);
+    job.setOutputValueClass(LongWritable.class);
+    
+    job.setMapperClass(EvaluationMapTask.class);
+    job.setReducerClass(LongSumReducer.class);
+        
+    job.setNumReduceTasks(1);
+    
+    job.setOutputFormatClass(TextOutputFormat.class);
+    TextOutputFormat.setOutputPath(job, new Path(inputDir,"outputs"));
+    
+    job.waitForCompletion(true);
+  }
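+
+  // Editorial note: with LongSumReducer and a single reduce, the job above leaves one
+  // "<startRow><TAB><summed elapsed ms>" line per map under <inputDir>/outputs,
+  // e.g. (hypothetical values):
+  //   0       51234
+  //   104857  49881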
+  
+  /*
+   * Write input file of offsets-per-client for the mapreduce job.
+   * @param c Configuration
+   * @return Directory that contains the written file.
+   * @throws IOException
+   */
+  private Path writeInputFile(final Configuration c) throws IOException {
+    FileSystem fs = FileSystem.get(c);
+    if (!fs.exists(PERF_EVAL_DIR)) {
+      fs.mkdirs(PERF_EVAL_DIR);
+    }
+    SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss");
+    Path subdir = new Path(PERF_EVAL_DIR, formatter.format(new Date()));
+    fs.mkdirs(subdir);
+    Path inputFile = new Path(subdir, "input.txt");
+    PrintStream out = new PrintStream(fs.create(inputFile));
+    // Make input random.
+    Map<Integer, String> m = new TreeMap<Integer, String>();
+    Hash h = MurmurHash.getInstance();
+    int perClientRows = (R / N);
+    try {
+      for (int i = 0; i < 10; i++) {
+        for (int j = 0; j < N; j++) {
+          String s = "startRow=" + ((j * perClientRows) + (i * (perClientRows/10))) +
+          ", perClientRunRows=" + (perClientRows / 10) +
+          ", totalRows=" + R +
+          ", clients=" + N +
+          ", rowsPerPut=" + B;
+          int hash = h.hash(Bytes.toBytes(s));
+          m.put(hash, s);
+        }
+      }
+      for (Map.Entry<Integer, String> e: m.entrySet()) {
+        out.println(e.getValue());
+      }
+    } finally {
+      out.close();
+    }
+    return subdir;
+  }
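+
+  // Editorial note: each line written above is what PeInputFormat's regex later parses back
+  // into startRow, perClientRunRows, totalRows, clients and rowsPerPut; a hypothetical line:
+  //   startRow=0, perClientRunRows=10000, totalRows=1000000, clients=10, rowsPerPut=100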
+
+  /**
+   * Describes a command.
+   */
+  static class CmdDescriptor {
+    private Class<? extends Test> cmdClass;
+    private String name;
+    private String description;
+
+    CmdDescriptor(Class<? extends Test> cmdClass, String name, String description) {
+      this.cmdClass = cmdClass;
+      this.name = name;
+      this.description = description;
+    }
+
+    public Class<? extends Test> getCmdClass() {
+      return cmdClass;
+    }
+
+    public String getName() {
+      return name;
+    }
+
+    public String getDescription() {
+      return description;
+    }
+  }
+
+  /**
+   * Wraps up options passed to {@link org.apache.hadoop.hbase.PerformanceEvaluation.Test
+   * tests}.  This makes the reflection logic a little easier to understand...
+   */
+  static class TestOptions {
+    private int startRow;
+    private int perClientRunRows;
+    private int totalRows;
+    private byte[] tableName;
+    private int rowsPerPut;
+
+    TestOptions() {
+    }
+
+    TestOptions(int startRow, int perClientRunRows, int totalRows, byte[] tableName, int rowsPerPut) {
+      this.startRow = startRow;
+      this.perClientRunRows = perClientRunRows;
+      this.totalRows = totalRows;
+      this.tableName = tableName;
+      this.rowsPerPut = rowsPerPut;
+    }
+
+    public int getStartRow() {
+      return startRow;
+    }
+
+    public int getPerClientRunRows() {
+      return perClientRunRows;
+    }
+
+    public int getTotalRows() {
+      return totalRows;
+    }
+
+    public byte[] getTableName() {
+      return tableName;
+    }
+
+    public int getRowsPerPut() {
+      return rowsPerPut;
+    }
+  }
+
+  /*
+   * A test.
+   * Subclass to particularize what happens per row.
+   */
+  static abstract class Test {
+    // Below makes it so that when Tests are all running in the one
+    // JVM, they each have a differently seeded Random.
+    private static final Random randomSeed =
+      new Random(System.currentTimeMillis());
+    private static long nextRandomSeed() {
+      return randomSeed.nextLong();
+    }
+    protected final Random rand = new Random(nextRandomSeed());
+
+    protected final int startRow;
+    protected final int perClientRunRows;
+    protected final int totalRows;
+    protected final Status status;
+    protected byte[] tableName;
+    protected RemoteHTable table;
+    protected volatile Configuration conf;
+
+    /**
+     * Note that all subclasses of this class must provide a public constructor
+     * that has the exact same list of arguments.
+     */
+    Test(final Configuration conf, final TestOptions options, final Status status) {
+      super();
+      this.startRow = options.getStartRow();
+      this.perClientRunRows = options.getPerClientRunRows();
+      this.totalRows = options.getTotalRows();
+      this.status = status;
+      this.tableName = options.getTableName();
+      this.table = null;
+      this.conf = conf;
+    }
+    
+    protected String generateStatus(final int sr, final int i, final int lr) {
+      return sr + "/" + i + "/" + lr;
+    }
+    
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 10;
+      return period == 0? this.perClientRunRows: period;
+    }
+    
+    void testSetup() throws IOException {
+      this.table = new RemoteHTable(new Client(cluster), conf, tableName,
+        accessToken);
+    }
+
+    void testTakedown()  throws IOException {
+      this.table.close();
+    }
+    
+    /*
+     * Run test
+     * @return Elapsed time.
+     * @throws IOException
+     */
+    long test() throws IOException {
+      long elapsedTime;
+      testSetup();
+      long startTime = System.currentTimeMillis();
+      try {
+        testTimed();
+        elapsedTime = System.currentTimeMillis() - startTime;
+      } finally {
+        testTakedown();
+      }
+      return elapsedTime;
+    }
+
+    /**
+     * Provides an extension point for tests that don't want a per row invocation.
+     */
+    void testTimed() throws IOException {
+      int lastRow = this.startRow + this.perClientRunRows;
+      // Report on completion of 1/10th of total.
+      for (int i = this.startRow; i < lastRow; i++) {
+        testRow(i);
+        if (status != null && i > 0 && (i % getReportingPeriod()) == 0) {
+          status.setStatus(generateStatus(this.startRow, i, lastRow));
+        }
+      }
+    }
+
+    /*
+    * Test for individual row.
+    * @param i Row index.
+    */
+    void testRow(final int i) throws IOException {
+    }
+  }
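+
+  // Editorial sketch (not part of the original source): a minimal Test subclass only needs
+  // the (Configuration, TestOptions, Status) constructor that runOneClient() finds via
+  // reflection, plus a testRow() override, e.g.:
+  //
+  //   static class NoOpTest extends Test {
+  //     NoOpTest(Configuration conf, TestOptions options, Status status) {
+  //       super(conf, options, status);
+  //     }
+  //     @Override
+  //     void testRow(final int i) throws IOException {
+  //       // per-row work goes here
+  //     }
+  //   }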
+
+  @SuppressWarnings("unused")
+  static class RandomSeekScanTest extends Test {
+    RandomSeekScanTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Scan scan = new Scan(getRandomRow(this.rand, this.totalRows));
+      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      scan.setFilter(new WhileMatchFilter(new PageFilter(120)));
+      ResultScanner s = this.table.getScanner(scan);
+      //int count = 0;
+      for (Result rr = null; (rr = s.next()) != null;) {
+        // LOG.info("" + count++ + " " + rr.toString());
+      }
+      s.close();
+    }
+ 
+    @Override
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 100;
+      return period == 0? this.perClientRunRows: period;
+    }
+
+  }
+
+  @SuppressWarnings("unused")
+  static abstract class RandomScanWithRangeTest extends Test {
+    RandomScanWithRangeTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Pair<byte[], byte[]> startAndStopRow = getStartAndStopRow();
+      Scan scan = new Scan(startAndStopRow.getFirst(), startAndStopRow.getSecond());
+      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      ResultScanner s = this.table.getScanner(scan);
+      int count = 0;
+      for (Result rr = null; (rr = s.next()) != null;) {
+        count++;
+      }
+
+      if (i % 100 == 0) {
+        LOG.info(String.format("Scan for key range %s - %s returned %s rows",
+            Bytes.toString(startAndStopRow.getFirst()),
+            Bytes.toString(startAndStopRow.getSecond()), count));
+      }
+
+      s.close();
+    }
+
+    protected abstract Pair<byte[],byte[]> getStartAndStopRow();
+
+    protected Pair<byte[], byte[]> generateStartAndStopRows(int maxRange) {
+      int start = this.rand.nextInt(Integer.MAX_VALUE) % totalRows;
+      int stop = start + maxRange;
+      return new Pair<byte[],byte[]>(format(start), format(stop));
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 100;
+      return period == 0? this.perClientRunRows: period;
+    }
+  }
+
+  static class RandomScanWithRange10Test extends RandomScanWithRangeTest {
+    RandomScanWithRange10Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(10);
+    }
+  }
+
+  static class RandomScanWithRange100Test extends RandomScanWithRangeTest {
+    RandomScanWithRange100Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(100);
+    }
+  }
+
+  static class RandomScanWithRange1000Test extends RandomScanWithRangeTest {
+    RandomScanWithRange1000Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(1000);
+    }
+  }
+
+  static class RandomScanWithRange10000Test extends RandomScanWithRangeTest {
+    RandomScanWithRange10000Test(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    protected Pair<byte[], byte[]> getStartAndStopRow() {
+      return generateStartAndStopRows(10000);
+    }
+  }
+
+  static class RandomReadTest extends Test {
+    RandomReadTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(final int i) throws IOException {
+      Get get = new Get(getRandomRow(this.rand, this.totalRows));
+      get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      this.table.get(get);
+    }
+
+    @Override
+    protected int getReportingPeriod() {
+      int period = this.perClientRunRows / 100;
+      return period == 0? this.perClientRunRows: period;
+    }
+
+  }
+  
+  static class RandomWriteTest extends Test {
+    int rowsPerPut;
+
+    RandomWriteTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+      rowsPerPut = options.getRowsPerPut();
+    }
+    
+    @Override
+    void testTimed() throws IOException {
+      int lastRow = this.startRow + this.perClientRunRows;
+      // Report on completion of 1/10th of total.
+      List<Put> puts = new ArrayList<Put>();
+      for (int i = this.startRow; i < lastRow; i += rowsPerPut) {
+        for (int j = 0; j < rowsPerPut; j++) {
+          byte [] row = getRandomRow(this.rand, this.totalRows);
+          Put put = new Put(row);
+          byte[] value = generateValue(this.rand);
+          put.add(FAMILY_NAME, QUALIFIER_NAME, value);
+          puts.add(put);
+          if (status != null && i > 0 && (i % getReportingPeriod()) == 0) {
+            status.setStatus(generateStatus(this.startRow, i, lastRow));
+          }
+        }
+        table.put(puts);
+        // Clear the batch so each iteration only sends its own rowsPerPut puts.
+        puts.clear();
+      }
+    }
+  }
+  
+  static class ScanTest extends Test {
+    private ResultScanner testScanner;
+
+    ScanTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+    
+    @Override
+    void testSetup() throws IOException {
+      super.testSetup();
+    }
+    
+    @Override
+    void testTakedown() throws IOException {
+      if (this.testScanner != null) {
+        this.testScanner.close();
+      }
+      super.testTakedown();
+    }
+    
+    @Override
+    void testRow(final int i) throws IOException {
+      if (this.testScanner == null) {
+        Scan scan = new Scan(format(this.startRow));
+        scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+        this.testScanner = table.getScanner(scan);
+      }
+      testScanner.next();
+    }
+
+  }
+  
+  static class SequentialReadTest extends Test {
+    SequentialReadTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+    
+    @Override
+    void testRow(final int i) throws IOException {
+      Get get = new Get(format(i));
+      get.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      table.get(get);
+    }
+
+  }
+  
+  static class SequentialWriteTest extends Test {
+    int rowsPerPut;
+
+    SequentialWriteTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+      rowsPerPut = options.getRowsPerPut();
+    }
+
+    @Override
+    void testTimed() throws IOException {
+      int lastRow = this.startRow + this.perClientRunRows;
+      // Report on completion of 1/10th of total.
+      List<Put> puts = new ArrayList<Put>();
+      for (int i = this.startRow; i < lastRow; i += rowsPerPut) {
+        for (int j = 0; j < rowsPerPut; j++) {
+          Put put = new Put(format(i + j));
+          byte[] value = generateValue(this.rand);
+          put.add(FAMILY_NAME, QUALIFIER_NAME, value);
+          puts.add(put);
+          if (status != null && i > 0 && (i % getReportingPeriod()) == 0) {
+            status.setStatus(generateStatus(this.startRow, i, lastRow));
+          }
+        }
+        table.put(puts);
+        // Clear the batch so each iteration only sends its own rowsPerPut puts.
+        puts.clear();
+      }
+    }
+  }
+
+  static class FilteredScanTest extends Test {
+    protected static final Log LOG = LogFactory.getLog(FilteredScanTest.class.getName());
+
+    FilteredScanTest(Configuration conf, TestOptions options, Status status) {
+      super(conf, options, status);
+    }
+
+    @Override
+    void testRow(int i) throws IOException {
+      byte[] value = generateValue(this.rand);
+      Scan scan = constructScan(value);
+      ResultScanner scanner = null;
+      try {
+        scanner = this.table.getScanner(scan);
+        while (scanner.next() != null) {
+        }
+      } finally {
+        if (scanner != null) scanner.close();
+      }
+    }
+
+    protected Scan constructScan(byte[] valuePrefix) throws IOException {
+      Filter filter = new SingleColumnValueFilter(
+          FAMILY_NAME, QUALIFIER_NAME, CompareFilter.CompareOp.EQUAL,
+          new BinaryComparator(valuePrefix)
+      );
+      Scan scan = new Scan();
+      scan.addColumn(FAMILY_NAME, QUALIFIER_NAME);
+      scan.setFilter(filter);
+      return scan;
+    }
+  }
+  
+  /*
+   * Format the passed integer.
+   * @param number
+   * @return Zero-prefixed, 10-byte-wide decimal version of the passed
+   * number (the absolute value is used if the number is negative).
+   */
+  public static byte [] format(final int number) {
+    byte [] b = new byte[10];
+    int d = Math.abs(number);
+    for (int i = b.length - 1; i >= 0; i--) {
+      b[i] = (byte)((d % 10) + '0');
+      d /= 10;
+    }
+    return b;
+  }
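+
+  // For example (editorial note): format(37) and format(-37) both return the ten bytes of
+  // "0000000037"; the absolute value is zero-padded to a fixed width of ten digits.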
+  
+  /*
+   * This method takes some time and is run inline while uploading data.  For
+   * example, in the mapfile test, generation of the key and value
+   * consumes about 30% of CPU time.
+   * @return Generated random value to insert into a table cell.
+   */
+  public static byte[] generateValue(final Random r) {
+    byte [] b = new byte [ROW_LENGTH];
+    r.nextBytes(b);
+    return b;
+  }
+  
+  static byte [] getRandomRow(final Random random, final int totalRows) {
+    return format(random.nextInt(Integer.MAX_VALUE) % totalRows);
+  }
+  
+  long runOneClient(final Class<? extends Test> cmd, final int startRow,
+                    final int perClientRunRows, final int totalRows, 
+                    final int rowsPerPut, final Status status)
+  throws IOException {
+    status.setStatus("Start " + cmd + " at offset " + startRow + " for " +
+      perClientRunRows + " rows");
+    long totalElapsedTime = 0;
+
+    Test t = null;
+    TestOptions options = new TestOptions(startRow, perClientRunRows,
+      totalRows, getTableDescriptor().getName(), rowsPerPut);
+    try {
+      Constructor<? extends Test> constructor = cmd.getDeclaredConstructor(
+          Configuration.class, TestOptions.class, Status.class);
+      t = constructor.newInstance(this.conf, options, status);
+    } catch (NoSuchMethodException e) {
+      throw new IllegalArgumentException("Invalid command class: " +
+          cmd.getName() + ".  It does not provide a constructor as described by " +
+          "the javadoc comment.  Available constructors are: " +
+          Arrays.toString(cmd.getConstructors()));
+    } catch (Exception e) {
+      throw new IllegalStateException("Failed to construct command class", e);
+    }
+    totalElapsedTime = t.test();
+
+    status.setStatus("Finished " + cmd + " in " + totalElapsedTime +
+      "ms at offset " + startRow + " for " + perClientRunRows + " rows");
+    return totalElapsedTime;
+  }
+  
+  private void runNIsOne(final Class<? extends Test> cmd) {
+    Status status = new Status() {
+      public void setStatus(String msg) throws IOException {
+        LOG.info(msg);
+      }
+    };
+
+    try {
+      checkTable();
+      runOneClient(cmd, 0, R, R, B, status);
+    } catch (Exception e) {
+      LOG.error("Failed", e);
+    } 
+  }
+
+  private void runTest(final Class<? extends Test> cmd) throws IOException,
+          InterruptedException, ClassNotFoundException {
+    if (N == 1) {
+      // If there is only one client and one HRegionServer, we assume nothing
+      // has been set up at all.
+      runNIsOne(cmd);
+    } else {
+      // Else, run multiple clients, either as threads or as a mapreduce job.
+      runNIsMoreThanOne(cmd);
+    }
+  }
+
+  protected void printUsage() {
+    printUsage(null);
+  }
+  
+  protected void printUsage(final String message) {
+    if (message != null && message.length() > 0) {
+      System.err.println(message);
+    }
+    System.err.println("Usage: java " + this.getClass().getName() + " \\");
+    System.err.println("  [--option] [--option=value] <command> <nclients>");
+    System.err.println();
+    System.err.println("Options:");
+    System.err.println(" host          String. Specify Stargate endpoint.");
+    System.err.println(" token         String. API access token.");
+    System.err.println(" rows          Integer. Rows each client runs. Default: One million");
+    System.err.println(" rowsPerPut    Integer. Rows each Stargate (multi)Put. Default: 100");
+    System.err.println(" nomapred      (Flag) Run multiple clients using threads " +
+      "(rather than use mapreduce)");
+    System.err.println();
+    System.err.println("Command:");
+    for (CmdDescriptor command : commands.values()) {
+      System.err.println(String.format(" %-15s %s", command.getName(), command.getDescription()));
+    }
+    System.err.println();
+    System.err.println("Args:");
+    System.err.println(" nclients      Integer. Required. Total number of " +
+      "clients (and HRegionServers)");
+    System.err.println("               running: 1 <= value <= 500");
+    System.err.println("Examples:");
+    System.err.println(" To run a single evaluation client:");
+    System.err.println(" $ bin/hbase " + this.getClass().getName()
+        + " sequentialWrite 1");
+  }
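+
+  // Editorial note: a hypothetical multi-client run using the options parsed in
+  // doCommandLine() below might look like:
+  //   $ bin/hbase <this class> --nomapred --rows=10000 sequentialWrite 3
+  // i.e. three threaded clients, each writing 10000 rows, without launching mapreduce.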
+
+  private void getArgs(final int start, final String[] args) {
+    if(start + 1 > args.length) {
+      throw new IllegalArgumentException("must supply the number of clients");
+    }
+    N = Integer.parseInt(args[start]);
+    if (N < 1) {
+      throw new IllegalArgumentException("Number of clients must be > 1");
+    }
+    // Set total number of rows to write.
+    R = R * N;
+  }
+  
+  public int doCommandLine(final String[] args) {
+    // Process command-line args. TODO: Better cmd-line processing
+    // (but hopefully something not as painful as cli options).    
+    int errCode = -1;
+    if (args.length < 1) {
+      printUsage();
+      return errCode;
+    }
+
+    try {
+      for (int i = 0; i < args.length; i++) {
+        String cmd = args[i];
+        if (cmd.equals("-h")) {
+          printUsage();
+          errCode = 0;
+          break;
+        }
+       
+        final String nmr = "--nomapred";
+        if (cmd.startsWith(nmr)) {
+          nomapred = true;
+          continue;
+        }
+        
+        final String rows = "--rows=";
+        if (cmd.startsWith(rows)) {
+          R = Integer.parseInt(cmd.substring(rows.length()));
+          continue;
+        }
+
+        final String rowsPerPut = "--rowsPerPut=";
+        if (cmd.startsWith(rowsPerPut)) {
+          this.B = Integer.parseInt(cmd.substring(rowsPerPut.length()));
+          continue;
+        }
+
+        final String host = "--host=";
+        if (cmd.startsWith(host)) {
+          cluster.add(cmd.substring(host.length()));
+          continue;
+        }
+
+        final String token = "--token=";
+        if (cmd.startsWith(token)) {
+          accessToken = cmd.substring(token.length());
+          continue;
+        }
+
+        Class<? extends Test> cmdClass = determineCommandClass(cmd);
+        if (cmdClass != null) {
+          getArgs(i + 1, args);
+          if (cluster.isEmpty()) {
+            String s = conf.get("stargate.hostname", "localhost");
+            if (s.contains(":")) {
+              cluster.add(s);
+            } else {
+              cluster.add(s, conf.getInt("stargate.port", 8080));
+            }
+          }
+          runTest(cmdClass);
+          errCode = 0;
+          break;
+        }
+    
+        printUsage();
+        break;
+      }
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+    
+    return errCode;
+  }
+
+  private Class<? extends Test> determineCommandClass(String cmd) {
+    CmdDescriptor descriptor = commands.get(cmd);
+    return descriptor != null ? descriptor.getCmdClass() : null;
+  }
+
+  /**
+   * @param args
+   */
+  public static void main(final String[] args) {
+    Configuration c = HBaseConfiguration.create();
+    System.exit(new PerformanceEvaluation(c).doCommandLine(args));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java
new file mode 100644
index 0000000..c41c740
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java
@@ -0,0 +1,132 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.util.zip.GZIPInputStream;
+import java.util.zip.GZIPOutputStream;
+
+import org.apache.commons.httpclient.Header;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestGzipFilter {
+  private static final String TABLE = "TestGzipFilter";
+  private static final String CFA = "a";
+  private static final String COLUMN_1 = CFA + ":1";
+  private static final String COLUMN_2 = CFA + ":2";
+  private static final String ROW_1 = "testrow1";
+  private static final byte[] VALUE_1 = Bytes.toBytes("testvalue1");
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL =
+    new HBaseRESTTestingUtility();
+  private static Client client;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    client = new Client(new Cluster().add("localhost",
+      REST_TEST_UTIL.getServletPort()));
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    if (admin.tableExists(TABLE)) {
+      return;
+    }
+    HTableDescriptor htd = new HTableDescriptor(TABLE);
+    htd.addFamily(new HColumnDescriptor(CFA));
+    admin.createTable(htd);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testGzipFilter() throws Exception {
+    String path = "/" + TABLE + "/" + ROW_1 + "/" + COLUMN_1;
+
+    ByteArrayOutputStream bos = new ByteArrayOutputStream();
+    GZIPOutputStream os = new GZIPOutputStream(bos);
+    os.write(VALUE_1);
+    os.close();
+    byte[] value_1_gzip = bos.toByteArray();
+
+    // input side filter
+
+    Header[] headers = new Header[2];
+    headers[0] = new Header("Content-Type", Constants.MIMETYPE_BINARY);
+    headers[1] = new Header("Content-Encoding", "gzip");
+    Response response = client.put(path, headers, value_1_gzip);
+    assertEquals(response.getCode(), 200);
+
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), TABLE);
+    Get get = new Get(Bytes.toBytes(ROW_1));
+    get.addColumn(Bytes.toBytes(CFA), Bytes.toBytes("1"));
+    Result result = table.get(get);
+    byte[] value = result.getValue(Bytes.toBytes(CFA), Bytes.toBytes("1"));
+    assertNotNull(value);
+    assertTrue(Bytes.equals(value, VALUE_1));
+
+    // output side filter
+
+    headers[0] = new Header("Accept", Constants.MIMETYPE_BINARY);
+    headers[1] = new Header("Accept-Encoding", "gzip");
+    response = client.get(path, headers);
+    assertEquals(response.getCode(), 200);
+    ByteArrayInputStream bis = new ByteArrayInputStream(response.getBody());
+    GZIPInputStream is = new GZIPInputStream(bis);
+    value = new byte[VALUE_1.length];
+    is.read(value, 0, VALUE_1.length);
+    assertTrue(Bytes.equals(value, VALUE_1));
+    is.close();
+  }
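+
+  // Editorial note: an equivalent manual check against a running REST server (hypothetical
+  // host/port; assumes MIMETYPE_BINARY is application/octet-stream) would gzip the value and
+  // PUT it to the same row path, e.g.:
+  //   gzip -c value.bin | curl -X PUT --data-binary @- \
+  //     -H "Content-Type: application/octet-stream" -H "Content-Encoding: gzip" \
+  //     http://localhost:8080/TestGzipFilter/testrow1/a:1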
+
+  @Test
+  public void testErrorNotGzipped() throws Exception {
+    String path = "/" + TABLE + "/" + ROW_1 + "/" + COLUMN_2;
+    Header[] headers = new Header[2];
+    headers[0] = new Header("Accept", Constants.MIMETYPE_BINARY);
+    headers[1] = new Header("Accept-Encoding", "gzip");
+    Response response = client.get(path, headers);
+    assertEquals(response.getCode(), 404);
+    String contentEncoding = response.getHeader("Content-Encoding");
+    assertTrue(contentEncoding == null || !contentEncoding.contains("gzip"));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java
new file mode 100644
index 0000000..adaf549
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java
@@ -0,0 +1,486 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.StringWriter;
+import java.net.URLEncoder;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Marshaller;
+import javax.xml.bind.Unmarshaller;
+
+import org.apache.commons.httpclient.Header;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestRowResource {
+  private static final String TABLE = "TestRowResource";
+  private static final String CFA = "a";
+  private static final String CFB = "b";
+  private static final String COLUMN_1 = CFA + ":1";
+  private static final String COLUMN_2 = CFB + ":2";
+  private static final String ROW_1 = "testrow1";
+  private static final String VALUE_1 = "testvalue1";
+  private static final String ROW_2 = "testrow2";
+  private static final String VALUE_2 = "testvalue2";
+  private static final String ROW_3 = "testrow3";
+  private static final String VALUE_3 = "testvalue3";
+  private static final String ROW_4 = "testrow4";
+  private static final String VALUE_4 = "testvalue4";
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+  private static Marshaller marshaller;
+  private static Unmarshaller unmarshaller;
+  private static Configuration conf;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    conf = TEST_UTIL.getConfiguration();
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(conf);
+    context = JAXBContext.newInstance(
+        CellModel.class,
+        CellSetModel.class,
+        RowModel.class);
+    marshaller = context.createMarshaller();
+    unmarshaller = context.createUnmarshaller();
+    client = new Client(new Cluster().add("localhost", 
+      REST_TEST_UTIL.getServletPort()));
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    if (admin.tableExists(TABLE)) {
+      return;
+    }
+    HTableDescriptor htd = new HTableDescriptor(TABLE);
+    htd.addFamily(new HColumnDescriptor(CFA));
+    htd.addFamily(new HColumnDescriptor(CFB));
+    admin.createTable(htd);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private static Response deleteRow(String table, String row) 
+      throws IOException {
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(table);
+    path.append('/');
+    path.append(row);
+    Response response = client.delete(path.toString());
+    Thread.yield();
+    return response;
+  }
+
+  private static Response deleteValue(String table, String row, String column)
+      throws IOException {
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(table);
+    path.append('/');
+    path.append(row);
+    path.append('/');
+    path.append(column);
+    Response response = client.delete(path.toString());
+    Thread.yield();
+    return response;
+  }
+
+  private static Response getValueXML(String table, String row, String column)
+      throws IOException {
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(table);
+    path.append('/');
+    path.append(row);
+    path.append('/');
+    path.append(column);
+    return getValueXML(path.toString());
+  }
+
+  private static Response getValueXML(String url) throws IOException {
+    Response response = client.get(url, Constants.MIMETYPE_XML);
+    return response;
+  }
+
+  private static Response getValuePB(String table, String row, String column) 
+      throws IOException {
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(table);
+    path.append('/');
+    path.append(row);
+    path.append('/');
+    path.append(column);
+    return getValuePB(path.toString());
+  }
+
+  private static Response getValuePB(String url) throws IOException {
+    Response response = client.get(url, Constants.MIMETYPE_PROTOBUF); 
+    return response;
+  }
+
+  private static Response putValueXML(String table, String row, String column,
+      String value) throws IOException, JAXBException {
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(table);
+    path.append('/');
+    path.append(row);
+    path.append('/');
+    path.append(column);
+    return putValueXML(path.toString(), table, row, column, value);
+  }
+
+  private static Response putValueXML(String url, String table, String row,
+      String column, String value) throws IOException, JAXBException {
+    RowModel rowModel = new RowModel(row);
+    rowModel.addCell(new CellModel(Bytes.toBytes(column),
+      Bytes.toBytes(value)));
+    CellSetModel cellSetModel = new CellSetModel();
+    cellSetModel.addRow(rowModel);
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(cellSetModel, writer);
+    Response response = client.put(url, Constants.MIMETYPE_XML,
+      Bytes.toBytes(writer.toString()));
+    Thread.yield();
+    return response;
+  }
+
+  private static void checkValueXML(String table, String row, String column,
+      String value) throws IOException, JAXBException {
+    Response response = getValueXML(table, row, column);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = (CellSetModel)
+      unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
+    RowModel rowModel = cellSet.getRows().get(0);
+    CellModel cell = rowModel.getCells().get(0);
+    assertEquals(Bytes.toString(cell.getColumn()), column);
+    assertEquals(Bytes.toString(cell.getValue()), value);
+  }
+
+  private static void checkValueXML(String url, String table, String row,
+      String column, String value) throws IOException, JAXBException {
+    Response response = getValueXML(url);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = (CellSetModel)
+      unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
+    RowModel rowModel = cellSet.getRows().get(0);
+    CellModel cell = rowModel.getCells().get(0);
+    assertEquals(Bytes.toString(cell.getColumn()), column);
+    assertEquals(Bytes.toString(cell.getValue()), value);
+  }
+
+  private static Response putValuePB(String table, String row, String column,
+      String value) throws IOException {
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(table);
+    path.append('/');
+    path.append(row);
+    path.append('/');
+    path.append(column);
+    return putValuePB(path.toString(), table, row, column, value);
+  }
+
+  private static Response putValuePB(String url, String table, String row,
+      String column, String value) throws IOException {
+    RowModel rowModel = new RowModel(row);
+    rowModel.addCell(new CellModel(Bytes.toBytes(column),
+      Bytes.toBytes(value)));
+    CellSetModel cellSetModel = new CellSetModel();
+    cellSetModel.addRow(rowModel);
+    Response response = client.put(url, Constants.MIMETYPE_PROTOBUF,
+      cellSetModel.createProtobufOutput());
+    Thread.yield();
+    return response;
+  }
+
+  private static void checkValuePB(String table, String row, String column, 
+      String value) throws IOException {
+    Response response = getValuePB(table, row, column);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = new CellSetModel();
+    cellSet.getObjectFromMessage(response.getBody());
+    RowModel rowModel = cellSet.getRows().get(0);
+    CellModel cell = rowModel.getCells().get(0);
+    assertEquals(Bytes.toString(cell.getColumn()), column);
+    assertEquals(Bytes.toString(cell.getValue()), value);
+  }
+
+  @Test
+  public void testDelete() throws IOException, JAXBException {
+    Response response;
+    
+    response = putValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 200);
+    response = putValueXML(TABLE, ROW_1, COLUMN_2, VALUE_2);
+    assertEquals(response.getCode(), 200);
+    checkValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    checkValueXML(TABLE, ROW_1, COLUMN_2, VALUE_2);
+
+    response = deleteValue(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 200);
+    response = getValueXML(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 404);
+    checkValueXML(TABLE, ROW_1, COLUMN_2, VALUE_2);
+
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 200);    
+    response = getValueXML(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 404);
+    response = getValueXML(TABLE, ROW_1, COLUMN_2);
+    assertEquals(response.getCode(), 404);
+  }
+
+  @Test
+  public void testForbidden() throws IOException, JAXBException {
+    Response response;
+
+    conf.set("hbase.rest.readonly", "true");
+
+    response = putValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 403);
+    response = putValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 403);
+    response = deleteValue(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 403);
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 403);
+
+    conf.set("hbase.rest.readonly", "false");
+
+    response = putValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 200);
+    response = putValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 200);
+    response = deleteValue(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 200);
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testSingleCellGetPutXML() throws IOException, JAXBException {
+    Response response = getValueXML(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 404);
+
+    response = putValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 200);
+    checkValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    response = putValueXML(TABLE, ROW_1, COLUMN_1, VALUE_2);
+    assertEquals(response.getCode(), 200);
+    checkValueXML(TABLE, ROW_1, COLUMN_1, VALUE_2);
+
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testSingleCellGetPutPB() throws IOException, JAXBException {
+    Response response = getValuePB(TABLE, ROW_1, COLUMN_1);
+    assertEquals(response.getCode(), 404);
+
+    response = putValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 200);
+    checkValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+
+    response = putValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    assertEquals(response.getCode(), 200);
+    checkValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    response = putValueXML(TABLE, ROW_1, COLUMN_1, VALUE_2);
+    assertEquals(response.getCode(), 200);
+    checkValuePB(TABLE, ROW_1, COLUMN_1, VALUE_2);
+
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testSingleCellGetPutBinary() throws IOException {
+    final String path = "/" + TABLE + "/" + ROW_3 + "/" + COLUMN_1;
+    final byte[] body = Bytes.toBytes(VALUE_3);
+    Response response = client.put(path, Constants.MIMETYPE_BINARY, body);
+    assertEquals(response.getCode(), 200);
+    Thread.yield();
+
+    response = client.get(path, Constants.MIMETYPE_BINARY);
+    assertEquals(response.getCode(), 200);
+    assertTrue(Bytes.equals(response.getBody(), body));
+    boolean foundTimestampHeader = false;
+    for (Header header: response.getHeaders()) {
+      if (header.getName().equals("X-Timestamp")) {
+        foundTimestampHeader = true;
+        break;
+      }
+    }
+    assertTrue(foundTimestampHeader);
+
+    response = deleteRow(TABLE, ROW_3);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testSingleCellGetJSON() throws IOException, JAXBException {
+    final String path = "/" + TABLE + "/" + ROW_4 + "/" + COLUMN_1;
+    Response response = client.put(path, Constants.MIMETYPE_BINARY,
+      Bytes.toBytes(VALUE_4));
+    assertEquals(response.getCode(), 200);
+    Thread.yield();
+    response = client.get(path, Constants.MIMETYPE_JSON);
+    assertEquals(response.getCode(), 200);
+    response = deleteRow(TABLE, ROW_4);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testURLEncodedKey() throws IOException, JAXBException {
+    String urlKey = "http://example.com/foo";
+    StringBuilder path = new StringBuilder();
+    path.append('/');
+    path.append(TABLE);
+    path.append('/');
+    path.append(URLEncoder.encode(urlKey, HConstants.UTF8_ENCODING));
+    path.append('/');
+    path.append(COLUMN_1);
+    Response response;
+    response = putValueXML(path.toString(), TABLE, urlKey, COLUMN_1,
+      VALUE_1);
+    assertEquals(response.getCode(), 200);
+    checkValueXML(path.toString(), TABLE, urlKey, COLUMN_1, VALUE_1);
+  }
+
+  @Test
+  public void testNoSuchCF() throws IOException, JAXBException {
+    final String goodPath = "/" + TABLE + "/" + ROW_1 + "/" + CFA+":";
+    final String badPath = "/" + TABLE + "/" + ROW_1 + "/" + "BAD";
+    Response response = client.post(goodPath, Constants.MIMETYPE_BINARY,
+      Bytes.toBytes(VALUE_1));
+    assertEquals(response.getCode(), 200);
+    assertEquals(client.get(goodPath, Constants.MIMETYPE_BINARY).getCode(),
+      200);
+    assertEquals(client.get(badPath, Constants.MIMETYPE_BINARY).getCode(),
+      404);
+    assertEquals(client.get(goodPath, Constants.MIMETYPE_BINARY).getCode(),
+      200);
+  }
+
+  @Test
+  public void testMultiCellGetPutXML() throws IOException, JAXBException {
+    String path = "/" + TABLE + "/fakerow";  // deliberate nonexistent row
+
+    CellSetModel cellSetModel = new CellSetModel();
+    RowModel rowModel = new RowModel(ROW_1);
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_1),
+      Bytes.toBytes(VALUE_1)));
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_2),
+      Bytes.toBytes(VALUE_2)));
+    cellSetModel.addRow(rowModel);
+    rowModel = new RowModel(ROW_2);
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_1),
+      Bytes.toBytes(VALUE_3)));
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_2),
+      Bytes.toBytes(VALUE_4)));
+    cellSetModel.addRow(rowModel);
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(cellSetModel, writer);
+    Response response = client.put(path, Constants.MIMETYPE_XML,
+      Bytes.toBytes(writer.toString()));
+    Thread.yield();
+
+    // make sure the fake row was not actually created
+    response = client.get(path, Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 404);
+
+    // check that all of the values were created
+    checkValueXML(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    checkValueXML(TABLE, ROW_1, COLUMN_2, VALUE_2);
+    checkValueXML(TABLE, ROW_2, COLUMN_1, VALUE_3);
+    checkValueXML(TABLE, ROW_2, COLUMN_2, VALUE_4);
+
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 200);
+    response = deleteRow(TABLE, ROW_2);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testMultiCellGetPutPB() throws IOException {
+    String path = "/" + TABLE + "/fakerow";  // deliberate nonexistent row
+
+    CellSetModel cellSetModel = new CellSetModel();
+    RowModel rowModel = new RowModel(ROW_1);
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_1),
+      Bytes.toBytes(VALUE_1)));
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_2),
+      Bytes.toBytes(VALUE_2)));
+    cellSetModel.addRow(rowModel);
+    rowModel = new RowModel(ROW_2);
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_1),
+      Bytes.toBytes(VALUE_3)));
+    rowModel.addCell(new CellModel(Bytes.toBytes(COLUMN_2),
+      Bytes.toBytes(VALUE_4)));
+    cellSetModel.addRow(rowModel);
+    Response response = client.put(path, Constants.MIMETYPE_PROTOBUF,
+      cellSetModel.createProtobufOutput());
+    Thread.yield();
+
+    // make sure the fake row was not actually created
+    response = client.get(path, Constants.MIMETYPE_PROTOBUF);
+    assertEquals(response.getCode(), 404);
+
+    // check that all of the values were created
+    checkValuePB(TABLE, ROW_1, COLUMN_1, VALUE_1);
+    checkValuePB(TABLE, ROW_1, COLUMN_2, VALUE_2);
+    checkValuePB(TABLE, ROW_2, COLUMN_1, VALUE_3);
+    checkValuePB(TABLE, ROW_2, COLUMN_2, VALUE_4);
+
+    response = deleteRow(TABLE, ROW_1);
+    assertEquals(response.getCode(), 200);
+    response = deleteRow(TABLE, ROW_2);
+    assertEquals(response.getCode(), 200);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
new file mode 100644
index 0000000..4b9c9bd
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
@@ -0,0 +1,346 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.StringWriter;
+import java.util.Iterator;
+import java.util.Random;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Marshaller;
+import javax.xml.bind.Unmarshaller;
+
+import org.apache.commons.httpclient.Header;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestScannerResource {
+  private static final String TABLE = "TestScannerResource";
+  private static final String NONEXISTENT_TABLE = "ThisTableDoesNotExist";
+  private static final String CFA = "a";
+  private static final String CFB = "b";
+  private static final String COLUMN_1 = CFA + ":1";
+  private static final String COLUMN_2 = CFB + ":2";
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+  private static Marshaller marshaller;
+  private static Unmarshaller unmarshaller;
+  private static int expectedRows1;
+  private static int expectedRows2;
+  private static Configuration conf;
+
+  private static int insertData(String tableName, String column, double prob)
+      throws IOException {
+    Random rng = new Random();
+    int count = 0;
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), tableName);
+    byte[] k = new byte[3];
+    byte [][] famAndQf = KeyValue.parseColumn(Bytes.toBytes(column));
+    for (byte b1 = 'a'; b1 < 'z'; b1++) {
+      for (byte b2 = 'a'; b2 < 'z'; b2++) {
+        for (byte b3 = 'a'; b3 < 'z'; b3++) {
+          if (rng.nextDouble() < prob) {
+            k[0] = b1;
+            k[1] = b2;
+            k[2] = b3;
+            Put put = new Put(k);
+            put.add(famAndQf[0], famAndQf[1], k);
+            table.put(put);
+            count++;
+          }
+        }
+      }
+    }
+    table.flushCommits();
+    return count;
+  }
+
+  private static int countCellSet(CellSetModel model) {
+    int count = 0;
+    Iterator<RowModel> rows = model.getRows().iterator();
+    while (rows.hasNext()) {
+      RowModel row = rows.next();
+      Iterator<CellModel> cells = row.getCells().iterator();
+      while (cells.hasNext()) {
+        cells.next();
+        count++;
+      }
+    }
+    return count;
+  }
+
+  private static int fullTableScan(ScannerModel model) throws IOException {
+    model.setBatch(100);
+    Response response = client.put("/" + TABLE + "/scanner",
+      Constants.MIMETYPE_PROTOBUF, model.createProtobufOutput());
+    assertEquals(response.getCode(), 201);
+    String scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+    int count = 0;
+    while (true) {
+      response = client.get(scannerURI, Constants.MIMETYPE_PROTOBUF);
+      assertTrue(response.getCode() == 200 || response.getCode() == 204);
+      if (response.getCode() == 200) {
+        CellSetModel cellSet = new CellSetModel();
+        cellSet.getObjectFromMessage(response.getBody());
+        Iterator<RowModel> rows = cellSet.getRows().iterator();
+        while (rows.hasNext()) {
+          RowModel row = rows.next();
+          Iterator<CellModel> cells = row.getCells().iterator();
+          while (cells.hasNext()) {
+            cells.next();
+            count++;
+          }
+        }
+      } else {
+        break;
+      }
+    }
+    // delete the scanner
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+    return count;
+  }
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    conf = TEST_UTIL.getConfiguration();
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(conf);
+    client = new Client(new Cluster().add("localhost",
+      REST_TEST_UTIL.getServletPort()));
+    context = JAXBContext.newInstance(
+      CellModel.class,
+      CellSetModel.class,
+      RowModel.class,
+      ScannerModel.class);
+    marshaller = context.createMarshaller();
+    unmarshaller = context.createUnmarshaller();
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    if (admin.tableExists(TABLE)) {
+      return;
+    }
+    HTableDescriptor htd = new HTableDescriptor(TABLE);
+    htd.addFamily(new HColumnDescriptor(CFA));
+    htd.addFamily(new HColumnDescriptor(CFB));
+    admin.createTable(htd);
+    expectedRows1 = insertData(TABLE, COLUMN_1, 1.0);
+    expectedRows2 = insertData(TABLE, COLUMN_2, 0.5);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testSimpleScannerXML() throws IOException, JAXBException {
+    final int BATCH_SIZE = 5;
+    // new scanner
+    ScannerModel model = new ScannerModel();
+    model.setBatch(BATCH_SIZE);
+    model.addColumn(Bytes.toBytes(COLUMN_1));
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(model, writer);
+    byte[] body = Bytes.toBytes(writer.toString());
+
+    // test put operation is forbidden in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    Response response = client.put("/" + TABLE + "/scanner",
+      Constants.MIMETYPE_XML, body);
+    assertEquals(response.getCode(), 403);
+    String scannerURI = response.getLocation();
+    assertNull(scannerURI);
+
+    // recall previous put operation with read-only off
+    conf.set("hbase.rest.readonly", "false");
+    response = client.put("/" + TABLE + "/scanner", Constants.MIMETYPE_XML,
+      body);
+    assertEquals(response.getCode(), 201);
+    scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+
+    // get a cell set
+    response = client.get(scannerURI, Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = (CellSetModel)
+      unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
+    // confirm batch size conformance
+    assertEquals(countCellSet(cellSet), BATCH_SIZE);
+
+    // test delete scanner operation is forbidden in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 403);
+
+    // recall previous delete scanner operation with read-only off
+    conf.set("hbase.rest.readonly", "false");
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testSimpleScannerPB() throws IOException {
+    final int BATCH_SIZE = 10;
+    // new scanner
+    ScannerModel model = new ScannerModel();
+    model.setBatch(BATCH_SIZE);
+    model.addColumn(Bytes.toBytes(COLUMN_1));
+
+    // test put operation is forbidden in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    Response response = client.put("/" + TABLE + "/scanner",
+      Constants.MIMETYPE_PROTOBUF, model.createProtobufOutput());
+    assertEquals(response.getCode(), 403);
+    String scannerURI = response.getLocation();
+    assertNull(scannerURI);
+
+    // recall previous put operation with read-only off
+    conf.set("hbase.rest.readonly", "false");
+    response = client.put("/" + TABLE + "/scanner",
+      Constants.MIMETYPE_PROTOBUF, model.createProtobufOutput());
+    assertEquals(response.getCode(), 201);
+    scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+
+    // get a cell set
+    response = client.get(scannerURI, Constants.MIMETYPE_PROTOBUF);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = new CellSetModel();
+    cellSet.getObjectFromMessage(response.getBody());
+    // confirm batch size conformance
+    assertEquals(countCellSet(cellSet), BATCH_SIZE);
+
+    // test delete scanner operation is forbidden in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 403);
+
+    // recall previous delete scanner operation with read-only off
+    conf.set("hbase.rest.readonly", "false");
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testSimpleScannerBinary() throws IOException {
+    // new scanner
+    ScannerModel model = new ScannerModel();
+    model.setBatch(1);
+    model.addColumn(Bytes.toBytes(COLUMN_1));
+
+    // test put operation is forbidden in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    Response response = client.put("/" + TABLE + "/scanner",
+      Constants.MIMETYPE_PROTOBUF, model.createProtobufOutput());
+    assertEquals(response.getCode(), 403);
+    String scannerURI = response.getLocation();
+    assertNull(scannerURI);
+
+    // recall previous put operation with read-only off
+    conf.set("hbase.rest.readonly", "false");
+    response = client.put("/" + TABLE + "/scanner",
+      Constants.MIMETYPE_PROTOBUF, model.createProtobufOutput());
+    assertEquals(response.getCode(), 201);
+    scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+
+    // get a cell
+    response = client.get(scannerURI, Constants.MIMETYPE_BINARY);
+    assertEquals(response.getCode(), 200);
+    // verify that data was returned
+    assertTrue(response.getBody().length > 0);
+    // verify that the expected X-headers are present
+    boolean foundRowHeader = false, foundColumnHeader = false,
+      foundTimestampHeader = false;
+    for (Header header: response.getHeaders()) {
+      if (header.getName().equals("X-Row")) {
+        foundRowHeader = true;
+      } else if (header.getName().equals("X-Column")) {
+        foundColumnHeader = true;
+      } else if (header.getName().equals("X-Timestamp")) {
+        foundTimestampHeader = true;
+      }
+    }
+    assertTrue(foundRowHeader);
+    assertTrue(foundColumnHeader);
+    assertTrue(foundTimestampHeader);
+
+    // test delete scanner operation is forbidden in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 403);
+
+    // recall previous delete scanner operation with read-only off
+    conf.set("hbase.rest.readonly", "false");
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testFullTableScan() throws IOException {
+    ScannerModel model = new ScannerModel();
+    model.addColumn(Bytes.toBytes(COLUMN_1));
+    assertEquals(fullTableScan(model), expectedRows1);
+
+    model = new ScannerModel();
+    model.addColumn(Bytes.toBytes(COLUMN_2));
+    assertEquals(fullTableScan(model), expectedRows2);
+  }
+
+  @Test
+  public void testTableDoesNotExist() throws IOException, JAXBException {
+    ScannerModel model = new ScannerModel();
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(model, writer);
+    byte[] body = Bytes.toBytes(writer.toString());
+    Response response = client.put("/" + NONEXISTENT_TABLE +
+      "/scanner", Constants.MIMETYPE_XML, body);
+    assertEquals(response.getCode(), 404);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
new file mode 100644
index 0000000..1a539ad
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
@@ -0,0 +1,990 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.StringWriter;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.Marshaller;
+import javax.xml.bind.Unmarshaller;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.filter.BinaryComparator;
+import org.apache.hadoop.hbase.filter.Filter;
+import org.apache.hadoop.hbase.filter.FilterList;
+import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
+import org.apache.hadoop.hbase.filter.InclusiveStopFilter;
+import org.apache.hadoop.hbase.filter.PageFilter;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.filter.QualifierFilter;
+import org.apache.hadoop.hbase.filter.RegexStringComparator;
+import org.apache.hadoop.hbase.filter.RowFilter;
+import org.apache.hadoop.hbase.filter.SkipFilter;
+import org.apache.hadoop.hbase.filter.SubstringComparator;
+import org.apache.hadoop.hbase.filter.ValueFilter;
+import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
+import org.apache.hadoop.hbase.filter.FilterList.Operator;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.CellModel;
+import org.apache.hadoop.hbase.rest.model.CellSetModel;
+import org.apache.hadoop.hbase.rest.model.RowModel;
+import org.apache.hadoop.hbase.rest.model.ScannerModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestScannersWithFilters {
+
+  private static final Log LOG = LogFactory.getLog(TestScannersWithFilters.class);
+
+  private static final String TABLE = "TestScannersWithFilters";
+
+  private static final byte [][] ROWS_ONE = {
+    Bytes.toBytes("testRowOne-0"), Bytes.toBytes("testRowOne-1"),
+    Bytes.toBytes("testRowOne-2"), Bytes.toBytes("testRowOne-3")
+  };
+
+  private static final byte [][] ROWS_TWO = {
+    Bytes.toBytes("testRowTwo-0"), Bytes.toBytes("testRowTwo-1"),
+    Bytes.toBytes("testRowTwo-2"), Bytes.toBytes("testRowTwo-3")
+  };
+
+  private static final byte [][] FAMILIES = {
+    Bytes.toBytes("testFamilyOne"), Bytes.toBytes("testFamilyTwo")
+  };
+
+  private static final byte [][] QUALIFIERS_ONE = {
+    Bytes.toBytes("testQualifierOne-0"), Bytes.toBytes("testQualifierOne-1"),
+    Bytes.toBytes("testQualifierOne-2"), Bytes.toBytes("testQualifierOne-3")
+  };
+
+  private static final byte [][] QUALIFIERS_TWO = {
+    Bytes.toBytes("testQualifierTwo-0"), Bytes.toBytes("testQualifierTwo-1"),
+    Bytes.toBytes("testQualifierTwo-2"), Bytes.toBytes("testQualifierTwo-3")
+  };
+
+  private static final byte [][] VALUES = {
+    Bytes.toBytes("testValueOne"), Bytes.toBytes("testValueTwo")
+  };
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+  private static Marshaller marshaller;
+  private static Unmarshaller unmarshaller;
+  private static long numRows = ROWS_ONE.length + ROWS_TWO.length;
+  private static long colsPerRow = FAMILIES.length * QUALIFIERS_ONE.length;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    context = JAXBContext.newInstance(
+        CellModel.class,
+        CellSetModel.class,
+        RowModel.class,
+        ScannerModel.class);
+    marshaller = context.createMarshaller();
+    unmarshaller = context.createUnmarshaller();
+    client = new Client(new Cluster().add("localhost", 
+      REST_TEST_UTIL.getServletPort()));
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    if (!admin.tableExists(TABLE)) {
+      HTableDescriptor htd = new HTableDescriptor(TABLE);
+      htd.addFamily(new HColumnDescriptor(FAMILIES[0]));
+      htd.addFamily(new HColumnDescriptor(FAMILIES[1]));
+      admin.createTable(htd);
+      HTable table = new HTable(TEST_UTIL.getConfiguration(), TABLE);
+      // Insert first half
+      for(byte [] ROW : ROWS_ONE) {
+        Put p = new Put(ROW);
+        for(byte [] QUALIFIER : QUALIFIERS_ONE) {
+          p.add(FAMILIES[0], QUALIFIER, VALUES[0]);
+        }
+        table.put(p);
+      }
+      for(byte [] ROW : ROWS_TWO) {
+        Put p = new Put(ROW);
+        for(byte [] QUALIFIER : QUALIFIERS_TWO) {
+          p.add(FAMILIES[1], QUALIFIER, VALUES[1]);
+        }
+        table.put(p);
+      }
+      
+      // Insert second half (reverse families)
+      for(byte [] ROW : ROWS_ONE) {
+        Put p = new Put(ROW);
+        for(byte [] QUALIFIER : QUALIFIERS_ONE) {
+          p.add(FAMILIES[1], QUALIFIER, VALUES[0]);
+        }
+        table.put(p);
+      }
+      for(byte [] ROW : ROWS_TWO) {
+        Put p = new Put(ROW);
+        for(byte [] QUALIFIER : QUALIFIERS_TWO) {
+          p.add(FAMILIES[0], QUALIFIER, VALUES[1]);
+        }
+        table.put(p);
+      }
+      
+      // Delete the second qualifier from all rows and families
+      for(byte [] ROW : ROWS_ONE) {
+        Delete d = new Delete(ROW);
+        d.deleteColumns(FAMILIES[0], QUALIFIERS_ONE[1]);
+        d.deleteColumns(FAMILIES[1], QUALIFIERS_ONE[1]);
+        table.delete(d);
+      }    
+      for(byte [] ROW : ROWS_TWO) {
+        Delete d = new Delete(ROW);
+        d.deleteColumns(FAMILIES[0], QUALIFIERS_TWO[1]);
+        d.deleteColumns(FAMILIES[1], QUALIFIERS_TWO[1]);
+        table.delete(d);
+      }
+      colsPerRow -= 2;
+      
+      // Delete the second rows from both groups, one column at a time
+      for(byte [] QUALIFIER : QUALIFIERS_ONE) {
+        Delete d = new Delete(ROWS_ONE[1]);
+        d.deleteColumns(FAMILIES[0], QUALIFIER);
+        d.deleteColumns(FAMILIES[1], QUALIFIER);
+        table.delete(d);
+      }
+      for(byte [] QUALIFIER : QUALIFIERS_TWO) {
+        Delete d = new Delete(ROWS_TWO[1]);
+        d.deleteColumns(FAMILIES[0], QUALIFIER);
+        d.deleteColumns(FAMILIES[1], QUALIFIER);
+        table.delete(d);
+      }
+      numRows -= 2;
+    }
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private static void verifyScan(Scan s, long expectedRows, long expectedKeys) 
+      throws Exception {
+    ScannerModel model = ScannerModel.fromScan(s);
+    model.setBatch(Integer.MAX_VALUE); // fetch it all at once
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(model, writer);
+    LOG.debug(writer.toString());
+    byte[] body = Bytes.toBytes(writer.toString());
+    Response response = client.put("/" + TABLE + "/scanner", 
+      Constants.MIMETYPE_XML, body);
+    assertEquals(response.getCode(), 201);
+    String scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+
+    // get a cell set
+    response = client.get(scannerURI, Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cells = (CellSetModel)
+      unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
+
+    int rows = cells.getRows().size();
+    assertTrue("Scanned too many rows! Only expected " + expectedRows + 
+        " total but scanned " + rows, expectedRows == rows);
+    for (RowModel row: cells.getRows()) {
+      int count = row.getCells().size();
+      assertEquals("Expected " + expectedKeys + " keys per row but " +
+        "returned " + count, expectedKeys, count);
+    }
+
+    // delete the scanner
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+  }
+
+  private static void verifyScanFull(Scan s, KeyValue [] kvs) 
+      throws Exception {
+    ScannerModel model = ScannerModel.fromScan(s);
+    model.setBatch(Integer.MAX_VALUE); // fetch it all at once
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(model, writer);
+    LOG.debug(writer.toString());
+    byte[] body = Bytes.toBytes(writer.toString());
+    Response response = client.put("/" + TABLE + "/scanner", 
+      Constants.MIMETYPE_XML, body);
+    assertEquals(response.getCode(), 201);
+    String scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+
+    // get a cell set
+    response = client.get(scannerURI, Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = (CellSetModel)
+      unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
+
+    // delete the scanner
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+
+    int row = 0;
+    int idx = 0;
+    Iterator<RowModel> i = cellSet.getRows().iterator();
+    for (boolean done = true; done; row++) {
+      done = i.hasNext();
+      if (!done) break;
+      RowModel rowModel = i.next();
+      List<CellModel> cells = rowModel.getCells();
+      if (cells.isEmpty()) break;
+      assertTrue("Scanned too many keys! Only expected " + kvs.length + 
+        " total but already scanned " + (cells.size() + idx), 
+        kvs.length >= idx + cells.size());
+      for (CellModel cell: cells) {
+        assertTrue("Row mismatch", 
+            Bytes.equals(rowModel.getKey(), kvs[idx].getRow()));
+        byte[][] split = KeyValue.parseColumn(cell.getColumn());
+        assertTrue("Family mismatch", 
+            Bytes.equals(split[0], kvs[idx].getFamily()));
+        assertTrue("Qualifier mismatch", 
+            Bytes.equals(split[1], kvs[idx].getQualifier()));
+        assertTrue("Value mismatch", 
+            Bytes.equals(cell.getValue(), kvs[idx].getValue()));
+        idx++;
+      }
+    }
+    assertEquals("Expected " + kvs.length + " total keys but scanned " + idx,
+      kvs.length, idx);
+  }
+
+  private static void verifyScanNoEarlyOut(Scan s, long expectedRows,
+      long expectedKeys) throws Exception {
+    ScannerModel model = ScannerModel.fromScan(s);
+    model.setBatch(Integer.MAX_VALUE); // fetch it all at once
+    StringWriter writer = new StringWriter();
+    marshaller.marshal(model, writer);
+    LOG.debug(writer.toString());
+    byte[] body = Bytes.toBytes(writer.toString());
+    Response response = client.put("/" + TABLE + "/scanner", 
+      Constants.MIMETYPE_XML, body);
+    assertEquals(response.getCode(), 201);
+    String scannerURI = response.getLocation();
+    assertNotNull(scannerURI);
+
+    // get a cell set
+    response = client.get(scannerURI, Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    CellSetModel cellSet = (CellSetModel)
+      unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
+
+    // delete the scanner
+    response = client.delete(scannerURI);
+    assertEquals(response.getCode(), 200);
+
+    Iterator<RowModel> i = cellSet.getRows().iterator();
+    int j = 0;
+    for (boolean done = true; done; j++) {
+      done = i.hasNext();
+      if (!done) break;
+      RowModel rowModel = i.next();
+      List<CellModel> cells = rowModel.getCells();
+      if (cells.isEmpty()) break;
+      assertTrue("Scanned too many rows! Only expected " + expectedRows + 
+        " total but already scanned " + (j+1), expectedRows > j);
+      assertEquals("Expected " + expectedKeys + " keys per row but " +
+        "returned " + cells.size(), expectedKeys, cells.size());
+    }
+    assertEquals("Expected " + expectedRows + " rows but scanned " + j +
+      " rows", expectedRows, j);
+  }
+
+  @Test
+  public void testNoFilter() throws Exception {
+    // No filter
+    long expectedRows = numRows;
+    long expectedKeys = colsPerRow;
+    
+    // Both families
+    Scan s = new Scan();
+    verifyScan(s, expectedRows, expectedKeys);
+
+    // One family
+    s = new Scan();
+    s.addFamily(FAMILIES[0]);
+    verifyScan(s, expectedRows, expectedKeys/2);
+  }
+
+  @Test
+  public void testPrefixFilter() throws Exception {
+    // Grab rows from group one (half of total)
+    long expectedRows = numRows / 2;
+    long expectedKeys = colsPerRow;
+    Scan s = new Scan();
+    s.setFilter(new PrefixFilter(Bytes.toBytes("testRowOne")));
+    verifyScan(s, expectedRows, expectedKeys);
+  }
+
+  @Test
+  public void testPageFilter() throws Exception {
+    // KVs in first 6 rows
+    KeyValue [] expectedKVs = {
+      // testRowOne-0
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowOne-2
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowOne-3
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+      new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+      // testRowTwo-0
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      // testRowTwo-2
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+      // testRowTwo-3
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+      new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1])
+    };
+    
+    // Grab all 6 rows
+    long expectedRows = 6;
+    long expectedKeys = colsPerRow;
+    Scan s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, expectedKVs);
+    
+    // Grab first 4 rows (6 cols per row)
+    expectedRows = 4;
+    expectedKeys = colsPerRow;
+    s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, Arrays.copyOf(expectedKVs, 24));
+    
+    // Grab first 2 rows
+    expectedRows = 2;
+    expectedKeys = colsPerRow;
+    s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, Arrays.copyOf(expectedKVs, 12));
+
+    // Grab first row
+    expectedRows = 1;
+    expectedKeys = colsPerRow;
+    s = new Scan();
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScan(s, expectedRows, expectedKeys);
+    s.setFilter(new PageFilter(expectedRows));
+    verifyScanFull(s, Arrays.copyOf(expectedKVs, 6));    
+  }
+
+  @Test
+  public void testInclusiveStopFilter() throws Exception {
+    // Grab rows from group one
+    
+    // If we just use start/stop row, we get total/2 - 1 rows
+    long expectedRows = (numRows / 2) - 1;
+    long expectedKeys = colsPerRow;
+    Scan s = new Scan(Bytes.toBytes("testRowOne-0"), 
+        Bytes.toBytes("testRowOne-3"));
+    verifyScan(s, expectedRows, expectedKeys);
+    
+    // Now use start row with inclusive stop filter
+    expectedRows = numRows / 2;
+    s = new Scan(Bytes.toBytes("testRowOne-0"));
+    s.setFilter(new InclusiveStopFilter(Bytes.toBytes("testRowOne-3")));
+    verifyScan(s, expectedRows, expectedKeys);
+
+    // Grab rows from group two
+    
+    // If we just use start/stop row, we get total/2 - 1 rows
+    expectedRows = (numRows / 2) - 1;
+    expectedKeys = colsPerRow;
+    s = new Scan(Bytes.toBytes("testRowTwo-0"), 
+        Bytes.toBytes("testRowTwo-3"));
+    verifyScan(s, expectedRows, expectedKeys);
+    
+    // Now use start row with inclusive stop filter
+    expectedRows = numRows / 2;
+    s = new Scan(Bytes.toBytes("testRowTwo-0"));
+    s.setFilter(new InclusiveStopFilter(Bytes.toBytes("testRowTwo-3")));
+    verifyScan(s, expectedRows, expectedKeys);
+  }
+
+  @Test
+  public void testQualifierFilter() throws Exception {
+    // Match two keys (one from each family) in half the rows
+    long expectedRows = numRows / 2;
+    long expectedKeys = 2;
+    Filter f = new QualifierFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    Scan s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys less than the same qualifier
+    // Expect only two keys (one from each family) in half the rows
+    expectedRows = numRows / 2;
+    expectedKeys = 2;
+    f = new QualifierFilter(CompareOp.LESS,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys less than or equal
+    // Expect four keys (two from each family) in half the rows
+    expectedRows = numRows / 2;
+    expectedKeys = 4;
+    f = new QualifierFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys not equal
+    // Expect four keys (two from each family)
+    // Only look in first group of rows
+    expectedRows = numRows / 2;
+    expectedKeys = 4;
+    f = new QualifierFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys greater or equal
+    // Expect four keys (two from each family)
+    // Only look in first group of rows
+    expectedRows = numRows / 2;
+    expectedKeys = 4;
+    f = new QualifierFilter(CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys greater
+    // Expect two keys (one from each family)
+    // Only look in first group of rows
+    expectedRows = numRows / 2;
+    expectedKeys = 2;
+    f = new QualifierFilter(CompareOp.GREATER,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2")));
+    s = new Scan(HConstants.EMPTY_START_ROW, Bytes.toBytes("testRowTwo"));
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys not equal to testQualifierOne-2
+    // Look across rows and fully validate the keys and ordering
+    // Expect varied numbers of keys, 4 per row in group one, 6 per row in
+    // group two
+    f = new QualifierFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(QUALIFIERS_ONE[2]));
+    s = new Scan();
+    s.setFilter(f);
+    
+    KeyValue [] kvs = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+     
+    // Test across rows and groups with a regex
+    // Filter out "test*-2"
+    // Expect 4 keys per row across both groups
+    f = new QualifierFilter(CompareOp.NOT_EQUAL,
+        new RegexStringComparator("test.+-2"));
+    s = new Scan();
+    s.setFilter(f);
+    
+    kvs = new KeyValue [] {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+  }
+
+  @Test
+  public void testRowFilter() throws Exception {
+    // Match a single row, all keys
+    long expectedRows = 1;
+    long expectedKeys = colsPerRow;
+    Filter f = new RowFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    Scan s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match two rows, one from each group, using regex
+    expectedRows = 2;
+    expectedKeys = colsPerRow;
+    f = new RowFilter(CompareOp.EQUAL,
+        new RegexStringComparator("testRow.+-2"));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match rows less than
+    // Expect all keys in one row
+    expectedRows = 1;
+    expectedKeys = colsPerRow;
+    f = new RowFilter(CompareOp.LESS,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match rows less than or equal
+    // Expect all keys in two rows
+    expectedRows = 2;
+    expectedKeys = colsPerRow;
+    f = new RowFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match rows not equal
+    // Expect all keys in all but one row
+    expectedRows = numRows - 1;
+    expectedKeys = colsPerRow;
+    f = new RowFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys greater or equal
+    // Expect all keys in all but one row
+    expectedRows = numRows - 1;
+    expectedKeys = colsPerRow;
+    f = new RowFilter(CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match keys greater
+    // Expect all keys in all but two rows
+    expectedRows = numRows - 2;
+    expectedKeys = colsPerRow;
+    f = new RowFilter(CompareOp.GREATER,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match rows not equal to testRowOne-2
+    // Look across rows and fully validate the keys and ordering
+    // Should see all keys in all rows but testRowOne-2
+    f = new RowFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testRowOne-2")));
+    s = new Scan();
+    s.setFilter(f);
+    
+    KeyValue [] kvs = {
+        // testRowOne-0
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[0], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowOne-3
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+    
+    // Test across rows and groups with a regex
+    // Filter out everything that doesn't match "*-2"
+    // Expect all keys in two rows
+    f = new RowFilter(CompareOp.EQUAL,
+        new RegexStringComparator(".+-2"));
+    s = new Scan();
+    s.setFilter(f);
+    
+    kvs = new KeyValue [] {
+        // testRowOne-2
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[3], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[2], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[1], QUALIFIERS_ONE[3], VALUES[0]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1])
+    };
+    verifyScanFull(s, kvs);
+  }
+
+  @Test
+  public void testValueFilter() throws Exception {
+    // Match group one rows
+    long expectedRows = numRows / 2;
+    long expectedKeys = colsPerRow;
+    Filter f = new ValueFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    Scan s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match group two rows
+    expectedRows = numRows / 2;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueTwo")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match all values using regex
+    expectedRows = numRows;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.EQUAL,
+        new RegexStringComparator("testValue((One)|(Two))"));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values less than
+    // Expect group one rows
+    expectedRows = numRows / 2;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.LESS,
+        new BinaryComparator(Bytes.toBytes("testValueTwo")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match values less than or equal
+    // Expect all rows
+    expectedRows = numRows;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueTwo")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+
+    // Match values less than or equal
+    // Expect group one rows
+    expectedRows = numRows / 2;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.LESS_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match values not equal
+    // Expect half the rows
+    expectedRows = numRows / 2;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match values greater or equal
+    // Expect all rows
+    expectedRows = numRows;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.GREATER_OR_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match values greater
+    // Expect half the rows
+    expectedRows = numRows / 2;
+    expectedKeys = colsPerRow;
+    f = new ValueFilter(CompareOp.GREATER,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, expectedRows, expectedKeys);
+    
+    // Match values not equal to testValueOne
+    // Look across rows and fully validate the keys and ordering
+    // Should see all keys in all group two rows
+    f = new ValueFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testValueOne")));
+    s = new Scan();
+    s.setFilter(f);
+    
+    KeyValue [] kvs = {
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+  }
+
+  @Test
+  public void testSkipFilter() throws Exception {
+    // Skip any row containing the qualifier "testQualifierOne-2"
+    // Should only get rows from second group, and all keys
+    Filter f = new SkipFilter(new QualifierFilter(CompareOp.NOT_EQUAL,
+        new BinaryComparator(Bytes.toBytes("testQualifierOne-2"))));
+    Scan s = new Scan();
+    s.setFilter(f);
+    
+    KeyValue [] kvs = {
+        // testRowTwo-0
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-2
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+        // testRowTwo-3
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[3], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[2], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[1], QUALIFIERS_TWO[3], VALUES[1]),
+    };
+    verifyScanFull(s, kvs);
+  }
+
+  @Test
+  public void testFilterList() throws Exception {
+    // Test getting a single row, single key using Row, Qualifier, and Value 
+    // regular expression and substring filters
+    // Use must pass all
+    List<Filter> filters = new ArrayList<Filter>();
+    filters.add(new RowFilter(CompareOp.EQUAL,
+      new RegexStringComparator(".+-2")));
+    filters.add(new QualifierFilter(CompareOp.EQUAL,
+      new RegexStringComparator(".+-2")));
+    filters.add(new ValueFilter(CompareOp.EQUAL,
+      new SubstringComparator("One")));
+    Filter f = new FilterList(Operator.MUST_PASS_ALL, filters);
+    Scan s = new Scan();
+    s.addFamily(FAMILIES[0]);
+    s.setFilter(f);
+    KeyValue [] kvs = {
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[2], VALUES[0])
+    };
+    verifyScanFull(s, kvs);
+
+    // Test getting everything with a MUST_PASS_ONE filter including row, qf,
+    // val, regular expression and substring filters
+    filters.clear();
+    filters.add(new RowFilter(CompareOp.EQUAL,
+      new RegexStringComparator(".+Two.+")));
+    filters.add(new QualifierFilter(CompareOp.EQUAL,
+      new RegexStringComparator(".+-2")));
+    filters.add(new ValueFilter(CompareOp.EQUAL,
+      new SubstringComparator("One")));
+    f = new FilterList(Operator.MUST_PASS_ONE, filters);
+    s = new Scan();
+    s.setFilter(f);
+    verifyScanNoEarlyOut(s, numRows, colsPerRow);
+  }
+
+  @Test
+  public void testFirstKeyOnlyFilter() throws Exception {
+    Scan s = new Scan();
+    s.setFilter(new FirstKeyOnlyFilter());
+    // Expected KVs, the first KV from each of the remaining 6 rows
+    KeyValue [] kvs = {
+        new KeyValue(ROWS_ONE[0], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[2], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_ONE[3], FAMILIES[0], QUALIFIERS_ONE[0], VALUES[0]),
+        new KeyValue(ROWS_TWO[0], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[2], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1]),
+        new KeyValue(ROWS_TWO[3], FAMILIES[0], QUALIFIERS_TWO[0], VALUES[1])
+    };
+    verifyScanFull(s, kvs);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java
new file mode 100644
index 0000000..96791e5
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java
@@ -0,0 +1,167 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.ColumnSchemaModel;
+import org.apache.hadoop.hbase.rest.model.TableSchemaModel;
+import org.apache.hadoop.hbase.rest.model.TestTableSchemaModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestSchemaResource {
+  private static String TABLE1 = "TestSchemaResource1";
+  private static String TABLE2 = "TestSchemaResource2";
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL =
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+  private static Configuration conf;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    conf = TEST_UTIL.getConfiguration();
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(conf);
+    client = new Client(new Cluster().add("localhost",
+      REST_TEST_UTIL.getServletPort()));
+    context = JAXBContext.newInstance(
+      ColumnSchemaModel.class,
+      TableSchemaModel.class);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private static byte[] toXML(TableSchemaModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return Bytes.toBytes(writer.toString());
+  }
+
+  private static TableSchemaModel fromXML(byte[] content)
+      throws JAXBException {
+    return (TableSchemaModel) context.createUnmarshaller()
+      .unmarshal(new ByteArrayInputStream(content));
+  }
+
+  @Test
+  public void testTableCreateAndDeleteXML() throws IOException, JAXBException {
+    String schemaPath = "/" + TABLE1 + "/schema";
+    TableSchemaModel model;
+    Response response;
+
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    assertFalse(admin.tableExists(TABLE1));
+
+    // create the table
+    model = TestTableSchemaModel.buildTestModel(TABLE1);
+    TestTableSchemaModel.checkModel(model, TABLE1);
+    response = client.put(schemaPath, Constants.MIMETYPE_XML, toXML(model));
+    assertEquals(response.getCode(), 201);
+
+    // recall the same put operation but in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    response = client.put(schemaPath, Constants.MIMETYPE_XML, toXML(model));
+    assertEquals(response.getCode(), 403);
+
+    // make sure HBase concurs, and wait for the table to come online
+    admin.enableTable(TABLE1);
+
+    // retrieve the schema and validate it
+    response = client.get(schemaPath, Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    model = fromXML(response.getBody());
+    TestTableSchemaModel.checkModel(model, TABLE1);
+
+    // delete the table
+    client.delete(schemaPath);
+
+    // make sure HBase concurs
+    assertFalse(admin.tableExists(TABLE1));
+
+    // return read-only setting back to default
+    conf.set("hbase.rest.readonly", "false");
+  }
+
+  @Test
+  public void testTableCreateAndDeletePB() throws IOException, JAXBException {
+    String schemaPath = "/" + TABLE2 + "/schema";
+    TableSchemaModel model;
+    Response response;
+
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    assertFalse(admin.tableExists(TABLE2));
+
+    // create the table
+    model = TestTableSchemaModel.buildTestModel(TABLE2);
+    TestTableSchemaModel.checkModel(model, TABLE2);
+    response = client.put(schemaPath, Constants.MIMETYPE_PROTOBUF,
+      model.createProtobufOutput());
+    assertEquals(response.getCode(), 201);
+
+    // recall the same put operation but in read-only mode
+    conf.set("hbase.rest.readonly", "true");
+    response = client.put(schemaPath, Constants.MIMETYPE_PROTOBUF,
+      model.createProtobufOutput());
+    assertEquals(response.getCode(), 403);
+
+    // make sure HBase concurs, and wait for the table to come online
+    assertTrue(admin.tableExists(TABLE2));
+    admin.enableTable(TABLE2);
+
+    // retrieve the schema and validate it
+    response = client.get(schemaPath, Constants.MIMETYPE_PROTOBUF);
+    assertEquals(response.getCode(), 200);
+    model = new TableSchemaModel();
+    model.getObjectFromMessage(response.getBody());
+    TestTableSchemaModel.checkModel(model, TABLE2);
+
+    // delete the table
+    client.delete(schemaPath);
+
+    // make sure HBase concurs
+    assertFalse(admin.tableExists(TABLE2));
+
+    // return read-only setting back to default
+    conf.set("hbase.rest.readonly", "false");
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java
new file mode 100644
index 0000000..6933f78
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java
@@ -0,0 +1,110 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.StorageClusterStatusModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
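+/**
+ * Tests GET /status/cluster in XML and protobuf form, checking that the
+ * returned StorageClusterStatusModel reports at least one live node and that
+ * the -ROOT- and .META. regions appear among the online regions.
+ */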
+public class TestStatusResource {
+  private static final byte[] ROOT_REGION_NAME = Bytes.toBytes("-ROOT-,,0");
+  private static final byte[] META_REGION_NAME = Bytes.toBytes(".META.,,1");
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+  
+  private static void validate(StorageClusterStatusModel model) {
+    assertNotNull(model);
+    assertTrue(model.getRegions() >= 1);
+    assertTrue(model.getRequests() >= 0);
+    assertTrue(model.getAverageLoad() >= 0.0);
+    assertNotNull(model.getLiveNodes());
+    assertNotNull(model.getDeadNodes());
+    assertFalse(model.getLiveNodes().isEmpty());
+    boolean foundRoot = false, foundMeta = false;
+    for (StorageClusterStatusModel.Node node: model.getLiveNodes()) {
+      assertNotNull(node.getName());
+      assertTrue(node.getStartCode() > 0L);
+      assertTrue(node.getRequests() >= 0);
+      for (StorageClusterStatusModel.Node.Region region: node.getRegions()) {
+        if (Bytes.equals(region.getName(), ROOT_REGION_NAME)) {
+          foundRoot = true;
+        } else if (Bytes.equals(region.getName(), META_REGION_NAME)) {
+          foundMeta = true;
+        }
+      }
+    }
+    assertTrue(foundRoot);
+    assertTrue(foundMeta);
+  }
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    client = new Client(new Cluster().add("localhost", 
+      REST_TEST_UTIL.getServletPort()));
+    context = JAXBContext.newInstance(StorageClusterStatusModel.class);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testGetClusterStatusXML() throws IOException, JAXBException {
+    Response response = client.get("/status/cluster", Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    StorageClusterStatusModel model = (StorageClusterStatusModel)
+      context.createUnmarshaller().unmarshal(
+        new ByteArrayInputStream(response.getBody()));
+    validate(model);
+  }
+
+  @Test
+  public void testGetClusterStatusPB() throws IOException {
+    Response response = client.get("/status/cluster", 
+      Constants.MIMETYPE_PROTOBUF);
+    assertEquals(response.getCode(), 200);
+    StorageClusterStatusModel model = new StorageClusterStatusModel();
+    model.getObjectFromMessage(response.getBody());
+    validate(model);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java
new file mode 100644
index 0000000..c55cb18
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java
@@ -0,0 +1,239 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.net.InetSocketAddress;
+import java.util.Iterator;
+import java.util.Map;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HServerAddress;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.TableModel;
+import org.apache.hadoop.hbase.rest.model.TableInfoModel;
+import org.apache.hadoop.hbase.rest.model.TableListModel;
+import org.apache.hadoop.hbase.rest.model.TableRegionModel;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.util.StringUtils;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
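+/**
+ * Tests the table list (GET /) and region list (GET /{table}/regions) resources
+ * in plain text, XML, JSON and protobuf form. Setup loads rows into the test
+ * table, asks the master to split it, and then compares the region list served
+ * by the REST gateway against the layout reported by HTable.getRegionsInfo().
+ */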
+public class TestTableResource {
+  private static final Log LOG = LogFactory.getLog(TestTableResource.class);
+
+  private static String TABLE = "TestTableResource";
+  private static String COLUMN_FAMILY = "test";
+  private static String COLUMN = COLUMN_FAMILY + ":qualifier";
+  private static Map<HRegionInfo,HServerAddress> regionMap;
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    client = new Client(new Cluster().add("localhost", 
+      REST_TEST_UTIL.getServletPort()));
+    context = JAXBContext.newInstance(
+        TableModel.class,
+        TableInfoModel.class,
+        TableListModel.class,
+        TableRegionModel.class);
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    if (admin.tableExists(TABLE)) {
+      return;
+    }
+    HTableDescriptor htd = new HTableDescriptor(TABLE);
+    htd.addFamily(new HColumnDescriptor(COLUMN_FAMILY));
+    admin.createTable(htd);
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), TABLE);
+    byte[] k = new byte[3];
+    byte [][] famAndQf = KeyValue.parseColumn(Bytes.toBytes(COLUMN));
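+    // load one row per three-letter key from "aaa" through "yyy" so the
+    // table has enough data to be split below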
+    for (byte b1 = 'a'; b1 < 'z'; b1++) {
+      for (byte b2 = 'a'; b2 < 'z'; b2++) {
+        for (byte b3 = 'a'; b3 < 'z'; b3++) {
+          k[0] = b1;
+          k[1] = b2;
+          k[2] = b3;
+          Put put = new Put(k);
+          put.add(famAndQf[0], famAndQf[1], k);
+          table.put(put);
+        }
+      }
+    }
+    table.flushCommits();
+    // get the initial layout (should just be one region)
+    Map<HRegionInfo,HServerAddress> m = table.getRegionsInfo();
+    assertEquals(m.size(), 1);
+    // tell the master to split the table
+    admin.split(TABLE);
+    // give some time for the split to happen
+    try {
+      Thread.sleep(15 * 1000);
+    } catch (InterruptedException e) {
+      LOG.warn(StringUtils.stringifyException(e));
+    }
+    // check again
+    m = table.getRegionsInfo();
+    // should have two regions now
+    assertEquals(m.size(), 2);
+    regionMap = m;
+    LOG.info("regions: " + regionMap);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private static void checkTableList(TableListModel model) {
+    boolean found = false;
+    Iterator<TableModel> tables = model.getTables().iterator();
+    assertTrue(tables.hasNext());
+    while (tables.hasNext()) {
+      TableModel table = tables.next();
+      if (table.getName().equals(TABLE)) {
+        found = true;
+        break;
+      }
+    }
+    assertTrue(found);
+  }
+
+  private static void checkTableInfo(TableInfoModel model) {
+    assertEquals(model.getName(), TABLE);
+    Iterator<TableRegionModel> regions = model.getRegions().iterator();
+    assertTrue(regions.hasNext());
+    while (regions.hasNext()) {
+      TableRegionModel region = regions.next();
+      boolean found = false;
+      for (Map.Entry<HRegionInfo,HServerAddress> e: regionMap.entrySet()) {
+        HRegionInfo hri = e.getKey();
+        String hriRegionName = hri.getRegionNameAsString();
+        String regionName = region.getName();
+        if (hriRegionName.equals(regionName)) {
+          found = true;
+          byte[] startKey = hri.getStartKey();
+          byte[] endKey = hri.getEndKey();
+          InetSocketAddress sa = e.getValue().getInetSocketAddress();
+          String location = sa.getHostName() + ":" + sa.getPort();
+          assertEquals(hri.getRegionId(), region.getId());
+          assertTrue(Bytes.equals(startKey, region.getStartKey()));
+          assertTrue(Bytes.equals(endKey, region.getEndKey()));
+          assertEquals(location, region.getLocation());
+          break;
+        }
+      }
+      assertTrue(found);
+    }
+  }
+
+  @Test
+  public void testTableListText() throws IOException {
+    Response response = client.get("/", Constants.MIMETYPE_TEXT);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testTableListXML() throws IOException, JAXBException {
+    Response response = client.get("/", Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    TableListModel model = (TableListModel)
+      context.createUnmarshaller()
+        .unmarshal(new ByteArrayInputStream(response.getBody()));
+    checkTableList(model);
+  }
+
+  @Test
+  public void testTableListJSON() throws IOException {
+    Response response = client.get("/", Constants.MIMETYPE_JSON);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testTableListPB() throws IOException, JAXBException {
+    Response response = client.get("/", Constants.MIMETYPE_PROTOBUF);
+    assertEquals(response.getCode(), 200);
+    TableListModel model = new TableListModel();
+    model.getObjectFromMessage(response.getBody());
+    checkTableList(model);
+  }
+
+  @Test
+  public void testTableInfoText() throws IOException {
+    Response response = client.get("/" + TABLE + "/regions",
+      Constants.MIMETYPE_TEXT);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testTableInfoXML() throws IOException, JAXBException {
+    Response response = client.get("/" + TABLE + "/regions", 
+      Constants.MIMETYPE_XML);
+    assertEquals(response.getCode(), 200);
+    TableInfoModel model = (TableInfoModel)
+      context.createUnmarshaller()
+        .unmarshal(new ByteArrayInputStream(response.getBody()));
+    checkTableInfo(model);
+  }
+
+  @Test
+  public void testTableInfoJSON() throws IOException {
+    Response response = client.get("/" + TABLE + "/regions", 
+      Constants.MIMETYPE_JSON);
+    assertEquals(response.getCode(), 200);
+  }
+
+  @Test
+  public void testTableInfoPB() throws IOException, JAXBException {
+    Response response = client.get("/" + TABLE + "/regions",
+      Constants.MIMETYPE_PROTOBUF);
+    assertEquals(response.getCode(), 200);
+    TableInfoModel model = new TableInfoModel();
+    model.getObjectFromMessage(response.getBody());
+    checkTableInfo(model);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestTransform.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestTransform.java
new file mode 100644
index 0000000..a65a924
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestTransform.java
@@ -0,0 +1,115 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
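+/**
+ * Tests the per-column-family value transform feature: family "b" carries the
+ * "Transform$1" => "*:Base64" attribute, so a value stored through the REST
+ * gateway should be base64 encoded at rest and decoded back to the original
+ * bytes when read through the gateway, while family "a" is stored verbatim.
+ */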
+public class TestTransform {
+  private static final String TABLE = "TestTransform";
+  private static final String CFA = "a";
+  private static final String CFB = "b";
+  private static final String COLUMN_1 = CFA + ":1";
+  private static final String COLUMN_2 = CFB + ":2";
+  private static final String ROW_1 = "testrow1";
+  private static final byte[] VALUE_1 = Bytes.toBytes("testvalue1");
+  private static final byte[] VALUE_2 = Bytes.toBytes("testvalue2");
+  private static final byte[] VALUE_2_BASE64 = Bytes.toBytes("dGVzdHZhbHVlMg==");
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    client = new Client(new Cluster().add("localhost", 
+      REST_TEST_UTIL.getServletPort()));
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    if (admin.tableExists(TABLE)) {
+      return;
+    }
+    HTableDescriptor htd = new HTableDescriptor(TABLE);
+    htd.addFamily(new HColumnDescriptor(CFA));
+    HColumnDescriptor cfB = new HColumnDescriptor(CFB);
+    cfB.setValue("Transform$1", "*:Base64");
+    htd.addFamily(cfB);
+    admin.createTable(htd);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testTransform() throws Exception {
+    String path1 = "/" + TABLE + "/" + ROW_1 + "/" + COLUMN_1;
+    String path2 = "/" + TABLE + "/" + ROW_1 + "/" + COLUMN_2;
+
+    // store value 1
+    Response response = client.put(path1, Constants.MIMETYPE_BINARY, VALUE_1);
+    assertEquals(response.getCode(), 200);
+
+    // store value 2 (stargate should transform into base64)
+    response = client.put(path2, Constants.MIMETYPE_BINARY, VALUE_2);
+    assertEquals(response.getCode(), 200);
+
+    // get the table contents directly
+    HTable table = new HTable(TEST_UTIL.getConfiguration(), TABLE);
+    Get get = new Get(Bytes.toBytes(ROW_1));
+    get.addFamily(Bytes.toBytes(CFA));
+    get.addFamily(Bytes.toBytes(CFB));
+    Result result = table.get(get);
+    // value 1 should not be transformed
+    byte[] value = result.getValue(Bytes.toBytes(CFA), Bytes.toBytes("1"));
+    assertNotNull(value);
+    assertTrue(Bytes.equals(value, VALUE_1));
+    // value 2 should have been base64 encoded
+    value = result.getValue(Bytes.toBytes(CFB), Bytes.toBytes("2"));
+    assertNotNull(value);
+    assertTrue(Bytes.equals(value, VALUE_2_BASE64));
+    table.close();
+
+    // stargate should decode the transformed value back to original bytes
+    response = client.get(path2, Constants.MIMETYPE_BINARY);
+    assertEquals(response.getCode(), 200);
+    value = response.getBody();
+    assertTrue(Bytes.equals(value, VALUE_2));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java
new file mode 100644
index 0000000..f9fb489
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java
@@ -0,0 +1,163 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.hadoop.hbase.rest.model.StorageClusterVersionModel;
+import org.apache.hadoop.hbase.rest.model.VersionModel;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import com.sun.jersey.spi.container.servlet.ServletContainer;
+
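+/**
+ * Tests GET /version and GET /version/cluster in the supported media types,
+ * validating that the reported REST, JVM, OS and Jersey versions match the
+ * values visible to the test JVM.
+ */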
+public class TestVersionResource {
+  private static final Log LOG = LogFactory.getLog(TestVersionResource.class);
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static Client client;
+  private static JAXBContext context;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    client = new Client(new Cluster().add("localhost", 
+      REST_TEST_UTIL.getServletPort()));
+    context = JAXBContext.newInstance(
+      VersionModel.class,
+      StorageClusterVersionModel.class);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  private static void validate(VersionModel model) {
+    assertNotNull(model);
+    assertNotNull(model.getRESTVersion());
+    assertEquals(model.getRESTVersion(), RESTServlet.VERSION_STRING);
+    String osVersion = model.getOSVersion(); 
+    assertNotNull(osVersion);
+    assertTrue(osVersion.contains(System.getProperty("os.name")));
+    assertTrue(osVersion.contains(System.getProperty("os.version")));
+    assertTrue(osVersion.contains(System.getProperty("os.arch")));
+    String jvmVersion = model.getJVMVersion();
+    assertNotNull(jvmVersion);
+    assertTrue(jvmVersion.contains(System.getProperty("java.vm.vendor")));
+    assertTrue(jvmVersion.contains(System.getProperty("java.version")));
+    assertTrue(jvmVersion.contains(System.getProperty("java.vm.version")));
+    assertNotNull(model.getServerVersion());
+    String jerseyVersion = model.getJerseyVersion();
+    assertNotNull(jerseyVersion);
+    assertEquals(jerseyVersion, ServletContainer.class.getPackage()
+      .getImplementationVersion());
+  }
+
+  @Test
+  public void testGetStargateVersionText() throws IOException {
+    Response response = client.get("/version", Constants.MIMETYPE_TEXT);
+    assertTrue(response.getCode() == 200);
+    String body = Bytes.toString(response.getBody());
+    assertTrue(body.length() > 0);
+    assertTrue(body.contains(RESTServlet.VERSION_STRING));
+    assertTrue(body.contains(System.getProperty("java.vm.vendor")));
+    assertTrue(body.contains(System.getProperty("java.version")));
+    assertTrue(body.contains(System.getProperty("java.vm.version")));
+    assertTrue(body.contains(System.getProperty("os.name")));
+    assertTrue(body.contains(System.getProperty("os.version")));
+    assertTrue(body.contains(System.getProperty("os.arch")));
+    assertTrue(body.contains(ServletContainer.class.getPackage()
+      .getImplementationVersion()));
+  }
+
+  @Test
+  public void testGetStargateVersionXML() throws IOException, JAXBException {
+    Response response = client.get("/version", Constants.MIMETYPE_XML);
+    assertTrue(response.getCode() == 200);
+    VersionModel model = (VersionModel)
+      context.createUnmarshaller().unmarshal(
+        new ByteArrayInputStream(response.getBody()));
+    validate(model);
+    LOG.info("success retrieving Stargate version as XML");
+  }
+
+  @Test
+  public void testGetStargateVersionJSON() throws IOException {
+    Response response = client.get("/version", Constants.MIMETYPE_JSON);
+    assertTrue(response.getCode() == 200);
+  }
+
+  @Test
+  public void testGetStargateVersionPB() throws IOException {
+    Response response = client.get("/version", Constants.MIMETYPE_PROTOBUF);
+    assertTrue(response.getCode() == 200);
+    VersionModel model = new VersionModel();
+    model.getObjectFromMessage(response.getBody());
+    validate(model);
+    LOG.info("success retrieving Stargate version as protobuf");
+  }
+
+  @Test
+  public void testGetStorageClusterVersionText() throws IOException {
+    Response response = client.get("/version/cluster", 
+      Constants.MIMETYPE_TEXT);
+    assertTrue(response.getCode() == 200);
+  }
+
+  @Test
+  public void testGetStorageClusterVersionXML() throws IOException,
+      JAXBException {
+    Response response = client.get("/version/cluster",Constants.MIMETYPE_XML);
+    assertTrue(response.getCode() == 200);
+    StorageClusterVersionModel clusterVersionModel = 
+      (StorageClusterVersionModel)
+        context.createUnmarshaller().unmarshal(
+          new ByteArrayInputStream(response.getBody()));
+    assertNotNull(clusterVersionModel);
+    assertNotNull(clusterVersionModel.getVersion());
+    LOG.info("success retrieving storage cluster version as XML");
+  }
+
+  @Test
+  public void testGetStorageClusterVersionJSON() throws IOException {
+    Response response = client.get("/version/cluster", Constants.MIMETYPE_JSON);
+    assertTrue(response.getCode() == 200);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdmin.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdmin.java
new file mode 100644
index 0000000..38295ce
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdmin.java
@@ -0,0 +1,95 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.rest.HBaseRESTTestingUtility;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
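+/**
+ * Tests RemoteAdmin, the administrative client that talks to the REST gateway:
+ * a table created or deleted remotely should be reflected by
+ * isTableAvailable(), matching the state prepared through the local HBaseAdmin.
+ */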
+public class TestRemoteAdmin {
+
+  private static final String TABLE_1 = "TestRemoteAdmin_Table_1";
+  private static final String TABLE_2 = "TestRemoteAdmin_Table_2";
+  private static final byte[] COLUMN_1 = Bytes.toBytes("a");
+
+  static final HTableDescriptor DESC_1;
+  static {
+    DESC_1 = new HTableDescriptor(TABLE_1);
+    DESC_1.addFamily(new HColumnDescriptor(COLUMN_1));
+  }
+  static final HTableDescriptor DESC_2;
+  static {
+    DESC_2 = new HTableDescriptor(TABLE_2);
+    DESC_2.addFamily(new HColumnDescriptor(COLUMN_1));
+  }
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static HBaseAdmin localAdmin;
+  private static RemoteAdmin remoteAdmin;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    localAdmin = TEST_UTIL.getHBaseAdmin();
+    remoteAdmin = new RemoteAdmin(new Client(
+      new Cluster().add("localhost", REST_TEST_UTIL.getServletPort())),
+      TEST_UTIL.getConfiguration());
+    if (localAdmin.tableExists(TABLE_1)) {
+      localAdmin.disableTable(TABLE_1);
+      localAdmin.deleteTable(TABLE_1);
+    }
+    if (!localAdmin.tableExists(TABLE_2)) {
+      localAdmin.createTable(DESC_2);
+    }
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testCreateTable() throws Exception {
+    assertFalse(remoteAdmin.isTableAvailable(TABLE_1));
+    remoteAdmin.createTable(DESC_1);
+    assertTrue(remoteAdmin.isTableAvailable(TABLE_1));
+  }
+
+  @Test
+  public void testDeleteTable() throws Exception {
+    assertTrue(remoteAdmin.isTableAvailable(TABLE_2));
+    remoteAdmin.deleteTable(TABLE_2);
+    assertFalse(remoteAdmin.isTableAvailable(TABLE_2));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
new file mode 100644
index 0000000..4c6cf99
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
@@ -0,0 +1,347 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.client;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.rest.HBaseRESTTestingUtility;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.RemoteHTable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import static org.junit.Assert.*;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
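+/**
+ * Tests RemoteHTable, the HTable-like client backed by the REST gateway:
+ * covers getTableDescriptor, single and batched Puts, Gets restricted by
+ * column, family, timestamp, time range and max versions, Deletes, and
+ * scanning.
+ */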
+public class TestRemoteTable {
+  private static final Log LOG = LogFactory.getLog(TestRemoteTable.class);
+  private static final String TABLE = "TestRemoteTable";
+  private static final byte[] ROW_1 = Bytes.toBytes("testrow1");
+  private static final byte[] ROW_2 = Bytes.toBytes("testrow2");
+  private static final byte[] ROW_3 = Bytes.toBytes("testrow3");
+  private static final byte[] ROW_4 = Bytes.toBytes("testrow4");
+  private static final byte[] COLUMN_1 = Bytes.toBytes("a");
+  private static final byte[] COLUMN_2 = Bytes.toBytes("b");
+  private static final byte[] COLUMN_3 = Bytes.toBytes("c");
+  private static final byte[] QUALIFIER_1 = Bytes.toBytes("1");
+  private static final byte[] QUALIFIER_2 = Bytes.toBytes("2");
+  private static final byte[] VALUE_1 = Bytes.toBytes("testvalue1");
+  private static final byte[] VALUE_2 = Bytes.toBytes("testvalue2");
+
+  private static final long ONE_HOUR = 60 * 60 * 1000;
+  private static final long TS_2 = System.currentTimeMillis();
+  private static final long TS_1 = TS_2 - ONE_HOUR;
+
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
+    new HBaseRESTTestingUtility();
+  private static RemoteHTable remoteTable;
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
+    HBaseAdmin admin = TEST_UTIL.getHBaseAdmin();
+    LOG.info("Admin Connection=" + admin.getConnection() + ", " + 
+      admin.getConnection().getZooKeeperWatcher());
+    if (!admin.tableExists(TABLE)) {
+      HTableDescriptor htd = new HTableDescriptor(TABLE);
+      htd.addFamily(new HColumnDescriptor(COLUMN_1));
+      htd.addFamily(new HColumnDescriptor(COLUMN_2));
+      htd.addFamily(new HColumnDescriptor(COLUMN_3));
+      admin.createTable(htd);
+      HTable table = new HTable(TEST_UTIL.getConfiguration(), TABLE);
+      LOG.info("Table connection=" + table.getConnection() + ", " +
+        admin.getConnection().getZooKeeperWatcher());
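+      // seed two rows: ROW_1 has a single version of COLUMN_1 at TS_2;
+      // ROW_2 has two versions of COLUMN_1 (TS_1, TS_2) plus COLUMN_2 at TS_2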
+      Put put = new Put(ROW_1);
+      put.add(COLUMN_1, QUALIFIER_1, TS_2, VALUE_1);
+      table.put(put);
+      put = new Put(ROW_2);
+      put.add(COLUMN_1, QUALIFIER_1, TS_1, VALUE_1);
+      put.add(COLUMN_1, QUALIFIER_1, TS_2, VALUE_2);
+      put.add(COLUMN_2, QUALIFIER_2, TS_2, VALUE_2);
+      table.put(put);
+      table.flushCommits();
+    }
+    remoteTable = new RemoteHTable(
+      new Client(new Cluster().add("localhost", 
+          REST_TEST_UTIL.getServletPort())),
+        TEST_UTIL.getConfiguration(), TABLE, null);
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    remoteTable.close();
+    REST_TEST_UTIL.shutdownServletContainer();
+    TEST_UTIL.shutdownMiniCluster();
+  }
+
+  @Test
+  public void testGetTableDescriptor() throws IOException {
+    HTableDescriptor local = new HTable(TEST_UTIL.getConfiguration(),
+      TABLE).getTableDescriptor();
+    assertEquals(remoteTable.getTableDescriptor(), local);
+  }
+
+  @Test
+  public void testGet() throws IOException {
+    Get get = new Get(ROW_1);
+    Result result = remoteTable.get(get);
+    byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_1, value1));
+    assertNull(value2);
+
+    get = new Get(ROW_1);
+    get.addFamily(COLUMN_3);
+    result = remoteTable.get(get);
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNull(value1);
+    assertNull(value2);
+
+    get = new Get(ROW_1);
+    get.addColumn(COLUMN_1, QUALIFIER_1);
+    get.addColumn(COLUMN_2, QUALIFIER_2);
+    result = remoteTable.get(get);
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_1, value1));
+    assertNull(value2);
+
+    get = new Get(ROW_2);
+    result = remoteTable.get(get);    
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_2, value1)); // @TS_2
+    assertNotNull(value2);
+    assertTrue(Bytes.equals(VALUE_2, value2));
+
+    get = new Get(ROW_2);
+    get.addFamily(COLUMN_1);
+    result = remoteTable.get(get);    
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_2, value1)); // @TS_2
+    assertNull(value2);
+
+    get = new Get(ROW_2);
+    get.addColumn(COLUMN_1, QUALIFIER_1);
+    get.addColumn(COLUMN_2, QUALIFIER_2);
+    result = remoteTable.get(get);    
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_2, value1)); // @TS_2
+    assertNotNull(value2);
+    assertTrue(Bytes.equals(VALUE_2, value2));
+
+    // test timestamp
+
+    get = new Get(ROW_2);
+    get.addFamily(COLUMN_1);
+    get.addFamily(COLUMN_2);
+    get.setTimeStamp(TS_1);
+    result = remoteTable.get(get);    
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_1, value1)); // @TS_1
+    assertNull(value2);
+
+    // test timerange
+
+    get = new Get(ROW_2);
+    get.addFamily(COLUMN_1);
+    get.addFamily(COLUMN_2);
+    get.setTimeRange(0, TS_1 + 1);
+    result = remoteTable.get(get);    
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_1, value1)); // @TS_1
+    assertNull(value2);
+
+    // test maxVersions
+
+    get = new Get(ROW_2);
+    get.addFamily(COLUMN_1);
+    get.setMaxVersions(2);
+    result = remoteTable.get(get);
+    int count = 0;
+    for (KeyValue kv: result.list()) {
+      if (Bytes.equals(COLUMN_1, kv.getFamily()) && TS_1 == kv.getTimestamp()) {
+        assertTrue(Bytes.equals(VALUE_1, kv.getValue())); // @TS_1
+        count++;
+      }
+      if (Bytes.equals(COLUMN_1, kv.getFamily()) && TS_2 == kv.getTimestamp()) {
+        assertTrue(Bytes.equals(VALUE_2, kv.getValue())); // @TS_2
+        count++;
+      }
+    }
+    assertEquals(2, count);
+  }
+
+  @Test
+  public void testPut() throws IOException {
+    Put put = new Put(ROW_3);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    remoteTable.put(put);
+
+    Get get = new Get(ROW_3);
+    get.addFamily(COLUMN_1);
+    Result result = remoteTable.get(get);
+    byte[] value = result.getValue(COLUMN_1, QUALIFIER_1);
+    assertNotNull(value);
+    assertTrue(Bytes.equals(VALUE_1, value));
+
+    // multiput
+
+    List<Put> puts = new ArrayList<Put>();
+    put = new Put(ROW_3);
+    put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+    puts.add(put);
+    put = new Put(ROW_4);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    puts.add(put);
+    put = new Put(ROW_4);
+    put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+    puts.add(put);
+    remoteTable.put(puts);
+
+    get = new Get(ROW_3);
+    get.addFamily(COLUMN_2);
+    result = remoteTable.get(get);
+    value = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value);
+    assertTrue(Bytes.equals(VALUE_2, value));
+    get = new Get(ROW_4);
+    result = remoteTable.get(get);
+    value = result.getValue(COLUMN_1, QUALIFIER_1);
+    assertNotNull(value);
+    assertTrue(Bytes.equals(VALUE_1, value));
+    value = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value);
+    assertTrue(Bytes.equals(VALUE_2, value));
+  }
+
+  @Test
+  public void testDelete() throws IOException {
+    Put put = new Put(ROW_3);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    put.add(COLUMN_2, QUALIFIER_2, VALUE_2);
+    remoteTable.put(put);
+
+    Get get = new Get(ROW_3);
+    get.addFamily(COLUMN_1);
+    get.addFamily(COLUMN_2);
+    Result result = remoteTable.get(get);
+    byte[] value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    byte[] value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_1, value1));
+    assertNotNull(value2);
+    assertTrue(Bytes.equals(VALUE_2, value2));
+
+    Delete delete = new Delete(ROW_3);
+    delete.deleteColumn(COLUMN_2, QUALIFIER_2);
+    remoteTable.delete(delete);
+    
+    get = new Get(ROW_3);
+    get.addFamily(COLUMN_1);
+    get.addFamily(COLUMN_2);
+    result = remoteTable.get(get);
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNotNull(value1);
+    assertTrue(Bytes.equals(VALUE_1, value1));
+    assertNull(value2);
+
+    delete = new Delete(ROW_3);
+    remoteTable.delete(delete);
+
+    get = new Get(ROW_3);
+    get.addFamily(COLUMN_1);
+    get.addFamily(COLUMN_2);
+    result = remoteTable.get(get);
+    value1 = result.getValue(COLUMN_1, QUALIFIER_1);
+    value2 = result.getValue(COLUMN_2, QUALIFIER_2);
+    assertNull(value1);
+    assertNull(value2);
+  }
+
+  @Test
+  public void testScanner() throws IOException {
+    List<Put> puts = new ArrayList<Put>();
+    Put put = new Put(ROW_1);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    puts.add(put);
+    put = new Put(ROW_2);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    puts.add(put);
+    put = new Put(ROW_3);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    puts.add(put);
+    put = new Put(ROW_4);
+    put.add(COLUMN_1, QUALIFIER_1, VALUE_1);
+    puts.add(put);
+    remoteTable.put(puts);
+
+    ResultScanner scanner = remoteTable.getScanner(new Scan());
+
+    Result[] results = scanner.next(1);
+    assertNotNull(results);
+    assertEquals(1, results.length);
+    assertTrue(Bytes.equals(ROW_1, results[0].getRow()));
+
+    results = scanner.next(3);
+    assertNotNull(results);
+    assertEquals(3, results.length);
+    assertTrue(Bytes.equals(ROW_2, results[0].getRow()));
+    assertTrue(Bytes.equals(ROW_3, results[1].getRow()));
+    assertTrue(Bytes.equals(ROW_4, results[2].getRow()));
+
+    results = scanner.next(1);
+    assertNull(results);
+
+    scanner.close();
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java
new file mode 100644
index 0000000..d4f1cac
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java
@@ -0,0 +1,104 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
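+/**
+ * Round-trip tests for CellModel: builds a model programmatically and checks
+ * that the same column, value and timestamp are recovered from canned XML and
+ * protobuf (base64 encoded) representations.
+ */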
+public class TestCellModel extends TestCase {
+
+  private static final long TIMESTAMP = 1245219839331L;
+  private static final byte[] COLUMN = Bytes.toBytes("testcolumn");
+  private static final byte[] VALUE = Bytes.toBytes("testvalue");
+
+  private static final String AS_XML =
+    "<Cell timestamp=\"1245219839331\"" +
+      " column=\"dGVzdGNvbHVtbg==\">" +
+      "dGVzdHZhbHVl</Cell>";
+
+  private static final String AS_PB = 
+    "Egp0ZXN0Y29sdW1uGOO6i+eeJCIJdGVzdHZhbHVl";
+
+  private JAXBContext context;
+
+  public TestCellModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(CellModel.class);
+  }
+
+  private CellModel buildTestModel() {
+    CellModel model = new CellModel();
+    model.setColumn(COLUMN);
+    model.setTimestamp(TIMESTAMP);
+    model.setValue(VALUE);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(CellModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private CellModel fromXML(String xml) throws JAXBException {
+    return (CellModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(CellModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private CellModel fromPB(String pb) throws IOException {
+    return (CellModel)
+      new CellModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(CellModel model) {
+    assertTrue(Bytes.equals(model.getColumn(), COLUMN));
+    assertTrue(Bytes.equals(model.getValue(), VALUE));
+    assertTrue(model.hasUserTimestamp());
+    assertEquals(model.getTimestamp(), TIMESTAMP);
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
new file mode 100644
index 0000000..0f334a0
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
@@ -0,0 +1,154 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.Iterator;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
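+/**
+ * Round-trip tests for CellSetModel (two rows, three cells): builds the model
+ * programmatically and checks that the same content is recovered from canned
+ * XML and protobuf (base64 encoded) representations.
+ */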
+public class TestCellSetModel extends TestCase {
+
+  private static final byte[] ROW1 = Bytes.toBytes("testrow1");
+  private static final byte[] COLUMN1 = Bytes.toBytes("testcolumn1");
+  private static final byte[] VALUE1 = Bytes.toBytes("testvalue1");
+  private static final long TIMESTAMP1 = 1245219839331L;
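+  // note: ROW2 carries the same "testrow1" key as ROW1; the canned XML and
+  // protobuf representations below use that key for both rows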
+  private static final byte[] ROW2 = Bytes.toBytes("testrow1");
+  private static final byte[] COLUMN2 = Bytes.toBytes("testcolumn2");
+  private static final byte[] VALUE2 = Bytes.toBytes("testvalue2");
+  private static final long TIMESTAMP2 = 1245239813319L;
+  private static final byte[] COLUMN3 = Bytes.toBytes("testcolumn3");
+  private static final byte[] VALUE3 = Bytes.toBytes("testvalue3");
+  private static final long TIMESTAMP3 = 1245393318192L;
+
+  private static final String AS_XML =
+    "<CellSet>" + 
+      "<Row key=\"dGVzdHJvdzE=\">" + 
+        "<Cell timestamp=\"1245219839331\" column=\"dGVzdGNvbHVtbjE=\">" + 
+          "dGVzdHZhbHVlMQ==</Cell>" + 
+        "</Row>" + 
+      "<Row key=\"dGVzdHJvdzE=\">" + 
+        "<Cell timestamp=\"1245239813319\" column=\"dGVzdGNvbHVtbjI=\">" +
+          "dGVzdHZhbHVlMg==</Cell>" + 
+        "<Cell timestamp=\"1245393318192\" column=\"dGVzdGNvbHVtbjM=\">" + 
+          "dGVzdHZhbHVlMw==</Cell>" + 
+        "</Row>" +
+      "</CellSet>";
+
+  private static final String AS_PB = 
+    "CiwKCHRlc3Ryb3cxEiASC3Rlc3Rjb2x1bW4xGOO6i+eeJCIKdGVzdHZhbHVlMQpOCgh0ZXN0cm93" +
+    "MRIgEgt0ZXN0Y29sdW1uMhjHyc7wniQiCnRlc3R2YWx1ZTISIBILdGVzdGNvbHVtbjMYsOLnuZ8k" +
+    "Igp0ZXN0dmFsdWUz";
+
+  private JAXBContext context;
+
+  public TestCellSetModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(
+        CellModel.class,
+        CellSetModel.class,
+        RowModel.class);
+  }
+
+  private CellSetModel buildTestModel() {
+    CellSetModel model = new CellSetModel();
+    RowModel row;
+    row = new RowModel();
+    row.setKey(ROW1);
+    row.addCell(new CellModel(COLUMN1, TIMESTAMP1, VALUE1));
+    model.addRow(row);
+    row = new RowModel();
+    row.setKey(ROW2);
+    row.addCell(new CellModel(COLUMN2, TIMESTAMP2, VALUE2));
+    row.addCell(new CellModel(COLUMN3, TIMESTAMP3, VALUE3));
+    model.addRow(row);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(CellSetModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private CellSetModel fromXML(String xml) throws JAXBException {
+    return (CellSetModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(CellSetModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private CellSetModel fromPB(String pb) throws IOException {
+    return (CellSetModel)
+      new CellSetModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(CellSetModel model) {
+    Iterator<RowModel> rows = model.getRows().iterator();
+    RowModel row = rows.next();
+    assertTrue(Bytes.equals(ROW1, row.getKey()));
+    Iterator<CellModel> cells = row.getCells().iterator();
+    CellModel cell = cells.next();
+    assertTrue(Bytes.equals(COLUMN1, cell.getColumn()));
+    assertTrue(Bytes.equals(VALUE1, cell.getValue()));
+    assertTrue(cell.hasUserTimestamp());
+    assertEquals(cell.getTimestamp(), TIMESTAMP1);
+    assertFalse(cells.hasNext());
+    row = rows.next();
+    assertTrue(Bytes.equals(ROW2, row.getKey()));
+    cells = row.getCells().iterator();
+    cell = cells.next();
+    assertTrue(Bytes.equals(COLUMN2, cell.getColumn()));
+    assertTrue(Bytes.equals(VALUE2, cell.getValue()));
+    assertTrue(cell.hasUserTimestamp());
+    assertEquals(cell.getTimestamp(), TIMESTAMP2);
+    cell = cells.next();
+    assertTrue(Bytes.equals(COLUMN3, cell.getColumn()));
+    assertTrue(Bytes.equals(VALUE3, cell.getValue()));
+    assertTrue(cell.hasUserTimestamp());
+    assertEquals(cell.getTimestamp(), TIMESTAMP3);
+    assertFalse(cells.hasNext());
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java
new file mode 100644
index 0000000..dbbb99b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java
@@ -0,0 +1,102 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.StringReader;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import junit.framework.TestCase;
+
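+/**
+ * Tests ColumnSchemaModel: builds a model with typical column family attributes
+ * (BLOCKSIZE, BLOOMFILTER, COMPRESSION, TTL, VERSIONS, ...) and checks that the
+ * same attributes are recovered when unmarshalling the canned XML.
+ */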
+public class TestColumnSchemaModel extends TestCase {
+
+  protected static final String COLUMN_NAME = "testcolumn";
+  protected static final boolean BLOCKCACHE = true;
+  protected static final int BLOCKSIZE = 16384;
+  protected static final String BLOOMFILTER = "NONE";
+  protected static final String COMPRESSION = "GZ";
+  protected static final boolean IN_MEMORY = false;
+  protected static final int TTL = 86400;
+  protected static final int VERSIONS = 1;
+
+  protected static final String AS_XML =
+    "<ColumnSchema name=\"testcolumn\"" +
+      " BLOCKSIZE=\"16384\"" +
+      " BLOOMFILTER=\"NONE\"" +
+      " BLOCKCACHE=\"true\"" +
+      " COMPRESSION=\"GZ\"" +
+      " VERSIONS=\"1\"" +
+      " TTL=\"86400\"" +
+      " IN_MEMORY=\"false\"/>";
+
+  private JAXBContext context;
+
+  public TestColumnSchemaModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(ColumnSchemaModel.class);
+  }
+
+  protected static ColumnSchemaModel buildTestModel() {
+    ColumnSchemaModel model = new ColumnSchemaModel();
+    model.setName(COLUMN_NAME);
+    model.__setBlockcache(BLOCKCACHE);
+    model.__setBlocksize(BLOCKSIZE);
+    model.__setBloomfilter(BLOOMFILTER);
+    model.__setCompression(COMPRESSION);
+    model.__setInMemory(IN_MEMORY);
+    model.__setTTL(TTL);
+    model.__setVersions(VERSIONS);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(ColumnSchemaModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private ColumnSchemaModel fromXML(String xml) throws JAXBException {
+    return (ColumnSchemaModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  protected static void checkModel(ColumnSchemaModel model) {
+    assertEquals(model.getName(), COLUMN_NAME);
+    assertEquals(model.__getBlockcache(), BLOCKCACHE);
+    assertEquals(model.__getBlocksize(), BLOCKSIZE);
+    assertEquals(model.__getBloomfilter(), BLOOMFILTER);
+    assertTrue(model.__getCompression().equalsIgnoreCase(COMPRESSION));
+    assertEquals(model.__getInMemory(), IN_MEMORY);
+    assertEquals(model.__getTTL(), TTL);
+    assertEquals(model.__getVersions(), VERSIONS);
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java
new file mode 100644
index 0000000..e7fd1f6
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java
@@ -0,0 +1,93 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.Iterator;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+public class TestRowModel extends TestCase {
+
+  private static final byte[] ROW1 = Bytes.toBytes("testrow1");
+  private static final byte[] COLUMN1 = Bytes.toBytes("testcolumn1");
+  private static final byte[] VALUE1 = Bytes.toBytes("testvalue1");
+  private static final long TIMESTAMP1 = 1245219839331L;
+
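+  // XML representation of the same row; the key, column and value are
+  // base64-encoded (e.g. "dGVzdHJvdzE=" decodes to "testrow1").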
+  private static final String AS_XML =
+    "<Row key=\"dGVzdHJvdzE=\">" + 
+      "<Cell timestamp=\"1245219839331\" column=\"dGVzdGNvbHVtbjE=\">" + 
+        "dGVzdHZhbHVlMQ==</Cell>" + 
+      "</Row>";
+
+  private JAXBContext context;
+
+  public TestRowModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(
+        CellModel.class,
+        RowModel.class);
+  }
+
+  private RowModel buildTestModel() {
+    RowModel model = new RowModel();
+    model.setKey(ROW1);
+    model.addCell(new CellModel(COLUMN1, TIMESTAMP1, VALUE1));
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(RowModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private RowModel fromXML(String xml) throws JAXBException {
+    return (RowModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  private void checkModel(RowModel model) {
+    assertTrue(Bytes.equals(ROW1, model.getKey()));
+    Iterator<CellModel> cells = model.getCells().iterator();
+    CellModel cell = cells.next();
+    assertTrue(Bytes.equals(COLUMN1, cell.getColumn()));
+    assertTrue(Bytes.equals(VALUE1, cell.getValue()));
+    assertTrue(cell.hasUserTimestamp());
+    assertEquals(cell.getTimestamp(), TIMESTAMP1);
+    assertFalse(cells.hasNext());
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
new file mode 100644
index 0000000..f2b0d3c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
@@ -0,0 +1,128 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+public class TestScannerModel extends TestCase {
+  private static final byte[] START_ROW = Bytes.toBytes("abracadabra");
+  private static final byte[] END_ROW = Bytes.toBytes("zzyzx");
+  private static final byte[] COLUMN1 = Bytes.toBytes("column1");
+  private static final byte[] COLUMN2 = Bytes.toBytes("column2:foo");
+  private static final long START_TIME = 1245219839331L;
+  private static final long END_TIME = 1245393318192L;
+  private static final int BATCH = 100;
+
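+  // XML representation of the same scanner; rows and columns are base64-encoded
+  // ("YWJyYWNhZGFicmE=" decodes to "abracadabra", "enp5eng=" to "zzyzx").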
+  private static final String AS_XML =
+    "<Scanner startTime=\"1245219839331\"" +
+      " startRow=\"YWJyYWNhZGFicmE=\"" + 
+      " endTime=\"1245393318192\"" +
+      " endRow=\"enp5eng=\"" +
+      " batch=\"100\">" +
+        "<column>Y29sdW1uMQ==</column>" +
+        "<column>Y29sdW1uMjpmb28=</column>" +
+      "</Scanner>";
+
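+  // Base64-encoded protobuf serialization of the same scanner specification,
+  // consumed by testFromPB().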
+  private static final String AS_PB = 
+    "CgthYnJhY2FkYWJyYRIFenp5engaB2NvbHVtbjEaC2NvbHVtbjI6Zm9vIGQo47qL554kMLDi57mf" +
+    "JA==";
+
+  private JAXBContext context;
+
+  public TestScannerModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(ScannerModel.class);
+  }
+
+  private ScannerModel buildTestModel() {
+    ScannerModel model = new ScannerModel();
+    model.setStartRow(START_ROW);
+    model.setEndRow(END_ROW);
+    model.addColumn(COLUMN1);
+    model.addColumn(COLUMN2);
+    model.setStartTime(START_TIME);
+    model.setEndTime(END_TIME);
+    model.setBatch(BATCH);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(ScannerModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private ScannerModel fromXML(String xml) throws JAXBException {
+    return (ScannerModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(ScannerModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private ScannerModel fromPB(String pb) throws IOException {
+    return (ScannerModel)
+      new ScannerModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(ScannerModel model) {
+    assertTrue(Bytes.equals(model.getStartRow(), START_ROW));
+    assertTrue(Bytes.equals(model.getEndRow(), END_ROW));
+    boolean foundCol1 = false, foundCol2 = false;
+    for (byte[] column: model.getColumns()) {
+      if (Bytes.equals(column, COLUMN1)) {
+        foundCol1 = true;
+      } else if (Bytes.equals(column, COLUMN2)) {
+        foundCol2 = true;
+      }
+    }
+    assertTrue(foundCol1);
+    assertTrue(foundCol2);
+    assertEquals(model.getStartTime(), START_TIME);
+    assertEquals(model.getEndTime(), END_TIME);
+    assertEquals(model.getBatch(), BATCH);
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java
new file mode 100644
index 0000000..40be2c4
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java
@@ -0,0 +1,148 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.Iterator;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+public class TestStorageClusterStatusModel extends TestCase {
+
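+  // XML representation of the cluster status checked below; region names are
+  // base64-encoded ("LVJPT1QtLCww" decodes to "-ROOT-,,0").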
+  private static final String AS_XML =
+    "<ClusterStatus requests=\"0\" regions=\"2\" averageLoad=\"1.0\">" +
+    "<DeadNodes/>" + 
+    "<LiveNodes><Node startCode=\"1245219839331\" requests=\"0\"" + 
+      " name=\"test1\" maxHeapSizeMB=\"1024\" heapSizeMB=\"128\">" + 
+        "<Region stores=\"1\" storefiles=\"1\" storefileSizeMB=\"0\"" + 
+        " storefileIndexSizeMB=\"0\" name=\"LVJPT1QtLCww\"" + 
+        " memstoreSizeMB=\"0\"/></Node>" + 
+      "<Node startCode=\"1245239331198\" requests=\"0\" name=\"test2\"" + 
+        " maxHeapSizeMB=\"1024\" heapSizeMB=\"512\">" + 
+        "<Region stores=\"1\" storefiles=\"1\" storefileSizeMB=\"0\"" +
+        " storefileIndexSizeMB=\"0\" name=\"Lk1FVEEuLCwxMjQ2MDAwMDQzNzI0\"" +
+        " memstoreSizeMB=\"0\"/></Node>"+
+    "</LiveNodes></ClusterStatus>";
+
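+  // Base64-encoded protobuf serialization of the same cluster status,
+  // consumed by testFromPB().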
+  private static final String AS_PB = 
+"Ci0KBXRlc3QxEOO6i+eeJBgAIIABKIAIMhUKCS1ST09ULSwsMBABGAEgACgAMAAKOQoFdGVzdDIQ"+
+"/pKx8J4kGAAggAQogAgyIQoVLk1FVEEuLCwxMjQ2MDAwMDQzNzI0EAEYASAAKAAwABgCIAApAAAA"+
+"AAAA8D8=";
+
+  private JAXBContext context;
+
+  public TestStorageClusterStatusModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(StorageClusterStatusModel.class);
+  }
+
+  private StorageClusterStatusModel buildTestModel() {
+    StorageClusterStatusModel model = new StorageClusterStatusModel();
+    model.setRegions(2);
+    model.setRequests(0);
+    model.setAverageLoad(1.0);
+    model.addLiveNode("test1", 1245219839331L, 128, 1024)
+      .addRegion(Bytes.toBytes("-ROOT-,,0"), 1, 1, 0, 0, 0);
+    model.addLiveNode("test2", 1245239331198L, 512, 1024)
+      .addRegion(Bytes.toBytes(".META.,,1246000043724"),1, 1, 0, 0, 0);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(StorageClusterStatusModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private StorageClusterStatusModel fromXML(String xml) throws JAXBException {
+    return (StorageClusterStatusModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(StorageClusterStatusModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private StorageClusterStatusModel fromPB(String pb) throws IOException {
+    return (StorageClusterStatusModel)
+      new StorageClusterStatusModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(StorageClusterStatusModel model) {
+    assertEquals(model.getRegions(), 2);
+    assertEquals(model.getRequests(), 0);
+    assertEquals(model.getAverageLoad(), 1.0);
+    Iterator<StorageClusterStatusModel.Node> nodes =
+      model.getLiveNodes().iterator();
+    StorageClusterStatusModel.Node node = nodes.next();
+    assertEquals(node.getName(), "test1");
+    assertEquals(node.getStartCode(), 1245219839331L);
+    assertEquals(node.getHeapSizeMB(), 128);
+    assertEquals(node.getMaxHeapSizeMB(), 1024);
+    Iterator<StorageClusterStatusModel.Node.Region> regions = 
+      node.getRegions().iterator();
+    StorageClusterStatusModel.Node.Region region = regions.next();
+    assertTrue(Bytes.toString(region.getName()).equals("-ROOT-,,0"));
+    assertEquals(region.getStores(), 1);
+    assertEquals(region.getStorefiles(), 1);
+    assertEquals(region.getStorefileSizeMB(), 0);
+    assertEquals(region.getMemstoreSizeMB(), 0);
+    assertEquals(region.getStorefileIndexSizeMB(), 0);
+    assertFalse(regions.hasNext());
+    node = nodes.next();
+    assertEquals(node.getName(), "test2");
+    assertEquals(node.getStartCode(), 1245239331198L);
+    assertEquals(node.getHeapSizeMB(), 512);
+    assertEquals(node.getMaxHeapSizeMB(), 1024);
+    regions = node.getRegions().iterator();
+    region = regions.next();
+    assertEquals(Bytes.toString(region.getName()), ".META.,,1246000043724");
+    assertEquals(region.getStores(), 1);
+    assertEquals(region.getStorefiles(), 1);
+    assertEquals(region.getStorefileSizeMB(), 0);
+    assertEquals(region.getMemstoreSizeMB(), 0);
+    assertEquals(region.getStorefileIndexSizeMB(), 0);
+    assertFalse(regions.hasNext());
+    assertFalse(nodes.hasNext());
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java
new file mode 100644
index 0000000..e0d1b0f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java
@@ -0,0 +1,73 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.StringReader;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import junit.framework.TestCase;
+
+public class TestStorageClusterVersionModel extends TestCase {
+  private static final String VERSION = "0.0.1-testing";
+
+  private static final String AS_XML =
+    "<ClusterVersion>" + VERSION + "</ClusterVersion>";
+
+  private JAXBContext context;
+
+  public TestStorageClusterVersionModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(StorageClusterVersionModel.class);
+  }
+
+  private StorageClusterVersionModel buildTestModel() {
+    StorageClusterVersionModel model = new StorageClusterVersionModel();
+    model.setVersion(VERSION);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(StorageClusterVersionModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private StorageClusterVersionModel fromXML(String xml) throws JAXBException {
+    return (StorageClusterVersionModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  private void checkModel(StorageClusterVersionModel model) {
+    assertEquals(model.getVersion(), VERSION);
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java
new file mode 100644
index 0000000..0998231
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java
@@ -0,0 +1,116 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.Iterator;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+public class TestTableInfoModel extends TestCase {
+  private static final String TABLE = "testtable";
+  private static final byte[] START_KEY = Bytes.toBytes("abracadbra");
+  private static final byte[] END_KEY = Bytes.toBytes("zzyzx");
+  private static final long ID = 8731042424L;
+  private static final String LOCATION = "testhost:9876";
+  
+  private static final String AS_XML =
+    "<TableInfo name=\"testtable\">" +
+      "<Region location=\"testhost:9876\"" +
+        " endKey=\"enp5eng=\"" +
+        " startKey=\"YWJyYWNhZGJyYQ==\"" +
+        " id=\"8731042424\"" +
+        " name=\"testtable,abracadbra,8731042424\"/>" +
+    "</TableInfo>";
+
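+  // Base64-encoded protobuf serialization of the same table info,
+  // consumed by testFromPB().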
+  private static final String AS_PB = 
+    "Cgl0ZXN0dGFibGUSSQofdGVzdHRhYmxlLGFicmFjYWRicmEsODczMTA0MjQyNBIKYWJyYWNhZGJy" +
+    "YRoFenp5engg+MSkwyAqDXRlc3Rob3N0Ojk4NzY=";
+
+  private JAXBContext context;
+
+  public TestTableInfoModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(
+        TableInfoModel.class,
+        TableRegionModel.class);
+  }
+
+  private TableInfoModel buildTestModel() {
+    TableInfoModel model = new TableInfoModel();
+    model.setName(TABLE);
+    model.add(new TableRegionModel(TABLE, ID, START_KEY, END_KEY, LOCATION));
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(TableInfoModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private TableInfoModel fromXML(String xml) throws JAXBException {
+    return (TableInfoModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(TableInfoModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private TableInfoModel fromPB(String pb) throws IOException {
+    return (TableInfoModel)
+      new TableInfoModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(TableInfoModel model) {
+    assertEquals(model.getName(), TABLE);
+    Iterator<TableRegionModel> regions = model.getRegions().iterator();
+    TableRegionModel region = regions.next();
+    assertTrue(Bytes.equals(region.getStartKey(), START_KEY));
+    assertTrue(Bytes.equals(region.getEndKey(), END_KEY));
+    assertEquals(region.getId(), ID);
+    assertEquals(region.getLocation(), LOCATION);
+    assertFalse(regions.hasNext());
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
new file mode 100644
index 0000000..4dc449a
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.Iterator;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+
+import junit.framework.TestCase;
+
+public class TestTableListModel extends TestCase {
+  private static final String TABLE1 = "table1";
+  private static final String TABLE2 = "table2";
+  private static final String TABLE3 = "table3";
+  
+  private static final String AS_XML =
+    "<TableList><table name=\"table1\"/><table name=\"table2\"/>" +
+      "<table name=\"table3\"/></TableList>";
+
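+  // Base64-encoded protobuf serialization of the same table list
+  // (table1, table2, table3), consumed by testFromPB().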
+  private static final String AS_PB = "CgZ0YWJsZTEKBnRhYmxlMgoGdGFibGUz";
+
+  private JAXBContext context;
+
+  public TestTableListModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(
+        TableListModel.class,
+        TableModel.class);
+  }
+
+  private TableListModel buildTestModel() {
+    TableListModel model = new TableListModel();
+    model.add(new TableModel(TABLE1));
+    model.add(new TableModel(TABLE2));
+    model.add(new TableModel(TABLE3));
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(TableListModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private TableListModel fromXML(String xml) throws JAXBException {
+    return (TableListModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(TableListModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private TableListModel fromPB(String pb) throws IOException {
+    return (TableListModel)
+      new TableListModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(TableListModel model) {
+    Iterator<TableModel> tables = model.getTables().iterator();
+    TableModel table = tables.next();
+    assertEquals(table.getName(), TABLE1);
+    table = tables.next();
+    assertEquals(table.getName(), TABLE2);
+    table = tables.next();
+    assertEquals(table.getName(), TABLE3);
+    assertFalse(tables.hasNext());
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java
new file mode 100644
index 0000000..c02dfda
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java
@@ -0,0 +1,106 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.StringReader;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.util.Bytes;
+
+import junit.framework.TestCase;
+
+public class TestTableRegionModel extends TestCase {
+  private static final String TABLE = "testtable";
+  private static final byte[] START_KEY = Bytes.toBytes("abracadbra");
+  private static final byte[] END_KEY = Bytes.toBytes("zzyzx");
+  private static final long ID = 8731042424L;
+  private static final String LOCATION = "testhost:9876";
+
+  private static final String AS_XML =
+    "<Region location=\"testhost:9876\"" +
+      " endKey=\"enp5eng=\"" +
+      " startKey=\"YWJyYWNhZGJyYQ==\"" +
+      " id=\"8731042424\"" +
+      " name=\"testtable,abracadbra,8731042424\"/>";
+
+  private JAXBContext context;
+
+  public TestTableRegionModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(TableRegionModel.class);
+  }
+
+  private TableRegionModel buildTestModel() {
+    TableRegionModel model =
+      new TableRegionModel(TABLE, ID, START_KEY, END_KEY, LOCATION);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(TableRegionModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private TableRegionModel fromXML(String xml) throws JAXBException {
+    return (TableRegionModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  private void checkModel(TableRegionModel model) {
+    assertTrue(Bytes.equals(model.getStartKey(), START_KEY));
+    assertTrue(Bytes.equals(model.getEndKey(), END_KEY));
+    assertEquals(model.getId(), ID);
+    assertEquals(model.getLocation(), LOCATION);
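+    // The trailing hex suffix is the region's encoded name (in 0.90 this is
+    // believed to be an MD5-based hash of the full region name).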
+    assertEquals(model.getName(), 
+      TABLE + "," + Bytes.toString(START_KEY) + "," + Long.toString(ID) +
+      ".ad9860f031282c46ed431d7af8f94aca.");
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testGetName() {
+    TableRegionModel model = buildTestModel();
+    String modelName = model.getName();
+    HRegionInfo hri = new HRegionInfo(new HTableDescriptor(TABLE),
+      START_KEY, END_KEY, false, ID);
+    assertEquals(modelName, hri.getRegionNameAsString());
+  }
+
+  public void testSetName() {
+    TableRegionModel model = buildTestModel();
+    String name = model.getName();
+    model.setName(name);
+    assertEquals(name, model.getName());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java
new file mode 100644
index 0000000..1d0606b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java
@@ -0,0 +1,128 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+import java.util.Iterator;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+
+import junit.framework.TestCase;
+
+public class TestTableSchemaModel extends TestCase {
+
+  public static final String TABLE_NAME = "testTable";
+  private static final boolean IS_META = false;
+  private static final boolean IS_ROOT = false;
+  private static final boolean READONLY = false;
+
+  private static final String AS_XML =
+    "<TableSchema name=\"testTable\"" +
+      " IS_META=\"false\"" +
+      " IS_ROOT=\"false\"" +
+      " READONLY=\"false\">" +
+      TestColumnSchemaModel.AS_XML + 
+    "</TableSchema>";
+
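+  // Base64-encoded protobuf serialization of the same table schema,
+  // consumed by testFromPB().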
+  private static final String AS_PB = 
+    "Cgl0ZXN0VGFibGUSEAoHSVNfTUVUQRIFZmFsc2USEAoHSVNfUk9PVBIFZmFsc2USEQoIUkVBRE9O" +
+    "TFkSBWZhbHNlGpcBCgp0ZXN0Y29sdW1uEhIKCUJMT0NLU0laRRIFMTYzODQSEwoLQkxPT01GSUxU" +
+    "RVISBE5PTkUSEgoKQkxPQ0tDQUNIRRIEdHJ1ZRIRCgtDT01QUkVTU0lPThICR1oSDQoIVkVSU0lP" +
+    "TlMSATESDAoDVFRMEgU4NjQwMBISCglJTl9NRU1PUlkSBWZhbHNlGICjBSABKgJHWigA";
+
+  private JAXBContext context;
+
+  public TestTableSchemaModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(
+      ColumnSchemaModel.class,
+      TableSchemaModel.class);
+  }
+
+  public static TableSchemaModel buildTestModel() {
+    return buildTestModel(TABLE_NAME);
+  }
+
+  public static TableSchemaModel buildTestModel(String name) {
+    TableSchemaModel model = new TableSchemaModel();
+    model.setName(name);
+    model.__setIsMeta(IS_META);
+    model.__setIsRoot(IS_ROOT);
+    model.__setReadOnly(READONLY);
+    model.addColumnFamily(TestColumnSchemaModel.buildTestModel());
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(TableSchemaModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private TableSchemaModel fromXML(String xml) throws JAXBException {
+    return (TableSchemaModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(TableSchemaModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private TableSchemaModel fromPB(String pb) throws IOException {
+    return (TableSchemaModel)
+      new TableSchemaModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  public static void checkModel(TableSchemaModel model) {
+    checkModel(model, TABLE_NAME);
+  }
+
+  public static void checkModel(TableSchemaModel model, String tableName) {
+    assertEquals(model.getName(), tableName);
+    assertEquals(model.__getIsMeta(), IS_META);
+    assertEquals(model.__getIsRoot(), IS_ROOT);
+    assertEquals(model.__getReadOnly(), READONLY);
+    Iterator<ColumnSchemaModel> families = model.getColumns().iterator();
+    assertTrue(families.hasNext());
+    ColumnSchemaModel family = families.next();
+    TestColumnSchemaModel.checkModel(family);
+    assertFalse(families.hasNext());
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java
new file mode 100644
index 0000000..f5c2c84
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java
@@ -0,0 +1,112 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.rest.model;
+
+import java.io.IOException;
+import java.io.StringReader;
+import java.io.StringWriter;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.hadoop.hbase.util.Base64;
+
+import junit.framework.TestCase;
+
+public class TestVersionModel extends TestCase {
+  private static final String REST_VERSION = "0.0.1";
+  private static final String OS_VERSION = 
+    "Linux 2.6.18-128.1.6.el5.centos.plusxen amd64";
+  private static final String JVM_VERSION =
+    "Sun Microsystems Inc. 1.6.0_13-11.3-b02";
+  private static final String JETTY_VERSION = "6.1.14";
+  private static final String JERSEY_VERSION = "1.1.0-ea";
+  
+  private static final String AS_XML =
+    "<Version REST=\"" + REST_VERSION + "\"" +
+    " OS=\"" + OS_VERSION + "\"" +
+    " JVM=\"" + JVM_VERSION + "\"" +
+    " Server=\"" + JETTY_VERSION + "\"" +
+    " Jersey=\"" + JERSEY_VERSION + "\"/>";
+
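+  // Base64-encoded protobuf serialization of the same version information,
+  // consumed by testFromPB().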
+  private static final String AS_PB = 
+    "CgUwLjAuMRInU3VuIE1pY3Jvc3lzdGVtcyBJbmMuIDEuNi4wXzEzLTExLjMtYjAyGi1MaW51eCAy" +
+    "LjYuMTgtMTI4LjEuNi5lbDUuY2VudG9zLnBsdXN4ZW4gYW1kNjQiBjYuMS4xNCoIMS4xLjAtZWE=";
+
+  private JAXBContext context;
+
+  public TestVersionModel() throws JAXBException {
+    super();
+    context = JAXBContext.newInstance(VersionModel.class);
+  }
+
+  private VersionModel buildTestModel() {
+    VersionModel model = new VersionModel();
+    model.setRESTVersion(REST_VERSION);
+    model.setOSVersion(OS_VERSION);
+    model.setJVMVersion(JVM_VERSION);
+    model.setServerVersion(JETTY_VERSION);
+    model.setJerseyVersion(JERSEY_VERSION);
+    return model;
+  }
+
+  @SuppressWarnings("unused")
+  private String toXML(VersionModel model) throws JAXBException {
+    StringWriter writer = new StringWriter();
+    context.createMarshaller().marshal(model, writer);
+    return writer.toString();
+  }
+
+  private VersionModel fromXML(String xml) throws JAXBException {
+    return (VersionModel)
+      context.createUnmarshaller().unmarshal(new StringReader(xml));
+  }
+
+  @SuppressWarnings("unused")
+  private byte[] toPB(VersionModel model) {
+    return model.createProtobufOutput();
+  }
+
+  private VersionModel fromPB(String pb) throws IOException {
+    return (VersionModel)
+      new VersionModel().getObjectFromMessage(Base64.decode(pb));
+  }
+
+  private void checkModel(VersionModel model) {
+    assertEquals(model.getRESTVersion(), REST_VERSION);
+    assertEquals(model.getOSVersion(), OS_VERSION);
+    assertEquals(model.getJVMVersion(), JVM_VERSION);
+    assertEquals(model.getServerVersion(), JETTY_VERSION);
+    assertEquals(model.getJerseyVersion(), JERSEY_VERSION);
+  }
+
+  public void testBuildModel() throws Exception {
+    checkModel(buildTestModel());
+  }
+
+  public void testFromXML() throws Exception {
+    checkModel(fromXML(AS_XML));
+  }
+
+  public void testFromPB() throws Exception {
+    checkModel(fromPB(AS_PB));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/security/TestUser.java b/0.90/src/test/java/org/apache/hadoop/hbase/security/TestUser.java
new file mode 100644
index 0000000..e5f4cf9
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/security/TestUser.java
@@ -0,0 +1,81 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.security;
+
+import static org.junit.Assert.*;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.junit.Test;
+
+import java.security.PrivilegedAction;
+import java.security.PrivilegedExceptionAction;
+
+public class TestUser {
+  @Test
+  public void testBasicAttributes() throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    User user = User.createUserForTesting(conf, "simple", new String[]{"foo"});
+    assertEquals("Username should match", "simple", user.getName());
+    assertEquals("Short username should match", "simple", user.getShortName());
+    // don't test shortening of Kerberos names because regular Hadoop doesn't support them
+  }
+
+  @Test
+  public void testRunAs() throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    final User user = User.createUserForTesting(conf, "testuser", new String[]{"foo"});
+    final PrivilegedAction<String> action = new PrivilegedAction<String>(){
+      public String run() {
+        User u = User.getCurrent();
+        return u.getName();
+      }
+    };
+
+    String username = user.runAs(action);
+    assertEquals("Current user within runAs() should match",
+        "testuser", username);
+
+    // ensure the next run is correctly set
+    User user2 = User.createUserForTesting(conf, "testuser2", new String[]{"foo"});
+    String username2 = user2.runAs(action);
+    assertEquals("Second username should match second user",
+        "testuser2", username2);
+
+    // check the exception version
+    username = user.runAs(new PrivilegedExceptionAction<String>(){
+      public String run() throws Exception {
+        return User.getCurrent().getName();
+      }
+    });
+    assertEquals("User name in runAs() should match", "testuser", username);
+
+    // verify that nested contexts work
+    user2.runAs(new PrivilegedAction(){
+      public Object run() {
+        String nestedName = user.runAs(action);
+        assertEquals("Nested name should match nested user", "testuser", nestedName);
+        assertEquals("Current name should match current user",
+            "testuser2", User.getCurrent().getName());
+        return null;
+      }
+    });
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java b/0.90/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
new file mode 100644
index 0000000..edd1740
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java
@@ -0,0 +1,396 @@
+/*
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.thrift;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.thrift.generated.BatchMutation;
+import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor;
+import org.apache.hadoop.hbase.thrift.generated.Mutation;
+import org.apache.hadoop.hbase.thrift.generated.TCell;
+import org.apache.hadoop.hbase.thrift.generated.TRowResult;
+import org.apache.hadoop.hbase.util.Bytes;
+
+/**
+ * Unit testing for ThriftServer.HBaseHandler, a part of the
+ * org.apache.hadoop.hbase.thrift package.
+ */
+public class TestThriftServer extends HBaseClusterTestCase {
+
+  // Static names for tables, columns, rows, and values
+  private static byte[] tableAname = Bytes.toBytes("tableA");
+  private static byte[] tableBname = Bytes.toBytes("tableB");
+  private static byte[] columnAname = Bytes.toBytes("columnA:");
+  private static byte[] columnBname = Bytes.toBytes("columnB:");
+  private static byte[] rowAname = Bytes.toBytes("rowA");
+  private static byte[] rowBname = Bytes.toBytes("rowB");
+  private static byte[] valueAname = Bytes.toBytes("valueA");
+  private static byte[] valueBname = Bytes.toBytes("valueB");
+  private static byte[] valueCname = Bytes.toBytes("valueC");
+  private static byte[] valueDname = Bytes.toBytes("valueD");
+
+  /**
+   * Runs all of the tests under a single JUnit test method.  We
+   * consolidate all testing into one method because HBaseClusterTestCase
+   * is prone to OutOfMemoryErrors when there are three or more
+   * JUnit test methods.
+   *
+   * @throws Exception
+   */
+  public void testAll() throws Exception {
+    // Run all tests
+    doTestTableCreateDrop();
+    doTestTableMutations();
+    doTestTableTimestampsAndColumns();
+    doTestTableScanners();
+  }
+
+  /**
+   * Tests for creating, enabling, disabling, and deleting tables.  Also
+   * tests that creating a table with an invalid column name yields an
+   * IllegalArgument exception.
+   *
+   * @throws Exception
+   */
+  public void doTestTableCreateDrop() throws Exception {
+    ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler(this.conf);
+
+    // Create/enable/disable/delete tables, ensure methods act correctly
+    assertEquals(handler.getTableNames().size(), 0);
+    handler.createTable(tableAname, getColumnDescriptors());
+    assertEquals(handler.getTableNames().size(), 1);
+    assertEquals(handler.getColumnDescriptors(tableAname).size(), 2);
+    assertTrue(handler.isTableEnabled(tableAname));
+    handler.createTable(tableBname, new ArrayList<ColumnDescriptor>());
+    assertEquals(handler.getTableNames().size(), 2);
+    handler.disableTable(tableBname);
+    assertFalse(handler.isTableEnabled(tableBname));
+    handler.deleteTable(tableBname);
+    assertEquals(handler.getTableNames().size(), 1);
+    handler.disableTable(tableAname);
+    /* TODO Reenable.
+    assertFalse(handler.isTableEnabled(tableAname));
+    handler.enableTable(tableAname);
+    assertTrue(handler.isTableEnabled(tableAname));
+    handler.disableTable(tableAname);*/
+    handler.deleteTable(tableAname);
+  }
+
+  /**
+   * Tests adding a series of Mutations and BatchMutations, including a
+   * delete mutation.  Also tests data retrieval, and getting back multiple
+   * versions.
+   *
+   * @throws Exception
+   */
+  public void doTestTableMutations() throws Exception {
+    // Setup
+    ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler(this.conf);
+    handler.createTable(tableAname, getColumnDescriptors());
+
+    // Apply a few Mutations to rowA
+    //     mutations.add(new Mutation(false, columnAname, valueAname));
+    //     mutations.add(new Mutation(false, columnBname, valueBname));
+    handler.mutateRow(tableAname, rowAname, getMutations());
+
+    // Assert that the changes were made
+    assertTrue(Bytes.equals(valueAname,
+      handler.get(tableAname, rowAname, columnAname).get(0).value));
+    TRowResult rowResult1 = handler.getRow(tableAname, rowAname).get(0);
+    assertTrue(Bytes.equals(rowAname, rowResult1.row));
+    assertTrue(Bytes.equals(valueBname,
+      rowResult1.columns.get(columnBname).value));
+
+    // Apply a few BatchMutations for rowA and rowB
+    // rowAmutations.add(new Mutation(true, columnAname, null));
+    // rowAmutations.add(new Mutation(false, columnBname, valueCname));
+    // batchMutations.add(new BatchMutation(rowAname, rowAmutations));
+    // Mutations to rowB
+    // rowBmutations.add(new Mutation(false, columnAname, valueCname));
+    // rowBmutations.add(new Mutation(false, columnBname, valueDname));
+    // batchMutations.add(new BatchMutation(rowBname, rowBmutations));
+    handler.mutateRows(tableAname, getBatchMutations());
+
+    // Assert that the changes were made to rowA: columnA was deleted and
+    // columnB now holds valueC
+    List<TCell> cells = handler.get(tableAname, rowAname, columnAname);
+    assertTrue(cells.isEmpty());
+    assertTrue(Bytes.equals(valueCname, handler.get(tableAname, rowAname, columnBname).get(0).value));
+    List<TCell> versions = handler.getVer(tableAname, rowAname, columnBname, MAXVERSIONS);
+    assertTrue(Bytes.equals(valueCname, versions.get(0).value));
+    assertTrue(Bytes.equals(valueBname, versions.get(1).value));
+
+    // Assert that changes were made to rowB
+    TRowResult rowResult2 = handler.getRow(tableAname, rowBname).get(0);
+    assertTrue(Bytes.equals(rowBname, rowResult2.row));
+    assertTrue(Bytes.equals(valueCname, rowResult2.columns.get(columnAname).value));
+    assertTrue(Bytes.equals(valueDname, rowResult2.columns.get(columnBname).value));
+
+    // Apply some deletes
+    handler.deleteAll(tableAname, rowAname, columnBname);
+    handler.deleteAllRow(tableAname, rowBname);
+
+    // Assert that the deletes were applied
+    int size = handler.get(tableAname, rowAname, columnBname).size();
+    assertEquals(0, size);
+    size = handler.getRow(tableAname, rowBname).size();
+    assertEquals(0, size);
+
+    // Teardown
+    handler.disableTable(tableAname);
+    handler.deleteTable(tableAname);
+  }
+
+  /**
+   * Similar to testTableMutations(), except Mutations are applied with
+   * specific timestamps and data retrieval uses these timestamps to
+   * extract specific versions of data.
+   *
+   * @throws Exception
+   */
+  public void doTestTableTimestampsAndColumns() throws Exception {
+    // Setup
+    ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler(this.conf);
+    handler.createTable(tableAname, getColumnDescriptors());
+
+    // Apply timestamped Mutations to rowA
+    long time1 = System.currentTimeMillis();
+    handler.mutateRowTs(tableAname, rowAname, getMutations(), time1);
+
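+    // Sleep so that time1 and time2 differ even with a coarse-grained clock.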
+    Thread.sleep(1000);
+
+    // Apply timestamped BatchMutations for rowA and rowB
+    long time2 = System.currentTimeMillis();
+    handler.mutateRowsTs(tableAname, getBatchMutations(), time2);
+
+    // Apply an overlapping timestamped mutation to rowB
+    handler.mutateRowTs(tableAname, rowBname, getMutations(), time2);
+
+    // getVerTs() treats the passed timestamp as an exclusive upper bound, so
+    // bump the times to include the writes made at time1 and time2.
+    time1 += 1;
+    time2 += 2;
+
+    // Assert that the timestamp-related methods retrieve the correct data
+    assertEquals(2, handler.getVerTs(tableAname, rowAname, columnBname, time2,
+      MAXVERSIONS).size());
+    assertEquals(1, handler.getVerTs(tableAname, rowAname, columnBname, time1,
+      MAXVERSIONS).size());
+
+    TRowResult rowResult1 = handler.getRowTs(tableAname, rowAname, time1).get(0);
+    TRowResult rowResult2 = handler.getRowTs(tableAname, rowAname, time2).get(0);
+    // columnA was completely deleted
+    //assertTrue(Bytes.equals(rowResult1.columns.get(columnAname).value, valueAname));
+    assertTrue(Bytes.equals(rowResult1.columns.get(columnBname).value, valueBname));
+    assertTrue(Bytes.equals(rowResult2.columns.get(columnBname).value, valueCname));
+
+    // ColumnAname has been deleted, and will never be visible even with a getRowTs()
+    assertFalse(rowResult2.columns.containsKey(columnAname));
+
+    List<byte[]> columns = new ArrayList<byte[]>();
+    columns.add(columnBname);
+
+    rowResult1 = handler.getRowWithColumns(tableAname, rowAname, columns).get(0);
+    assertTrue(Bytes.equals(rowResult1.columns.get(columnBname).value, valueCname));
+    assertFalse(rowResult1.columns.containsKey(columnAname));
+
+    rowResult1 = handler.getRowWithColumnsTs(tableAname, rowAname, columns, time1).get(0);
+    assertTrue(Bytes.equals(rowResult1.columns.get(columnBname).value, valueBname));
+    assertFalse(rowResult1.columns.containsKey(columnAname));
+
+    // Apply some timestamped deletes
+    // Note: deleteAllTs removes every version of columnB at or before time1,
+    // and deleteAllRowTs removes all of rowB at or before time2.
+    handler.deleteAllTs(tableAname, rowAname, columnBname, time1);
+    handler.deleteAllRowTs(tableAname, rowBname, time2);
+
+    // Assert that the timestamp-related methods retrieve the correct data
+    int size = handler.getVerTs(tableAname, rowAname, columnBname, time1, MAXVERSIONS).size();
+    assertEquals(0, size);
+
+    size = handler.getVerTs(tableAname, rowAname, columnBname, time2, MAXVERSIONS).size();
+    assertEquals(1, size);
+
+    // should be available....
+    assertTrue(Bytes.equals(handler.get(tableAname, rowAname, columnBname).get(0).value, valueCname));
+
+    assertEquals(0, handler.getRow(tableAname, rowBname).size());
+
+    // Teardown
+    handler.disableTable(tableAname);
+    handler.deleteTable(tableAname);
+  }
+
+  /**
+   * Tests the four different scanner-opening methods (with and without
+   * a stoprow, with and without a timestamp).
+   *
+   * @throws Exception
+   */
+  public void doTestTableScanners() throws Exception {
+    // Setup
+    ThriftServer.HBaseHandler handler = new ThriftServer.HBaseHandler(this.conf);
+    handler.createTable(tableAname, getColumnDescriptors());
+
+    // Apply timestamped Mutations to rowA
+    long time1 = System.currentTimeMillis();
+    handler.mutateRowTs(tableAname, rowAname, getMutations(), time1);
+
+    // Sleep to ensure that 'time1' and 'time2' will be different even with a
+    // coarse-grained system timer.
+    Thread.sleep(1000);
+
+    // Apply timestamped BatchMutations for rowA and rowB
+    long time2 = System.currentTimeMillis();
+    handler.mutateRowsTs(tableAname, getBatchMutations(), time2);
+
+    time1 += 1;
+
+    // Test a scanner on all rows and all columns, no timestamp
+    int scanner1 = handler.scannerOpen(tableAname, rowAname, getColumnList(true, true));
+    TRowResult rowResult1a = handler.scannerGet(scanner1).get(0);
+    assertTrue(Bytes.equals(rowResult1a.row, rowAname));
+    // Only one column comes back here even though we ask for two columns and
+    // the mutations above would seem to add two columns to the row; unclear why.
+    // -- St.Ack 05/12/2009
+    assertEquals(rowResult1a.columns.size(), 1);
+    assertTrue(Bytes.equals(rowResult1a.columns.get(columnBname).value, valueCname));
+
+    TRowResult rowResult1b = handler.scannerGet(scanner1).get(0);
+    assertTrue(Bytes.equals(rowResult1b.row, rowBname));
+    assertEquals(rowResult1b.columns.size(), 2);
+    assertTrue(Bytes.equals(rowResult1b.columns.get(columnAname).value, valueCname));
+    assertTrue(Bytes.equals(rowResult1b.columns.get(columnBname).value, valueDname));
+    closeScanner(scanner1, handler);
+
+    // Test a scanner on all rows and all columns, with timestamp
+    int scanner2 = handler.scannerOpenTs(tableAname, rowAname, getColumnList(true, true), time1);
+    TRowResult rowResult2a = handler.scannerGet(scanner2).get(0);
+    assertEquals(rowResult2a.columns.size(), 1);
+    // column A deleted, does not exist.
+    //assertTrue(Bytes.equals(rowResult2a.columns.get(columnAname).value, valueAname));
+    assertTrue(Bytes.equals(rowResult2a.columns.get(columnBname).value, valueBname));
+    closeScanner(scanner2, handler);
+
+    // Test a scanner on the first row and first column only, no timestamp
+    int scanner3 = handler.scannerOpenWithStop(tableAname, rowAname, rowBname,
+        getColumnList(true, false));
+    closeScanner(scanner3, handler);
+
+    // Test a scanner on the first row and second column only, with timestamp
+    int scanner4 = handler.scannerOpenWithStopTs(tableAname, rowAname, rowBname,
+        getColumnList(false, true), time1);
+    TRowResult rowResult4a = handler.scannerGet(scanner4).get(0);
+    assertEquals(rowResult4a.columns.size(), 1);
+    assertTrue(Bytes.equals(rowResult4a.columns.get(columnBname).value, valueBname));
+
+    // Teardown
+    handler.disableTable(tableAname);
+    handler.deleteTable(tableAname);
+  }
+
+  /**
+   *
+   * @return a List of ColumnDescriptors for use in creating a table.  Has one
+   * default ColumnDescriptor and one ColumnDescriptor with fewer versions
+   */
+  private List<ColumnDescriptor> getColumnDescriptors() {
+    ArrayList<ColumnDescriptor> cDescriptors = new ArrayList<ColumnDescriptor>();
+
+    // A default ColumnDescriptor
+    ColumnDescriptor cDescA = new ColumnDescriptor();
+    cDescA.name = columnAname;
+    cDescriptors.add(cDescA);
+
+    // A slightly customized ColumnDescriptor (only 2 versions)
+    ColumnDescriptor cDescB = new ColumnDescriptor(columnBname, 2, "NONE",
+        false, "NONE", 0, 0, false, -1);
+    cDescriptors.add(cDescB);
+
+    return cDescriptors;
+  }
+
+  /**
+   *
+   * @param includeA whether or not to include columnA
+   * @param includeB whether or not to include columnB
+   * @return a List of column names for use in retrieving a scanner
+   */
+  private List<byte[]> getColumnList(boolean includeA, boolean includeB) {
+    List<byte[]> columnList = new ArrayList<byte[]>();
+    if (includeA) columnList.add(columnAname);
+    if (includeB) columnList.add(columnBname);
+    return columnList;
+  }
+
+  /**
+   *
+   * @return a List of Mutations for a row, with columnA having valueA
+   * and columnB having valueB
+   */
+  private List<Mutation> getMutations() {
+    List<Mutation> mutations = new ArrayList<Mutation>();
+    mutations.add(new Mutation(false, columnAname, valueAname));
+    mutations.add(new Mutation(false, columnBname, valueBname));
+    return mutations;
+  }
+
+  /**
+   *
+   * @return a List of BatchMutations with the following effects:
+   * (rowA, columnA): delete
+   * (rowA, columnB): place valueC
+   * (rowB, columnA): place valueC
+   * (rowB, columnB): place valueD
+   */
+  private List<BatchMutation> getBatchMutations() {
+    List<BatchMutation> batchMutations = new ArrayList<BatchMutation>();
+
+    // Mutations to rowA.  You can't mix delete and put anymore.
+    List<Mutation> rowAmutations = new ArrayList<Mutation>();
+    rowAmutations.add(new Mutation(true, columnAname, null));
+    batchMutations.add(new BatchMutation(rowAname, rowAmutations));
+
+    rowAmutations = new ArrayList<Mutation>();
+    rowAmutations.add(new Mutation(false, columnBname, valueCname));
+    batchMutations.add(new BatchMutation(rowAname, rowAmutations));
+
+    // Mutations to rowB
+    List<Mutation> rowBmutations = new ArrayList<Mutation>();
+    rowBmutations.add(new Mutation(false, columnAname, valueCname));
+    rowBmutations.add(new Mutation(false, columnBname, valueDname));
+    batchMutations.add(new BatchMutation(rowBname, rowBmutations));
+
+    return batchMutations;
+  }
+
+  /**
+   * Issues a final get against the passed scanner (which is expected to be
+   * exhausted by this point) and then closes it.
+   *
+   * @param scannerId the scanner to close
+   * @param handler the HBaseHandler interfacing to HBase
+   * @throws Exception
+   */
+  private void closeScanner(int scannerId, ThriftServer.HBaseHandler handler) throws Exception {
+    handler.scannerGet(scannerId);
+    handler.scannerClose(scannerId);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/DisabledTestMetaUtils.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/DisabledTestMetaUtils.java
new file mode 100644
index 0000000..3257e95
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/DisabledTestMetaUtils.java
@@ -0,0 +1,63 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.HBaseClusterTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.MiniHBaseCluster;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+
+/**
+ * Test is flaky.  Needs work.  Fails too often on Hudson.
+ */
+public class DisabledTestMetaUtils extends HBaseClusterTestCase {
+  public void testColumnEdits() throws Exception {
+    HBaseAdmin admin = new HBaseAdmin(this.conf);
+    final String oldColumn = "oldcolumn:";
+    // Add five tables
+    for (int i = 0; i < 5; i++) {
+      HTableDescriptor htd = new HTableDescriptor(getName() + i);
+      htd.addFamily(new HColumnDescriptor(oldColumn));
+      admin.createTable(htd);
+    }
+    this.cluster.shutdown();
+    this.cluster = null;
+    MetaUtils utils = new MetaUtils(this.conf);
+    // Add a new column to the third table, getName() + '2', and remove the old.
+    final byte [] editTable = Bytes.toBytes(getName() + 2);
+    final byte [] newColumn = Bytes.toBytes("newcolumn:");
+    utils.addColumn(editTable, new HColumnDescriptor(newColumn));
+    utils.deleteColumn(editTable, Bytes.toBytes(oldColumn));
+    utils.shutdown();
+    // Delete the cached connection so subsequent lookups fetch everything fresh.
+    HConnectionManager.deleteConnection(conf, false);
+    // Bring the cluster back up.
+    this.cluster = new MiniHBaseCluster(this.conf, 1);
+    // Now assert columns were added and deleted.
+    HTable t = new HTable(conf, editTable);
+    HTableDescriptor htd = t.getTableDescriptor();
+    HColumnDescriptor hcd = htd.getFamily(newColumn);
+    assertTrue(hcd != null);
+    assertNull(htd.getFamily(Bytes.toBytes(oldColumn)));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/EnvironmentEdgeManagerTestHelper.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/EnvironmentEdgeManagerTestHelper.java
new file mode 100644
index 0000000..730f4e3
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/EnvironmentEdgeManagerTestHelper.java
@@ -0,0 +1,36 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+/**
+ * Used by tests to inject an {@link EnvironmentEdge} into the manager. The intent
+ * is to minimise use of the injectEdge method by giving it default (package-private)
+ * access, but tests outside this package may still need the functionality, so this
+ * helper exposes it.
+ */
+public class EnvironmentEdgeManagerTestHelper {
+
+  public static void reset() {
+    EnvironmentEdgeManager.reset();
+  }
+
+  public static void injectEdge(EnvironmentEdge edge) {
+    EnvironmentEdgeManager.injectEdge(edge);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/SoftValueSortedMapTest.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/SoftValueSortedMapTest.java
new file mode 100644
index 0000000..15f2ca8
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/SoftValueSortedMapTest.java
@@ -0,0 +1,41 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.util.SortedMap;
+import java.util.TreeMap;
+
+public class SoftValueSortedMapTest {
+  private static void testMap(SortedMap<Integer, Integer> map) {
+    System.out.println("Testing " + map.getClass());
+    for(int i = 0; i < 1000000; i++) {
+      map.put(i, i);
+    }
+    System.out.println(map.size());
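+    // Allocate a large block to create memory pressure; for the soft-valued map this should
+    // let the GC clear soft-referenced values, so the second size printout can be smaller.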
+    @SuppressWarnings("unused")
+    byte[] block = new byte[849*1024*1024]; // FindBugs DLS_DEAD_LOCAL_STORE
+    System.out.println(map.size());
+  }
+
+  public static void main(String[] args) {
+    testMap(new SoftValueSortedMap<Integer, Integer>());
+    testMap(new TreeMap<Integer, Integer>());
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java
new file mode 100644
index 0000000..9be5c0c
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java
@@ -0,0 +1,67 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.UnsupportedEncodingException;
+import java.util.Map;
+import java.util.TreeMap;
+
+import junit.framework.TestCase;
+
+/**
+ * Test order preservation characteristics of ordered Base64 dialect
+ */
+public class TestBase64 extends TestCase {
+  // Note: uris is sorted. We need to prove that the ordered Base64
+  // preserves that ordering
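+  // (i.e. if a sorts before b as strings, then encodeBytes(a, ORDERED) must sort before
+  //  encodeBytes(b, ORDERED); the TreeMap keyed by the encoded form verifies this below)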
+  private String[] uris = {
+      "dns://dns.powerset.com/www.powerset.com",
+      "dns:www.powerset.com",
+      "file:///usr/bin/java",
+      "filename",
+      "ftp://one.two.three/index.html",
+      "http://one.two.three/index.html",
+      "https://one.two.three:9443/index.html",
+      "r:dns://com.powerset.dns/www.powerset.com",
+      "r:ftp://three.two.one/index.html",
+      "r:http://three.two.one/index.html",
+      "r:https://three.two.one:9443/index.html"
+  };
+
+  /**
+   * the test
+   * @throws UnsupportedEncodingException
+   */
+  public void testBase64() throws UnsupportedEncodingException {
+    TreeMap<String, String> sorted = new TreeMap<String, String>();
+
+    for (int i = 0; i < uris.length; i++) {
+      byte[] bytes = uris[i].getBytes("UTF-8");
+      sorted.put(Base64.encodeBytes(bytes, Base64.ORDERED), uris[i]);
+    }
+    System.out.println();
+
+    int i = 0;
+    for (Map.Entry<String, String> e: sorted.entrySet()) {
+      assertTrue(uris[i++].compareTo(e.getValue()) == 0);
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java
new file mode 100644
index 0000000..d3b2f61
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java
@@ -0,0 +1,141 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.nio.ByteBuffer;
+import java.util.BitSet;
+
+import junit.framework.TestCase;
+
+public class TestByteBloomFilter extends TestCase {
+  
+  public void testBasicBloom() throws Exception {
+    ByteBloomFilter bf1 = new ByteBloomFilter(1000, (float)0.01, Hash.MURMUR_HASH, 0);
+    ByteBloomFilter bf2 = new ByteBloomFilter(1000, (float)0.01, Hash.MURMUR_HASH, 0);
+    bf1.allocBloom();
+    bf2.allocBloom();
+    
+    // test 1: verify no fundamental false negatives or positives
+    byte[] key1 = {1,2,3,4,5,6,7,8,9};
+    byte[] key2 = {1,2,3,4,5,6,7,8,7};
+    
+    bf1.add(key1);
+    bf2.add(key2);
+    
+    assertTrue(bf1.contains(key1));
+    assertFalse(bf1.contains(key2));
+    assertFalse(bf2.contains(key1));
+    assertTrue(bf2.contains(key2));
+    
+    byte [] bkey = {1,2,3,4};
+    byte [] bval = "this is a much larger byte array".getBytes();
+    
+    bf1.add(bkey);
+    bf1.add(bval, 1, bval.length-1);
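+    // Only bval[1..] was added above, so the full bval array should not be reported as present.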
+    
+    assertTrue( bf1.contains(bkey) );
+    assertTrue( bf1.contains(bval, 1, bval.length-1) );
+    assertFalse( bf1.contains(bval) );
+    assertFalse( bf1.contains(bval) );
+    
+    // test 2: serialization & deserialization.  
+    // (convert bloom to byte array & read byte array back in as input)
+    ByteArrayOutputStream bOut = new ByteArrayOutputStream();
+    bf1.writeBloom(new DataOutputStream(bOut));
+    ByteBuffer bb = ByteBuffer.wrap(bOut.toByteArray()); 
+    ByteBloomFilter newBf1 = new ByteBloomFilter(1000, (float)0.01,
+        Hash.MURMUR_HASH, 0);
+    assertTrue(newBf1.contains(key1, bb));
+    assertFalse(newBf1.contains(key2, bb));
+    assertTrue( newBf1.contains(bkey, bb) );
+    assertTrue( newBf1.contains(bval, 1, bval.length-1, bb) );
+    assertFalse( newBf1.contains(bval, bb) );
+    assertFalse( newBf1.contains(bval, bb) );
+    
+    System.out.println("Serialized as " + bOut.size() + " bytes");
+    assertTrue(bOut.size() - bf1.byteSize < 10); //... allow small padding
+  }
+  
+  public void testBloomFold() throws Exception {
+    // test: foldFactor < log(max/actual)
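+    // With foldFactor 2 an under-populated bloom should be compacted down to a quarter of
+    // its allocated size (origSize >> 2) while still keeping false positives low.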
+    ByteBloomFilter b = new ByteBloomFilter(1003, (float)0.01, Hash.MURMUR_HASH, 2);
+    b.allocBloom();
+    int origSize = b.getByteSize();
+    assertEquals(1204, origSize);
+    for (int i = 0; i < 12; ++i) {
+      b.add(Bytes.toBytes(i));
+    }
+    b.compactBloom();
+    assertEquals(origSize>>2, b.getByteSize());
+    int falsePositives = 0;
+    for (int i = 0; i < 25; ++i) {
+      if (b.contains(Bytes.toBytes(i))) {
+        if(i >= 12) falsePositives++;
+      } else {
+        assertFalse(i < 12);
+      }
+    }
+    assertTrue(falsePositives <= 1);
+
+    // test: foldFactor > log(max/actual)
+  }
+
+  public void testBloomPerf() throws Exception {
+    // add
+    float err = (float)0.01;
+    ByteBloomFilter b = new ByteBloomFilter(10*1000*1000, (float)err, Hash.MURMUR_HASH, 3);
+    b.allocBloom();
+    long startTime =  System.currentTimeMillis();
+    int origSize = b.getByteSize();
+    for (int i = 0; i < 1*1000*1000; ++i) {
+      b.add(Bytes.toBytes(i));
+    }
+    long endTime = System.currentTimeMillis();
+    System.out.println("Total Add time = " + (endTime - startTime) + "ms");
+
+    // fold
+    startTime = System.currentTimeMillis();
+    b.compactBloom();
+    endTime = System.currentTimeMillis();
+    System.out.println("Total Fold time = " + (endTime - startTime) + "ms");
+    assertTrue(origSize >= b.getByteSize()<<3);
+    
+    // test
+    startTime = System.currentTimeMillis();
+    int falsePositives = 0;
+    for (int i = 0; i < 2*1000*1000; ++i) {
+      
+      if (b.contains(Bytes.toBytes(i))) {
+        if(i >= 1*1000*1000) falsePositives++;
+      } else {
+        assertFalse(i < 1*1000*1000);
+      }
+    }
+    endTime = System.currentTimeMillis();
+    System.out.println("Total Contains time = " + (endTime - startTime) + "ms");
+    System.out.println("False Positive = " + falsePositives);
+    assertTrue(falsePositives <= (1*1000*1000)*err);
+
+    // test: foldFactor > log(max/actual)
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java
new file mode 100644
index 0000000..e70135f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java
@@ -0,0 +1,205 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import junit.framework.TestCase;
+
+public class TestBytes extends TestCase {
+  public void testNullHashCode() {
+    byte [] b = null;
+    Exception ee = null;
+    try {
+      Bytes.hashCode(b);
+    } catch (Exception e) {
+      ee = e;
+    }
+    assertNotNull(ee);
+  }
+
+  public void testSplit() throws Exception {
+    byte [] lowest = Bytes.toBytes("AAA");
+    byte [] middle = Bytes.toBytes("CCC");
+    byte [] highest = Bytes.toBytes("EEE");
+    byte [][] parts = Bytes.split(lowest, highest, 1);
+    for (int i = 0; i < parts.length; i++) {
+      System.out.println(Bytes.toString(parts[i]));
+    }
+    assertEquals(3, parts.length);
+    assertTrue(Bytes.equals(parts[1], middle));
+    // Now divide into three ranges (the returned array includes the four boundary keys).
+    // Change highest so the split is even.
+    highest = Bytes.toBytes("DDD");
+    parts = Bytes.split(lowest, highest, 2);
+    for (int i = 0; i < parts.length; i++) {
+      System.out.println(Bytes.toString(parts[i]));
+    }
+    assertEquals(4, parts.length);
+    // Assert that 3rd part is 'CCC'.
+    assertTrue(Bytes.equals(parts[2], middle));
+  }
+
+  public void testSplit2() throws Exception {
+    // More split tests.
+    byte [] lowest = Bytes.toBytes("http://A");
+    byte [] highest = Bytes.toBytes("http://z");
+    byte [] middle = Bytes.toBytes("http://]");
+    byte [][] parts = Bytes.split(lowest, highest, 1);
+    for (int i = 0; i < parts.length; i++) {
+      System.out.println(Bytes.toString(parts[i]));
+    }
+    assertEquals(3, parts.length);
+    assertTrue(Bytes.equals(parts[1], middle));
+  }
+
+  public void testSplit3() throws Exception {
+    // Test invalid split cases
+    byte [] low = { 1, 1, 1 };
+    byte [] high = { 1, 1, 3 };
+
+    // If swapped, should throw IAE
+    try {
+      Bytes.split(high, low, 1);
+      assertTrue("Should not be able to split if low > high", false);
+    } catch(IllegalArgumentException iae) {
+      // Correct
+    }
+
+    // Single split should work
+    byte [][] parts = Bytes.split(low, high, 1);
+    for (int i = 0; i < parts.length; i++) {
+      System.out.println("" + i + " -> " + Bytes.toStringBinary(parts[i]));
+    }
+    assertTrue("Returned split should have 3 parts but has " + parts.length, parts.length == 3);
+
+    // If split more than once, this should fail
+    parts = Bytes.split(low, high, 2);
+    assertTrue("Returned split but should have failed", parts == null);
+
+    // Split 0 times should throw IAE
+    try {
+      parts = Bytes.split(low, high, 0);
+      assertTrue("Should not be able to split 0 times", false);
+    } catch(IllegalArgumentException iae) {
+      // Correct
+    }
+  }
+
+  public void testToLong() throws Exception {
+    long [] longs = {-1L, 123L, 122232323232L};
+    for (int i = 0; i < longs.length; i++) {
+      byte [] b = Bytes.toBytes(longs[i]);
+      assertEquals(longs[i], Bytes.toLong(b));
+    }
+  }
+
+  public void testToFloat() throws Exception {
+    float [] floats = {-1f, 123.123f, Float.MAX_VALUE};
+    for (int i = 0; i < floats.length; i++) {
+      byte [] b = Bytes.toBytes(floats[i]);
+      assertEquals(floats[i], Bytes.toFloat(b));
+    }
+  }
+
+  public void testToDouble() throws Exception {
+    double [] doubles = {Double.MIN_VALUE, Double.MAX_VALUE};
+    for (int i = 0; i < doubles.length; i++) {
+      byte [] b = Bytes.toBytes(doubles[i]);
+      assertEquals(doubles[i], Bytes.toDouble(b));
+    }
+  }
+
+  public void testBinarySearch() throws Exception {
+    byte [][] arr = {
+        {1},
+        {3},
+        {5},
+        {7},
+        {9},
+        {11},
+        {13},
+        {15},
+    };
+    byte [] key1 = {3,1};
+    byte [] key2 = {4,9};
+    byte [] key2_2 = {4};
+    byte [] key3 = {5,11};
+
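+    // Bytes.binarySearch only uses key[offset..offset+length): e.g. key1 = {3,1} searched with
+    // offset 0, length 1 looks up {3} (index 1), while offset 1, length 1 looks up {1} (index 0).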
+    assertEquals(1, Bytes.binarySearch(arr, key1, 0, 1,
+      Bytes.BYTES_RAWCOMPARATOR));
+    assertEquals(0, Bytes.binarySearch(arr, key1, 1, 1,
+      Bytes.BYTES_RAWCOMPARATOR));
+    assertEquals(-(2+1), Arrays.binarySearch(arr, key2_2,
+      Bytes.BYTES_COMPARATOR));
+    assertEquals(-(2+1), Bytes.binarySearch(arr, key2, 0, 1,
+      Bytes.BYTES_RAWCOMPARATOR));
+    assertEquals(4, Bytes.binarySearch(arr, key2, 1, 1,
+      Bytes.BYTES_RAWCOMPARATOR));
+    assertEquals(2, Bytes.binarySearch(arr, key3, 0, 1,
+      Bytes.BYTES_RAWCOMPARATOR));
+    assertEquals(5, Bytes.binarySearch(arr, key3, 1, 1,
+      Bytes.BYTES_RAWCOMPARATOR));
+  }
+  
+  public void testStartsWith() {
+    assertTrue(Bytes.startsWith(Bytes.toBytes("hello"), Bytes.toBytes("h")));
+    assertTrue(Bytes.startsWith(Bytes.toBytes("hello"), Bytes.toBytes("")));
+    assertTrue(Bytes.startsWith(Bytes.toBytes("hello"), Bytes.toBytes("hello")));
+    assertFalse(Bytes.startsWith(Bytes.toBytes("hello"), Bytes.toBytes("helloworld")));
+    assertFalse(Bytes.startsWith(Bytes.toBytes(""), Bytes.toBytes("hello")));
+  }
+
+  public void testIncrementBytes() throws IOException {
+
+    assertTrue(checkTestIncrementBytes(10, 1));
+    assertTrue(checkTestIncrementBytes(12, 123435445));
+    assertTrue(checkTestIncrementBytes(124634654, 1));
+    assertTrue(checkTestIncrementBytes(10005460, 5005645));
+    assertTrue(checkTestIncrementBytes(1, -1));
+    assertTrue(checkTestIncrementBytes(10, -1));
+    assertTrue(checkTestIncrementBytes(10, -5));
+    assertTrue(checkTestIncrementBytes(1005435000, -5));
+    assertTrue(checkTestIncrementBytes(10, -43657655));
+    assertTrue(checkTestIncrementBytes(-1, 1));
+    assertTrue(checkTestIncrementBytes(-26, 5034520));
+    assertTrue(checkTestIncrementBytes(-10657200, 5));
+    assertTrue(checkTestIncrementBytes(-12343250, 45376475));
+    assertTrue(checkTestIncrementBytes(-10, -5));
+    assertTrue(checkTestIncrementBytes(-12343250, -5));
+    assertTrue(checkTestIncrementBytes(-12, -34565445));
+    assertTrue(checkTestIncrementBytes(-1546543452, -34565445));
+  }
+
+  private static boolean checkTestIncrementBytes(long val, long amount)
+  throws IOException {
+    byte[] value = Bytes.toBytes(val);
+    byte [] testValue = {-1, -1, -1, -1, -1, -1, -1, -1};
+    if (value[0] > 0) {
+      testValue = new byte[Bytes.SIZEOF_LONG];
+    }
+    System.arraycopy(value, 0, testValue, testValue.length - value.length,
+        value.length);
+
+    long incrementResult = Bytes.toLong(Bytes.incrementBytes(value, amount));
+
+    return (Bytes.toLong(testValue) + amount) == incrementResult;
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
new file mode 100644
index 0000000..110cd3f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java
@@ -0,0 +1,58 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import org.apache.hadoop.hbase.io.hfile.Compression;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static org.junit.Assert.*;
+
+public class TestCompressionTest {
+
+  @Test
+  public void testTestCompression() {
+
+    // This test will fail if you run the tests with LZO compression available.
+    try {
+      CompressionTest.testCompression(Compression.Algorithm.LZO);
+      fail(); // always throws
+    } catch (IOException e) {
+      // there should be a 'cause'.
+      assertNotNull(e.getCause());
+    }
+
+    // this is testing the caching of the test results.
+    try {
+      CompressionTest.testCompression(Compression.Algorithm.LZO);
+      fail(); // always throws
+    } catch (IOException e) {
+      // there should be NO cause because it's a direct exception not wrapped
+      assertNull(e.getCause());
+    }
+
+
+    assertFalse(CompressionTest.testCompression("LZO"));
+    assertTrue(CompressionTest.testCompression("NONE"));
+    assertTrue(CompressionTest.testCompression("GZ"));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java
new file mode 100644
index 0000000..155897b
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java
@@ -0,0 +1,50 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.junit.Test;
+
+import static junit.framework.Assert.assertTrue;
+import static junit.framework.Assert.fail;
+
+/**
+ * Tests to make sure that the default environment edge conforms to appropriate
+ * behaviour.
+ */
+public class TestDefaultEnvironmentEdge {
+
+  @Test
+  public void testGetCurrentTimeUsesSystemClock() {
+    DefaultEnvironmentEdge edge = new DefaultEnvironmentEdge();
+    long systemTime = System.currentTimeMillis();
+    long edgeTime = edge.currentTimeMillis();
+    assertTrue("System time must be either the same or less than the edge time",
+            systemTime <= edgeTime);
+    try {
+      Thread.sleep(1);
+    } catch (InterruptedException e) {
+      fail(e.getMessage());
+    }
+    long secondEdgeTime = edge.currentTimeMillis();
+    assertTrue("Second time must be greater than the first",
+            secondEdgeTime > edgeTime);
+  }
+
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java
new file mode 100644
index 0000000..ee7491d
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java
@@ -0,0 +1,63 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.junit.Test;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+import static org.mockito.Mockito.verify;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+public class TestEnvironmentEdgeManager {
+
+  @Test
+  public void testManageSingleton() {
+    EnvironmentEdge edge = EnvironmentEdgeManager.getDelegate();
+    assertNotNull(edge);
+    assertTrue(edge instanceof DefaultEnvironmentEdge);
+    EnvironmentEdgeManager.reset();
+    EnvironmentEdge edge2 = EnvironmentEdgeManager.getDelegate();
+    assertFalse(edge == edge2);
+    IncrementingEnvironmentEdge newEdge = new IncrementingEnvironmentEdge();
+    EnvironmentEdgeManager.injectEdge(newEdge);
+    assertEquals(newEdge, EnvironmentEdgeManager.getDelegate());
+
+    //injecting null will result in default being assigned.
+    EnvironmentEdgeManager.injectEdge(null);
+    EnvironmentEdge nullResult = EnvironmentEdgeManager.getDelegate();
+    assertTrue(nullResult instanceof DefaultEnvironmentEdge);
+  }
+
+  @Test
+  public void testCurrentTimeInMillis() {
+    EnvironmentEdge mock = mock(EnvironmentEdge.class);
+    EnvironmentEdgeManager.injectEdge(mock);
+    long expectation = 3456;
+    when(mock.currentTimeMillis()).thenReturn(expectation);
+    long result = EnvironmentEdgeManager.currentTimeMillis();
+    verify(mock).currentTimeMillis();
+    assertEquals(expectation, result);
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
new file mode 100644
index 0000000..c8fc065
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java
@@ -0,0 +1,48 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.junit.Test;
+
+/**
+ * Test {@link FSUtils}.
+ */
+public class TestFSUtils {
+  @Test public void testIsHDFS() throws Exception {
+    HBaseTestingUtility htu = new HBaseTestingUtility();
+    htu.getConfiguration().setBoolean("dfs.support.append", false);
+    assertFalse(FSUtils.isHDFS(htu.getConfiguration()));
+    assertFalse(FSUtils.isAppendSupported(htu.getConfiguration()));
+    htu.getConfiguration().setBoolean("dfs.support.append", true);
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = htu.startMiniDFSCluster(1);
+      assertTrue(FSUtils.isHDFS(htu.getConfiguration()));
+      assertTrue(FSUtils.isAppendSupported(htu.getConfiguration()));
+    } finally {
+      if (cluster != null) cluster.shutdown();
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java
new file mode 100644
index 0000000..35623f7
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java
@@ -0,0 +1,105 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HServerInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestHBaseFsck {
+
+  final Log LOG = LogFactory.getLog(getClass());
+  private final static HBaseTestingUtility TEST_UTIL =
+      new HBaseTestingUtility();
+  private final static Configuration conf = TEST_UTIL.getConfiguration();
+  private final static byte[] TABLE = Bytes.toBytes("table");
+  private final static byte[] FAM = Bytes.toBytes("fam");
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniCluster(3);
+  }
+
+  @Test
+  public void testHBaseFsck() throws Exception {
+    HBaseFsck fsck = new HBaseFsck(conf);
+    fsck.displayFullReport();
+    fsck.setTimeLag(0);
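+    // A zero time lag means even freshly-written .META. entries are checked rather than
+    // being skipped as possibly still in flight.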
+    // Most basic check ever, 0 tables
+    int result = fsck.doWork();
+    assertEquals(0, result);
+
+    TEST_UTIL.createTable(TABLE, FAM);
+
+    // We created 1 table, should be fine
+    result = fsck.doWork();
+    assertEquals(0, result);
+
+    // Now let's mess it up and change the assignment in .META. to
+    // point to a different region server
+    HTable meta = new HTable(conf, HTableDescriptor.META_TABLEDESC.getName());
+    ResultScanner scanner = meta.getScanner(new Scan());
+
+    resforloop : for (Result res : scanner) {
+      long startCode = Bytes.toLong(res.getValue(HConstants.CATALOG_FAMILY,
+          HConstants.STARTCODE_QUALIFIER));
+
+      for (JVMClusterUtil.RegionServerThread rs :
+          TEST_UTIL.getHBaseCluster().getRegionServerThreads()) {
+
+        HServerInfo hsi = rs.getRegionServer().getServerInfo();
+
+        // When we find a diff RS, change the assignment and break
+        if (startCode != hsi.getStartCode()) {
+          Put put = new Put(res.getRow());
+          put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER,
+              Bytes.toBytes(hsi.getHostnamePort()));
+          put.add(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER,
+              Bytes.toBytes(hsi.getStartCode()));
+          meta.put(put);
+          break resforloop;
+        }
+      }
+    }
+
+    // We set this here, but it's really not fixing anything...
+    fsck.setFixErrors();
+    result = fsck.doWork();
+    // Fixed or not, it still reports inconsistencies
+    assertEquals(-1, result);
+
+    Thread.sleep(15000);
+    // Disabled, won't work because the region stays unassigned, see HBASE-3217
+    // new HTable(conf, TABLE).getScanner(new Scan());
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java
new file mode 100644
index 0000000..6bb5910
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java
@@ -0,0 +1,40 @@
+/*
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import org.junit.Test;
+
+import static junit.framework.Assert.assertEquals;
+
+/**
+ * Tests that the incrementing environment edge increments time instead of using
+ * the default.
+ */
+public class TestIncrementingEnvironmentEdge {
+
+  @Test
+  public void testCurrentTimeIncrements() {
+    IncrementingEnvironmentEdge edge = new IncrementingEnvironmentEdge();
+    assertEquals(1, edge.currentTimeMillis());
+    assertEquals(2, edge.currentTimeMillis());
+    assertEquals(3, edge.currentTimeMillis());
+    assertEquals(4, edge.currentTimeMillis());
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestKeying.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestKeying.java
new file mode 100644
index 0000000..7ce5520
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestKeying.java
@@ -0,0 +1,62 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import junit.framework.TestCase;
+
+/**
+ * Tests url transformations
+ */
+public class TestKeying extends TestCase {
+
+  @Override
+  protected void setUp() throws Exception {
+    super.setUp();
+  }
+
+  @Override
+  protected void tearDown() throws Exception {
+    super.tearDown();
+  }
+
+  /**
+   * Test url transformations
+   * @throws Exception
+   */
+  public void testURI() throws Exception {
+    checkTransform("http://abc:bcd@www.example.com/index.html" +
+      "?query=something#middle");
+    checkTransform("file:///usr/bin/java");
+    checkTransform("dns:www.powerset.com");
+    checkTransform("dns://dns.powerset.com/www.powerset.com");
+    checkTransform("http://one.two.three/index.html");
+    checkTransform("https://one.two.three:9443/index.html");
+    checkTransform("ftp://one.two.three/index.html");
+
+    checkTransform("filename");
+  }
+
+  private void checkTransform(final String u) {
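+    // Keying.createKey is expected to reverse the host components and prefix the scheme
+    // with "r:" (e.g. http://one.two.three/index.html -> r:http://three.two.one/index.html);
+    // keyToUri must round-trip back to the original.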
+    String k = Keying.createKey(u);
+    String uri = Keying.keyToUri(k);
+    System.out.println("Original url " + u + ", Transformed url " + k);
+    assertEquals(u, uri);
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java
new file mode 100644
index 0000000..8992dbb
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java
@@ -0,0 +1,173 @@
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.util;
+
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.catalog.CatalogTracker;
+import org.apache.hadoop.hbase.catalog.MetaReader;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.HConnection;
+import org.apache.hadoop.hbase.client.HConnectionManager;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.junit.Test;
+
+/**
+ * Tests merging a normal table's regions
+ */
+public class TestMergeTable {
+  private static final Log LOG = LogFactory.getLog(TestMergeTable.class);
+  private final HBaseTestingUtility UTIL = new HBaseTestingUtility();
+  private static final byte [] COLUMN_NAME = Bytes.toBytes("contents");
+  private static final byte [] VALUE;
+  static {
+    // We will use the same value for all rows; the actual value is not important here
+    String partialValue = String.valueOf(System.currentTimeMillis());
+    StringBuilder val = new StringBuilder();
+    while (val.length() < 1024) {
+      val.append(partialValue);
+    }
+    VALUE = Bytes.toBytes(val.toString());
+  }
+
+  /**
+   * Test merge.
+   * Hand-makes regions of a mergeable size and adds the hand-made regions to
+   * hand-made meta.  The hand-made regions are created offline.  We then start
+   * up the mini cluster, disable the hand-made table and start merging.
+   * @throws Exception 
+   */
+  @Test (timeout=300000) public void testMergeTable() throws Exception {
+    // Table we are manually creating offline.
+    HTableDescriptor desc = new HTableDescriptor(Bytes.toBytes("test"));
+    desc.addFamily(new HColumnDescriptor(COLUMN_NAME));
+
+    // Set maximum regionsize down.
+    UTIL.getConfiguration().setLong("hbase.hregion.max.filesize", 64L * 1024L * 1024L);
+    // Make it so we don't split.
+    UTIL.getConfiguration().setInt("hbase.regionserver.regionSplitLimit", 0);
+    // Start up HDFS.  This is where we'll be putting our manually made regions.
+    UTIL.startMiniDFSCluster(1);
+    // Create hdfs hbase rootdir.
+    Path rootdir = UTIL.createRootDir();
+    FileSystem fs = FileSystem.get(UTIL.getConfiguration());
+    if (fs.exists(rootdir)) {
+      if (fs.delete(rootdir, true)) {
+        LOG.info("Cleaned up existing " + rootdir);
+      }
+    }
+
+    // Now create three data regions: The first is too large to merge since it
+    // will be > 64 MB in size. The other two will be smaller and will be
+    // selected for merging.
+
+    // To ensure that the first region is larger than 64MB we need to write at
+    // least 65536 rows. We make certain by writing 70000.
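+    // (70000 rows of ~1KB values is roughly 70MB, comfortably above the 64MB maximum set above.)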
+    byte [] row_70001 = Bytes.toBytes("row_70001");
+    byte [] row_80001 = Bytes.toBytes("row_80001");
+
+    // Create regions and populate them at same time.
+    HRegion [] regions = {
+      createRegion(desc, null, row_70001, 1, 70000, rootdir),
+      createRegion(desc, row_70001, row_80001, 70001, 10000, rootdir),
+      createRegion(desc, row_80001, null, 80001, 11000, rootdir)
+    };
+
+    // Now create the root and meta regions and insert the data regions
+    // created above into .META.
+    setupROOTAndMeta(rootdir, regions);
+    try {
+      LOG.info("Starting mini zk cluster");
+      UTIL.startMiniZKCluster();
+      LOG.info("Starting mini hbase cluster");
+      UTIL.startMiniHBaseCluster(1, 1);
+      Configuration c = new Configuration(UTIL.getConfiguration());
+      HConnection connection = HConnectionManager.getConnection(c);
+      CatalogTracker ct = new CatalogTracker(connection);
+      ct.start();
+      List<HRegionInfo> originalTableRegions =
+        MetaReader.getTableRegions(ct, desc.getName());
+      LOG.info("originalTableRegions size=" + originalTableRegions.size() +
+        "; " + originalTableRegions);
+      HBaseAdmin admin = new HBaseAdmin(new Configuration(c));
+      admin.disableTable(desc.getName());
+      HMerge.merge(c, FileSystem.get(c), desc.getName());
+      List<HRegionInfo> postMergeTableRegions =
+        MetaReader.getTableRegions(ct, desc.getName());
+      LOG.info("postMergeTableRegions size=" + postMergeTableRegions.size() +
+        "; " + postMergeTableRegions);
+      assertTrue("originalTableRegions=" + originalTableRegions.size() +
+        ", postMergeTableRegions=" + postMergeTableRegions.size(),
+        postMergeTableRegions.size() < originalTableRegions.size());
+    } finally {
+      UTIL.shutdownMiniCluster();
+    }
+  }
+
+  private HRegion createRegion(final HTableDescriptor desc,
+      byte [] startKey, byte [] endKey, int firstRow, int nrows, Path rootdir)
+  throws IOException {
+    HRegionInfo hri = new HRegionInfo(desc, startKey, endKey);
+    HRegion region = HRegion.createHRegion(hri, rootdir, UTIL.getConfiguration());
+    LOG.info("Created region " + region.getRegionNameAsString());
+    for(int i = firstRow; i < firstRow + nrows; i++) {
+      Put put = new Put(Bytes.toBytes("row_" + String.format("%1$05d", i)));
+      put.add(COLUMN_NAME, null,  VALUE);
+      region.put(put);
+      if (i % 10000 == 0) {
+        LOG.info("Flushing write #" + i);
+        region.flushcache();
+      }
+    }
+    region.close();
+    region.getLog().closeAndDelete();
+    return region;
+  }
+
+  protected void setupROOTAndMeta(Path rootdir, final HRegion [] regions)
+  throws IOException {
+    HRegion root =
+      HRegion.createHRegion(HRegionInfo.ROOT_REGIONINFO, rootdir, UTIL.getConfiguration());
+    HRegion meta =
+      HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO, rootdir,
+      UTIL.getConfiguration());
+    HRegion.addRegionToMETA(root, meta);
+    for (HRegion r: regions) {
+      HRegion.addRegionToMETA(meta, r);
+    }
+    meta.close();
+    meta.getLog().closeAndDelete();
+    root.close();
+    root.getLog().closeAndDelete();
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java
new file mode 100644
index 0000000..18cd055
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java
@@ -0,0 +1,279 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseTestCase;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.regionserver.wal.HLog;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.util.ToolRunner;
+
+/** Test stand alone merge tool that can merge arbitrary regions */
+public class TestMergeTool extends HBaseTestCase {
+  static final Log LOG = LogFactory.getLog(TestMergeTool.class);
+//  static final byte [] COLUMN_NAME = Bytes.toBytes("contents:");
+  static final byte [] FAMILY = Bytes.toBytes("contents");
+  static final byte [] QUALIFIER = Bytes.toBytes("dc");
+
+  private final HRegionInfo[] sourceRegions = new HRegionInfo[5];
+  private final HRegion[] regions = new HRegion[5];
+  private HTableDescriptor desc;
+  private byte [][][] rows;
+  private MiniDFSCluster dfsCluster = null;
+
+  @Override
+  public void setUp() throws Exception {
+    // Set the timeout down else this test will take a while to complete.
+    this.conf.setLong("hbase.zookeeper.recoverable.waittime", 1000);
+
+    this.conf.set("hbase.hstore.compactionThreshold", "2");
+
+    // Create table description
+    this.desc = new HTableDescriptor("TestMergeTool");
+    this.desc.addFamily(new HColumnDescriptor(FAMILY));
+
+    /*
+     * Create the HRegionInfos for the regions.
+     */
+    // Region 0 will contain the key range [row_0200,row_0300)
+    sourceRegions[0] = new HRegionInfo(this.desc, Bytes.toBytes("row_0200"),
+      Bytes.toBytes("row_0300"));
+
+    // Region 1 will contain the key range [row_0250,row_0400) and overlaps
+    // with Region 0
+    sourceRegions[1] =
+      new HRegionInfo(this.desc, Bytes.toBytes("row_0250"),
+          Bytes.toBytes("row_0400"));
+
+    // Region 2 will contain the key range [row_0100,row_0200) and is adjacent
+    // to Region 0 or the region resulting from the merge of Regions 0 and 1
+    sourceRegions[2] =
+      new HRegionInfo(this.desc, Bytes.toBytes("row_0100"),
+          Bytes.toBytes("row_0200"));
+
+    // Region 3 will contain the key range [row_0500,row_0600) and is not
+    // adjacent to any of Regions 0, 1, 2 or the merged result of any or all
+    // of those regions
+    sourceRegions[3] =
+      new HRegionInfo(this.desc, Bytes.toBytes("row_0500"),
+          Bytes.toBytes("row_0600"));
+
+    // Region 4 will have empty start and end keys and overlaps all regions.
+    sourceRegions[4] =
+      new HRegionInfo(this.desc, HConstants.EMPTY_BYTE_ARRAY,
+          HConstants.EMPTY_BYTE_ARRAY);
+
+    /*
+     * Now create some row keys
+     */
+    this.rows = new byte [5][][];
+    this.rows[0] = Bytes.toByteArrays(new String[] { "row_0210", "row_0280" });
+    this.rows[1] = Bytes.toByteArrays(new String[] { "row_0260", "row_0350",
+        "row_035" });
+    this.rows[2] = Bytes.toByteArrays(new String[] { "row_0110", "row_0175",
+        "row_0175", "row_0175"});
+    this.rows[3] = Bytes.toByteArrays(new String[] { "row_0525", "row_0560",
+        "row_0560", "row_0560", "row_0560"});
+    this.rows[4] = Bytes.toByteArrays(new String[] { "row_0050", "row_1000",
+        "row_1000", "row_1000", "row_1000", "row_1000" });
+
+    // Start up dfs
+    this.dfsCluster = new MiniDFSCluster(conf, 2, true, (String[])null);
+    this.fs = this.dfsCluster.getFileSystem();
+    System.out.println("fs=" + this.fs);
+    this.conf.set("fs.defaultFS", fs.getUri().toString());
+    Path parentdir = fs.getHomeDirectory();
+    conf.set(HConstants.HBASE_DIR, parentdir.toString());
+    fs.mkdirs(parentdir);
+    FSUtils.setVersion(fs, parentdir);
+
+    // Note: we must call super.setUp after starting the mini cluster or
+    // we will end up with a local file system
+
+    super.setUp();
+    try {
+      // Create root and meta regions
+      createRootAndMetaRegions();
+      /*
+       * Create the regions we will merge
+       */
+      for (int i = 0; i < sourceRegions.length; i++) {
+        regions[i] =
+          HRegion.createHRegion(this.sourceRegions[i], this.testDir, this.conf);
+        /*
+         * Insert data
+         */
+        for (int j = 0; j < rows[i].length; j++) {
+          byte [] row = rows[i][j];
+          Put put = new Put(row);
+          put.add(FAMILY, QUALIFIER, row);
+          regions[i].put(put);
+        }
+        HRegion.addRegionToMETA(meta, regions[i]);
+      }
+      // Close root and meta regions
+      closeRootAndMeta();
+
+    } catch (Exception e) {
+      shutdownDfs(dfsCluster);
+      throw e;
+    }
+  }
+
+  @Override
+  public void tearDown() throws Exception {
+    super.tearDown();
+    shutdownDfs(dfsCluster);
+  }
+
+  /*
+   * @param msg Message that describes this merge
+   * @param regionName1
+   * @param regionName2
+   * @param log Log to use merging.
+   * @param upperbound Verifying, how high up in this.rows to go.
+   * @return Merged region.
+   * @throws Exception
+   */
+  private HRegion mergeAndVerify(final String msg, final String regionName1,
+    final String regionName2, final HLog log, final int upperbound)
+  throws Exception {
+    Merge merger = new Merge(this.conf);
+    LOG.info(msg);
+    System.out.println("fs2=" + this.conf.get("fs.defaultFS"));
+    int errCode = ToolRunner.run(this.conf, merger,
+      new String[] {this.desc.getNameAsString(), regionName1, regionName2}
+    );
+    assertTrue("'" + msg + "' failed", errCode == 0);
+    HRegionInfo mergedInfo = merger.getMergedHRegionInfo();
+
+    // Now verify that we can read all the rows from regions 0, 1
+    // in the new merged region.
+    HRegion merged = HRegion.openHRegion(mergedInfo, log, this.conf);
+    verifyMerge(merged, upperbound);
+    merged.close();
+    LOG.info("Verified " + msg);
+    return merged;
+  }
+
+  private void verifyMerge(final HRegion merged, final int upperbound)
+  throws IOException {
+    // First make sure the merged region is scannable end to end.
+    Scan scan = new Scan();
+    scan.addFamily(FAMILY);
+    InternalScanner scanner = merged.getScanner(scan);
+    try {
+      List<KeyValue> testRes = null;
+      while (true) {
+        testRes = new ArrayList<KeyValue>();
+        boolean hasNext = scanner.next(testRes);
+        if (!hasNext) {
+          break;
+        }
+      }
+    } finally {
+      scanner.close();
+    }
+
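+    // Every row loaded into the source regions (up to upperbound) should come
+    // back from the merged region with its original value.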
+    for (int i = 0; i < upperbound; i++) {
+      for (int j = 0; j < rows[i].length; j++) {
+        Get get = new Get(rows[i][j]);
+        get.addFamily(FAMILY);
+        Result result = merged.get(get, null);
+        assertEquals(1, result.size());
+        byte [] bytes = result.sorted()[0].getValue();
+        assertNotNull(Bytes.toStringBinary(rows[i][j]), bytes);
+        assertTrue(Bytes.equals(bytes, rows[i][j]));
+      }
+    }
+  }
+
+  /**
+   * Test merge tool.
+   * @throws Exception
+   */
+  public void testMergeTool() throws Exception {
+    // First verify we can read the rows from the source regions and that they
+    // contain the right data.
+    for (int i = 0; i < regions.length; i++) {
+      for (int j = 0; j < rows[i].length; j++) {
+        Get get = new Get(rows[i][j]);
+        get.addFamily(FAMILY);
+        Result result = regions[i].get(get, null);
+        byte [] bytes = result.sorted()[0].getValue();
+        assertNotNull(bytes);
+        assertTrue(Bytes.equals(bytes, rows[i][j]));
+      }
+      // Close the region and delete the log
+      regions[i].close();
+      regions[i].getLog().closeAndDelete();
+    }
+
+    // Create a log that we can reuse when we need to open regions
+    Path logPath = new Path("/tmp", HConstants.HREGION_LOGDIR_NAME + "_" +
+      System.currentTimeMillis());
+    LOG.info("Creating log " + logPath.toString());
+    Path oldLogDir = new Path("/tmp", HConstants.HREGION_OLDLOGDIR_NAME);
+    HLog log = new HLog(this.fs, logPath, oldLogDir, this.conf);
+    try {
+       // Merge Region 0 and Region 1
+      HRegion merged = mergeAndVerify("merging regions 0 and 1",
+        this.sourceRegions[0].getRegionNameAsString(),
+        this.sourceRegions[1].getRegionNameAsString(), log, 2);
+
+      // Merge the result of merging regions 0 and 1 with region 2
+      merged = mergeAndVerify("merging regions 0+1 and 2",
+        merged.getRegionInfo().getRegionNameAsString(),
+        this.sourceRegions[2].getRegionNameAsString(), log, 3);
+
+      // Merge the result of merging regions 0, 1 and 2 with region 3
+      merged = mergeAndVerify("merging regions 0+1+2 and 3",
+        merged.getRegionInfo().getRegionNameAsString(),
+        this.sourceRegions[3].getRegionNameAsString(), log, 4);
+
+      // Merge the result of merging regions 0, 1, 2 and 3 with region 4
+      merged = mergeAndVerify("merging regions 0+1+2+3 and 4",
+        merged.getRegionInfo().getRegionNameAsString(),
+        this.sourceRegions[4].getRegionNameAsString(), log, rows.length);
+    } finally {
+      log.closeAndDelete();
+    }
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java
new file mode 100644
index 0000000..10c9926
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java
@@ -0,0 +1,63 @@
+/**
+ * Copyright 2008 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.util;
+
+import junit.framework.TestCase;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test requirement that root directory must be a URI
+ */
+public class TestRootPath extends TestCase {
+  private static final Log LOG = LogFactory.getLog(TestRootPath.class);
+
+  /** The test */
+  public void testRootPath() {
+    try {
+      // Try good path
+      FSUtils.validateRootPath(new Path("file:///tmp/hbase/hbase"));
+    } catch (IOException e) {
+      LOG.fatal("Unexpected exception checking valid path:", e);
+      fail();
+    }
+    try {
+      // Try good path
+      FSUtils.validateRootPath(new Path("hdfs://a:9000/hbase"));
+    } catch (IOException e) {
+      LOG.fatal("Unexpected exception checking valid path:", e);
+      fail();
+    }
+    try {
+      // bad path
+      FSUtils.validateRootPath(new Path("/hbase"));
+      fail();
+    } catch (IOException e) {
+      // Expected.
+      LOG.info("Got expected exception when checking invalid path:", e);
+    }
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java
new file mode 100644
index 0000000..bc71e5f
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java
@@ -0,0 +1,145 @@
+/**
+ * Copyright 2009 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import java.io.ByteArrayInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.Map;
+import java.util.Properties;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.zookeeper.server.quorum.QuorumPeerConfig;
+import org.apache.zookeeper.server.quorum.QuorumPeer.QuorumServer;
+import org.junit.Before;
+import org.junit.Test;
+
+import static junit.framework.Assert.assertEquals;
+import static org.junit.Assert.*;
+
+/**
+ * Test for HQuorumPeer.
+ */
+public class TestHQuorumPeer {
+  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+  private static int PORT_NO = 21818;
+  private Path dataDir;
+
+
+  @Before public void setup() throws IOException {
+    // Set it to a non-standard port.
+    TEST_UTIL.getConfiguration().setInt("hbase.zookeeper.property.clientPort",
+      PORT_NO);
+    this.dataDir = HBaseTestingUtility.getTestDir(this.getClass().getName());
+    FileSystem fs = FileSystem.get(TEST_UTIL.getConfiguration());
+    if (fs.exists(this.dataDir)) {
+      if (!fs.delete(this.dataDir, true)) {
+        throw new IOException("Failed cleanup of " + this.dataDir);
+      }
+    }
+    if (!fs.mkdirs(this.dataDir)) {
+      throw new IOException("Failed create of " + this.dataDir);
+    }
+  }
+
+  @Test public void testMakeZKProps() {
+    Configuration conf = new Configuration(TEST_UTIL.getConfiguration());
+    conf.set("hbase.zookeeper.property.dataDir", this.dataDir.toString());
+    Properties properties = ZKConfig.makeZKProps(conf);
+    assertEquals(dataDir.toString(), (String)properties.get("dataDir"));
+    assertEquals(Integer.valueOf(PORT_NO),
+      Integer.valueOf(properties.getProperty("clientPort")));
+    assertEquals("localhost:2888:3888", properties.get("server.0"));
+    assertEquals(null, properties.get("server.1"));
+
+    String oldValue = conf.get(HConstants.ZOOKEEPER_QUORUM);
+    conf.set(HConstants.ZOOKEEPER_QUORUM, "a.foo.bar,b.foo.bar,c.foo.bar");
+    properties = ZKConfig.makeZKProps(conf);
+    assertEquals(dataDir.toString(), properties.get("dataDir"));
+    assertEquals(Integer.valueOf(PORT_NO),
+      Integer.valueOf(properties.getProperty("clientPort")));
+    assertEquals("a.foo.bar:2888:3888", properties.get("server.0"));
+    assertEquals("b.foo.bar:2888:3888", properties.get("server.1"));
+    assertEquals("c.foo.bar:2888:3888", properties.get("server.2"));
+    assertEquals(null, properties.get("server.3"));
+    conf.set(HConstants.ZOOKEEPER_QUORUM, oldValue);
+  }
+
+  @Test public void testConfigInjection() throws Exception {
+    String s =
+      "dataDir=" + this.dataDir.toString() + "\n" +
+      "clientPort=2181\n" +
+      "initLimit=2\n" +
+      "syncLimit=2\n" +
+      "server.0=${hbase.master.hostname}:2888:3888\n" +
+      "server.1=server1:2888:3888\n" +
+      "server.2=server2:2888:3888\n";
+
+    System.setProperty("hbase.master.hostname", "localhost");
+    InputStream is = new ByteArrayInputStream(s.getBytes());
+    Configuration conf = TEST_UTIL.getConfiguration();
+    Properties properties = ZKConfig.parseZooCfg(conf, is);
+
+    assertEquals(this.dataDir.toString(), properties.get("dataDir"));
+    assertEquals(Integer.valueOf(2181),
+      Integer.valueOf(properties.getProperty("clientPort")));
+    assertEquals("localhost:2888:3888", properties.get("server.0"));
+
+    HQuorumPeer.writeMyID(properties);
+    QuorumPeerConfig config = new QuorumPeerConfig();
+    config.parseProperties(properties);
+
+    assertEquals(this.dataDir.toString(), config.getDataDir());
+    assertEquals(2181, config.getClientPortAddress().getPort());
+    Map<Long,QuorumServer> servers = config.getServers();
+    assertEquals(3, servers.size());
+    assertTrue(servers.containsKey(Long.valueOf(0)));
+    QuorumServer server = servers.get(Long.valueOf(0));
+    assertEquals("localhost", server.addr.getHostName());
+
+    // Override with system property.
+    System.setProperty("hbase.master.hostname", "foo.bar");
+    is = new ByteArrayInputStream(s.getBytes());
+    properties = ZKConfig.parseZooCfg(conf, is);
+    assertEquals("foo.bar:2888:3888", properties.get("server.0"));
+
+    config.parseProperties(properties);
+
+    servers = config.getServers();
+    server = servers.get(Long.valueOf(0));
+    assertEquals("foo.bar", server.addr.getHostName());
+  }
+
+  /**
+   * Test Case for HBASE-2305
+   */
+  @Test public void testShouldAssignDefaultZookeeperClientPort() {
+    Configuration config = HBaseConfiguration.create();
+    config.clear();
+    Properties p = ZKConfig.makeZKProps(config);
+    assertNotNull(p);
+    assertEquals(2181, p.get("hbase.zookeeper.property.clientPort"));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTable.java b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTable.java
new file mode 100644
index 0000000..32095ea
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTable.java
@@ -0,0 +1,89 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.ZooKeeperConnectionException;
+import org.apache.zookeeper.KeeperException;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestZKTable {
+  private static final Log LOG = LogFactory.getLog(TestZKTable.class);
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniZKCluster();
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniZKCluster();
+  }
+
+  @Test
+  public void testTableStates()
+  throws ZooKeeperConnectionException, IOException, KeeperException {
+    final String name = "testDisabled";
+    Abortable abortable = new Abortable() {
+      @Override
+      public void abort(String why, Throwable e) {
+        LOG.info(why, e);
+      }
+    };
+    ZooKeeperWatcher zkw = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+      name, abortable);
+    ZKTable zkt = new ZKTable(zkw);
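+    // A table with no state znode set yet should read as enabled and nothing else.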
+    assertTrue(zkt.isEnabledTable(name));
+    assertFalse(zkt.isDisablingTable(name));
+    assertFalse(zkt.isDisabledTable(name));
+    assertFalse(zkt.isEnablingTable(name));
+    assertFalse(zkt.isDisablingOrDisabledTable(name));
+    assertFalse(zkt.isDisabledOrEnablingTable(name));
+    zkt.setDisablingTable(name);
+    assertTrue(zkt.isDisablingTable(name));
+    assertTrue(zkt.isDisablingOrDisabledTable(name));
+    assertFalse(zkt.getDisabledTables().contains(name));
+    zkt.setDisabledTable(name);
+    assertTrue(zkt.isDisabledTable(name));
+    assertTrue(zkt.isDisablingOrDisabledTable(name));
+    assertFalse(zkt.isDisablingTable(name));
+    assertTrue(zkt.getDisabledTables().contains(name));
+    zkt.setEnablingTable(name);
+    assertTrue(zkt.isEnablingTable(name));
+    assertTrue(zkt.isDisabledOrEnablingTable(name));
+    assertFalse(zkt.isDisabledTable(name));
+    assertFalse(zkt.getDisabledTables().contains(name));
+    zkt.setEnabledTable(name);
+    assertTrue(zkt.isEnabledTable(name));
+    assertFalse(zkt.isEnablingTable(name));
+  }
+}
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServerArg.java b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServerArg.java
new file mode 100644
index 0000000..9ec3a77
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServerArg.java
@@ -0,0 +1,44 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.junit.Test;
+
+
+public class TestZooKeeperMainServerArg {
+  private final ZooKeeperMainServerArg parser = new ZooKeeperMainServerArg();
+
+  @Test public void test() {
+    Configuration c = HBaseConfiguration.create();
+    assertEquals("localhost:" + c.get("hbase.zookeeper.property.clientPort"),
+      parser.parse(c));
+    final String port = "1234";
+    c.set("hbase.zookeeper.property.clientPort", port);
+    c.set("hbase.zookeeper.quorum", "example.com");
+    assertEquals("example.com:" + port, parser.parse(c));
+    c.set("hbase.zookeeper.quorum", "example1.com,example2.com,example3.com");
+    assertTrue(port, parser.parse(c).matches("example[1-3]\\.com:" + port));
+  }
+}
\ No newline at end of file
diff --git a/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java
new file mode 100644
index 0000000..af4efdd
--- /dev/null
+++ b/0.90/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java
@@ -0,0 +1,309 @@
+/**
+ * Copyright 2010 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.zookeeper;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Random;
+import java.util.concurrent.Semaphore;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.Abortable;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Threads;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.apache.zookeeper.ZooKeeper;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+
+public class TestZooKeeperNodeTracker {
+  private static final Log LOG = LogFactory.getLog(TestZooKeeperNodeTracker.class);
+  private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
+
+  private final static Random rand = new Random();
+
+  @BeforeClass
+  public static void setUpBeforeClass() throws Exception {
+    TEST_UTIL.startMiniZKCluster();
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    TEST_UTIL.shutdownMiniZKCluster();
+  }
+
+  /**
+   * Test that we can interrupt a node that is blocked on a wait.
+   * @throws IOException
+   * @throws InterruptedException
+   */
+  @Test public void testInterruptible() throws IOException, InterruptedException {
+    Abortable abortable = new StubAbortable();
+    ZooKeeperWatcher zk = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+      "testInterruptible", abortable);
+    final TestTracker tracker = new TestTracker(zk, "/xyz", abortable);
+    tracker.start();
+    Thread t = new Thread() {
+      @Override
+      public void run() {
+        try {
+          tracker.blockUntilAvailable();
+        } catch (InterruptedException e) {
+          throw new RuntimeException("Interrupted", e);
+        }
+      }
+    };
+    t.start();
+    while (!t.isAlive()) Threads.sleep(1);
+    tracker.stop();
+    t.join();
+    // If it wasn't interruptible, we'd never get to here.
+  }
+
+  @Test
+  public void testNodeTracker() throws Exception {
+    Abortable abortable = new StubAbortable();
+    ZooKeeperWatcher zk = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(),
+        "testNodeTracker", abortable);
+    ZKUtil.createAndFailSilent(zk, zk.baseZNode);
+
+    final String node =
+      ZKUtil.joinZNode(zk.baseZNode, new Long(rand.nextLong()).toString());
+
+    final byte [] dataOne = Bytes.toBytes("dataOne");
+    final byte [] dataTwo = Bytes.toBytes("dataTwo");
+
+    // Start a ZKNT with no node currently available
+    TestTracker localTracker = new TestTracker(zk, node, abortable);
+    localTracker.start();
+    zk.registerListener(localTracker);
+
+    // Make sure we don't have a node
+    assertNull(localTracker.getData());
+
+    // Spin up a thread with another ZKNT and have it block
+    WaitToGetDataThread thread = new WaitToGetDataThread(zk, node);
+    thread.start();
+
+    // Verify the thread doesn't have a node
+    assertFalse(thread.hasData);
+
+    // Now, start a new ZKNT with the node already available
+    TestTracker secondTracker = new TestTracker(zk, node, null);
+    secondTracker.start();
+    zk.registerListener(secondTracker);
+
+    // Put up an additional zk listener so we know when zk event is done
+    TestingZKListener zkListener = new TestingZKListener(zk, node);
+    zk.registerListener(zkListener);
+    assertEquals(0, zkListener.createdLock.availablePermits());
+
+    // Create a completely separate zk connection for test triggers, to avoid
+    // any weird watcher interactions with the test
+    final ZooKeeper zkconn = new ZooKeeper(
+        ZKConfig.getZKQuorumServersString(TEST_UTIL.getConfiguration()), 60000,
+        new StubWatcher());
+
+    // Add the node with data one
+    zkconn.create(node, dataOne, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+
+    // Wait for the zk event to be processed
+    zkListener.waitForCreation();
+    thread.join();
+
+    // Both trackers should have the node available with data one
+    assertNotNull(localTracker.getData());
+    assertNotNull(localTracker.blockUntilAvailable());
+    assertTrue(Bytes.equals(localTracker.getData(), dataOne));
+    assertTrue(thread.hasData);
+    assertTrue(Bytes.equals(thread.tracker.getData(), dataOne));
+    LOG.info("Successfully got data one");
+
+    // Make sure it's available and with the expected data
+    assertNotNull(secondTracker.getData());
+    assertNotNull(secondTracker.blockUntilAvailable());
+    assertTrue(Bytes.equals(secondTracker.getData(), dataOne));
+    LOG.info("Successfully got data one with the second tracker");
+
+    // Drop the node
+    zkconn.delete(node, -1);
+    zkListener.waitForDeletion();
+
+    // Create a new thread but with the existing thread's tracker to wait
+    TestTracker threadTracker = thread.tracker;
+    thread = new WaitToGetDataThread(zk, node, threadTracker);
+    thread.start();
+
+    // Verify other guys don't have data
+    assertFalse(thread.hasData);
+    assertNull(secondTracker.getData());
+    assertNull(localTracker.getData());
+    LOG.info("Successfully made unavailable");
+
+    // Create with second data
+    zkconn.create(node, dataTwo, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+
+    // Wait for the zk event to be processed
+    zkListener.waitForCreation();
+    thread.join();
+
+    // All trackers should have the node available with data two
+    assertNotNull(localTracker.getData());
+    assertNotNull(localTracker.blockUntilAvailable());
+    assertTrue(Bytes.equals(localTracker.getData(), dataTwo));
+    assertNotNull(secondTracker.getData());
+    assertNotNull(secondTracker.blockUntilAvailable());
+    assertTrue(Bytes.equals(secondTracker.getData(), dataTwo));
+    assertTrue(thread.hasData);
+    assertTrue(Bytes.equals(thread.tracker.getData(), dataTwo));
+    LOG.info("Successfully got data two on all trackers and threads");
+
+    // Change the data back to data one
+    zkconn.setData(node, dataOne, -1);
+
+    // Wait for zk event to be processed
+    zkListener.waitForDataChange();
+
+    // All trackers should have the node available with data one
+    assertNotNull(localTracker.getData());
+    assertNotNull(localTracker.blockUntilAvailable());
+    assertTrue(Bytes.equals(localTracker.getData(), dataOne));
+    assertNotNull(secondTracker.getData());
+    assertNotNull(secondTracker.blockUntilAvailable());
+    assertTrue(Bytes.equals(secondTracker.getData(), dataOne));
+    assertTrue(thread.hasData);
+    assertTrue(Bytes.equals(thread.tracker.getData(), dataOne));
+    LOG.info("Successfully got data one following a data change on all trackers and threads");
+  }
+
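+  /**
+   * Thread that blocks on its node tracker until the tracked node has data,
+   * then sets the hasData flag.
+   */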
+  public static class WaitToGetDataThread extends Thread {
+
+    TestTracker tracker;
+    boolean hasData;
+
+    public WaitToGetDataThread(ZooKeeperWatcher zk, String node) {
+      tracker = new TestTracker(zk, node, null);
+      tracker.start();
+      zk.registerListener(tracker);
+      hasData = false;
+    }
+
+    public WaitToGetDataThread(ZooKeeperWatcher zk, String node,
+        TestTracker tracker) {
+      this.tracker = tracker;
+      hasData = false;
+    }
+
+    @Override
+    public void run() {
+      LOG.info("Waiting for data to be available in WaitToGetDataThread");
+      try {
+        tracker.blockUntilAvailable();
+      } catch (InterruptedException e) {
+        e.printStackTrace();
+      }
+      LOG.info("Data now available in tracker from WaitToGetDataThread");
+      hasData = true;
+    }
+  }
+
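+  /** Minimal concrete ZooKeeperNodeTracker so the tests can instantiate one. */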
+  public static class TestTracker extends ZooKeeperNodeTracker {
+    public TestTracker(ZooKeeperWatcher watcher, String node,
+        Abortable abortable) {
+      super(watcher, node, abortable);
+    }
+  }
+
+  public static class TestingZKListener extends ZooKeeperListener {
+    private static final Log LOG = LogFactory.getLog(TestingZKListener.class);
+
+    private Semaphore deletedLock;
+    private Semaphore createdLock;
+    private Semaphore changedLock;
+    private String node;
+
+    public TestingZKListener(ZooKeeperWatcher watcher, String node) {
+      super(watcher);
+      deletedLock = new Semaphore(0);
+      createdLock = new Semaphore(0);
+      changedLock = new Semaphore(0);
+      this.node = node;
+    }
+
+    @Override
+    public void nodeDeleted(String path) {
+      if(path.equals(node)) {
+        LOG.debug("nodeDeleted(" + path + ")");
+        deletedLock.release();
+      }
+    }
+
+    @Override
+    public void nodeCreated(String path) {
+      if(path.equals(node)) {
+        LOG.debug("nodeCreated(" + path + ")");
+        createdLock.release();
+      }
+    }
+
+    @Override
+    public void nodeDataChanged(String path) {
+      if(path.equals(node)) {
+        LOG.debug("nodeDataChanged(" + path + ")");
+        changedLock.release();
+      }
+    }
+
+    public void waitForDeletion() throws InterruptedException {
+      deletedLock.acquire();
+    }
+
+    public void waitForCreation() throws InterruptedException {
+      createdLock.acquire();
+    }
+
+    public void waitForDataChange() throws InterruptedException {
+      changedLock.acquire();
+    }
+  }
+
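+  /** Abortable that ignores abort requests; the tests never expect an abort. */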
+  public static class StubAbortable implements Abortable {
+    @Override
+    public void abort(final String msg, final Throwable t) {}
+  }
+
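+  /** Watcher that ignores all ZooKeeper events. */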
+  public static class StubWatcher implements Watcher {
+    @Override
+    public void process(WatchedEvent event) {}
+  }
+}
diff --git a/0.90/src/test/resources/hbase-site.xml b/0.90/src/test/resources/hbase-site.xml
new file mode 100644
index 0000000..641ef67
--- /dev/null
+++ b/0.90/src/test/resources/hbase-site.xml
@@ -0,0 +1,130 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Copyright 2007 The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>hbase.regionserver.msginterval</name>
+    <value>1000</value>
+    <description>Interval between messages from the RegionServer to HMaster
+    in milliseconds.  Set this value low if you want unit tests to be
+    responsive.
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.pause</name>
+    <value>5000</value>
+    <description>General client pause value.  Used mostly as value to wait
+    before running a retry of a failed get, region lookup, etc.</description>
+  </property>
+  <property>
+    <name>hbase.client.retries.number</name>
+    <value>4</value>
+    <description>Maximum retries.  Used as maximum for all retryable
+    operations such as fetching of the root region from root region
+    server, getting a cell's value, starting a row update, etc.
+    Default: 10.
+    </description>
+  </property>
+  <property>
+    <name>hbase.server.thread.wakefrequency</name>
+    <value>1000</value>
+    <description>Time to sleep in between searches for work (in milliseconds).
+    Used as sleep interval by service threads such as META scanner and log roller.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.handler.count</name>
+    <value>5</value>
+    <description>Count of RPC Server instances spun up on RegionServers.
+    The same property is used by the HMaster for the count of master handlers.
+    Default is 10.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.info.port</name>
+    <value>-1</value>
+    <description>The port for the hbase master web UI.
+    Set to -1 if you do not want the info server to run.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port</name>
+    <value>-1</value>
+    <description>The port for the hbase regionserver web UI.
+    Set to -1 if you do not want the info server to run.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port.auto</name>
+    <value>true</value>
+    <description>Info server auto port bind. Enables automatic port
+    search if hbase.regionserver.info.port is already in use.
+    Enabled for testing to run multiple tests on one machine.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.lease.thread.wakefrequency</name>
+    <value>3000</value>
+    <description>The interval between checks for expired region server leases.
+    This value has been reduced due to the other reduced values above so that
+    the master will notice a dead region server sooner. The default is 15 seconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.optionalcacheflushinterval</name>
+    <value>1000</value>
+    <description>
+    Amount of time to wait since the last time a region was flushed before
+    invoking an optional cache flush. Default 60,000.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.safemode</name>
+    <value>false</value>
+    <description>
+    Turn on/off safe mode in region server. Always on for production, always off
+    for tests.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.max.filesize</name>
+    <value>67108864</value>
+    <description>
+    Maximum desired file size for an HRegion.  If filesize exceeds
+    value + (value / 2), the HRegion is split in two.  Default: 256M.
+
+    Keep the maximum filesize small so we split more often in tests.
+    </description>
+  </property>
+  <property>
+    <name>hadoop.log.dir</name>
+    <value>${user.dir}/../logs</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.clientPort</name>
+    <value>21818</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The port at which the clients will connect.
+    </description>
+  </property>
+</configuration>
diff --git a/0.90/src/test/resources/log4j.properties b/0.90/src/test/resources/log4j.properties
new file mode 100644
index 0000000..9d42f74
--- /dev/null
+++ b/0.90/src/test/resources/log4j.properties
@@ -0,0 +1,47 @@
+# Define some default values that can be overridden by system properties
+hbase.root.logger=INFO,console
+hbase.log.dir=.
+hbase.log.file=hbase.log
+
+# Define the root logger to the system property "hbase.root.logger".
+log4j.rootLogger=${hbase.root.logger}
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+# Debugging Pattern format
+log4j.appender.DRFA.layout.ConversionPattern=%d %-5p [%t] %C{2}(%L): %m%n
+
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %C{2}(%L): %m%n
+
+# Custom Logging levels
+
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+
+log4j.logger.org.apache.hadoop=WARN
+log4j.logger.org.apache.zookeeper=ERROR
+log4j.logger.org.apache.hadoop.hbase=DEBUG
diff --git a/0.90/src/test/resources/mapred-queues.xml b/0.90/src/test/resources/mapred-queues.xml
new file mode 100644
index 0000000..443f2d9
--- /dev/null
+++ b/0.90/src/test/resources/mapred-queues.xml
@@ -0,0 +1,56 @@
+<?xml version="1.0"?>
+<!-- This is the template for queue configuration. The format supports nesting of
+     queues within queues - a feature called hierarchical queues. All queues are
+     defined within the 'queues' tag which is the top level element for this
+     XML document.
+     The 'aclsEnabled' attribute should be set to true, if ACLs should be checked
+     on queue operations such as submitting jobs, killing jobs etc. -->
+<queues aclsEnabled="false">
+
+  <!-- Configuration for a queue is specified by defining a 'queue' element. -->
+  <queue>
+
+    <!-- Name of a queue. Queue name cannot contain a ':'  -->
+    <name>default</name>
+
+    <!-- properties for a queue, typically used by schedulers,
+    can be defined here -->
+    <properties>
+    </properties>
+
+    <!-- State of the queue. If running, the queue will accept new jobs.
+         If stopped, the queue will not accept new jobs. -->
+    <state>running</state>
+
+    <!-- Specifies the ACLs to check for submitting jobs to this queue.
+         If set to '*', it allows all users to submit jobs to the queue.
+         For specifying a list of users and groups the format to use is
+         user1,user2 group1,group2 -->
+    <acl-submit-job>*</acl-submit-job>
+
+    <!-- Specifies the ACLs to check for modifying jobs in this queue.
+         Modifications include killing jobs, tasks of jobs or changing
+         priorities.
+         If set to '*', it allows all users to modify jobs in this queue.
+         For specifying a list of users and groups the format to use is
+         user1,user2 group1,group2 -->
+    <acl-administer-jobs>*</acl-administer-jobs>
+  </queue>
+
+  <!-- Here is a sample of a hierarchical queue configuration
+       where q2 is a child of q1. In this example, q2 is a leaf level
+       queue as it has no queues configured within it. Currently, ACLs
+       and state are only supported for the leaf level queues.
+       Note also the usage of properties for the queue q2.
+  <queue>
+    <name>q1</name>
+    <queue>
+      <name>q2</name>
+      <properties>
+        <property key="capacity" value="20"/>
+        <property key="user-limit" value="30"/>
+      </properties>
+    </queue>
+  </queue>
+ -->
+</queues>
diff --git a/0.90/src/test/resources/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties b/0.90/src/test/resources/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties
new file mode 100644
index 0000000..28493ff
--- /dev/null
+++ b/0.90/src/test/resources/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties
@@ -0,0 +1,30 @@
+# ResourceBundle properties file for Map-Reduce counters
+
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+CounterGroupName=              HBase Performance Evaluation
+ELAPSED_TIME.name=             Elapsed time in milliseconds
+ROWS.name=									  Row count
+# ResourceBundle properties file for Map-Reduce counters
+
+CounterGroupName=              HBase Performance Evaluation
+ELAPSED_TIME.name=             Elapsed time in milliseconds
+ROWS.name=									  Row count
diff --git a/0.90/src/test/ruby/hbase/admin_test.rb b/0.90/src/test/ruby/hbase/admin_test.rb
new file mode 100644
index 0000000..5e491e4
--- /dev/null
+++ b/0.90/src/test/ruby/hbase/admin_test.rb
@@ -0,0 +1,283 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase'
+
+include HBaseConstants
+
+module Hbase
+  class AdminHelpersTest < Test::Unit::TestCase
+    include TestHelpers
+
+    def setup
+      setup_hbase
+      # Create test table if it does not exist
+      @test_name = "hbase_shell_tests_table"
+      create_test_table(@test_name)
+    end
+
+    define_test "exists? should return true when a table exists" do
+      assert(admin.exists?('.META.'))
+    end
+
+    define_test "exists? should return false when a table exists" do
+      assert(!admin.exists?('.NOT.EXISTS.'))
+    end
+
+    define_test "enabled? should return true for enabled tables" do
+      admin.enable(@test_name)
+      assert(admin.enabled?(@test_name))
+    end
+
+    define_test "enabled? should return false for disabled tables" do
+      admin.disable(@test_name)
+      assert(!admin.enabled?(@test_name))
+    end
+  end
+
+  # Simple administration methods tests
+  class AdminMethodsTest < Test::Unit::TestCase
+    include TestHelpers
+
+    def setup
+      setup_hbase
+      # Create test table if it does not exist
+      @test_name = "hbase_shell_tests_table"
+      create_test_table(@test_name)
+
+      # Create table test table name
+      @create_test_name = 'hbase_create_table_test_table'
+    end
+
+    define_test "list should return a list of tables" do
+      assert(admin.list.member?(@test_name))
+    end
+
+    define_test "list should not return meta tables" do
+      assert(!admin.list.member?('.META.'))
+      assert(!admin.list.member?('-ROOT-'))
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "flush should work" do
+      admin.flush('.META.')
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "compact should work" do
+      admin.compact('.META.')
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "major_compact should work" do
+      admin.major_compact('.META.')
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "split should work" do
+      admin.split('.META.')
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "drop should fail on non-existent tables" do
+      assert_raise(ArgumentError) do
+        admin.drop('.NOT.EXISTS.')
+      end
+    end
+
+    define_test "drop should fail on enabled tables" do
+      assert_raise(ArgumentError) do
+        admin.drop(@test_name)
+      end
+    end
+
+    define_test "drop should drop tables" do
+      admin.disable(@test_name)
+      admin.drop(@test_name)
+      assert(!admin.exists?(@test_name))
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "zk_dump should work" do
+      assert_not_nil(admin.zk_dump)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "create should fail with non-string table names" do
+      assert_raise(ArgumentError) do
+        admin.create(123, 'xxx')
+      end
+    end
+
+    define_test "create should fail with non-string/non-hash column args" do
+      assert_raise(ArgumentError) do
+        admin.create(@create_test_name, 123)
+      end
+    end
+
+    define_test "create should fail without columns" do
+      drop_test_table(@create_test_name)
+      assert_raise(ArgumentError) do
+        admin.create(@create_test_name)
+      end
+    end
+
+    define_test "create should work with string column args" do
+      drop_test_table(@create_test_name)
+      admin.create(@create_test_name, 'a', 'b')
+      assert_equal(['a:', 'b:'], table(@create_test_name).get_all_columns.sort)
+     end
+
+    define_test "create hould work with hash column args" do
+      drop_test_table(@create_test_name)
+      admin.create(@create_test_name, { NAME => 'a'}, { NAME => 'b'})
+      assert_equal(['a:', 'b:'], table(@create_test_name).get_all_columns.sort)
+    end
+
+    #-------------------------------------------------------------------------------
+
+#    define_test "close should work without region server name" do
+#      if admin.exists?(@create_test_name)
+#        admin.disable(@create_test_name)
+#        admin.drop(@create_test_name)
+#      end
+#      admin.create(@create_test_name, 'foo')
+#      admin.close_region(@create_test_name + ',,0')
+#    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "describe should fail for non-existent tables" do
+      assert_raise(ArgumentError) do
+        admin.describe('.NOT.EXISTS.')
+      end
+    end
+
+    define_test "describe should return a description" do
+      assert_not_nil admin.describe(@test_name)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "truncate should empty a table" do
+      table(@test_name).put(1, "x:a", 1)
+      table(@test_name).put(2, "x:a", 2)
+      assert_equal(2, table(@test_name).count)
+      admin.truncate(@test_name)
+      assert_equal(0, table(@test_name).count)
+    end
+
+    define_test "truncate should yield log records" do
+      logs = []
+      admin.truncate(@test_name) do |log|
+        assert_kind_of(String, log)
+        logs << log
+      end
+      assert(!logs.empty?)
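+    // parseZooCfg should expand ${hbase.master.hostname} using the system
+    // property set below.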
+    end
+  end
+
+  # Alter table administration methods tests
+  class AdminAlterTableTest < Test::Unit::TestCase
+    include TestHelpers
+
+    def setup
+      setup_hbase
+      # Create test table if it does not exist
+      @test_name = "hbase_shell_tests_table"
+      drop_test_table(@test_name)
+      create_test_table(@test_name)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "alter should fail with non-string table names" do
+      assert_raise(ArgumentError) do
+        admin.alter(123, METHOD => 'delete', NAME => 'y')
+      end
+    end
+
+    define_test "alter should fail with non-existing tables" do
+      assert_raise(ArgumentError) do
+        admin.alter('.NOT.EXISTS.', METHOD => 'delete', NAME => 'y')
+      end
+    end
+
+    define_test "alter should fail with enabled tables" do
+      assert_raise(ArgumentError) do
+        admin.alter(@test_name, METHOD => 'delete', NAME => 'y')
+      end
+    end
+
+    define_test "alter should be able to delete column families" do
+      assert_equal(['x:', 'y:'], table(@test_name).get_all_columns.sort)
+      admin.disable(@test_name)
+      admin.alter(@test_name, METHOD => 'delete', NAME => 'y')
+      admin.enable(@test_name)
+      assert_equal(['x:'], table(@test_name).get_all_columns.sort)
+    end
+
+    define_test "alter should be able to add column families" do
+      assert_equal(['x:', 'y:'], table(@test_name).get_all_columns.sort)
+      admin.disable(@test_name)
+      admin.alter(@test_name, NAME => 'z')
+      admin.enable(@test_name)
+      assert_equal(['x:', 'y:', 'z:'], table(@test_name).get_all_columns.sort)
+    end
+
+    define_test "alter should be able to add column families (name-only alter spec)" do
+      assert_equal(['x:', 'y:'], table(@test_name).get_all_columns.sort)
+      admin.disable(@test_name)
+      admin.alter(@test_name, 'z')
+      admin.enable(@test_name)
+      assert_equal(['x:', 'y:', 'z:'], table(@test_name).get_all_columns.sort)
+    end
+
+    define_test "alter should support more than one alteration in one call" do
+      assert_equal(['x:', 'y:'], table(@test_name).get_all_columns.sort)
+      admin.disable(@test_name)
+      admin.alter(@test_name, { NAME => 'z' }, { METHOD => 'delete', NAME => 'y' })
+      admin.enable(@test_name)
+      assert_equal(['x:', 'z:'], table(@test_name).get_all_columns.sort)
+    end
+
+    define_test 'alter should support shortcut DELETE alter specs' do
+      assert_equal(['x:', 'y:'], table(@test_name).get_all_columns.sort)
+      admin.disable(@test_name)
+      admin.alter(@test_name, 'delete' => 'y')
+      admin.enable(@test_name)
+      assert_equal(['x:'], table(@test_name).get_all_columns.sort)
+    end
+
+    define_test "alter should be able to change table options" do
+      admin.disable(@test_name)
+      admin.alter(@test_name, METHOD => 'table_att', 'MAX_FILESIZE' => 12345678)
+      admin.enable(@test_name)
+      assert_match(/12345678/, admin.describe(@test_name))
+    end
+  end
+end
diff --git a/0.90/src/test/ruby/hbase/hbase_test.rb b/0.90/src/test/ruby/hbase/hbase_test.rb
new file mode 100644
index 0000000..4e3fae3
--- /dev/null
+++ b/0.90/src/test/ruby/hbase/hbase_test.rb
@@ -0,0 +1,50 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase'
+
+module Hbase
+  class HbaseTest < Test::Unit::TestCase
+    def setup
+      @formatter = Shell::Formatter::Console.new()
+      @hbase = ::Hbase::Hbase.new($TEST_CLUSTER.getConfiguration)
+    end
+
+    define_test "Hbase::Hbase constructor should initialize hbase configuration object" do
+      assert_kind_of(org.apache.hadoop.conf.Configuration, @hbase.configuration)
+    end
+
+    define_test "Hbase::Hbase#admin should create a new admin object when called the first time" do
+      assert_kind_of(::Hbase::Admin, @hbase.admin(@formatter))
+    end
+
+    define_test "Hbase::Hbase#admin should create a new admin object every call" do
+      assert_not_same(@hbase.admin(@formatter), @hbase.admin(@formatter))
+    end
+
+    define_test "Hbase::Hbase#table should create a new table object when called the first time" do
+      assert_kind_of(::Hbase::Table, @hbase.table('.META.', @formatter))
+    end
+
+    define_test "Hbase::Hbase#table should create a new table object every call" do
+      assert_not_same(@hbase.table('.META.', @formatter), @hbase.table('.META.', @formatter))
+    end
+  end
+end
diff --git a/0.90/src/test/ruby/hbase/table_test.rb b/0.90/src/test/ruby/hbase/table_test.rb
new file mode 100644
index 0000000..ff197de
--- /dev/null
+++ b/0.90/src/test/ruby/hbase/table_test.rb
@@ -0,0 +1,414 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase'
+
+include HBaseConstants
+
+module Hbase
+  # Constructor tests
+  class TableConstructorTest < Test::Unit::TestCase
+    include TestHelpers
+    def setup
+      setup_hbase
+    end
+
+    define_test "Hbase::Table constructor should fail for non-existent tables" do
+      assert_raise(NativeException) do
+        table('non-existent-table-name')
+      end
+    end
+
+    define_test "Hbase::Table constructor should not fail for existent tables" do
+      assert_nothing_raised do
+        table('.META.')
+      end
+    end
+  end
+
+  # Helper methods tests
+  class TableHelpersTest < Test::Unit::TestCase
+    include TestHelpers
+
+    def setup
+      setup_hbase
+      # Create test table if it does not exist
+      @test_name = "hbase_shell_tests_table"
+      create_test_table(@test_name)
+      @test_table = table(@test_name)
+    end
+
+    define_test "is_meta_table? method should return true for the meta table" do
+      assert(table('.META.').is_meta_table?)
+    end
+
+    define_test "is_meta_table? method should return true for the root table" do
+      assert(table('-ROOT-').is_meta_table?)
+    end
+
+    define_test "is_meta_table? method should return false for a normal table" do
+      assert(!@test_table.is_meta_table?)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "get_all_columns should return columns list" do
+      cols = table('.META.').get_all_columns
+      assert_kind_of(Array, cols)
+      assert(cols.length > 0)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "parse_column_name should not return a qualifier for name-only column specifiers" do
+      col, qual = table('.META.').parse_column_name('foo')
+      assert_not_nil(col)
+      assert_nil(qual)
+    end
+
+    define_test "parse_column_name should not return a qualifier for family-only column specifiers" do
+      col, qual = table('.META.').parse_column_name('foo:')
+      assert_not_nil(col)
+      assert_nil(qual)
+    end
+
+    define_test "parse_column_name should return a qualifier for family:qualifier column specifiers" do
+      col, qual = table('.META.').parse_column_name('foo:bar')
+      assert_not_nil(col)
+      assert_not_nil(qual)
+    end
+  end
+
+  # Simple data management methods tests
+  class TableSimpleMethodsTest < Test::Unit::TestCase
+    include TestHelpers
+
+    def setup
+      setup_hbase
+      # Create test table if it does not exist
+      @test_name = "hbase_shell_tests_table"
+      create_test_table(@test_name)
+      @test_table = table(@test_name)
+    end
+
+    define_test "put should work without timestamp" do
+      @test_table.put("123", "x:a", "1")
+    end
+
+    define_test "put should work with timestamp" do
+      @test_table.put("123", "x:a", "2", Time.now.to_i)
+    end
+
+    define_test "put should work with integer keys" do
+      @test_table.put(123, "x:a", "3")
+    end
+
+    define_test "put should work with integer values" do
+      @test_table.put("123", "x:a", 4)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "delete should work without timestamp" do
+      @test_table.delete("123", "x:a")
+    end
+
+    define_test "delete should work with timestamp" do
+      @test_table.delete("123", "x:a", Time.now.to_i)
+    end
+
+    define_test "delete should work with integer keys" do
+      @test_table.delete(123, "x:a")
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "deleteall should work w/o columns and timestamps" do
+      @test_table.deleteall("123")
+    end
+
+    define_test "deleteall should work with integer keys" do
+      @test_table.deleteall(123)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "incr should work w/o value" do
+      @test_table.incr("123", 'x:cnt1')
+    end
+
+    define_test "incr should work with value" do
+      @test_table.incr("123", 'x:cnt2', 10)
+    end
+
+    define_test "incr should work with integer keys" do
+      @test_table.incr(123, 'x:cnt3')
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "get_counter should work with integer keys" do
+      @test_table.incr(12345, 'x:cnt')
+      assert_kind_of(Fixnum, @test_table.get_counter(12345, 'x:cnt'))
+    end
+
+    define_test "get_counter should return nil for non-existent counters" do
+      assert_nil(@test_table.get_counter(12345, 'x:qqqq'))
+    end
+  end
+
+  # Complex data management methods tests
+  class TableComplexMethodsTest < Test::Unit::TestCase
+    include TestHelpers
+
+    def setup
+      setup_hbase
+      # Create test table if it does not exist
+      @test_name = "hbase_shell_tests_table"
+      create_test_table(@test_name)
+      @test_table = table(@test_name)
+
+      # Test data
+      @test_ts = 12345678
+      @test_table.put(1, "x:a", 1)
+      @test_table.put(1, "x:b", 2, @test_ts)
+
+      @test_table.put(2, "x:a", 11)
+      @test_table.put(2, "x:b", 12, @test_ts)
+    end
+
+    define_test "count should work w/o a block passed" do
+      assert(@test_table.count > 0)
+    end
+
+    define_test "count should work with a block passed (and yield)" do
+      rows = []
+      cnt = @test_table.count(1) do |count, row|
+        rows << row
+      end
+      assert(cnt > 0)
+      assert(!rows.empty?)
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "get should work w/o columns specification" do
+      res = @test_table.get('1')
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should work with integer keys" do
+      res = @test_table.get(1)
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should work with hash columns spec and a single string COLUMN parameter" do
+      res = @test_table.get('1', COLUMN => 'x:a')
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_nil(res['x:b'])
+    end
+
+    define_test "get should work with hash columns spec and a single string COLUMNS parameter" do
+      res = @test_table.get('1', COLUMNS => 'x:a')
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_nil(res['x:b'])
+    end
+
+    define_test "get should work with hash columns spec and an array of strings COLUMN parameter" do
+      res = @test_table.get('1', COLUMN => [ 'x:a', 'x:b' ])
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should work with hash columns spec and an array of strings COLUMNS parameter" do
+      res = @test_table.get('1', COLUMNS => [ 'x:a', 'x:b' ])
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should work with hash columns spec and TIMESTAMP only" do
+      res = @test_table.get('1', TIMESTAMP => @test_ts)
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should fail with hash columns spec and strange COLUMN value" do
+      assert_raise(ArgumentError) do
+        @test_table.get('1', COLUMN => {})
+      end
+    end
+
+    define_test "get should fail with hash columns spec and strange COLUMNS value" do
+      assert_raise(ArgumentError) do
+        @test_table.get('1', COLUMNS => {})
+      end
+    end
+
+    define_test "get should fail with hash columns spec and no TIMESTAMP or COLUMN[S]" do
+      assert_raise(ArgumentError) do
+        @test_table.get('1', { :foo => :bar })
+      end
+    end
+
+    define_test "get should work with a string column spec" do
+      res = @test_table.get('1', 'x:b')
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should work with an array columns spec" do
+      res = @test_table.get('1', 'x:a', 'x:b')
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get should work with an array or arrays columns spec (yeah, crazy)" do
+      res = @test_table.get('1', ['x:a'], ['x:b'])
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['x:a'])
+      assert_not_nil(res['x:b'])
+    end
+
+    define_test "get with a block should yield (column, value) pairs" do
+      res = {}
+      @test_table.get('1') { |col, val| res[col] = val }
+      assert_equal(res.keys.sort, [ 'x:a', 'x:b' ])
+    end
+
+    #-------------------------------------------------------------------------------
+
+    define_test "scan should work w/o any params" do
+      res = @test_table.scan
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['1'])
+      assert_not_nil(res['1']['x:a'])
+      assert_not_nil(res['1']['x:b'])
+      assert_not_nil(res['2'])
+      assert_not_nil(res['2']['x:a'])
+      assert_not_nil(res['2']['x:b'])
+    end
+
+    define_test "scan should support STARTROW parameter" do
+      res = @test_table.scan STARTROW => '2'
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_nil(res['1'])
+      assert_not_nil(res['2'])
+      assert_not_nil(res['2']['x:a'])
+      assert_not_nil(res['2']['x:b'])
+    end
+
+    define_test "scan should support STOPROW parameter" do
+      res = @test_table.scan STOPROW => '2'
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['1'])
+      assert_not_nil(res['1']['x:a'])
+      assert_not_nil(res['1']['x:b'])
+      assert_nil(res['2'])
+    end
+
+    define_test "scan should support LIMIT parameter" do
+      res = @test_table.scan LIMIT => 1
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['1'])
+      assert_not_nil(res['1']['x:a'])
+      assert_not_nil(res['1']['x:b'])
+      assert_nil(res['2'])
+    end
+
+    define_test "scan should support TIMESTAMP parameter" do
+      res = @test_table.scan TIMESTAMP => @test_ts
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['1'])
+      assert_nil(res['1']['x:a'])
+      assert_not_nil(res['1']['x:b'])
+      assert_not_nil(res['2'])
+      assert_nil(res['2']['x:a'])
+      assert_not_nil(res['2']['x:b'])
+    end
+
+    define_test "scan should support COLUMNS parameter with an array of columns" do
+      res = @test_table.scan COLUMNS => [ 'x:a', 'x:b' ]
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['1'])
+      assert_not_nil(res['1']['x:a'])
+      assert_not_nil(res['1']['x:b'])
+      assert_not_nil(res['2'])
+      assert_not_nil(res['2']['x:a'])
+      assert_not_nil(res['2']['x:b'])
+    end
+
+    define_test "scan should support COLUMNS parameter with a single column name" do
+      res = @test_table.scan COLUMNS => 'x:a'
+      assert_not_nil(res)
+      assert_kind_of(Hash, res)
+      assert_not_nil(res['1'])
+      assert_not_nil(res['1']['x:a'])
+      assert_nil(res['1']['x:b'])
+      assert_not_nil(res['2'])
+      assert_not_nil(res['2']['x:a'])
+      assert_nil(res['2']['x:b'])
+    end
+
+    define_test "scan should fail on invalid COLUMNS parameter types" do
+      assert_raise(ArgumentError) do
+        @test_table.scan COLUMNS => {}
+      end
+    end
+
+    define_test "scan should fail on non-hash params" do
+      assert_raise(ArgumentError) do
+        @test_table.scan 123
+      end
+    end
+
+    define_test "scan with a block should yield rows and return rows counter" do
+      rows = {}
+      res = @test_table.scan { |row, cells| rows[row] = cells }
+      assert_equal(rows.keys.size, res)
+    end
+  end
+end
diff --git a/0.90/src/test/ruby/shell/commands_test.rb b/0.90/src/test/ruby/shell/commands_test.rb
new file mode 100644
index 0000000..1a315a7
--- /dev/null
+++ b/0.90/src/test/ruby/shell/commands_test.rb
@@ -0,0 +1,34 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'shell'
+require 'shell/formatter'
+
+class ShellCommandsTest < Test::Unit::TestCase
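+  # Dynamically generate one test per registered shell command, verifying
+  # that every command class exposes the minimal API (help and command).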
+  Shell.commands.each do |name, klass|
+    define_test "#{name} command class #{klass} should respond to help" do
+      assert_respond_to(klass.new(nil), :help)
+    end
+
+    define_test "#{name} command class #{klass} should respond to :command" do
+      assert_respond_to(klass.new(nil), :command)
+    end
+  end
+end
diff --git a/0.90/src/test/ruby/shell/formatter_test.rb b/0.90/src/test/ruby/shell/formatter_test.rb
new file mode 100644
index 0000000..5b5c636
--- /dev/null
+++ b/0.90/src/test/ruby/shell/formatter_test.rb
@@ -0,0 +1,69 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'shell/formatter'
+
+class ShellFormatterTest < Test::Unit::TestCase
+  # Helper method to construct a null formatter
+  def formatter
+    Shell::Formatter::Base.new(:output_stream => STDOUT)
+  end
+
+  #
+  # Constructor tests
+  #
+  define_test "Formatter constructor should not raise error valid IO streams" do
+    assert_nothing_raised do
+      Shell::Formatter::Base.new(:output_stream => STDOUT)
+    end
+  end
+
+  define_test "Formatter constructor should not raise error when no IO stream passed" do
+    assert_nothing_raised do
+      Shell::Formatter::Base.new()
+    end
+  end
+
+  define_test "Formatter constructor should raise error on non-IO streams" do
+    assert_raise TypeError do
+      Shell::Formatter::Base.new(:output_stream => 'foostring')
+    end
+  end
+
+  #-------------------------------------------------------------------------------------------------------
+  # Printing methods tests
+  # FIXME: The tests below only check that the code runs without typos; figure out a better way to test the output
+  #
+  define_test "Formatter#header should work" do
+    formatter.header(['a', 'b'])
+    formatter.header(['a', 'b'], [10, 20])
+  end
+
+  define_test "Formatter#row should work" do
+    formatter.row(['a', 'b'])
+    formatter.row(['xxxxxxxxx xxxxxxxxxxx xxxxxxxxxxx xxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxxxx'])
+    formatter.row(['yyyyyy yyyyyy yyyyy yyy', 'xxxxxxxxx xxxxxxxxxxx xxxxxxxxxxx xxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxx xxxxxxxxxxxxxxx xxxxxxxxx xxxxxxxxxxxxxx  xxx xx x xx xxx xx xx xx x xx x x xxx x x xxx x x xx x x x x x x xx '])
+    formatter.row(["NAME => 'table1', FAMILIES => [{NAME => 'fam2', VERSIONS => 3, COMPRESSION => 'NONE', IN_MEMORY => false, BLOCKCACHE => false, LENGTH => 2147483647, TTL => FOREVER, BLOOMFILTER => NONE}, {NAME => 'fam1', VERSIONS => 3, COMPRESSION => 'NONE', IN_MEMORY => false, BLOCKCACHE => false, LENGTH => 2147483647, TTL => FOREVER, BLOOMFILTER => NONE}]"])
+  end
+
+  define_test "Froematter#footer should work" do
+    formatter.footer(Time.now - 5)
+  end
+end
diff --git a/0.90/src/test/ruby/shell/shell_test.rb b/0.90/src/test/ruby/shell/shell_test.rb
new file mode 100644
index 0000000..4289588
--- /dev/null
+++ b/0.90/src/test/ruby/shell/shell_test.rb
@@ -0,0 +1,70 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase'
+require 'shell'
+require 'shell/formatter'
+
+class ShellTest < Test::Unit::TestCase
+  def setup
+    @formatter = ::Shell::Formatter::Console.new()
+    @hbase = ::Hbase::Hbase.new
+    @shell = Shell::Shell.new(@hbase, @formatter)
+  end
+
+  define_test "Shell::Shell#hbase_admin should return an admin instance" do
+    assert_kind_of(Hbase::Admin, @shell.hbase_admin)
+  end
+
+  define_test "Shell::Shell#hbase_admin should cache admin instances" do
+    assert_same(@shell.hbase_admin, @shell.hbase_admin)
+  end
+
+  #-------------------------------------------------------------------------------
+
+  define_test "Shell::Shell#hbase_table should return a table instance" do
+    assert_kind_of(Hbase::Table, @shell.hbase_table('.META.'))
+  end
+
+  define_test "Shell::Shell#hbase_table should not cache table instances" do
+    assert_not_same(@shell.hbase_table('.META.'), @shell.hbase_table('.META.'))
+  end
+
+  #-------------------------------------------------------------------------------
+
+  define_test "Shell::Shell#export_commands should export command methods to specified object" do
+    module Foo; end
+    assert(!Foo.respond_to?(:version))
+    @shell.export_commands(Foo)
+    assert(Foo.respond_to?(:version))
+  end
+
+  #-------------------------------------------------------------------------------
+
+  define_test "Shell::Shell#command_instance should return a command class" do
+    assert_kind_of(Shell::Commands::Command, @shell.command_instance('version'))
+  end
+
+  #-------------------------------------------------------------------------------
+
+  define_test "Shell::Shell#command should execute a command" do
+    @shell.command('version')
+  end
+end
diff --git a/0.90/src/test/ruby/test_helper.rb b/0.90/src/test/ruby/test_helper.rb
new file mode 100644
index 0000000..1b542f5
--- /dev/null
+++ b/0.90/src/test/ruby/test_helper.rb
@@ -0,0 +1,72 @@
+require 'test/unit'
+
+module Testing
+  module Declarative
+    # define_test "should do something" do
+    #   ...
+    # end
+    def define_test(name, &block)
+      test_name = "test_#{name.gsub(/\s+/,'_')}".to_sym
+      defined = instance_method(test_name) rescue false
+      raise "#{test_name} is already defined in #{self}" if defined
+      if block_given?
+        define_method(test_name, &block)
+      else
+        define_method(test_name) do
+          flunk "No implementation provided for #{name}"
+        end
+      end
+    end
+  end
+end
+
+module Hbase
+  module TestHelpers
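+    # Helpers shared by the shell test cases: they connect to the mini cluster
+    # started in tests_runner.rb (via $TEST_CLUSTER) and return the shell's
+    # Table/Admin wrappers plus simple test-table create/drop utilities.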
+    def setup_hbase
+      @formatter = Shell::Formatter::Console.new()
+      @hbase = ::Hbase::Hbase.new($TEST_CLUSTER.getConfiguration)
+    end
+
+    def table(table)
+      @hbase.table(table, @formatter)
+    end
+
+    def admin
+      @hbase.admin(@formatter)
+    end
+
+    def create_test_table(name)
+      # Create the table if needed
+      unless admin.exists?(name)
+        admin.create name, [{'NAME' => 'x', 'VERSIONS' => 5}, 'y']
+        return
+      end
+
+      # Enable the table if needed
+      unless admin.enabled?(name)
+        admin.enable(name)
+      end
+    end
+
+    def drop_test_table(name)
+      return unless admin.exists?(name)
+      begin
+        admin.disable(name) if admin.enabled?(name)
+      rescue => e
+        puts "IGNORING DISABLE TABLE ERROR: #{e}"
+      end
+      begin
+        admin.drop(name)
+      rescue => e
+        puts "IGNORING DROP TABLE ERROR: #{e}"
+      end
+    end
+  end
+end
+
+# Extend standard unit tests with our helpers
+Test::Unit::TestCase.extend(Testing::Declarative)
+
+# Add the src/main/ruby directory to the ruby load path
+# so we can load the HBase shell ruby modules
+$LOAD_PATH.unshift File.join(File.dirname(__FILE__), "..", "..", "main", "ruby")
diff --git a/0.90/src/test/ruby/tests_runner.rb b/0.90/src/test/ruby/tests_runner.rb
new file mode 100644
index 0000000..0dbc5ce
--- /dev/null
+++ b/0.90/src/test/ruby/tests_runner.rb
@@ -0,0 +1,64 @@
+#
+# Copyright 2010 The Apache Software Foundation
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'rubygems'
+require 'rake'
+
+unless defined?($TEST_CLUSTER)
+  include Java
+
+  # Set logging level to avoid verboseness
+  org.apache.log4j.Logger.getRootLogger.setLevel(org.apache.log4j.Level::OFF)
+  org.apache.log4j.Logger.getLogger("org.apache.zookeeper").setLevel(org.apache.log4j.Level::OFF)
+  org.apache.log4j.Logger.getLogger("org.apache.hadoop.hdfs").setLevel(org.apache.log4j.Level::OFF)
+  org.apache.log4j.Logger.getLogger("org.apache.hadoop.hbase").setLevel(org.apache.log4j.Level::OFF)
+  org.apache.log4j.Logger.getLogger("org.apache.hadoop.ipc.HBaseServer").setLevel(org.apache.log4j.Level::OFF)
+
+  java_import org.apache.hadoop.hbase.HBaseTestingUtility
+
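+  # Start a single in-process mini cluster for the shell tests; the settings
+  # below shorten regionserver reporting and client pause/retry intervals.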
+  $TEST_CLUSTER = HBaseTestingUtility.new
+  $TEST_CLUSTER.configuration.setInt("hbase.regionserver.msginterval", 100)
+  $TEST_CLUSTER.configuration.setInt("hbase.client.pause", 250)
+  $TEST_CLUSTER.configuration.setInt("hbase.client.retries.number", 6)
+  $TEST_CLUSTER.startMiniCluster
+  @own_cluster = true
+end
+
+require 'test_helper'
+
+puts "Running tests..."
+
+files = Dir[ File.dirname(__FILE__) + "/**/*_test.rb" ]
+files.each do |file|
+  begin
+    load(file)
+  rescue => e
+    puts "ERROR: #{e}"
+    raise
+  end
+end
+
+Test::Unit::AutoRunner.run
+
+puts "Done with tests! Shutting down the cluster..."
+if @own_cluster
+  $TEST_CLUSTER.shutdownMiniCluster
+  java.lang.System.exit(0)
+end